| prompt | completion | api |
| --- | --- | --- |
| stringlengths 19 – 1.03M | stringlengths 4 – 2.12k | stringlengths 8 – 90 |
# coding: utf-8
# In[4]:
'''
Function List:
update_progress(job_title, progress):
show a status bar for a long-running process. For operational use only.
SummarizeDefault(raw_data,default_flag_column,by_vars,plot=False,continuous_var=False,n_bin=10):
Summarize default data by different variables
raw_data: pandas.DataFrame. The dataset contains the default column and groupby variable
default_flag_column: String. column name of the default flag in dataset
by_vars: String. Column name of the groupby variable
continuous_var: Boolean. Default is False. If set to True, the function will bucket the variable for the groupby.
n_bin: int. If continuous_var is set to True, the function will bucket the variable into "n_bin" bins
plot: Boolean. Default is False. If set to True, the function will generate a graph
return a DataFrame
CalcROCPrep(s, default_flag_data):
Preparation for ROC calculation. Not used directly as a result.
CalcROCStats(s,default_flag_data):
Calculation of ROC stats
s: pandas.DataFrame. Score generated from regression model
default_flag_data: pandas.DataFrame. Default flag data from real transactions.
return a dictionary of 3 stats
CalcPredAccuracy(s,default_flag_data,n_bin=10,plot=False,is_default_rate=True):
Compare the generated default probability with real default flag
s: pandas.DataFrame. Score generated from regression model
default_flag_data: pandas.DataFrame. Default flag data from real transactions.
n_bin: int. Default flags will be grouped into "n_bin" bins
plot: Boolean. Default is False. If set to True, the function will generate a graph
is_default_rate: Boolean. Default is True. Select False only if this function is used to compare two series that are NOT between (0,1)
return dataframe of bucket score VS actual default
CalcModelROC(raw_data,response_var,explanatory_var_1,explanatory_var_2="",explanatory_var_3="",explanatory_var_4="",explanatory_var_5="",explanatory_var_6="",explanatory_var_7="",explanatory_var_8="",explanatory_var_9="",explanatory_var_10=""):
Directly generate ROC stats from a given model. Not recommended, because of the limit on the number of explanatory variables.
raw_data: pandas.DataFrame. The dataset contains all variables used in model generation.
response_var: string. Column name of response var.
explanatory_var_1: string. Column name of the first explanatory variable.
explanatory_var_2-10: string. Optional. Column names of additional explanatory variables.
return dataframe
CalcWOE_IV(data,default_flag_column,categorical_column,event_list=[1,"Yes"],continuous_variable=False,n_bucket=10):
Calculate IV of each variable
Calculation refers to: https://www.listendata.com/2015/03/weight-of-evidence-woe-and-information.html
data: pandas.DataFrame. dataset that contains all variables
default_flag_column: String. column name of the default flag in dataset
categorical_column:String. column name of the categorical column
continuous_variable: Boolean. Default is False. If set to True, the function will bucket the variable for the groupby.
n_bucket: int. If continuous_variable is set to True, the function will bucket the variable into "n_bucket" bins
event_list: list. The list contains all values that can be considered as "event"
return [IV value of certain variable, table of calculation]
PerformRandomCV(data,response_var,explanatory_var_list=[],k=20,plot=True,sampling_as_default_percentage=False):
Do random cross-validation of the model to check model stability.
Returns the standard deviation of each coefficient.
raw_data: pandas.DataFrame. The dataset contains all variables used in model generation.
response_var: string. Column name of response var.
explanatory_var_list: list. List of all names of explanatory variables
k: int. 1/k of the samples will be left out of the training set.
plot: Boolean. Default is True. Plot the std. dev. of each coefficient.
sampling_as_default_percentage: Boolean. Default is False. If set to True, sampling will be done according to the default rate.
return std. dev. of the coefficients across the k regressions
PerformK_Fold(data,response_var, explanatory_var_list=[],k=10):
Perform k-fold cross validation
data: pandas.DataFrame. dataset that contains all variables.
response_var: string. column name of response variable
explanatory_var_list: list of strings. list that contains all explanatory variables
return array of each fold score
BestSubsetSelection(data,response_var,explanatory_var_list=[],plot=True):
Run best subset selection
data: pandas.DataFrame. raw dataset
response_var: string. Column name of response var
explanatory_var_list: list of string. Names of columns of all explanatory candidates.
plot: Boolean. If set to True, it will plot the selection-steps graph.
return all combinations' RSS and R_squared
ForwardStepwiseSelection(data,response_var,explanatory_var_list=[],plot=True):
Do forward stepwise selection
data: pandas.DataFrame. raw dataset
response_var: string. Column name of response var
explanatory_var_list: list of string. Names of columns of all explanatory candidates.
plot: Boolean. If set to True, it will plot the selection-steps graph
return dataframe with AIC,BIC,R_squared of each variable combination
CalcSpearmanCorrelation(data,var_1,var_2):
Calculate spearman Correlation
data: pandas.DataFrame. Contains var_1 and var_2
var_1,var_2: string. column name of var_1 and var_2
return float
CalcKendallTau(data,var_1,var_2):
Calculate KendallTau
data: pandas.DataFrame. Contains var_1 and var_2
var_1,var_2: string. column name of var_1 and var_2
return float
CalcSamplingGoodmanKruskalGamma(data,var_1, var_2,n_sample=1000):
Calculate the Goodman-Kruskal Gamma estimator from sampled data. Not suggested for use in reports, as this is only an estimator.
data: pandas.DataFrame. Contains var_1 and var_2
var_1,var_2: string. column name of var_1 and var_2
return float
CalcSomersD(data,var_1,var_2):
Calculate Somers' D
data: pandas.DataFrame. Contains var_1 and var_2
var_1,var_2: string. column name of var_1 and var_2
return float
GenerateRankCorrelation(data,var_1,var_2):
Calculate all 4 rank correlations.
data: pandas.DataFrame. Contains var_1 and var_2
var_1,var_2: string. column name of var_1 and var_2. The order of var_1 and var_2 cannot be changed.
return float
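Example usage (a minimal sketch; "loans" is a hypothetical DataFrame with a
binary "default_flag" column and a numeric "fico" column, and "scores" is a
hypothetical pandas Series of model-predicted default probabilities):
import pandas as pd
loans = pd.read_csv("loans.csv")
summary = ValidationTool.SummarizeDefault(loans, "default_flag", "fico", continuous_var=True, n_bin=10)
roc_stats = ValidationTool.CalcROCStats(scores, loans["default_flag"])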
'''
class ValidationTool:
import pandas as pd
import numpy as np
def __init__(self):
pass
def update_progress(job_title, progress):
'''
show status bar of long process
'''
import time, sys
length = 20
block = int(round(length*progress))
msg = "\r{0}: [{1}] {2}%".format(job_title, "#"*block + "-"*(length-block), round(progress*100, 1))
if progress >= 1: msg += " DONE\r\n"
sys.stdout.write(msg)
sys.stdout.flush()
def SummarizeDefault(raw_data,default_flag_column,by_vars,plot=False,continuous_var=False,n_bin=10):
'''
Summarize default data by different variables
raw_data: pandas.DataFrame. The dataset contains the default column and groupby variable
default_flag_column: String. column name of the default flag in dataset
by_vars: String. Column name of the groupby variable
continuous_var: Boolean. Default is False. If set to True, the function will bucket the variable for the groupby.
n_bin: int. If continuous_var is set to True, the function will bucket the variable into "n_bin" bins
plot: Boolean. Default is False. If set to True, the function will generate a graph
'''
import numpy as np
import pandas as pd
data=raw_data[[default_flag_column,by_vars]]
max_by_vars=max(data[by_vars])
min_by_vars=min(data[by_vars])
if continuous_var==False:
DefaultNum=pd.DataFrame(data[[default_flag_column,by_vars]].groupby(by_vars).agg("sum"))
DefaultNum.columns=["DefaultNum"]
GroupTotalNum=pd.DataFrame(data.groupby(by_vars).size())
GroupTotalNum.columns=["GroupTotalNum"]
else:
if raw_data[by_vars].isnull().values.any()==True:
'''
Follow the steps to deal with nan range:
'''
group_by_range=(pd.cut(np.array(data[by_vars]), np.arange(min_by_vars,max_by_vars+1,
(max_by_vars-min_by_vars)/n_bin),include_lowest=True)
.add_categories("missing"))
group_by_range=group_by_range.fillna("missing")
else:
group_by_range=(pd.cut(np.array(data[by_vars]), np.arange(min_by_vars,max_by_vars+1,(max_by_vars-min_by_vars)/n_bin),include_lowest=True))
DefaultNum=pd.DataFrame(data.groupby(group_by_range)[default_flag_column].sum())
DefaultNum.columns=["DefaultNum"]
GroupTotalNum=pd.DataFrame(data.groupby(group_by_range).size())
GroupTotalNum.columns=["GroupTotalNum"]
# NOTE: join_axes was removed from pandas.concat (pandas >= 1.0); reindex to DefaultNum's index instead
SummaryTable = pd.concat([DefaultNum, GroupTotalNum], axis=1).reindex(DefaultNum.index)
SummaryTable["DefaultProb"]=(SummaryTable.DefaultNum/SummaryTable.GroupTotalNum)
SummaryTable["Percent_of_Orbs"]=(SummaryTable.GroupTotalNum/data.shape[0])
if plot==True:
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=2, ncols=1)
SummaryTable[["DefaultNum","GroupTotalNum"]].plot(kind="bar",grid=True,ax=axes[1],title="Number in each Group")
SummaryTable[["DefaultProb","Percent_of_Orbs"]].plot(kind="line",grid=True,ax=axes[0],title="Percentage in each Group")
SummaryTable["DefaultProb"]=(SummaryTable.DefaultNum/SummaryTable.GroupTotalNum).apply('{:.2%}'.format)
SummaryTable["Percent_of_Orbs"]=(SummaryTable.GroupTotalNum/data.shape[0]).apply('{:.2%}'.format)
return SummaryTable
def CalcROCPrep(s, default_flag_data):
'''
Preparation for ROC calculation
'''
import pandas as pd
import numpy as np
df=pd.DataFrame({"score":100*(1-s),"outcome":default_flag_data})
df=df.sort_values("score")
df["cum_bad"]=df.outcome.cumsum()
df["cum_good"]=(1-df.outcome).cumsum()
df["cum_bad_perc"]=df.cum_bad/sum(df.outcome)
df["cum_good_perc"]=df.cum_good/sum(1-df.outcome)
SummaryTable=pd.DataFrame()
SummaryTable["cum_bad"]=df[["score","cum_bad"]].groupby("score")["cum_bad"].max()
SummaryTable["cum_good"]=df[["score","cum_good"]].groupby("score")["cum_good"].max()
SummaryTable["cum_bad_perc"]=df[["score","cum_bad_perc"]].groupby("score")["cum_bad_perc"].max()
SummaryTable["cum_good_perc"]=df[["score","cum_good_perc"]].groupby("score")["cum_good_perc"].max()
return SummaryTable
def CalcROCStats(s,default_flag_data):
'''
Calculation of ROC stats
s: pandas.DataFrame. Score generated from regression model
default_flag_data: pandas.DataFrame. Default flag data from real transactions.
'''
import numpy as np
import pandas as pd
df=ValidationTool.CalcROCPrep(s,default_flag_data)
pd_rate=sum(default_flag_data)/len(default_flag_data)
c_stat=0.5*np.dot(np.array(np.diff(([0]+(list(df.cum_good_perc))))),np.array(list(df.cum_bad_perc+df.cum_bad_perc.shift(1).fillna(0))).T)
ar_stat=2*c_stat-1
ks_stat=max(df.cum_bad_perc-df.cum_good_perc)
return {"c_stat":c_stat,"ar_stat":ar_stat,"ks_stat":ks_stat}
def CalcPredAccuracy(s,default_flag_data,n_bin=10,plot=False,is_default_rate=True):
'''
Compare the generated default probability with real default flag
s: pandas.DataFrame. Score generated from regression model
default_flag_data: pandas.DataFrame. Default flag data from real transactions.
n_bin: int. Default flags will be grouped into "n_bin" bins
plot: Boolean. Default is False. If set to True, the function will generate a graph
is_default_rate: Boolean. Default is True. Select False only if this function is used to compare two series that are NOT between (0,1)
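Example (a sketch; "scores" and "flags" are hypothetical pandas Series of
predicted default probabilities and observed 0/1 default flags):
acc_table = ValidationTool.CalcPredAccuracy(scores, flags, n_bin=10, plot=False)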
'''
import numpy as np
import pandas as pd
df= | pd.DataFrame({"outcome":default_flag_data,"score":s}) | pandas.DataFrame |
"""
Local MRIQC Stats
-----------------
This module allows the user to compare their images with all
similar images collected at the same scanner with the same parameters,
for the purpose of Quality Control (QC).
IQM: Image Quality Metrics
"""
import json
import urllib.request
import urllib.error
from datetime import datetime
from numbers import Number
from pathlib import Path
import pandas as pd
import numpy as np
from .utils import (DEVICE_SERIAL_NO,
REPOSITORY_PATH,
MRIQC_SERVER,
RELEVANT_KEYS,
read_mriqc_json,
)
def get_month_number(month):
"""
Get the month in numeric format or 'all'
Parameters
----------
month : int or str
Returns
-------
n_month : int or 'all'
Month in numeric format, or string 'all'
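Examples
--------
A few illustrative calls (behaviour follows directly from the logic below):
>>> get_month_number('Jul')
7
>>> get_month_number('')
'all'
>>> get_month_number(12)
12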
"""
if isinstance(month, str):
if month == 'current':
n_month = datetime.today().month
elif month == '':
n_month = 'all'
else:
if len(month) == 3:
n_month = datetime.strptime(month, '%b').month
else:
try:
n_month = datetime.strptime(month, '%B').month
except ValueError:
print('Wrong month: {0}'.format(month))
raise
elif isinstance(month, int):
if not (0 < month < 13):
raise ValueError('Wrong month: {0}'.format(month))
else:
n_month = month
else:
raise ValueError('Wrong month: {0}'.format(month))
return n_month
def get_device_iqms_from_server(modality, month='current', year='current', device_serial_no=None, versions=None):
"""
Grab all iqms for the given modality and device, for a given month/year
Parameters
----------
modality : str
Imaging modality
Options: "T1w", "T2w", "bold"
month : int or str
year : int or str
Desired year, or "current"
device_serial_no : str
Serial number of the device for which we want to query the
database
versions : list of str
Versions of MRIQC for which we want to retrieve data
Returns
-------
Pandas DataFrame with all the entries
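Examples
--------
A sketch of a typical call (the serial number is a placeholder and the call
queries the MRIQC web API, so it is skipped in doctests):
>>> df = get_device_iqms_from_server('bold', month='Jul', year=2019,
...                                  device_serial_no='166018')  # doctest: +SKIP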
"""
# TODO: - Define a global list of irrelevant fields:
# I can remove irrelevant fields (e.g., "WindowWidth", "WindowCenter", ...
# "SliceLocation"). Basically, all "Slices." fields.
# - see if saving the results as JSONs saves space (by adding them to the
# results only if the md5sum is not there already, or maybe replacing it)
software = 'mriqc'
url_root = 'https://{m_server}/api/v1/{{modality}}?{{query}}'.format(m_server=MRIQC_SERVER)
if device_serial_no is None:
device_serial_no = DEVICE_SERIAL_NO
if isinstance(year, str):
if year == 'current':
year = datetime.today().year
else:
year = int(year)
n_month = get_month_number(month)
if versions is None:
versions = ['*']
# prepare the query and get the data. E.g.:
# "bids_meta.DeviceSerialNumber":"166018","_updated":{"$gte":"Fri,%2012%20Jul%202019%2017:20:32%20GMT"}}&page=1
base_query = ['"bids_meta.DeviceSerialNumber":"{dev_no}"'.format(dev_no=device_serial_no),
'"provenance.software":"{software}"'.format(software=software)]
# it looks like the API requires the full date and time (e.g.: "Fri, 12 Jul 2019 17:20:32 GMT" )
if n_month == 'all':
begin_date = datetime(year, 1, 1).strftime('%a, %d %b %Y %H:%M:%S GMT')
end_date = datetime(year, 12, 31).strftime('%a, %d %b %Y %H:%M:%S GMT')
else:
begin_date = datetime(year, n_month, 1).strftime('%a, %d %b %Y %H:%M:%S GMT')
if n_month < 12:
end_date = datetime(year, n_month + 1, 1).strftime('%a, %d %b %Y %H:%M:%S GMT')
else: # December:
end_date = datetime(year + 1, 1, 1).strftime('%a, %d %b %Y %H:%M:%S GMT')
base_query.append(
'"_updated":{{"$gte":"{begin_d}", "$lte":"{end_d}"}}'.format(
begin_d=begin_date,
end_d=end_date
)
)
dfs = []
for version in versions:
query = list(base_query)  # copy so that version filters do not accumulate across iterations
if version != '*':
query.append('"provenance.version":"%s"' % version)
page = 1
while True:
page_url = url_root.format(
modality=modality,
query='where={{{where}}}&page={page}'.format(
where=','.join(query),
page=page
)
)
print(page_url)
try:
# VERY IMPORTANT #
# Convert spaces in the page_url into "%20". Otherwise, it doesn't work:
with urllib.request.urlopen(page_url.replace(" ", "%20")) as url:
data = json.loads(url.read().decode())
dfs.append(pd.json_normalize(data['_items']))
if 'next' not in data['_links'].keys():
break
else:
page += 1
except urllib.error.HTTPError as err:
if err.code == 400:
print('No results for these dates')
break
else:
raise
except:
print('error')
raise
if len(dfs) > 0:
# Compose a pandas dataframe
return | pd.concat(dfs, ignore_index=True, sort=True) | pandas.concat |
from collections import deque
from datetime import datetime
import operator
import re
import numpy as np
import pytest
import pytz
import pandas as pd
from pandas import DataFrame, MultiIndex, Series
import pandas._testing as tm
import pandas.core.common as com
from pandas.core.computation.expressions import _MIN_ELEMENTS, _NUMEXPR_INSTALLED
from pandas.tests.frame.common import _check_mixed_float, _check_mixed_int
# -------------------------------------------------------------------
# Comparisons
class TestFrameComparisons:
# Specifically _not_ flex-comparisons
def test_frame_in_list(self):
# GH#12689 this should raise at the DataFrame level, not blocks
df = pd.DataFrame(np.random.randn(6, 4), columns=list("ABCD"))
msg = "The truth value of a DataFrame is ambiguous"
with pytest.raises(ValueError, match=msg):
df in [None]
def test_comparison_invalid(self):
def check(df, df2):
for (x, y) in [(df, df2), (df2, df)]:
# we expect the result to match Series comparisons for
# == and !=, inequalities should raise
result = x == y
expected = pd.DataFrame(
{col: x[col] == y[col] for col in x.columns},
index=x.index,
columns=x.columns,
)
tm.assert_frame_equal(result, expected)
result = x != y
expected = pd.DataFrame(
{col: x[col] != y[col] for col in x.columns},
index=x.index,
columns=x.columns,
)
tm.assert_frame_equal(result, expected)
msgs = [
r"Invalid comparison between dtype=datetime64\[ns\] and ndarray",
"invalid type promotion",
(
# npdev 1.20.0
r"The DTypes <class 'numpy.dtype\[.*\]'> and "
r"<class 'numpy.dtype\[.*\]'> do not have a common DType."
),
]
msg = "|".join(msgs)
with pytest.raises(TypeError, match=msg):
x >= y
with pytest.raises(TypeError, match=msg):
x > y
with pytest.raises(TypeError, match=msg):
x < y
with pytest.raises(TypeError, match=msg):
x <= y
# GH4968
# invalid date/int comparisons
df = pd.DataFrame(np.random.randint(10, size=(10, 1)), columns=["a"])
df["dates"] = pd.date_range("20010101", periods=len(df))
df2 = df.copy()
df2["dates"] = df["a"]
check(df, df2)
df = pd.DataFrame(np.random.randint(10, size=(10, 2)), columns=["a", "b"])
df2 = pd.DataFrame(
{
"a": pd.date_range("20010101", periods=len(df)),
"b": pd.date_range("20100101", periods=len(df)),
}
)
check(df, df2)
def test_timestamp_compare(self):
# make sure we can compare Timestamps on the right AND left hand side
# GH#4982
df = pd.DataFrame(
{
"dates1": pd.date_range("20010101", periods=10),
"dates2": pd.date_range("20010102", periods=10),
"intcol": np.random.randint(1000000000, size=10),
"floatcol": np.random.randn(10),
"stringcol": list(tm.rands(10)),
}
)
df.loc[np.random.rand(len(df)) > 0.5, "dates2"] = pd.NaT
ops = {"gt": "lt", "lt": "gt", "ge": "le", "le": "ge", "eq": "eq", "ne": "ne"}
for left, right in ops.items():
left_f = getattr(operator, left)
right_f = getattr(operator, right)
# no nats
if left in ["eq", "ne"]:
expected = left_f(df, pd.Timestamp("20010109"))
result = right_f(pd.Timestamp("20010109"), df)
tm.assert_frame_equal(result, expected)
else:
msg = (
"'(<|>)=?' not supported between "
"instances of 'numpy.ndarray' and 'Timestamp'"
)
with pytest.raises(TypeError, match=msg):
left_f(df, pd.Timestamp("20010109"))
with pytest.raises(TypeError, match=msg):
right_f(pd.Timestamp("20010109"), df)
# nats
expected = left_f(df, pd.Timestamp("nat"))
result = right_f(pd.Timestamp("nat"), df)
tm.assert_frame_equal(result, expected)
def test_mixed_comparison(self):
# GH#13128, GH#22163 != datetime64 vs non-dt64 should be False,
# not raise TypeError
# (this appears to be fixed before GH#22163, not sure when)
df = pd.DataFrame([["1989-08-01", 1], ["1989-08-01", 2]])
other = pd.DataFrame([["a", "b"], ["c", "d"]])
result = df == other
assert not result.any().any()
result = df != other
assert result.all().all()
def test_df_boolean_comparison_error(self):
# GH#4576, GH#22880
# comparing DataFrame against list/tuple with len(obj) matching
# len(df.columns) is supported as of GH#22800
df = pd.DataFrame(np.arange(6).reshape((3, 2)))
expected = pd.DataFrame([[False, False], [True, False], [False, False]])
result = df == (2, 2)
tm.assert_frame_equal(result, expected)
result = df == [2, 2]
tm.assert_frame_equal(result, expected)
def test_df_float_none_comparison(self):
df = pd.DataFrame(
np.random.randn(8, 3), index=range(8), columns=["A", "B", "C"]
)
result = df.__eq__(None)
assert not result.any().any()
def test_df_string_comparison(self):
df = pd.DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}])
mask_a = df.a > 1
tm.assert_frame_equal(df[mask_a], df.loc[1:1, :])
tm.assert_frame_equal(df[-mask_a], df.loc[0:0, :])
mask_b = df.b == "foo"
tm.assert_frame_equal(df[mask_b], df.loc[0:0, :])
tm.assert_frame_equal(df[-mask_b], df.loc[1:1, :])
class TestFrameFlexComparisons:
# TODO: test_bool_flex_frame needs a better name
def test_bool_flex_frame(self):
data = np.random.randn(5, 3)
other_data = np.random.randn(5, 3)
df = pd.DataFrame(data)
other = pd.DataFrame(other_data)
ndim_5 = np.ones(df.shape + (1, 3))
# Unaligned
def _check_unaligned_frame(meth, op, df, other):
part_o = other.loc[3:, 1:].copy()
rs = meth(part_o)
xp = op(df, part_o.reindex(index=df.index, columns=df.columns))
tm.assert_frame_equal(rs, xp)
# DataFrame
assert df.eq(df).values.all()
assert not df.ne(df).values.any()
for op in ["eq", "ne", "gt", "lt", "ge", "le"]:
f = getattr(df, op)
o = getattr(operator, op)
# No NAs
tm.assert_frame_equal(f(other), o(df, other))
_check_unaligned_frame(f, o, df, other)
# ndarray
tm.assert_frame_equal(f(other.values), o(df, other.values))
# scalar
tm.assert_frame_equal(f(0), o(df, 0))
# NAs
msg = "Unable to coerce to Series/DataFrame"
tm.assert_frame_equal(f(np.nan), o(df, np.nan))
with pytest.raises(ValueError, match=msg):
f(ndim_5)
# Series
def _test_seq(df, idx_ser, col_ser):
idx_eq = df.eq(idx_ser, axis=0)
col_eq = df.eq(col_ser)
idx_ne = df.ne(idx_ser, axis=0)
col_ne = df.ne(col_ser)
tm.assert_frame_equal(col_eq, df == pd.Series(col_ser))
tm.assert_frame_equal(col_eq, -col_ne)
tm.assert_frame_equal(idx_eq, -idx_ne)
tm.assert_frame_equal(idx_eq, df.T.eq(idx_ser).T)
tm.assert_frame_equal(col_eq, df.eq(list(col_ser)))
tm.assert_frame_equal(idx_eq, df.eq(pd.Series(idx_ser), axis=0))
tm.assert_frame_equal(idx_eq, df.eq(list(idx_ser), axis=0))
idx_gt = df.gt(idx_ser, axis=0)
col_gt = df.gt(col_ser)
idx_le = df.le(idx_ser, axis=0)
col_le = df.le(col_ser)
tm.assert_frame_equal(col_gt, df > pd.Series(col_ser))
tm.assert_frame_equal(col_gt, -col_le)
tm.assert_frame_equal(idx_gt, -idx_le)
tm.assert_frame_equal(idx_gt, df.T.gt(idx_ser).T)
idx_ge = df.ge(idx_ser, axis=0)
col_ge = df.ge(col_ser)
idx_lt = df.lt(idx_ser, axis=0)
col_lt = df.lt(col_ser)
tm.assert_frame_equal(col_ge, df >= pd.Series(col_ser))
tm.assert_frame_equal(col_ge, -col_lt)
tm.assert_frame_equal(idx_ge, -idx_lt)
tm.assert_frame_equal(idx_ge, df.T.ge(idx_ser).T)
idx_ser = pd.Series(np.random.randn(5))
col_ser = pd.Series(np.random.randn(3))
_test_seq(df, idx_ser, col_ser)
# list/tuple
_test_seq(df, idx_ser.values, col_ser.values)
# NA
df.loc[0, 0] = np.nan
rs = df.eq(df)
assert not rs.loc[0, 0]
rs = df.ne(df)
assert rs.loc[0, 0]
rs = df.gt(df)
assert not rs.loc[0, 0]
rs = df.lt(df)
assert not rs.loc[0, 0]
rs = df.ge(df)
assert not rs.loc[0, 0]
rs = df.le(df)
assert not rs.loc[0, 0]
def test_bool_flex_frame_complex_dtype(self):
# complex
arr = np.array([np.nan, 1, 6, np.nan])
arr2 = np.array([2j, np.nan, 7, None])
df = pd.DataFrame({"a": arr})
df2 = pd.DataFrame({"a": arr2})
msg = "|".join(
[
"'>' not supported between instances of '.*' and 'complex'",
r"unorderable types: .*complex\(\)", # PY35
]
)
with pytest.raises(TypeError, match=msg):
# inequalities are not well-defined for complex numbers
df.gt(df2)
with pytest.raises(TypeError, match=msg):
# regression test that we get the same behavior for Series
df["a"].gt(df2["a"])
with pytest.raises(TypeError, match=msg):
# Check that we match numpy behavior here
df.values > df2.values
rs = df.ne(df2)
assert rs.values.all()
arr3 = np.array([2j, np.nan, None])
df3 = pd.DataFrame({"a": arr3})
with pytest.raises(TypeError, match=msg):
# inequalities are not well-defined for complex numbers
df3.gt(2j)
with pytest.raises(TypeError, match=msg):
# regression test that we get the same behavior for Series
df3["a"].gt(2j)
with pytest.raises(TypeError, match=msg):
# Check that we match numpy behavior here
df3.values > 2j
def test_bool_flex_frame_object_dtype(self):
# corner, dtype=object
df1 = | pd.DataFrame({"col": ["foo", np.nan, "bar"]}) | pandas.DataFrame |
import random
import time
import numpy as np
import pandas as pd
import pyziabmc.orderbook as orderbook
import pyziabmc.trader as trader
from pyziabmc.shared import Side, OType, TType
class Runner:
def __init__(self, h5filename='test.h5', mpi=1, prime1=20, run_steps=100000, write_interval=5000, **kwargs):
self.exchange = orderbook.Orderbook()
self.h5filename = h5filename
self.mpi = mpi
self.run_steps = run_steps + 1
self.liquidity_providers = {}
self.provider = kwargs.pop('Provider')
if self.provider:
self.providers, self.num_providers = self.buildProviders(kwargs['numProviders'], kwargs['providerMaxQ'],
kwargs['pAlpha'], kwargs['pDelta'])
self.q_provide = kwargs['qProvide']
self.taker = kwargs.pop('Taker')
if self.taker:
self.takers = self.buildTakers(kwargs['numTakers'], kwargs['takerMaxQ'], kwargs['tMu'])
self.informed = kwargs.pop('InformedTrader')
if self.informed:
if self.taker:
takerTradeV = np.array([t.quantity*self.run_steps/t.delta_t for t in self.takers])
informedTrades = int(kwargs['iMu']*np.sum(takerTradeV) if self.taker else 1/kwargs['iMu'])  # np.int was removed from NumPy; use the builtin int
self.informed_trader = self.buildInformedTrader(kwargs['informedMaxQ'], kwargs['informedRunLength'], informedTrades, prime1)
self.pj = kwargs.pop('PennyJumper')
if self.pj:
self.pennyjumper = self.buildPennyJumper()
self.alpha_pj = kwargs['AlphaPJ']
self.marketmaker = kwargs.pop('MarketMaker')
if self.marketmaker:
self.marketmakers = self.buildMarketMakers(kwargs['MMMaxQ'], kwargs['NumMMs'], kwargs['MMQuotes'],
kwargs['MMQuoteRange'], kwargs['MMDelta'])
self.traders, self.num_traders = self.makeAll()
self.q_take, self.lambda_t = self.makeQTake(kwargs['QTake'], kwargs['Lambda0'], kwargs['WhiteNoise'], kwargs['CLambda'])
self.seedOrderbook(kwargs['pAlpha'])
if self.provider:
self.makeSetup(prime1, kwargs['Lambda0'])
if self.pj:
self.runMcsPJ(prime1, write_interval)
else:
self.runMcs(prime1, write_interval)
self.exchange.trade_book_to_h5(h5filename)
self.qTakeToh5()
self.mmProfitabilityToh5()
def buildProviders(self, numProviders, providerMaxQ, pAlpha, pDelta):
''' Providers id starts with 1
'''
provider_ids = [1000 + i for i in range(numProviders)]
if self.mpi == 1:
provider_list = [trader.Provider(p, providerMaxQ, pDelta, pAlpha) for p in provider_ids]
else:
provider_list = [trader.Provider5(p, providerMaxQ, pDelta, pAlpha) for p in provider_ids]
self.liquidity_providers.update(dict(zip(provider_ids, provider_list)))
return provider_list, len(provider_list)
def buildTakers(self, numTakers, takerMaxQ, tMu):
''' Takers id starts with 2
'''
taker_ids = [2000 + i for i in range(numTakers)]
return [trader.Taker(t, takerMaxQ, tMu) for t in taker_ids]
def buildInformedTrader(self, informedMaxQ, informedRunLength, informedTrades, prime1):
''' Informed trader id starts with 5
'''
return trader.InformedTrader(5000, informedMaxQ, informedTrades, informedRunLength, prime1, self.run_steps)
def buildPennyJumper(self):
''' PJ id starts with 4
'''
jumper = trader.PennyJumper(4000, 1, self.mpi)
self.liquidity_providers.update({4000: jumper})
return jumper
def buildMarketMakers(self, mMMaxQ, numMMs, mMQuotes, mMQuoteRange, mMDelta):
''' MM id starts with 3
'''
marketmaker_ids = [3000 + i for i in range(numMMs)]
if self.mpi == 1:
marketmaker_list = [trader.MarketMaker(p, mMMaxQ, 0.005, mMDelta, mMQuotes, mMQuoteRange) for p in marketmaker_ids]
else:
marketmaker_list = [trader.MarketMaker5(p, mMMaxQ, 0.005, mMDelta, mMQuotes, mMQuoteRange) for p in marketmaker_ids]
self.liquidity_providers.update(dict(zip(marketmaker_ids, marketmaker_list)))
return marketmaker_list
def makeQTake(self, q_take, lambda_0, wn, c_lambda):
if q_take:
noise = np.random.rand(2, self.run_steps)
qt_take = np.empty_like(noise)
qt_take[:,0] = 0.5
for i in range(1, self.run_steps):
qt_take[:,i] = qt_take[:,i-1] + (noise[:,i-1]>qt_take[:,i-1])*wn - (noise[:,i-1]<qt_take[:,i-1])*wn
lambda_t = -lambda_0*(1 + (np.abs(qt_take[1] - 0.5)/np.sqrt(np.mean(np.square(qt_take[0] - 0.5))))*c_lambda)
return qt_take[1], lambda_t
else:
qt_take = np.array([0.5]*self.run_steps)
lambda_t = np.array([-lambda_0]*self.run_steps)
return qt_take, lambda_t
def makeAll(self):
trader_list = []
if self.provider:
trader_list.extend(self.providers)
if self.taker:
trader_list.extend(self.takers)
if self.marketmaker:
trader_list.extend(self.marketmakers)
if self.informed:
trader_list.append(self.informed_trader)
return trader_list, len(trader_list)
def seedOrderbook(self, pAlpha):
seed_provider = trader.Provider(9999, 1, 0.05, pAlpha)
self.liquidity_providers.update({9999: seed_provider})
ba = random.choice(range(1000005, 1002001, 5))
bb = random.choice(range(997995, 999996, 5))
qask = {'order_id': 1, 'trader_id': 9999, 'timestamp': 0, 'type': OType.ADD,
'quantity': 1, 'side': Side.ASK, 'price': ba}
qbid = {'order_id': 2, 'trader_id': 9999, 'timestamp': 0, 'type': OType.ADD,
'quantity': 1, 'side': Side.BID, 'price': bb}
seed_provider.local_book[1] = qask
self.exchange.add_order_to_book(qask)
self.exchange.add_order_to_history(qask)
seed_provider.local_book[2] = qbid
self.exchange.add_order_to_book(qbid)
self.exchange.add_order_to_history(qbid)
def makeSetup(self, prime1, lambda0):
top_of_book = self.exchange.report_top_of_book(0)
for current_time in range(1, prime1):
ps = random.sample(self.providers, self.num_providers)
for p in ps:
if not current_time % p.delta_t:
self.exchange.process_order(p.process_signal(current_time, top_of_book, self.q_provide, -lambda0))
top_of_book = self.exchange.report_top_of_book(current_time)
def doCancels(self, trader):
for c in trader.cancel_collector:
self.exchange.process_order(c)
def confirmTrades(self):
for c in self.exchange.confirm_trade_collector:
contra_side = self.liquidity_providers[c['trader']]
contra_side.confirm_trade_local(c)
def runMcs(self, prime1, write_interval):
top_of_book = self.exchange.report_top_of_book(prime1)
for current_time in range(prime1, self.run_steps):
traders = random.sample(self.traders, self.num_traders)
for t in traders:
if t.trader_type == TType.Provider:
if not current_time % t.delta_t:
self.exchange.process_order(t.process_signal(current_time, top_of_book, self.q_provide, self.lambda_t[current_time]))
top_of_book = self.exchange.report_top_of_book(current_time)
t.bulk_cancel(current_time)
if t.cancel_collector:
self.doCancels(t)
top_of_book = self.exchange.report_top_of_book(current_time)
elif t.trader_type == TType.MarketMaker:
if not current_time % t.quantity:
t.process_signal(current_time, top_of_book, self.q_provide)
for q in t.quote_collector:
self.exchange.process_order(q)
top_of_book = self.exchange.report_top_of_book(current_time)
t.bulk_cancel(current_time)
if t.cancel_collector:
self.doCancels(t)
top_of_book = self.exchange.report_top_of_book(current_time)
elif t.trader_type == TType.Taker:
if not current_time % t.delta_t:
self.exchange.process_order(t.process_signal(current_time, self.q_take[current_time]))
if self.exchange.traded:
self.confirmTrades()
top_of_book = self.exchange.report_top_of_book(current_time)
else:
if current_time in t.delta_t:
self.exchange.process_order(t.process_signal(current_time))
if self.exchange.traded:
self.confirmTrades()
top_of_book = self.exchange.report_top_of_book(current_time)
if not current_time % write_interval:
self.exchange.order_history_to_h5(self.h5filename)
self.exchange.sip_to_h5(self.h5filename)
def runMcsPJ(self, prime1, write_interval):
top_of_book = self.exchange.report_top_of_book(prime1)
for current_time in range(prime1, self.run_steps):
traders = random.sample(self.traders, self.num_traders)
for t in traders:
if t.trader_type == TType.Provider:
if not current_time % t.delta_t:
self.exchange.process_order(t.process_signal(current_time, top_of_book, self.q_provide, self.lambda_t[current_time]))
top_of_book = self.exchange.report_top_of_book(current_time)
t.bulk_cancel(current_time)
if t.cancel_collector:
self.doCancels(t)
top_of_book = self.exchange.report_top_of_book(current_time)
elif t.trader_type == TType.MarketMaker:
if not current_time % t.quantity:
t.process_signal(current_time, top_of_book, self.q_provide)
for q in t.quote_collector:
self.exchange.process_order(q)
top_of_book = self.exchange.report_top_of_book(current_time)
t.bulk_cancel(current_time)
if t.cancel_collector:
self.doCancels(t)
top_of_book = self.exchange.report_top_of_book(current_time)
elif t.trader_type == TType.Taker:
if not current_time % t.delta_t:
self.exchange.process_order(t.process_signal(current_time, self.q_take[current_time]))
if self.exchange.traded:
self.confirmTrades()
top_of_book = self.exchange.report_top_of_book(current_time)
else:
if current_time in t.delta_t:
self.exchange.process_order(t.process_signal(current_time))
if self.exchange.traded:
self.confirmTrades()
top_of_book = self.exchange.report_top_of_book(current_time)
if random.random() < self.alpha_pj:
self.pennyjumper.process_signal(current_time, top_of_book, self.q_take[current_time])
if self.pennyjumper.cancel_collector:
for c in self.pennyjumper.cancel_collector:
self.exchange.process_order(c)
if self.pennyjumper.quote_collector:
for q in self.pennyjumper.quote_collector:
self.exchange.process_order(q)
top_of_book = self.exchange.report_top_of_book(current_time)
if not current_time % write_interval:
self.exchange.order_history_to_h5(self.h5filename)
self.exchange.sip_to_h5(self.h5filename)
def qTakeToh5(self):
temp_df = | pd.DataFrame({'qt_take': self.q_take, 'lambda_t': self.lambda_t}) | pandas.DataFrame |
'''Add optics groups to particle starfile'''
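# Example invocation (a sketch; the script and file names below are placeholders):
#   python add_optics_groups_to_star.py \
#       --input_star particles.star \
#       --output_star particles_with_optics.star \
#       --optics_group_csv optics_groups.csv \
#       --image_size 256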
from __future__ import unicode_literals
from __future__ import print_function, division, absolute_import
import os
import sys
import argparse
from numpy.lib.function_base import append
import progressbar
import numpy as np
import pandas as pd
def read_data_block_as_dataframe(f, block_name):
if block_name is None:
# Assumes f is at the block_name line
pass
else:
# Read up to the block_name line
for line in f:
line = line.strip()
if line.startswith(block_name):
break
# Read up to loop_
for line in f:
if line.startswith('loop_'):
break
# Read metadata labels
metadata_labels = []
for line in f:
line = line.strip()
if line.startswith('_rln'):
metadata_labels.append(line.split()[0])
else:
break
# Read metadata
metadata = []
# The current line is the first metadata record
words = line.split()
assert len(metadata_labels) == len(words)
metadata.append(words)
for line in f:
line = line.strip()
if line == '':
# Empty line, the end of the metadata block
break
words = line.split()
assert len(metadata_labels) == len(words)
metadata.append(words)
# Convert to DataFrame
metadata_df = pd.DataFrame(metadata, columns=metadata_labels)
return metadata_df
def read_input_star(input_star):
assert os.path.exists(input_star)
with open(input_star, 'r') as f:
# Determine metadata table version
for line in f:
line = line.strip()
if line == '':
# Empty line
continue
elif line.startswith('# version'):
# Version string introduced in v3.1
input_star_version = int(line.split()[-1])
break
elif line.startswith('data_'):
# No version string, thus <= v3.0
# Assumes v3.0
input_star_version = 30000
break
else:
sys.exit('Invalid input star file.')
if input_star_version > 30000:
df_optics_in = read_data_block_as_dataframe(f, 'data_optics')
assert df_optics_in.shape[0] == 1, 'More than one opticsGroup exists in the file.'
df_particles_in = read_data_block_as_dataframe(f, 'data_particles')
else:
df_optics_in = None
df_particles_in = read_data_block_as_dataframe(f, None)
return input_star_version, df_particles_in, df_optics_in
def append_optics_groups_to_particle_dataframe(df_in, optics_group_table):
df_out = df_in.copy(deep=True)
df_out['_rlnOpticsGroup'] = np.zeros(df_in.shape[0], dtype=int)
print('Appending _rlnOpticsGroup to each particle....')
for i in progressbar.progressbar(df_out.index):
filename = os.path.splitext(os.path.basename(df_in.loc[i, '_rlnMicrographName']))[0]
match = optics_group_table[optics_group_table.filename == filename]
assert match.shape[0] == 1
optics_group = match.iloc[0].optics_group
df_out.loc[i, '_rlnOpticsGroup'] = optics_group
# Make sure all the particles are assigned optics group
assert (df_out._rlnOpticsGroup != 0).all()
return df_out
def regroup_particles_within_each_optics_group(df_in):
df_out = df_in.copy(deep=True)
print('Regrouping particles within each optics group....')
particle_group_id = 1
optics_group_list = np.sort(df_out['_rlnOpticsGroup'].unique())
for optics_group in optics_group_list:
# df_tmp is a shallow copy of df_in
df_tmp = df_in[df_in['_rlnOpticsGroup'] == optics_group]
particle_group_list = np.sort(df_tmp['_rlnGroupName'].unique())
for particle_group in particle_group_list:
particle_index_list = df_tmp[df_tmp['_rlnGroupName'] == particle_group].index
df_out.loc[particle_index_list, '_rlnGroupName'] = 'group_{:d}'.format(particle_group_id)
df_out.loc[particle_index_list, '_rlnGroupNumber'] = particle_group_id
particle_group_id += 1
return df_out
def create_output_dataframes(input_star_version, df_particles_out, df_optics_in, optics_group_table, image_size):
# Check the number of particles in each optics group
optics_groups, counts = np.unique(df_particles_out._rlnOpticsGroup, return_counts=True)
for i in range(optics_group_table.optics_group.min(), optics_group_table.optics_group.max() + 1):
if i in optics_groups:
num_particles = counts[np.where(optics_groups == i)[0]]
else:
num_particles = 0
print('OpticsGroup %3d : %7d particles' % (i, num_particles))
# Create metadata labels for output data_optics
metadata_labels_optics_out = [
'_rlnOpticsGroupName',
'_rlnOpticsGroup',
'_rlnVoltage',
'_rlnSphericalAberration',
'_rlnAmplitudeContrast',
'_rlnImagePixelSize',
'_rlnImageSize',
'_rlnImageDimensionality'
]
if 'mtf_file' in optics_group_table.columns:
metadata_labels_optics_out = metadata_labels_optics_out[:2] + ['_rlnMtfFileName','_rlnMicrographOriginalPixelSize'] + metadata_labels_optics_out[2:]
# Create output data_optics dataframe
df_optics_out = pd.DataFrame(columns=metadata_labels_optics_out)
if input_star_version < 30001:
for optics_group in optics_groups:
dict_optics_out_group = {}
# Extract data of each optics group
df_tmp = df_particles_out.loc[
df_particles_out._rlnOpticsGroup == optics_group,
[
'_rlnMagnification',
'_rlnDetectorPixelSize',
'_rlnAmplitudeContrast',
'_rlnSphericalAberration',
'_rlnVoltage'
]
]
# These values should be same for all records in a group
for label in df_tmp.columns:
is_all_same = (df_tmp[label].iloc[0] == df_tmp[label].values).all()
assert is_all_same, '%s of group %d is inconsistent' % (label, optics_group)
dict_optics_out_group['_rlnOpticsGroupName'] = 'opticsGroup%d' % (optics_group)
dict_optics_out_group['_rlnOpticsGroup'] = '%d' % optics_group
dict_optics_out_group['_rlnVoltage'] = df_tmp.iloc[0]['_rlnVoltage']
dict_optics_out_group['_rlnSphericalAberration'] = df_tmp.iloc[0]['_rlnSphericalAberration']
dict_optics_out_group['_rlnAmplitudeContrast'] = df_tmp.iloc[0]['_rlnAmplitudeContrast']
mag = float(df_tmp.iloc[0]['_rlnMagnification'])
detector_pixelsize_micron = float(df_tmp.iloc[0]['_rlnDetectorPixelSize'])
image_pixelsize_angstrom = detector_pixelsize_micron / mag * 10000
dict_optics_out_group['_rlnImagePixelSize'] = '%.6f' % image_pixelsize_angstrom
dict_optics_out_group['_rlnImageSize'] = '%d' % (image_size)
dict_optics_out_group['_rlnImageDimensionality'] = '2'
if 'mtf_file' in optics_group_table.columns:
dict_optics_out_group['_rlnMtfFileName'] = optics_group_table[optics_group_table.optics_group == optics_group].iloc[0].mtf_file
dict_optics_out_group['_rlnMicrographOriginalPixelSize'] = '%.6f' % optics_group_table[optics_group_table.optics_group == optics_group].iloc[0].orig_angpix
df_optics_out = df_optics_out.append(dict_optics_out_group, ignore_index=True)
else:
assert df_optics_in.shape[0] > 0
for optics_group in optics_groups:
dict_optics_out_group = df_optics_in.iloc[0].to_dict()
dict_optics_out_group['_rlnOpticsGroup'] = '%d' % optics_group
dict_optics_out_group['_rlnOpticsGroupName'] = 'opticsGroup%d' % optics_group
if 'mtf_file' in optics_group_table.columns:
dict_optics_out_group['_rlnMtfFileName'] = optics_group_table[optics_group_table.optics_group == optics_group].iloc[0].mtf_file
dict_optics_out_group['_rlnMicrographOriginalPixelSize'] = '%.6f' % optics_group_table[optics_group_table.optics_group == optics_group].iloc[0].orig_angpix
df_optics_out = df_optics_out.append(dict_optics_out_group, ignore_index=True)
# Create output data_particles dataframe
if input_star_version < 30001:
image_angpix = df_particles_out._rlnDetectorPixelSize.astype(float) / df_particles_out._rlnMagnification.astype(float) * 10000
# Delete old columns (Not needed in v3.1)
df_particles_out.drop([
'_rlnMagnification',
'_rlnDetectorPixelSize',
'_rlnAmplitudeContrast',
'_rlnSphericalAberration',
'_rlnVoltage'
], axis=1, inplace=True)
# Convert _rlnOrigin{X,Y} (pixel) to _rlnOrigin{X,Y}Angst (angstrom/pixel)
originx_ang = df_particles_out._rlnOriginX.astype(float) * image_angpix
originy_ang = df_particles_out._rlnOriginY.astype(float) * image_angpix
df_particles_out['_rlnOriginXAngst'] = originx_ang.map(lambda x: '%.6f'%x)
df_particles_out['_rlnOriginYAngst'] = originy_ang.map(lambda x: '%.6f'%x)
df_particles_out.drop(['_rlnOriginX', '_rlnOriginY'], axis=1, inplace=True)
else:
# Nothing to do.
pass
return df_optics_out, df_particles_out
def write_dataframe_as_star_block(f, df, block_name):
f.write('\n')
f.write('# version 30001\n')
f.write('\n')
f.write(block_name + '\n\n')
f.write('loop_\n')
for i, label in enumerate(df.columns):
f.write('%s #%d\n' % (label, i+1))
print('Writing %s ...'%block_name)
for i in progressbar.progressbar(range(df.shape[0])):
output = []
for label in df.columns:
val = str(df.iloc[i][label])
if len(val) < 12:
val = '%12s' % val
output.append(val)
output = ' '.join(output) + '\n'
f.write(output)
f.write('\n')
def parse_args():
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description=__doc__
)
parser.add_argument('--input_star', required=True, help='Input particle starfile.')
parser.add_argument('--output_star', required=True, help='Output particle starfile.')
parser.add_argument('--optics_group_csv', required=True, help='CSV file generated by roga_find_optics_groups.py')
parser.add_argument('--image_size', default=None, type=int, help='Particle image size in pixels. Required if input_star versions is <= v3.0')
parser.add_argument('--save_csv', action='store_true', help='Also save as csv file')
args = parser.parse_args()
print('##### Command #####\n\t' + ' '.join(sys.argv))
args_print_str = '##### Input parameters #####\n'
for opt, val in vars(args).items():
args_print_str += '\t{} : {}\n'.format(opt, val)
print(args_print_str)
return args
if __name__ == '__main__':
args = parse_args()
#assert not os.path.exists(args.output_star), '--output_star %s already exists.' % args.output_star
input_star_version, df_particles_in, df_optics_in = read_input_star(args.input_star)
if input_star_version < 30001:
assert args.image_size is not None, 'Please specify --image_size'
optics_group_table = | pd.read_csv(args.optics_group_csv) | pandas.read_csv |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Nov 12 01:37:34 2018
@author: Kazuki
"""
import numpy as np
import pandas as pd
import os, gc
from glob import glob
from tqdm import tqdm
import sys
sys.path.append(f'/home/{os.environ.get("USER")}/PythonLibrary')
import lgbextension as ex
import lightgbm as lgb
from multiprocessing import cpu_count
import utils
#utils.start(__file__)
#==============================================================================
SEED = np.random.randint(9999)
print('SEED:', SEED)
DROP = ['f001_hostgal_specz', 'f701_hostgal_specz',]# 'f001_distmod', ]
#DROP = []
NFOLD = 5
LOOP = 1
param = {
'objective': 'binary',
'metric': 'auc',
'learning_rate': 0.01,
'max_depth': 6,
'num_leaves': 63,
'max_bin': 255,
'min_child_weight': 10,
'min_data_in_leaf': 150,
'reg_lambda': 0.5, # L2 regularization term on weights.
'reg_alpha': 0.5, # L1 regularization term on weights.
'colsample_bytree': 0.9,
'subsample': 0.9,
# 'nthread': 32,
'nthread': cpu_count(),
'bagging_freq': 1,
'verbose':-1,
'seed': SEED
}
# =============================================================================
# load
# =============================================================================
tr = pd.read_pickle('../data/tr_1111-1.pkl.gz')
te = pd.read_pickle('../data/te_1111-1.pkl.gz')
tr['y'] = 0
te['y'] = 1
X = | pd.concat([tr, te], ignore_index=True) | pandas.concat |
from traitlets.config import LoggingConfigurable
from traitlets import Unicode
from nbgrader.api import MissingEntry
import nbformat
import os
import pandas as pd
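# Example usage (a sketch; the database URL and output file name are placeholders,
# and Gradebook is nbgrader's public API):
#   from nbgrader.api import Gradebook
#   gb = Gradebook('sqlite:///gradebook.db')
#   grades = GradeAssignmentExporter(gb).make_table()
#   grades.to_csv('grades_by_assignment.csv', index=False)
#   gb.close()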
class GradeTaskExporter(LoggingConfigurable):
def __init__(self, gradebook):
self.gb = gradebook
def get_columns(self):
columns = []
assignments = sorted(self.gb.assignments, key=lambda x: x.name)
for assignment in assignments:
for notebook in assignment.notebooks:
for gradecell in notebook.grade_cells:
name = gradecell.name
if name.startswith('test_'):
name = gradecell.name[5:]
columns.append((assignment.name, notebook.name, name))
return columns
def make_table(self):
data = []
assignments = sorted(self.gb.assignments, key=lambda x: x.name)
columns = ['Student ID'] + ['.'.join(col) for col in self.get_columns()]
for student in self.gb.students:
row = [student.id]
for assignment in assignments:
for notebook in assignment.notebooks:
for gradecell in notebook.grade_cells:
score = 0
try:
submission = self.gb.find_grade(gradecell.name, notebook.name, assignment.name, student.id)
score = submission.score
except MissingEntry:
pass
finally:
row.append(score)
data.append(row)
table = pd.DataFrame(data, columns=columns)
table['Total'] = table.sum(axis=1, numeric_only=True)
return table
class GradeAssignmentExporter(LoggingConfigurable):
def __init__(self, gradebook):
self.gb = gradebook
def make_table(self):
data = []
assignments = sorted(self.gb.assignments, key=lambda x: x.name)
columns = ['Student ID'] + [assignment.name for assignment in assignments]
for student in self.gb.students:
row = [student.id]
for assignment in assignments:
score = 0
try:
submission = self.gb.find_submission(assignment.name, student.id)
score = submission.score
except MissingEntry:
pass
finally:
row.append(score)
data.append(row)
table = pd.DataFrame(data, columns=columns)
table['Total'] = table.sum(axis=1, numeric_only=True)
return table
class GradeNotebookExporter(LoggingConfigurable):
def __init__(self, gradebook):
self.gb = gradebook
def get_columns(self):
columns = []
assignments = sorted(self.gb.assignments, key=lambda x: x.name)
for assignment in assignments:
for notebook in assignment.notebooks:
columns.append((assignment.name, notebook.name))
return columns
def make_table(self):
data = []
assignments = sorted(self.gb.assignments, key=lambda x: x.name)
columns = ['Student ID'] + ['.'.join(col) for col in self.get_columns()]
for student in self.gb.students:
row = [student.id]
for assignment in assignments:
for notebook in assignment.notebooks:
score = 0
try:
submission = self.gb.find_submission_notebook(notebook.name, assignment.name, student.id)
score = submission.score
except MissingEntry:
pass
finally:
row.append(score)
data.append(row)
table = | pd.DataFrame(data, columns=columns) | pandas.DataFrame |
import errno
import logging
import os
import uuid
import biom
import pandas as pd
from Bio import SeqIO
import shutil
from installed_clients.DataFileUtilClient import DataFileUtil
from GenericsAPI.Utils.AttributeUtils import AttributesUtil
from GenericsAPI.Utils.SampleServiceUtil import SampleServiceUtil
from GenericsAPI.Utils.DataUtil import DataUtil
from GenericsAPI.Utils.MatrixUtil import MatrixUtil
from GenericsAPI.Utils.TaxonUtil import TaxonUtil
from installed_clients.KBaseReportClient import KBaseReport
from installed_clients.KBaseSearchEngineClient import KBaseSearchEngine
from installed_clients.kb_GenericsReportClient import kb_GenericsReport
TYPE_ATTRIBUTES = {'description', 'scale', 'row_normalization', 'col_normalization'}
SCALE_TYPES = {'raw', 'ln', 'log2', 'log10'}
DEFAULT_META_KEYS = ["lineage", "score", "taxonomy_source", "species_name",
"consensus_sequence"]
TARGET_GENE_SUBFRAGMENT_MAP = {'16S': ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9'],
'18S': ['V1', 'V2', 'V3', 'V4', 'V9'],
'ITS': ['ITS1', 'ITS2']}
SEQ_INSTRUMENTS_MAP = {'Applied Biosystems': ['AB 310 Genetic Analyzer',
'AB 3130 Genetic Analyzer',
'AB 3130xL Genetic Analyzer',
'AB 3500 Genetic Analyzer',
'AB 3500xL Genetic Analyzer',
'AB 3730 Genetic Analyzer',
'AB 3730xL Genetic Analyzer',
'AB 5500xl Genetic Analyzer',
'AB 5500x-Wl Genetic Analyzer',
'AB SOLiD System',
'AB SOLiD System 2.0',
'AB SOLiD System 3.0',
'AB SOLiD 3 Plus System',
'AB SOLiD 4 System',
'AB SOLiD 4hq System',
'AB SOLiD PI System'],
'Roche 454': ['454 GS', '454 GS 20', '454 GS FLX', '454 GS FLX+',
'454 GS FLX Titanium'],
'Life Sciences': ['454 GS Junior'],
'Illumina': ['Illumina Genome Analyzer',
'Illumina Genome Analyzer II',
'Illumina Genome Analyzer IIx',
'Illumina HiScanSQ',
'Illumina HiSeq 1000',
'Illumina HiSeq 1500',
'Illumina HiSeq 2000',
'Illumina HiSeq 2500',
'Illumina HiSeq 3000',
'Illumina HiSeq 4000',
'Illumina HiSeq X',
'HiSeq X Five',
'HiSeq X Ten',
'Illumina iSeq 100',
'Illumina MiSeq',
'Illumina MiniSeq',
'NextSeq 500',
'NextSeq 550',
'NextSeq 1000',
'NextSeq 2000',
'Illumina NovaSeq 6000'],
'ThermoFisher': ['Ion Torrent PGM', 'Ion Torrent Proton',
'Ion Torrent S5 XL', 'Ion Torrent S5'],
'Pacific Biosciences': ['PacBio RS', 'PacBio RS II', 'PacBio Sequel',
'PacBio Sequel II'],
'Oxford Nanopore': ['MinION', 'GridION', 'PromethION'],
'BGI Group': ['BGISEQ-500', 'DNBSEQ-G400', 'DNBSEQ-T7', 'DNBSEQ-G50',
'MGISEQ-2000RS']}
class BiomUtil:
def _mkdir_p(self, path):
"""
_mkdir_p: make directory for given path
"""
if not path:
return
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
def _process_params(self, params):
logging.info('start validating import_matrix_from_biom params')
# check for required parameters
for p in ['obj_type', 'matrix_name', 'workspace_id', 'scale', 'amplicon_type',
'sequencing_technology', 'sequencing_instrument',
'target_gene', 'target_subfragment', 'taxon_calling']:
if p not in params:
raise ValueError('"{}" parameter is required, but missing'.format(p))
# check sequencing_technology and sequencing_instrument matching
sequencing_technology = params.get('sequencing_technology')
sequencing_instrument = params.get('sequencing_instrument')
if sequencing_technology not in SEQ_INSTRUMENTS_MAP:
raise ValueError('Unexpected sequencing technology: {}'.format(sequencing_technology))
expected_instruments = SEQ_INSTRUMENTS_MAP.get(sequencing_technology)
if sequencing_instrument not in expected_instruments:
raise ValueError('Please select sequencing instrument among {} for {}'.format(
expected_instruments, sequencing_technology))
# check target_gene and target_subfragment matching
target_gene = params.get('target_gene')
target_subfragment = list(set(params.get('target_subfragment')))
params['target_subfragment'] = target_subfragment
if target_gene not in TARGET_GENE_SUBFRAGMENT_MAP:
raise ValueError('Unexpected target gene: {}'.format(target_gene))
expected_subfragments = TARGET_GENE_SUBFRAGMENT_MAP.get(target_gene)
if not set(target_subfragment) <= set(expected_subfragments):
raise ValueError('Please select target subfragments among {} for {}'.format(
expected_subfragments, target_gene))
# check taxon_calling
taxon_calling = params.get('taxon_calling')
taxon_calling_method = list(set(taxon_calling.get('taxon_calling_method')))
params['taxon_calling_method'] = taxon_calling_method
if 'denoising' in taxon_calling_method:
denoise_method = taxon_calling.get('denoise_method')
sequence_error_cutoff = taxon_calling.get('sequence_error_cutoff')
if not (denoise_method and sequence_error_cutoff):
raise ValueError('Please provide denoise_method and sequence_error_cutoff')
params['denoise_method'] = denoise_method
params['sequence_error_cutoff'] = sequence_error_cutoff
if 'clustering' in taxon_calling_method:
clustering_method = taxon_calling.get('clustering_method')
clustering_cutoff = taxon_calling.get('clustering_cutoff')
if not (clustering_method and clustering_cutoff):
raise ValueError('Please provide clustering_method and clustering_cutoff')
params['clustering_method'] = clustering_method
params['clustering_cutoff'] = clustering_cutoff
obj_type = params.get('obj_type')
if obj_type not in self.matrix_types:
raise ValueError('Unknown matrix object type: {}'.format(obj_type))
scale = params.get('scale')
if scale not in SCALE_TYPES:
raise ValueError('Unknown scale type: {}'.format(scale))
biom_file = None
tsv_file = None
fasta_file = None
metadata_keys = DEFAULT_META_KEYS
input_local_file = params.get('input_local_file', False)
if params.get('taxonomic_abundance_tsv') and params.get('taxonomic_fasta'):
tsv_file = params.get('taxonomic_abundance_tsv')
fasta_file = params.get('taxonomic_fasta')
if not (tsv_file and fasta_file):
raise ValueError('missing TSV or FASTA file')
if not input_local_file:
tsv_file = self.dfu.download_staging_file(
{'staging_file_subdir_path': tsv_file}).get('copy_file_path')
fasta_file = self.dfu.download_staging_file(
{'staging_file_subdir_path': fasta_file}).get('copy_file_path')
metadata_keys_str = params.get('metadata_keys')
if metadata_keys_str:
metadata_keys += [x.strip() for x in metadata_keys_str.split(',')]
mode = 'tsv_fasta'
elif params.get('biom_fasta'):
biom_fasta = params.get('biom_fasta')
biom_file = biom_fasta.get('biom_file_biom_fasta')
fasta_file = biom_fasta.get('fasta_file_biom_fasta')
if not (biom_file and fasta_file):
raise ValueError('missing BIOM or FASTA file')
if not input_local_file:
biom_file = self.dfu.download_staging_file(
{'staging_file_subdir_path': biom_file}).get('copy_file_path')
fasta_file = self.dfu.download_staging_file(
{'staging_file_subdir_path': fasta_file}).get('copy_file_path')
mode = 'biom_fasta'
elif params.get('tsv_fasta'):
tsv_fasta = params.get('tsv_fasta')
tsv_file = tsv_fasta.get('tsv_file_tsv_fasta')
fasta_file = tsv_fasta.get('fasta_file_tsv_fasta')
if not (tsv_file and fasta_file):
raise ValueError('missing TSV or FASTA file')
if not input_local_file:
tsv_file = self.dfu.download_staging_file(
{'staging_file_subdir_path': tsv_file}).get('copy_file_path')
fasta_file = self.dfu.download_staging_file(
{'staging_file_subdir_path': fasta_file}).get('copy_file_path')
metadata_keys_str = tsv_fasta.get('metadata_keys_tsv_fasta')
if metadata_keys_str:
metadata_keys += [x.strip() for x in metadata_keys_str.split(',')]
mode = 'tsv_fasta'
else:
raise ValueError('missing valid file group type in parameters')
return (biom_file, tsv_file, fasta_file, mode, list(set(metadata_keys)))
def _validate_fasta_file(self, df, fasta_file):
logging.info('start validating FASTA file')
try:
fastq_dict = SeqIO.index(fasta_file, "fasta")
except Exception:
raise ValueError('Cannot parse file. Please provide a valid FASTA file')
matrix_ids = df.index
file_ids = fastq_dict.keys()
unmatched_ids = set(matrix_ids) - set(file_ids)
if unmatched_ids:
raise ValueError('FASTA file does not have [{}] OTU id'.format(unmatched_ids))
def _file_to_amplicon_data(self, biom_file, tsv_file, fasta_file, mode, refs, matrix_name,
workspace_id, scale, description, metadata_keys=None):
amplicon_data = refs
if mode.startswith('biom'):
logging.info('start parsing BIOM file for matrix data')
table = biom.load_table(biom_file)
observation_metadata = table._observation_metadata
sample_metadata = table._sample_metadata
matrix_data = {'row_ids': table._observation_ids.tolist(),
'col_ids': table._sample_ids.tolist(),
'values': table.matrix_data.toarray().tolist()}
logging.info('start building attribute mapping object')
amplicon_data.update(self.get_attribute_mapping("row", observation_metadata,
matrix_data, matrix_name, refs,
workspace_id))
amplicon_data.update(self.get_attribute_mapping("col", sample_metadata,
matrix_data, matrix_name, refs,
workspace_id))
amplicon_data['attributes'] = {}
for k in ('create_date', 'generated_by'):
val = getattr(table, k)
if not val:
continue
if isinstance(val, bytes):
amplicon_data['attributes'][k] = val.decode('utf-8')
else:
amplicon_data['attributes'][k] = str(val)
elif mode.startswith('tsv'):
observation_metadata = None
sample_metadata = None
try:
logging.info('start parsing TSV file for matrix data')
reader = pd.read_csv(tsv_file, sep=None, iterator=True)
inferred_sep = reader._engine.data.dialect.delimiter
df = | pd.read_csv(tsv_file, sep=inferred_sep, index_col=0) | pandas.read_csv |
import pandas as pd
import warnings
import os
import logging
import logzero
from logzero import logger
from datetime import datetime
import numpy as np
warnings.filterwarnings('ignore')
MILL_SECS_TO_SECS = 1000
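# Example usage (a sketch; the event-log path and txt filename are placeholders):
#   sl = SparkLogger('/tmp/spark-events/app-1234.json', 'app-1234.txt')
#   sl.generate_jobs()             # builds sl.job_df from job start/end events
#   sl.unpack_stages_from_jobs()   # links each stage to its parent job
#   sl.generate_stages()           # builds sl.stage_df and sl.rdd_info_df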
class SparkLogger():
def __init__(self, filepath, txt_filename):
self.filepath = filepath
self.txt_filename = txt_filename
self.log_df = pd.read_json(self.filepath, lines=True)
def filter_df(self, event_types, log_df):
df_list = []
for event_type in event_types:
df = log_df[log_df['Event'] == event_type]
df.dropna(axis=1, how='all', inplace=True)
df_list.append(df)
return df_list
def calculate_time_duration(self, df, column_start='Submission Time', column_end='Completion Time'):
df['Duration'] = (df[column_end] - df[column_start]) / MILL_SECS_TO_SECS
df[column_end] = df[column_end].apply(lambda x: datetime.utcfromtimestamp(x/MILL_SECS_TO_SECS).strftime('%Y-%m-%d %H:%M:%S.%f'))
df[column_start] = df[column_start].apply(lambda x: datetime.utcfromtimestamp(x/MILL_SECS_TO_SECS).strftime('%Y-%m-%d %H:%M:%S.%f'))
df[column_end] = pd.to_datetime(df[column_end], format='%Y-%m-%d %H:%M:%S.%f')
df[column_start] = pd.to_datetime(df[column_start], format='%Y-%m-%d %H:%M:%S.%f')
return df
def generate_jobs(self):
event_types = ['SparkListenerJobStart', 'SparkListenerJobEnd']
job_list_df = self.filter_df(event_types, self.log_df)
job_df = job_list_df[0].merge(job_list_df[1], on=['Job ID'])
job_df = self.calculate_time_duration(job_df)
job_df['Job ID'] = job_df['Job ID'].astype(int)
job_df.drop(columns=['Event_x', 'Event_y'], inplace=True)
properties_list = []
for index, row in job_df.iterrows():
tmp_df = row['Properties']
tmp_df = pd.DataFrame.from_records([tmp_df])
properties_list.append(tmp_df)
properties = pd.concat(properties_list)
job_df.reset_index(inplace=True)
properties.reset_index(inplace=True)
job_df = job_df.merge(properties, left_index=True, right_index=True)
job_df.set_index(['Job ID'], inplace=True)
job_df.drop(['index_x', 'Properties', 'index_y'], axis=1, inplace=True)
job_df['Filename'] = str(self.txt_filename)
self.job_df = job_df
def unpack_stages_from_jobs(self):
""" Unpack nested stages info from jobs df and adds job foreign key
"""
stage_df_list = []
for index, row in self.job_df.iterrows():
tmp_df = row['Stage Infos']
tmp_df = pd.DataFrame(tmp_df)
tmp_df['Job ID'] = index
stage_df_list.append(tmp_df)
self.stage_df = pd.concat(stage_df_list)
self.stage_df.set_index(['Stage ID'], inplace=True)
self.stage_df = self.stage_df[['Job ID']]
def generate_stages(self):
event_types = ['SparkListenerStageCompleted']
stage_df = self.filter_df(event_types, self.log_df)[0]
info_df_list, rdd_info_list = [], []
for index, row in stage_df.iterrows():
tmp_df = row['Stage Info']
rdd_info = tmp_df['RDD Info']
rdd_info_df = pd.DataFrame(rdd_info)
rdd_info_df['Stage ID'] = tmp_df['Stage ID']
rdd_info_list.append(rdd_info_df)
tmp_df = pd.DataFrame.from_dict(tmp_df, orient='index')
tmp_df = tmp_df.transpose()
tmp_df.set_index(['Stage ID'], inplace=True)
info_df_list.append(tmp_df)
info_df = pd.concat(info_df_list)
self.rdd_info_df = pd.concat(rdd_info_list)
stage_df.reset_index(inplace=True)
info_df.reset_index(inplace=True)
stage_df = stage_df.merge(info_df, left_index=True, right_index=True)
stage_df.set_index(['Stage ID'], inplace=True)
self.stage_df = self.stage_df.merge(stage_df, left_index=True, right_index=True)
self.stage_df.drop(['index', 'Event', 'Stage Info', 'RDD Info', 'Accumulables'], axis=1, inplace=True)
self.stage_df = self.calculate_time_duration(self.stage_df)
self.stage_df.sort_index(inplace=True)
self.rdd_info_df.set_index('RDD ID', inplace=True)
self.rdd_info_df.sort_index(inplace=True)
def generate_tasks(self):
task_types = ['SparkListenerTaskEnd']
task_df = self.filter_df(task_types, self.log_df)[0]
info_df_list = []
accum_list = []
for index, row in task_df.iterrows():
tmp_df = row['Task Info']
            tmp_df = pd.DataFrame.from_dict(tmp_df, orient='index')
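            # Hedged completion: the original source is truncated at this point. A
            # minimal continuation, mirroring generate_stages above, would transpose
            # the per-task record and collect it for a later concat; the attribute
            # name self.task_df below is an assumption, not taken from the original.
            tmp_df = tmp_df.transpose()
            info_df_list.append(tmp_df)
        self.task_df = pd.concat(info_df_list)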
# % ---------------------------------------------------------------------------------------------------------------------------------------
# % Hospitalization Models
# % ---------------------------------------------------------------------------------------------------------------------------------------
# % This code provides the predictive model described in "Early prediction of level-of-care requirements in patients with COVID-19." - eLife (2020)
# %
# % Authors: Hao, Boran, <NAME>, <NAME>, <NAME>, <NAME>,
# % <NAME>, <NAME>, <NAME>, and <NAME>.
# %
# % ---------------------------------------------------------------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# Data
# ------------------------------------------------------------------------------
import lightgbm as lgb
from sklearn.metrics import classification_report, f1_score, roc_curve, auc, accuracy_score
import pylab as pl
import statsmodels.api as sm
from scipy import stats
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score, KFold
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression
from sklearn import preprocessing
import os
import math
import numpy as np
import pandas as pd
pd.options.display.max_columns = None
pd.options.display.max_rows = None
# Load Preprocessed Data
Y = pd.read_csv('Final_Y.csv')
X = pd.read_csv('Final_X.csv')
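# Illustrative sketch (not part of the original script): with the preprocessed
# matrices loaded above, a stratified hold-out split is a typical next step.
# The 25% test size and random_state are assumptions, not taken from the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, Y.values.ravel(), test_size=0.25, stratify=Y.values.ravel(), random_state=0)
print('Train/test shapes:', X_train.shape, X_test.shape)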
# Licensed to Elasticsearch B.V. under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import re
import warnings
from enum import Enum
from typing import Any, Callable, Dict, List, Optional, Tuple, Union, cast
import numpy as np # type: ignore
import pandas as pd # type: ignore
from elasticsearch import Elasticsearch
# Default number of rows displayed (different to pandas where ALL could be displayed)
DEFAULT_NUM_ROWS_DISPLAYED = 60
DEFAULT_CHUNK_SIZE = 10000
DEFAULT_CSV_BATCH_OUTPUT_SIZE = 10000
DEFAULT_PROGRESS_REPORTING_NUM_ROWS = 10000
DEFAULT_ES_MAX_RESULT_WINDOW = 10000 # index.max_result_window
DEFAULT_PAGINATION_SIZE = 5000 # for composite aggregations
PANDAS_VERSION: Tuple[int, ...] = tuple(
int(part) for part in pd.__version__.split(".") if part.isdigit()
)[:2]
with warnings.catch_warnings():
warnings.simplefilter("ignore")
EMPTY_SERIES_DTYPE = pd.Series().dtype
def build_pd_series(
data: Dict[str, Any], dtype: Optional[np.dtype] = None, **kwargs: Any
) -> pd.Series:
"""Builds a pd.Series while squelching the warning
for unspecified dtype on empty series
"""
dtype = dtype or (EMPTY_SERIES_DTYPE if not data else dtype)
if dtype is not None:
kwargs["dtype"] = dtype
return pd.Series(data, **kwargs)
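# Example (illustrative): an empty input quietly picks up the library's default
# dtype, while an explicit dtype is forwarded to the pd.Series constructor.
#   build_pd_series({})                               # empty Series, no dtype warning
#   build_pd_series({"a": 1, "b": 2}, dtype="int64")  # Series([1, 2], dtype=int64)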
def docstring_parameter(*sub: Any) -> Callable[[Any], Any]:
def dec(obj: Any) -> Any:
obj.__doc__ = obj.__doc__.format(*sub)
return obj
return dec
class SortOrder(Enum):
ASC = 0
DESC = 1
@staticmethod
def reverse(order: "SortOrder") -> "SortOrder":
if order == SortOrder.ASC:
return SortOrder.DESC
return SortOrder.ASC
@staticmethod
def to_string(order: "SortOrder") -> str:
if order == SortOrder.ASC:
return "asc"
return "desc"
@staticmethod
def from_string(order: str) -> "SortOrder":
if order == "asc":
return SortOrder.ASC
return SortOrder.DESC
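# Example (illustrative): the helpers round-trip cleanly, e.g.
#   SortOrder.to_string(SortOrder.reverse(SortOrder.from_string("asc"))) == "desc"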
def elasticsearch_date_to_pandas_date(
value: Union[int, str], date_format: Optional[str]
) -> pd.Timestamp:
"""
    Given a value and a specific Elasticsearch date format, parse the value
    in that format and return the corresponding `pd.Timestamp`.
**Date Formats: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html#built-in-date-formats
Parameters
----------
value: Union[int, str]
The date value.
date_format: str
The Elasticsearch date format (ex. 'epoch_millis', 'epoch_second', etc.)
Returns
-------
datetime: pd.Timestamp
From https://www.elastic.co/guide/en/elasticsearch/reference/current/date.html
Date formats can be customised, but if no format is specified then it uses the default:
"strict_date_optional_time||epoch_millis"
Therefore if no format is specified we assume either strict_date_optional_time
or epoch_millis.
"""
if date_format is None or isinstance(value, (int, float)):
try:
return pd.to_datetime(
value, unit="s" if date_format == "epoch_second" else "ms"
)
except ValueError:
return pd.to_datetime(value)
elif date_format == "epoch_millis":
return pd.to_datetime(value, unit="ms")
elif date_format == "epoch_second":
return pd.to_datetime(value, unit="s")
elif date_format == "strict_date_optional_time":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S.%f%z", exact=False)
elif date_format == "basic_date":
return pd.to_datetime(value, format="%Y%m%d")
elif date_format == "basic_date_time":
return pd.to_datetime(value, format="%Y%m%dT%H%M%S.%f", exact=False)
elif date_format == "basic_date_time_no_millis":
return pd.to_datetime(value, format="%Y%m%dT%H%M%S%z")
elif date_format == "basic_ordinal_date":
return pd.to_datetime(value, format="%Y%j")
elif date_format == "basic_ordinal_date_time":
return pd.to_datetime(value, format="%Y%jT%H%M%S.%f%z", exact=False)
elif date_format == "basic_ordinal_date_time_no_millis":
return pd.to_datetime(value, format="%Y%jT%H%M%S%z")
elif date_format == "basic_time":
return pd.to_datetime(value, format="%H%M%S.%f%z", exact=False)
elif date_format == "basic_time_no_millis":
return pd.to_datetime(value, format="%H%M%S%z")
elif date_format == "basic_t_time":
return pd.to_datetime(value, format="T%H%M%S.%f%z", exact=False)
elif date_format == "basic_t_time_no_millis":
return pd.to_datetime(value, format="T%H%M%S%z")
elif date_format == "basic_week_date":
return pd.to_datetime(value, format="%GW%V%u")
elif date_format == "basic_week_date_time":
return pd.to_datetime(value, format="%GW%V%uT%H%M%S.%f%z", exact=False)
elif date_format == "basic_week_date_time_no_millis":
return pd.to_datetime(value, format="%GW%V%uT%H%M%S%z")
elif date_format == "strict_date":
return pd.to_datetime(value, format="%Y-%m-%d")
elif date_format == "date":
return pd.to_datetime(value, format="%Y-%m-%d")
elif date_format == "strict_date_hour":
return pd.to_datetime(value, format="%Y-%m-%dT%H")
elif date_format == "date_hour":
return pd.to_datetime(value, format="%Y-%m-%dT%H")
elif date_format == "strict_date_hour_minute":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M")
elif date_format == "date_hour_minute":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M")
elif date_format == "strict_date_hour_minute_second":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S")
elif date_format == "date_hour_minute_second":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S")
elif date_format == "strict_date_hour_minute_second_fraction":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S.%f", exact=False)
elif date_format == "date_hour_minute_second_fraction":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S.%f", exact=False)
elif date_format == "strict_date_hour_minute_second_millis":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S.%f", exact=False)
elif date_format == "date_hour_minute_second_millis":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S.%f", exact=False)
elif date_format == "strict_date_time":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S.%f%z", exact=False)
elif date_format == "date_time":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S.%f%z", exact=False)
elif date_format == "strict_date_time_no_millis":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S%z")
elif date_format == "date_time_no_millis":
return pd.to_datetime(value, format="%Y-%m-%dT%H:%M:%S%z")
elif date_format == "strict_hour":
return pd.to_datetime(value, format="%H")
elif date_format == "hour":
return pd.to_datetime(value, format="%H")
elif date_format == "strict_hour_minute":
return pd.to_datetime(value, format="%H:%M")
elif date_format == "hour_minute":
return pd.to_datetime(value, format="%H:%M")
elif date_format == "strict_hour_minute_second":
return pd.to_datetime(value, format="%H:%M:%S")
elif date_format == "hour_minute_second":
return pd.to_datetime(value, format="%H:%M:%S")
elif date_format == "strict_hour_minute_second_fraction":
return pd.to_datetime(value, format="%H:%M:%S.%f", exact=False)
elif date_format == "hour_minute_second_fraction":
return pd.to_datetime(value, format="%H:%M:%S.%f", exact=False)
elif date_format == "strict_hour_minute_second_millis":
return pd.to_datetime(value, format="%H:%M:%S.%f", exact=False)
elif date_format == "hour_minute_second_millis":
return pd.to_datetime(value, format="%H:%M:%S.%f", exact=False)
elif date_format == "strict_ordinal_date":
return pd.to_datetime(value, format="%Y-%j")
elif date_format == "ordinal_date":
return pd.to_datetime(value, format="%Y-%j")
elif date_format == "strict_ordinal_date_time":
return pd.to_datetime(value, format="%Y-%jT%H:%M:%S.%f%z", exact=False)
elif date_format == "ordinal_date_time":
return pd.to_datetime(value, format="%Y-%jT%H:%M:%S.%f%z", exact=False)
elif date_format == "strict_ordinal_date_time_no_millis":
return pd.to_datetime(value, format="%Y-%jT%H:%M:%S%z")
elif date_format == "ordinal_date_time_no_millis":
return pd.to_datetime(value, format="%Y-%jT%H:%M:%S%z")
elif date_format == "strict_time":
return pd.to_datetime(value, format="%H:%M:%S.%f%z", exact=False)
elif date_format == "time":
return | pd.to_datetime(value, format="%H:%M:%S.%f%z", exact=False) | pandas.to_datetime |
# coding: utf-8
# Import libraries
import pandas as pd
from pandas import ExcelWriter
import numpy as np
import math
import pickle
from sklearn import preprocessing
import statsmodels.api as sm
def linear_regression(gene_set, n_data_matrix):
"""
    The LINEAR_REGRESSION operation runs a linear regression analysis per gene set and per data matrix, using as model inputs only the features selected during the preceding feature selection step. Results are exported locally as Excel or text files.
:param gene_set: the set of genes of interest to analyze
:param n_data_matrix: number identifying the data matrix to analyze (only 2,3 and 5 values are permitted)
Example::
import genereg as gr
gr.LinearRegression.linear_regression(gene_set='DNA_REPAIR', n_data_matrix=2)
gr.LinearRegression.linear_regression(gene_set='DNA_REPAIR', n_data_matrix=3)
gr.LinearRegression.linear_regression(gene_set='DNA_REPAIR', n_data_matrix=5)
"""
# Check input parameters
if n_data_matrix not in [2, 3, 5]:
raise ValueError('Data Matrix ERROR! Possible values: {2,3,5}')
# Define the model to create
model = str(n_data_matrix)
# Import the list of genes of interest and extract in a list the Gene Symbols of all the genes belonging to the current gene set
EntrezConversion_df = pd.read_excel('./Genes_of_Interest.xlsx',sheetname='Sheet1',header=0,converters={'GENE_SYMBOL':str,'ENTREZ_GENE_ID':str,'GENE_SET':str})
SYMs_current_pathway = []
for index, row in EntrezConversion_df.iterrows():
sym = row['GENE_SYMBOL']
path = row['GENE_SET']
if path == gene_set:
SYMs_current_pathway.append(sym)
# Create a dataframe to store results of linear regression for each gene (i.e. R2 and Adjusted R2 of the linear regression)
all_r2_df = pd.DataFrame(index=SYMs_current_pathway, columns=['Adj.R2','R2'])
for current_gene in SYMs_current_pathway:
# Import the model corresponding to the current gene
gene_ID = EntrezConversion_df.loc[EntrezConversion_df['GENE_SYMBOL'] == current_gene, 'ENTREZ_GENE_ID'].iloc[0]
model_gene_df = pd.read_excel('./4_Data_Matrix_Construction/Model'+model+'/Gene_'+gene_ID+'_['+current_gene+']'+'_('+gene_set+')-Model_v'+model+'.xlsx',sheetname='Sheet1',header=0)
# Set all the unknown values (NaN) to zero
model_gene_df = model_gene_df.fillna(0)
# DATA STANDARDIZATION:
# Normalize expression values (using the proper scaler from the sklearn library):
# MinMaxScaler() normalizes values between 0 and 1
# StandardScaler() performs Z-score normalization
scaler = preprocessing.StandardScaler() # define the scaler
# Define the dataframe to normalize
to_normalize = model_gene_df.copy()
matrix = to_normalize.values # convert into a numpy array
# Normalize and convert back to pandas dataframe
matrix_scaled = scaler.fit_transform(matrix)
model_gene_df = pd.DataFrame(matrix_scaled, index=to_normalize.index, columns=to_normalize.columns)
original_model_gene_df = model_gene_df.copy()
# Load the list of features selected
text_file = open('./5_Data_Analysis/'+gene_set+'/FeatureSelection/M'+model+'/Features-Gene_'+gene_ID+'_['+current_gene+'].txt', 'r')
features_sel = text_file.read().split('\n')
features_sel.remove('')
# Consider only the features selected and perform a complete linear regression on the data
if len(features_sel) > 0:
# Filter the initial matrix of data extracting only the columns selected by feature selection
cols_to_extract = ['EXPRESSION ('+current_gene+')']+features_sel
model_gene_df_filtered = model_gene_df[cols_to_extract].copy()
# PERFORM LINEAR REGRESSION
# Define the features (predictors X) and the target (label y)
X = model_gene_df_filtered.drop(['EXPRESSION ('+current_gene+')'],1)
y = model_gene_df_filtered['EXPRESSION ('+current_gene+')']
# Add an intercept to our model
X = sm.add_constant(X)
# Define the linear regression object and fit the model
lr_model = sm.OLS(y, X).fit()
# Make predictions
y_predicted = lr_model.predict(X)
# Compute the error rate (Root Mean Square Error).
# RMS Error measures the differences between predicted values and values actually observed
rms = np.sqrt(np.mean((np.array(y_predicted) - np.array(y)) ** 2))
# Export the summary statistics into a text file
lr_summary = lr_model.summary()
lr_summary_file = lr_summary.as_text()
with open ('./5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/LinReg_Summary-Gene_'+gene_ID+'_['+current_gene+'].txt', 'w') as fp:
fp.write(lr_summary_file)
fp.write('\n\nRoot Mean Square Error: RMS = '+str(rms))
# Store the R-squared score in the summery dataframe
all_r2_df.set_value(current_gene, 'Adj.R2', lr_model.rsquared_adj)
all_r2_df.set_value(current_gene, 'R2', lr_model.rsquared)
# Save the coefficients and the intercept of the model
coeff = lr_model.params
coeff_df = pd.DataFrame({'feature':coeff.index, 'coefficient':coeff.values})
coeff_df = coeff_df.sort_values(by=['coefficient'], ascending=[False])
coeff_df_ordered = coeff_df[['feature','coefficient']].copy()
coeff_path = './5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/Coefficients/'
writer = ExcelWriter(coeff_path+'Coefficients_(M'+model+')-Gene_'+gene_ID+'_['+current_gene+'].xlsx')
coeff_df_ordered.to_excel(writer,'Sheet1',index=False)
writer.save()
# Compute and export the confidence intervals for the model coefficients (default: 95%)
CI_df = lr_model.conf_int()
CI_df.rename(columns={0: 'min_bound', 1: 'max_bound'}, inplace=True)
# Verify the relevance of each feature by checking if 0 is contained in the confidence interval
for index, row in CI_df.iterrows():
min_ci = row['min_bound']
max_ci = row['max_bound']
if (0 >= min_ci) and (0 <= max_ci):
CI_df.set_value(index,'Significant Feature?','NO')
else:
CI_df.set_value(index,'Significant Feature?','YES')
# Extract the probabilty that the value of the feature is 0
p_val = lr_model.pvalues
p_val_df = pd.DataFrame({'feature':p_val.index, 'p-value':p_val.values})
for index, row in p_val_df.iterrows():
feature_gene = row['feature']
p = row['p-value']
CI_df.set_value(feature_gene,'P',p)
ci_path = './5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/ConfidenceIntervals/'
writer = ExcelWriter(ci_path+'Confidence_Intervals_(M'+model+')-Gene_'+gene_ID+'_['+current_gene+'].xlsx')
CI_df.to_excel(writer,'Sheet1')
writer.save()
# Compute the correlation matrix between gene data
if 'METHYLATION ('+current_gene+')' in cols_to_extract:
model_gene_df_corr = model_gene_df_filtered.copy()
else:
cols_for_correlation = ['EXPRESSION ('+current_gene+')','METHYLATION ('+current_gene+')']+features_sel
model_gene_df_corr = original_model_gene_df[cols_for_correlation].copy()
corr_matrix = model_gene_df_corr.corr()
corr_path = './5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/CorrelationMatrix/'
writer = ExcelWriter(corr_path+'Correlation_Matrix_(M'+model+')-Gene_'+gene_ID+'_['+current_gene+'].xlsx')
corr_matrix.to_excel(writer,'Sheet1')
writer.save()
# Export the summary dataframe in an Excel file
all_r2_df = all_r2_df.sort_values(by=['Adj.R2'], ascending=[False])
    writer = ExcelWriter('./5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/Linear_Regression_R2_SCORES.xlsx')
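    # Completion of the export (the source is truncated here); it mirrors the
    # writer pattern used above for the coefficient and confidence-interval files.
    all_r2_df.to_excel(writer, 'Sheet1')
    writer.save()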
"""
Preprocess OA lookup table
"""
import os
import csv
import configparser
import pandas as pd
import geopandas as gpd
import math
import random
from shapely.geometry import mapping, MultiPolygon
from tqdm import tqdm
CONFIG = configparser.ConfigParser()
CONFIG.read(os.path.join(os.path.dirname(__file__), 'script_config.ini'))
BASE_PATH = CONFIG['file_locations']['base_path']
random.seed(43)
def process_shapes(path_output, path_ew, path_scot, lookup):
"""
Process all shape boundaries for ~8,000 areas.
"""
folder = os.path.join(BASE_PATH, 'intermediate')
if not os.path.exists(os.path.join(folder, 'output_areas.csv')):
data_ew = gpd.read_file(path_ew, crs='epsg:27700')#[:10]
data_ew = data_ew[['msoa11cd', 'geometry']]
data_ew.columns = ['msoa', 'geometry']
data_scot = gpd.read_file(path_scot, crs='epsg:27700')#[:200]
data_scot = data_scot[['InterZone', 'geometry']]
data_scot.columns = ['msoa', 'geometry']
all_data = data_ew.append(data_scot, ignore_index=True)
all_data['geometry'] = all_data.apply(remove_small_shapes, axis=1)
all_data['geometry'] = all_data.simplify(
tolerance = 10,
preserve_topology=True).buffer(0.0001).simplify(
tolerance = 10,
preserve_topology=True
)
all_data['area_km2'] = all_data['geometry'].area / 1e6
lookup = pd.read_csv(lookup)
lookup = lookup[['MSOA11CD', 'RGN11NM']]
lookup = lookup.drop_duplicates()
lookup.columns = ['msoa', 'region']
all_data = (pd.merge(all_data, lookup, on='msoa'))
all_data.to_file(path_output, crs='epsg:27700')
all_data = all_data[['msoa', 'area_km2', 'region']]
out_path = os.path.join(folder, 'output_areas.csv')
all_data.to_csv(out_path, index=False)
else:
all_data = pd.read_csv(os.path.join(folder, 'output_areas.csv'))
return all_data
def remove_small_shapes(x):
"""
Get rid of small geometries.
"""
# if its a single polygon, just return the polygon geometry
if x.geometry.geom_type == 'Polygon':
return x.geometry
# if its a multipolygon, we start trying to simplify
# and remove shapes if its too big.
elif x.geometry.geom_type == 'MultiPolygon':
area1 = 1e7
area2 = 5e7
# don't remove shapes if total area is already very small
if x.geometry.area < area1:
return x.geometry
if x.geometry.area > area2:
threshold = 5e6
else:
threshold = 5e6
# save remaining polygons as new multipolygon for
# the specific country
new_geom = []
for y in x.geometry:
if y.area > threshold:
new_geom.append(y)
return MultiPolygon(new_geom)
def process_area_features(path_output, data):
"""
    Build a per-MSOA lookup of area (km2) and region from the processed shape data.
"""
    data = data.to_dict('records')
output = {}
for item in data:
output[item['msoa']] = {
'area_km2': item['area_km2'],
'region': item['region'],
}
return output
def get_lads(path):
"""
Get all unique Local Authority District IDs.
"""
path_output = os.path.join(BASE_PATH, 'intermediate', 'prems_by_lad_msoa')
if not os.path.exists(path_output):
os.makedirs(path_output)
all_data = pd.read_csv(path)
all_data = all_data.to_dict('records')
unique_lads = set()
for item in all_data:
unique_lads.add(item['LAD17CD'])
unique_lads = list(unique_lads)#[:10]
for lad in list(unique_lads):
path_lad = os.path.join(path_output, lad)
if not os.path.exists(path_lad):
os.makedirs(path_lad)
lookup = []
for item in all_data:
if lad == item['LAD17CD']:
lookup.append({
'OA11CD': item['OA11CD'],
'LSOA11CD': item['LSOA11CD'],
'MSOA11CD': item['MSOA11CD']
})
lookup = pd.DataFrame(lookup)
lookup.to_csv(os.path.join(path_lad, 'lookup.csv'), index=False)
return list(unique_lads)
def get_lookup(lad):
"""
    Create a lookup table from all Middle Super Output Areas (MSOAs) (~8,000)
    to their lower-level Output Areas (~190,000).
"""
folder = os.path.join(BASE_PATH, 'intermediate', 'prems_by_lad_msoa', lad)
path = os.path.join(folder, 'lookup.csv')
all_data = pd.read_csv(path)
unique_msoas = all_data['MSOA11CD'].unique()
all_data = all_data.to_dict('records')
lookup = {}
for msoa in unique_msoas:
oa_ids = []
for item in all_data:
if msoa == item['MSOA11CD']:
oa_ids.append(item['OA11CD'])
lookup[msoa] = oa_ids
return unique_msoas, lookup
def write_premises_data(lad):
"""
Aggregate Output Area premises data into Middle Super Output Areas and write.
"""
path_lad = os.path.join(BASE_PATH, 'prems_by_lad', lad)
unique_msoas, lookup = get_lookup(lad)
directory = os.path.join(BASE_PATH, 'intermediate', 'prems_by_lad_msoa', lad)
for msoa in unique_msoas:
path_output = os.path.join(directory, msoa + '.csv')
if os.path.exists(path_output):
continue
oas = lookup[msoa]
prems_by_msoa = []
for oa in oas:
path_oa = os.path.join(path_lad, oa + '.csv')
if not os.path.exists(path_oa):
continue
prems = pd.read_csv(path_oa)
prems = prems.to_dict('records')
for prem in prems:
prems_by_msoa.append({
'mistral_function_class': prem['mistral_function_class'],
'mistral_building_class': prem['mistral_building_class'],
'res_count': prem['res_count'],
'floor_area': prem['floor_area'],
'height_toroofbase': prem['height_toroofbase'],
'height_torooftop': prem['height_torooftop'],
'nonres_count': prem['nonres_count'],
'number_of_floors': prem['number_of_floors'],
'footprint_area': prem['footprint_area'],
'geometry': prem['geom'],
})
prems_by_msoa = pd.DataFrame(prems_by_msoa)
prems_by_msoa.to_csv(path_output, index=False)
def write_hh_data(lad):
"""
Get the estimated household demographics for each area.
"""
filename = 'ass_{}_area11_2018.csv'.format(lad)
path = os.path.join(BASE_PATH, 'hh_demographics_msoa_2018', filename)
if not os.path.exists(path):
return
unique_msoas, lookup = get_lookup(lad)
directory = os.path.join(BASE_PATH, 'intermediate', 'hh_by_lad_msoa', lad)
if not os.path.exists(directory):
os.makedirs(directory)
    hh_data = pd.read_csv(path)
#!/bin/python
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
def sdw_parser(x):
"""Tries to encode the date format used by the ECB statistical data warehouse (SDW). To be used for the argument `date_parser` in pandas `read_csv`"""
try:
return pd.to_datetime(x)
except:
pass
try:
return pd.to_datetime(x+'0', format='%GW%V%w')
except:
pass
try:
        return pd.to_datetime(x, format='%Y%b')
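    except:
        pass
    # Fallback (assumption): the original file is truncated above and presumably
    # tries further formats; returning NaT keeps unparseable values from raising
    # inside pandas read_csv.
    return pd.NaT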
# -*- coding: utf-8 -*-
import re
import warnings
from datetime import timedelta
from itertools import product
import pytest
import numpy as np
import pandas as pd
from pandas import (CategoricalIndex, DataFrame, Index, MultiIndex,
compat, date_range, period_range)
from pandas.compat import PY3, long, lrange, lzip, range, u, PYPY
from pandas.errors import PerformanceWarning, UnsortedIndexError
from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas.core.indexes.base import InvalidIndexError
from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike
from pandas._libs.tslib import Timestamp
import pandas.util.testing as tm
from pandas.util.testing import assert_almost_equal, assert_copy
from .common import Base
class TestMultiIndex(Base):
_holder = MultiIndex
_compat_props = ['shape', 'ndim', 'size']
def setup_method(self, method):
major_axis = Index(['foo', 'bar', 'baz', 'qux'])
minor_axis = Index(['one', 'two'])
major_labels = np.array([0, 0, 1, 2, 3, 3])
minor_labels = np.array([0, 1, 0, 1, 0, 1])
self.index_names = ['first', 'second']
self.indices = dict(index=MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels
], names=self.index_names,
verify_integrity=False))
self.setup_indices()
def create_index(self):
return self.index
def test_can_hold_identifiers(self):
idx = self.create_index()
key = idx[0]
assert idx._can_hold_identifiers_and_holds_name(key) is True
def test_boolean_context_compat2(self):
# boolean context compat
# GH7897
i1 = MultiIndex.from_tuples([('A', 1), ('A', 2)])
i2 = MultiIndex.from_tuples([('A', 1), ('A', 3)])
common = i1.intersection(i2)
def f():
if common:
pass
tm.assert_raises_regex(ValueError, 'The truth value of a', f)
def test_labels_dtypes(self):
# GH 8456
i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
assert i.labels[0].dtype == 'int8'
assert i.labels[1].dtype == 'int8'
i = MultiIndex.from_product([['a'], range(40)])
assert i.labels[1].dtype == 'int8'
i = MultiIndex.from_product([['a'], range(400)])
assert i.labels[1].dtype == 'int16'
i = MultiIndex.from_product([['a'], range(40000)])
assert i.labels[1].dtype == 'int32'
i = pd.MultiIndex.from_product([['a'], range(1000)])
assert (i.labels[0] >= 0).all()
assert (i.labels[1] >= 0).all()
def test_where(self):
i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
def f():
i.where(True)
pytest.raises(NotImplementedError, f)
def test_where_array_like(self):
i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
klasses = [list, tuple, np.array, pd.Series]
cond = [False, True]
for klass in klasses:
def f():
return i.where(klass(cond))
pytest.raises(NotImplementedError, f)
def test_repeat(self):
reps = 2
numbers = [1, 2, 3]
names = np.array(['foo', 'bar'])
m = MultiIndex.from_product([
numbers, names], names=names)
expected = MultiIndex.from_product([
numbers, names.repeat(reps)], names=names)
tm.assert_index_equal(m.repeat(reps), expected)
with tm.assert_produces_warning(FutureWarning):
result = m.repeat(n=reps)
tm.assert_index_equal(result, expected)
def test_numpy_repeat(self):
reps = 2
numbers = [1, 2, 3]
names = np.array(['foo', 'bar'])
m = MultiIndex.from_product([
numbers, names], names=names)
expected = MultiIndex.from_product([
numbers, names.repeat(reps)], names=names)
tm.assert_index_equal(np.repeat(m, reps), expected)
msg = "the 'axis' parameter is not supported"
tm.assert_raises_regex(
ValueError, msg, np.repeat, m, reps, axis=1)
def test_set_name_methods(self):
# so long as these are synonyms, we don't need to test set_names
assert self.index.rename == self.index.set_names
new_names = [name + "SUFFIX" for name in self.index_names]
ind = self.index.set_names(new_names)
assert self.index.names == self.index_names
assert ind.names == new_names
with tm.assert_raises_regex(ValueError, "^Length"):
ind.set_names(new_names + new_names)
new_names2 = [name + "SUFFIX2" for name in new_names]
res = ind.set_names(new_names2, inplace=True)
assert res is None
assert ind.names == new_names2
# set names for specific level (# GH7792)
ind = self.index.set_names(new_names[0], level=0)
assert self.index.names == self.index_names
assert ind.names == [new_names[0], self.index_names[1]]
res = ind.set_names(new_names2[0], level=0, inplace=True)
assert res is None
assert ind.names == [new_names2[0], self.index_names[1]]
# set names for multiple levels
ind = self.index.set_names(new_names, level=[0, 1])
assert self.index.names == self.index_names
assert ind.names == new_names
res = ind.set_names(new_names2, level=[0, 1], inplace=True)
assert res is None
assert ind.names == new_names2
@pytest.mark.parametrize('inplace', [True, False])
def test_set_names_with_nlevel_1(self, inplace):
# GH 21149
# Ensure that .set_names for MultiIndex with
# nlevels == 1 does not raise any errors
expected = pd.MultiIndex(levels=[[0, 1]],
labels=[[0, 1]],
names=['first'])
m = pd.MultiIndex.from_product([[0, 1]])
result = m.set_names('first', level=0, inplace=inplace)
if inplace:
result = m
tm.assert_index_equal(result, expected)
def test_set_levels_labels_directly(self):
# setting levels/labels directly raises AttributeError
levels = self.index.levels
new_levels = [[lev + 'a' for lev in level] for level in levels]
labels = self.index.labels
major_labels, minor_labels = labels
major_labels = [(x + 1) % 3 for x in major_labels]
minor_labels = [(x + 1) % 1 for x in minor_labels]
new_labels = [major_labels, minor_labels]
with pytest.raises(AttributeError):
self.index.levels = new_levels
with pytest.raises(AttributeError):
self.index.labels = new_labels
def test_set_levels(self):
# side note - you probably wouldn't want to use levels and labels
# directly like this - but it is possible.
levels = self.index.levels
new_levels = [[lev + 'a' for lev in level] for level in levels]
def assert_matching(actual, expected, check_dtype=False):
# avoid specifying internal representation
# as much as possible
assert len(actual) == len(expected)
for act, exp in zip(actual, expected):
act = np.asarray(act)
exp = np.asarray(exp)
tm.assert_numpy_array_equal(act, exp, check_dtype=check_dtype)
# level changing [w/o mutation]
ind2 = self.index.set_levels(new_levels)
assert_matching(ind2.levels, new_levels)
assert_matching(self.index.levels, levels)
# level changing [w/ mutation]
ind2 = self.index.copy()
inplace_return = ind2.set_levels(new_levels, inplace=True)
assert inplace_return is None
assert_matching(ind2.levels, new_levels)
# level changing specific level [w/o mutation]
ind2 = self.index.set_levels(new_levels[0], level=0)
assert_matching(ind2.levels, [new_levels[0], levels[1]])
assert_matching(self.index.levels, levels)
ind2 = self.index.set_levels(new_levels[1], level=1)
assert_matching(ind2.levels, [levels[0], new_levels[1]])
assert_matching(self.index.levels, levels)
# level changing multiple levels [w/o mutation]
ind2 = self.index.set_levels(new_levels, level=[0, 1])
assert_matching(ind2.levels, new_levels)
assert_matching(self.index.levels, levels)
# level changing specific level [w/ mutation]
ind2 = self.index.copy()
inplace_return = ind2.set_levels(new_levels[0], level=0, inplace=True)
assert inplace_return is None
assert_matching(ind2.levels, [new_levels[0], levels[1]])
assert_matching(self.index.levels, levels)
ind2 = self.index.copy()
inplace_return = ind2.set_levels(new_levels[1], level=1, inplace=True)
assert inplace_return is None
assert_matching(ind2.levels, [levels[0], new_levels[1]])
assert_matching(self.index.levels, levels)
# level changing multiple levels [w/ mutation]
ind2 = self.index.copy()
inplace_return = ind2.set_levels(new_levels, level=[0, 1],
inplace=True)
assert inplace_return is None
assert_matching(ind2.levels, new_levels)
assert_matching(self.index.levels, levels)
# illegal level changing should not change levels
# GH 13754
original_index = self.index.copy()
for inplace in [True, False]:
with tm.assert_raises_regex(ValueError, "^On"):
self.index.set_levels(['c'], level=0, inplace=inplace)
assert_matching(self.index.levels, original_index.levels,
check_dtype=True)
with tm.assert_raises_regex(ValueError, "^On"):
self.index.set_labels([0, 1, 2, 3, 4, 5], level=0,
inplace=inplace)
assert_matching(self.index.labels, original_index.labels,
check_dtype=True)
with tm.assert_raises_regex(TypeError, "^Levels"):
self.index.set_levels('c', level=0, inplace=inplace)
assert_matching(self.index.levels, original_index.levels,
check_dtype=True)
with tm.assert_raises_regex(TypeError, "^Labels"):
self.index.set_labels(1, level=0, inplace=inplace)
assert_matching(self.index.labels, original_index.labels,
check_dtype=True)
def test_set_labels(self):
# side note - you probably wouldn't want to use levels and labels
# directly like this - but it is possible.
labels = self.index.labels
major_labels, minor_labels = labels
major_labels = [(x + 1) % 3 for x in major_labels]
minor_labels = [(x + 1) % 1 for x in minor_labels]
new_labels = [major_labels, minor_labels]
def assert_matching(actual, expected):
# avoid specifying internal representation
# as much as possible
assert len(actual) == len(expected)
for act, exp in zip(actual, expected):
act = np.asarray(act)
exp = np.asarray(exp, dtype=np.int8)
tm.assert_numpy_array_equal(act, exp)
# label changing [w/o mutation]
ind2 = self.index.set_labels(new_labels)
assert_matching(ind2.labels, new_labels)
assert_matching(self.index.labels, labels)
# label changing [w/ mutation]
ind2 = self.index.copy()
inplace_return = ind2.set_labels(new_labels, inplace=True)
assert inplace_return is None
assert_matching(ind2.labels, new_labels)
# label changing specific level [w/o mutation]
ind2 = self.index.set_labels(new_labels[0], level=0)
assert_matching(ind2.labels, [new_labels[0], labels[1]])
assert_matching(self.index.labels, labels)
ind2 = self.index.set_labels(new_labels[1], level=1)
assert_matching(ind2.labels, [labels[0], new_labels[1]])
assert_matching(self.index.labels, labels)
# label changing multiple levels [w/o mutation]
ind2 = self.index.set_labels(new_labels, level=[0, 1])
assert_matching(ind2.labels, new_labels)
assert_matching(self.index.labels, labels)
# label changing specific level [w/ mutation]
ind2 = self.index.copy()
inplace_return = ind2.set_labels(new_labels[0], level=0, inplace=True)
assert inplace_return is None
assert_matching(ind2.labels, [new_labels[0], labels[1]])
assert_matching(self.index.labels, labels)
ind2 = self.index.copy()
inplace_return = ind2.set_labels(new_labels[1], level=1, inplace=True)
assert inplace_return is None
assert_matching(ind2.labels, [labels[0], new_labels[1]])
assert_matching(self.index.labels, labels)
# label changing multiple levels [w/ mutation]
ind2 = self.index.copy()
inplace_return = ind2.set_labels(new_labels, level=[0, 1],
inplace=True)
assert inplace_return is None
assert_matching(ind2.labels, new_labels)
assert_matching(self.index.labels, labels)
# label changing for levels of different magnitude of categories
ind = pd.MultiIndex.from_tuples([(0, i) for i in range(130)])
new_labels = range(129, -1, -1)
expected = pd.MultiIndex.from_tuples(
[(0, i) for i in new_labels])
# [w/o mutation]
result = ind.set_labels(labels=new_labels, level=1)
assert result.equals(expected)
# [w/ mutation]
result = ind.copy()
result.set_labels(labels=new_labels, level=1, inplace=True)
assert result.equals(expected)
def test_set_levels_labels_names_bad_input(self):
levels, labels = self.index.levels, self.index.labels
names = self.index.names
with tm.assert_raises_regex(ValueError, 'Length of levels'):
self.index.set_levels([levels[0]])
with tm.assert_raises_regex(ValueError, 'Length of labels'):
self.index.set_labels([labels[0]])
with tm.assert_raises_regex(ValueError, 'Length of names'):
self.index.set_names([names[0]])
# shouldn't scalar data error, instead should demand list-like
with tm.assert_raises_regex(TypeError, 'list of lists-like'):
self.index.set_levels(levels[0])
# shouldn't scalar data error, instead should demand list-like
with tm.assert_raises_regex(TypeError, 'list of lists-like'):
self.index.set_labels(labels[0])
# shouldn't scalar data error, instead should demand list-like
with tm.assert_raises_regex(TypeError, 'list-like'):
self.index.set_names(names[0])
# should have equal lengths
with tm.assert_raises_regex(TypeError, 'list of lists-like'):
self.index.set_levels(levels[0], level=[0, 1])
with tm.assert_raises_regex(TypeError, 'list-like'):
self.index.set_levels(levels, level=0)
# should have equal lengths
with tm.assert_raises_regex(TypeError, 'list of lists-like'):
self.index.set_labels(labels[0], level=[0, 1])
with tm.assert_raises_regex(TypeError, 'list-like'):
self.index.set_labels(labels, level=0)
# should have equal lengths
with tm.assert_raises_regex(ValueError, 'Length of names'):
self.index.set_names(names[0], level=[0, 1])
with tm.assert_raises_regex(TypeError, 'string'):
self.index.set_names(names, level=0)
def test_set_levels_categorical(self):
# GH13854
index = MultiIndex.from_arrays([list("xyzx"), [0, 1, 2, 3]])
for ordered in [False, True]:
cidx = CategoricalIndex(list("bac"), ordered=ordered)
result = index.set_levels(cidx, 0)
expected = MultiIndex(levels=[cidx, [0, 1, 2, 3]],
labels=index.labels)
tm.assert_index_equal(result, expected)
result_lvl = result.get_level_values(0)
expected_lvl = CategoricalIndex(list("bacb"),
categories=cidx.categories,
ordered=cidx.ordered)
tm.assert_index_equal(result_lvl, expected_lvl)
def test_metadata_immutable(self):
levels, labels = self.index.levels, self.index.labels
# shouldn't be able to set at either the top level or base level
mutable_regex = re.compile('does not support mutable operations')
with tm.assert_raises_regex(TypeError, mutable_regex):
levels[0] = levels[0]
with tm.assert_raises_regex(TypeError, mutable_regex):
levels[0][0] = levels[0][0]
# ditto for labels
with tm.assert_raises_regex(TypeError, mutable_regex):
labels[0] = labels[0]
with tm.assert_raises_regex(TypeError, mutable_regex):
labels[0][0] = labels[0][0]
# and for names
names = self.index.names
with tm.assert_raises_regex(TypeError, mutable_regex):
names[0] = names[0]
def test_inplace_mutation_resets_values(self):
levels = [['a', 'b', 'c'], [4]]
levels2 = [[1, 2, 3], ['a']]
labels = [[0, 1, 0, 2, 2, 0], [0, 0, 0, 0, 0, 0]]
mi1 = MultiIndex(levels=levels, labels=labels)
mi2 = MultiIndex(levels=levels2, labels=labels)
vals = mi1.values.copy()
vals2 = mi2.values.copy()
assert mi1._tuples is not None
# Make sure level setting works
new_vals = mi1.set_levels(levels2).values
tm.assert_almost_equal(vals2, new_vals)
# Non-inplace doesn't kill _tuples [implementation detail]
tm.assert_almost_equal(mi1._tuples, vals)
# ...and values is still same too
tm.assert_almost_equal(mi1.values, vals)
# Inplace should kill _tuples
mi1.set_levels(levels2, inplace=True)
tm.assert_almost_equal(mi1.values, vals2)
# Make sure label setting works too
labels2 = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
exp_values = np.empty((6,), dtype=object)
exp_values[:] = [(long(1), 'a')] * 6
# Must be 1d array of tuples
assert exp_values.shape == (6,)
new_values = mi2.set_labels(labels2).values
# Not inplace shouldn't change
tm.assert_almost_equal(mi2._tuples, vals2)
# Should have correct values
tm.assert_almost_equal(exp_values, new_values)
# ...and again setting inplace should kill _tuples, etc
mi2.set_labels(labels2, inplace=True)
tm.assert_almost_equal(mi2.values, new_values)
def test_copy_in_constructor(self):
levels = np.array(["a", "b", "c"])
labels = np.array([1, 1, 2, 0, 0, 1, 1])
val = labels[0]
mi = MultiIndex(levels=[levels, levels], labels=[labels, labels],
copy=True)
assert mi.labels[0][0] == val
labels[0] = 15
assert mi.labels[0][0] == val
val = levels[0]
levels[0] = "PANDA"
assert mi.levels[0][0] == val
def test_set_value_keeps_names(self):
# motivating example from #3742
lev1 = ['hans', 'hans', 'hans', 'grethe', 'grethe', 'grethe']
lev2 = ['1', '2', '3'] * 2
idx = pd.MultiIndex.from_arrays([lev1, lev2], names=['Name', 'Number'])
df = pd.DataFrame(
np.random.randn(6, 4),
columns=['one', 'two', 'three', 'four'],
index=idx)
df = df.sort_index()
assert df._is_copy is None
assert df.index.names == ('Name', 'Number')
df.at[('grethe', '4'), 'one'] = 99.34
assert df._is_copy is None
assert df.index.names == ('Name', 'Number')
def test_copy_names(self):
# Check that adding a "names" parameter to the copy is honored
# GH14302
multi_idx = pd.Index([(1, 2), (3, 4)], names=['MyName1', 'MyName2'])
multi_idx1 = multi_idx.copy()
assert multi_idx.equals(multi_idx1)
assert multi_idx.names == ['MyName1', 'MyName2']
assert multi_idx1.names == ['MyName1', 'MyName2']
multi_idx2 = multi_idx.copy(names=['NewName1', 'NewName2'])
assert multi_idx.equals(multi_idx2)
assert multi_idx.names == ['MyName1', 'MyName2']
assert multi_idx2.names == ['NewName1', 'NewName2']
multi_idx3 = multi_idx.copy(name=['NewName1', 'NewName2'])
assert multi_idx.equals(multi_idx3)
assert multi_idx.names == ['MyName1', 'MyName2']
assert multi_idx3.names == ['NewName1', 'NewName2']
def test_names(self):
# names are assigned in setup
names = self.index_names
level_names = [level.name for level in self.index.levels]
assert names == level_names
# setting bad names on existing
index = self.index
tm.assert_raises_regex(ValueError, "^Length of names",
setattr, index, "names",
list(index.names) + ["third"])
tm.assert_raises_regex(ValueError, "^Length of names",
setattr, index, "names", [])
# initializing with bad names (should always be equivalent)
major_axis, minor_axis = self.index.levels
major_labels, minor_labels = self.index.labels
tm.assert_raises_regex(ValueError, "^Length of names", MultiIndex,
levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels],
names=['first'])
tm.assert_raises_regex(ValueError, "^Length of names", MultiIndex,
levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels],
names=['first', 'second', 'third'])
# names are assigned
index.names = ["a", "b"]
ind_names = list(index.names)
level_names = [level.name for level in index.levels]
assert ind_names == level_names
def test_astype(self):
expected = self.index.copy()
actual = self.index.astype('O')
assert_copy(actual.levels, expected.levels)
assert_copy(actual.labels, expected.labels)
self.check_level_names(actual, expected.names)
with tm.assert_raises_regex(TypeError, "^Setting.*dtype.*object"):
self.index.astype(np.dtype(int))
@pytest.mark.parametrize('ordered', [True, False])
def test_astype_category(self, ordered):
# GH 18630
msg = '> 1 ndim Categorical are not supported at this time'
with tm.assert_raises_regex(NotImplementedError, msg):
self.index.astype(CategoricalDtype(ordered=ordered))
if ordered is False:
# dtype='category' defaults to ordered=False, so only test once
with tm.assert_raises_regex(NotImplementedError, msg):
self.index.astype('category')
def test_constructor_single_level(self):
result = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']],
labels=[[0, 1, 2, 3]], names=['first'])
assert isinstance(result, MultiIndex)
expected = Index(['foo', 'bar', 'baz', 'qux'], name='first')
tm.assert_index_equal(result.levels[0], expected)
assert result.names == ['first']
def test_constructor_no_levels(self):
tm.assert_raises_regex(ValueError, "non-zero number "
"of levels/labels",
MultiIndex, levels=[], labels=[])
both_re = re.compile('Must pass both levels and labels')
with tm.assert_raises_regex(TypeError, both_re):
MultiIndex(levels=[])
with tm.assert_raises_regex(TypeError, both_re):
MultiIndex(labels=[])
def test_constructor_mismatched_label_levels(self):
labels = [np.array([1]), np.array([2]), np.array([3])]
levels = ["a"]
tm.assert_raises_regex(ValueError, "Length of levels and labels "
"must be the same", MultiIndex,
levels=levels, labels=labels)
length_error = re.compile('>= length of level')
label_error = re.compile(r'Unequal label lengths: \[4, 2\]')
# important to check that it's looking at the right thing.
with tm.assert_raises_regex(ValueError, length_error):
MultiIndex(levels=[['a'], ['b']],
labels=[[0, 1, 2, 3], [0, 3, 4, 1]])
with tm.assert_raises_regex(ValueError, label_error):
MultiIndex(levels=[['a'], ['b']], labels=[[0, 0, 0, 0], [0, 0]])
# external API
with tm.assert_raises_regex(ValueError, length_error):
self.index.copy().set_levels([['a'], ['b']])
with tm.assert_raises_regex(ValueError, label_error):
self.index.copy().set_labels([[0, 0, 0, 0], [0, 0]])
def test_constructor_nonhashable_names(self):
# GH 20527
levels = [[1, 2], [u'one', u'two']]
labels = [[0, 0, 1, 1], [0, 1, 0, 1]]
names = ((['foo'], ['bar']))
message = "MultiIndex.name must be a hashable type"
tm.assert_raises_regex(TypeError, message,
MultiIndex, levels=levels,
labels=labels, names=names)
# With .rename()
mi = MultiIndex(levels=[[1, 2], [u'one', u'two']],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=('foo', 'bar'))
renamed = [['foor'], ['barr']]
tm.assert_raises_regex(TypeError, message, mi.rename, names=renamed)
# With .set_names()
tm.assert_raises_regex(TypeError, message, mi.set_names, names=renamed)
@pytest.mark.parametrize('names', [['a', 'b', 'a'], ['1', '1', '2'],
['1', 'a', '1']])
def test_duplicate_level_names(self, names):
# GH18872
pytest.raises(ValueError, pd.MultiIndex.from_product,
[[0, 1]] * 3, names=names)
# With .rename()
mi = pd.MultiIndex.from_product([[0, 1]] * 3)
tm.assert_raises_regex(ValueError, "Duplicated level name:",
mi.rename, names)
# With .rename(., level=)
mi.rename(names[0], level=1, inplace=True)
tm.assert_raises_regex(ValueError, "Duplicated level name:",
mi.rename, names[:2], level=[0, 2])
def assert_multiindex_copied(self, copy, original):
# Levels should be (at least, shallow copied)
tm.assert_copy(copy.levels, original.levels)
tm.assert_almost_equal(copy.labels, original.labels)
# Labels doesn't matter which way copied
tm.assert_almost_equal(copy.labels, original.labels)
assert copy.labels is not original.labels
# Names doesn't matter which way copied
assert copy.names == original.names
assert copy.names is not original.names
# Sort order should be copied
assert copy.sortorder == original.sortorder
def test_copy(self):
i_copy = self.index.copy()
self.assert_multiindex_copied(i_copy, self.index)
def test_shallow_copy(self):
i_copy = self.index._shallow_copy()
self.assert_multiindex_copied(i_copy, self.index)
def test_view(self):
i_view = self.index.view()
self.assert_multiindex_copied(i_view, self.index)
def check_level_names(self, index, names):
assert [level.name for level in index.levels] == list(names)
def test_changing_names(self):
# names should be applied to levels
level_names = [level.name for level in self.index.levels]
self.check_level_names(self.index, self.index.names)
view = self.index.view()
copy = self.index.copy()
shallow_copy = self.index._shallow_copy()
# changing names should change level names on object
new_names = [name + "a" for name in self.index.names]
self.index.names = new_names
self.check_level_names(self.index, new_names)
# but not on copies
self.check_level_names(view, level_names)
self.check_level_names(copy, level_names)
self.check_level_names(shallow_copy, level_names)
# and copies shouldn't change original
shallow_copy.names = [name + "c" for name in shallow_copy.names]
self.check_level_names(self.index, new_names)
def test_get_level_number_integer(self):
self.index.names = [1, 0]
assert self.index._get_level_number(1) == 0
assert self.index._get_level_number(0) == 1
pytest.raises(IndexError, self.index._get_level_number, 2)
tm.assert_raises_regex(KeyError, 'Level fourth not found',
self.index._get_level_number, 'fourth')
def test_from_arrays(self):
arrays = []
for lev, lab in zip(self.index.levels, self.index.labels):
arrays.append(np.asarray(lev).take(lab))
# list of arrays as input
result = MultiIndex.from_arrays(arrays, names=self.index.names)
tm.assert_index_equal(result, self.index)
# infer correctly
result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')],
['a', 'b']])
assert result.levels[0].equals(Index([Timestamp('20130101')]))
assert result.levels[1].equals(Index(['a', 'b']))
def test_from_arrays_iterator(self):
# GH 18434
arrays = []
for lev, lab in zip(self.index.levels, self.index.labels):
arrays.append(np.asarray(lev).take(lab))
# iterator as input
result = MultiIndex.from_arrays(iter(arrays), names=self.index.names)
tm.assert_index_equal(result, self.index)
# invalid iterator input
with tm.assert_raises_regex(
TypeError, "Input must be a list / sequence of array-likes."):
MultiIndex.from_arrays(0)
def test_from_arrays_index_series_datetimetz(self):
idx1 = pd.date_range('2015-01-01 10:00', freq='D', periods=3,
tz='US/Eastern')
idx2 = pd.date_range('2015-01-01 10:00', freq='H', periods=3,
tz='Asia/Tokyo')
result = pd.MultiIndex.from_arrays([idx1, idx2])
tm.assert_index_equal(result.get_level_values(0), idx1)
tm.assert_index_equal(result.get_level_values(1), idx2)
result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
tm.assert_index_equal(result2.get_level_values(0), idx1)
tm.assert_index_equal(result2.get_level_values(1), idx2)
tm.assert_index_equal(result, result2)
def test_from_arrays_index_series_timedelta(self):
idx1 = pd.timedelta_range('1 days', freq='D', periods=3)
idx2 = pd.timedelta_range('2 hours', freq='H', periods=3)
result = pd.MultiIndex.from_arrays([idx1, idx2])
tm.assert_index_equal(result.get_level_values(0), idx1)
tm.assert_index_equal(result.get_level_values(1), idx2)
result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
tm.assert_index_equal(result2.get_level_values(0), idx1)
tm.assert_index_equal(result2.get_level_values(1), idx2)
tm.assert_index_equal(result, result2)
def test_from_arrays_index_series_period(self):
idx1 = pd.period_range('2011-01-01', freq='D', periods=3)
idx2 = pd.period_range('2015-01-01', freq='H', periods=3)
result = pd.MultiIndex.from_arrays([idx1, idx2])
tm.assert_index_equal(result.get_level_values(0), idx1)
tm.assert_index_equal(result.get_level_values(1), idx2)
result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
tm.assert_index_equal(result2.get_level_values(0), idx1)
tm.assert_index_equal(result2.get_level_values(1), idx2)
tm.assert_index_equal(result, result2)
def test_from_arrays_index_datetimelike_mixed(self):
idx1 = pd.date_range('2015-01-01 10:00', freq='D', periods=3,
tz='US/Eastern')
idx2 = pd.date_range('2015-01-01 10:00', freq='H', periods=3)
idx3 = pd.timedelta_range('1 days', freq='D', periods=3)
idx4 = pd.period_range('2011-01-01', freq='D', periods=3)
result = pd.MultiIndex.from_arrays([idx1, idx2, idx3, idx4])
tm.assert_index_equal(result.get_level_values(0), idx1)
tm.assert_index_equal(result.get_level_values(1), idx2)
tm.assert_index_equal(result.get_level_values(2), idx3)
tm.assert_index_equal(result.get_level_values(3), idx4)
result2 = pd.MultiIndex.from_arrays([pd.Series(idx1),
pd.Series(idx2),
pd.Series(idx3),
pd.Series(idx4)])
tm.assert_index_equal(result2.get_level_values(0), idx1)
tm.assert_index_equal(result2.get_level_values(1), idx2)
tm.assert_index_equal(result2.get_level_values(2), idx3)
tm.assert_index_equal(result2.get_level_values(3), idx4)
tm.assert_index_equal(result, result2)
def test_from_arrays_index_series_categorical(self):
# GH13743
idx1 = pd.CategoricalIndex(list("abcaab"), categories=list("bac"),
ordered=False)
idx2 = pd.CategoricalIndex(list("abcaab"), categories=list("bac"),
ordered=True)
result = pd.MultiIndex.from_arrays([idx1, idx2])
tm.assert_index_equal(result.get_level_values(0), idx1)
tm.assert_index_equal(result.get_level_values(1), idx2)
result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
tm.assert_index_equal(result2.get_level_values(0), idx1)
tm.assert_index_equal(result2.get_level_values(1), idx2)
result3 = pd.MultiIndex.from_arrays([idx1.values, idx2.values])
tm.assert_index_equal(result3.get_level_values(0), idx1)
tm.assert_index_equal(result3.get_level_values(1), idx2)
def test_from_arrays_empty(self):
# 0 levels
with tm.assert_raises_regex(
ValueError, "Must pass non-zero number of levels/labels"):
MultiIndex.from_arrays(arrays=[])
# 1 level
result = MultiIndex.from_arrays(arrays=[[]], names=['A'])
assert isinstance(result, MultiIndex)
expected = Index([], name='A')
tm.assert_index_equal(result.levels[0], expected)
# N levels
for N in [2, 3]:
arrays = [[]] * N
names = list('ABC')[:N]
result = MultiIndex.from_arrays(arrays=arrays, names=names)
expected = MultiIndex(levels=[[]] * N, labels=[[]] * N,
names=names)
tm.assert_index_equal(result, expected)
def test_from_arrays_invalid_input(self):
invalid_inputs = [1, [1], [1, 2], [[1], 2],
'a', ['a'], ['a', 'b'], [['a'], 'b']]
for i in invalid_inputs:
pytest.raises(TypeError, MultiIndex.from_arrays, arrays=i)
def test_from_arrays_different_lengths(self):
# see gh-13599
idx1 = [1, 2, 3]
idx2 = ['a', 'b']
tm.assert_raises_regex(ValueError, '^all arrays must '
'be same length$',
MultiIndex.from_arrays, [idx1, idx2])
idx1 = []
idx2 = ['a', 'b']
tm.assert_raises_regex(ValueError, '^all arrays must '
'be same length$',
MultiIndex.from_arrays, [idx1, idx2])
idx1 = [1, 2, 3]
idx2 = []
tm.assert_raises_regex(ValueError, '^all arrays must '
'be same length$',
MultiIndex.from_arrays, [idx1, idx2])
def test_from_product(self):
first = ['foo', 'bar', 'buz']
second = ['a', 'b', 'c']
names = ['first', 'second']
result = MultiIndex.from_product([first, second], names=names)
tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'),
('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'),
('buz', 'c')]
expected = MultiIndex.from_tuples(tuples, names=names)
tm.assert_index_equal(result, expected)
def test_from_product_iterator(self):
# GH 18434
first = ['foo', 'bar', 'buz']
second = ['a', 'b', 'c']
names = ['first', 'second']
tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'),
('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'),
('buz', 'c')]
expected = MultiIndex.from_tuples(tuples, names=names)
# iterator as input
result = MultiIndex.from_product(iter([first, second]), names=names)
tm.assert_index_equal(result, expected)
# Invalid non-iterable input
with tm.assert_raises_regex(
TypeError, "Input must be a list / sequence of iterables."):
MultiIndex.from_product(0)
def test_from_product_empty(self):
# 0 levels
with tm.assert_raises_regex(
ValueError, "Must pass non-zero number of levels/labels"):
MultiIndex.from_product([])
# 1 level
result = MultiIndex.from_product([[]], names=['A'])
expected = pd.Index([], name='A')
tm.assert_index_equal(result.levels[0], expected)
# 2 levels
l1 = [[], ['foo', 'bar', 'baz'], []]
l2 = [[], [], ['a', 'b', 'c']]
names = ['A', 'B']
for first, second in zip(l1, l2):
result = MultiIndex.from_product([first, second], names=names)
expected = MultiIndex(levels=[first, second],
labels=[[], []], names=names)
tm.assert_index_equal(result, expected)
# GH12258
names = ['A', 'B', 'C']
for N in range(4):
lvl2 = lrange(N)
result = MultiIndex.from_product([[], lvl2, []], names=names)
expected = MultiIndex(levels=[[], lvl2, []],
labels=[[], [], []], names=names)
tm.assert_index_equal(result, expected)
def test_from_product_invalid_input(self):
invalid_inputs = [1, [1], [1, 2], [[1], 2],
'a', ['a'], ['a', 'b'], [['a'], 'b']]
for i in invalid_inputs:
pytest.raises(TypeError, MultiIndex.from_product, iterables=i)
def test_from_product_datetimeindex(self):
dt_index = date_range('2000-01-01', periods=2)
mi = pd.MultiIndex.from_product([[1, 2], dt_index])
etalon = construct_1d_object_array_from_listlike([(1, pd.Timestamp(
'2000-01-01')), (1, pd.Timestamp('2000-01-02')), (2, pd.Timestamp(
'2000-01-01')), (2, pd.Timestamp('2000-01-02'))])
tm.assert_numpy_array_equal(mi.values, etalon)
def test_from_product_index_series_categorical(self):
# GH13743
first = ['foo', 'bar']
for ordered in [False, True]:
idx = pd.CategoricalIndex(list("abcaab"), categories=list("bac"),
ordered=ordered)
expected = pd.CategoricalIndex(list("abcaab") + list("abcaab"),
categories=list("bac"),
ordered=ordered)
for arr in [idx, pd.Series(idx), idx.values]:
result = pd.MultiIndex.from_product([first, arr])
tm.assert_index_equal(result.get_level_values(1), expected)
def test_values_boxed(self):
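        # .values on a MultiIndex with datetime-like entries should box
        # Timestamps/NaT inside an object array of tuples.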
tuples = [(1, pd.Timestamp('2000-01-01')), (2, pd.NaT),
(3, pd.Timestamp('2000-01-03')),
(1, pd.Timestamp('2000-01-04')),
(2, pd.Timestamp('2000-01-02')),
(3, pd.Timestamp('2000-01-03'))]
result = pd.MultiIndex.from_tuples(tuples)
expected = construct_1d_object_array_from_listlike(tuples)
tm.assert_numpy_array_equal(result.values, expected)
# Check that code branches for boxed values produce identical results
tm.assert_numpy_array_equal(result.values[:4], result[:4].values)
def test_values_multiindex_datetimeindex(self):
# Test to ensure we hit the boxing / nobox part of MI.values
ints = np.arange(10 ** 18, 10 ** 18 + 5)
naive = pd.DatetimeIndex(ints)
aware = pd.DatetimeIndex(ints, tz='US/Central')
idx = pd.MultiIndex.from_arrays([naive, aware])
result = idx.values
outer = pd.DatetimeIndex([x[0] for x in result])
tm.assert_index_equal(outer, naive)
inner = pd.DatetimeIndex([x[1] for x in result])
tm.assert_index_equal(inner, aware)
# n_lev > n_lab
result = idx[:2].values
outer = pd.DatetimeIndex([x[0] for x in result])
tm.assert_index_equal(outer, naive[:2])
inner = pd.DatetimeIndex([x[1] for x in result])
tm.assert_index_equal(inner, aware[:2])
def test_values_multiindex_periodindex(self):
# Test to ensure we hit the boxing / nobox part of MI.values
ints = np.arange(2007, 2012)
pidx = pd.PeriodIndex(ints, freq='D')
idx = pd.MultiIndex.from_arrays([ints, pidx])
result = idx.values
outer = pd.Int64Index([x[0] for x in result])
tm.assert_index_equal(outer, pd.Int64Index(ints))
inner = pd.PeriodIndex([x[1] for x in result])
tm.assert_index_equal(inner, pidx)
# n_lev > n_lab
result = idx[:2].values
outer = pd.Int64Index([x[0] for x in result])
tm.assert_index_equal(outer, pd.Int64Index(ints[:2]))
inner = pd.PeriodIndex([x[1] for x in result])
tm.assert_index_equal(inner, pidx[:2])
def test_append(self):
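        # appending the two halves (or a list of pieces) reproduces the
        # original index; appending an empty list is a no-op.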
result = self.index[:3].append(self.index[3:])
assert result.equals(self.index)
foos = [self.index[:1], self.index[1:3], self.index[3:]]
result = foos[0].append(foos[1:])
assert result.equals(self.index)
# empty
result = self.index.append([])
assert result.equals(self.index)
def test_append_mixed_dtypes(self):
# GH 13660
        dti = date_range('2011-01-01', freq='M', periods=3)
dti_tz = date_range('2011-01-01', freq='M', periods=3, tz='US/Eastern')
pi = period_range('2011-01', freq='M', periods=3)
mi = MultiIndex.from_arrays([[1, 2, 3],
[1.1, np.nan, 3.3],
['a', 'b', 'c'],
dti, dti_tz, pi])
assert mi.nlevels == 6
res = mi.append(mi)
exp = MultiIndex.from_arrays([[1, 2, 3, 1, 2, 3],
[1.1, np.nan, 3.3, 1.1, np.nan, 3.3],
['a', 'b', 'c', 'a', 'b', 'c'],
dti.append(dti),
dti_tz.append(dti_tz),
pi.append(pi)])
tm.assert_index_equal(res, exp)
other = MultiIndex.from_arrays([['x', 'y', 'z'], ['x', 'y', 'z'],
['x', 'y', 'z'], ['x', 'y', 'z'],
['x', 'y', 'z'], ['x', 'y', 'z']])
res = mi.append(other)
exp = MultiIndex.from_arrays([[1, 2, 3, 'x', 'y', 'z'],
[1.1, np.nan, 3.3, 'x', 'y', 'z'],
['a', 'b', 'c', 'x', 'y', 'z'],
dti.append(pd.Index(['x', 'y', 'z'])),
dti_tz.append(pd.Index(['x', 'y', 'z'])),
pi.append(pd.Index(['x', 'y', 'z']))])
tm.assert_index_equal(res, exp)
def test_get_level_values(self):
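        # positional and name-based lookups should return the same level
        # values, carrying the level name along.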
result = self.index.get_level_values(0)
expected = Index(['foo', 'foo', 'bar', 'baz', 'qux', 'qux'],
name='first')
tm.assert_index_equal(result, expected)
assert result.name == 'first'
result = self.index.get_level_values('first')
expected = self.index.get_level_values(0)
tm.assert_index_equal(result, expected)
# GH 10460
index = MultiIndex(
levels=[CategoricalIndex(['A', 'B']),
CategoricalIndex([1, 2, 3])],
labels=[np.array([0, 0, 0, 1, 1, 1]),
np.array([0, 1, 2, 0, 1, 2])])
exp = CategoricalIndex(['A', 'A', 'A', 'B', 'B', 'B'])
tm.assert_index_equal(index.get_level_values(0), exp)
exp = CategoricalIndex([1, 2, 3, 1, 2, 3])
tm.assert_index_equal(index.get_level_values(1), exp)
def test_get_level_values_int_with_na(self):
# GH 17924
arrays = [['a', 'b', 'b'], [1, np.nan, 2]]
index = pd.MultiIndex.from_arrays(arrays)
result = index.get_level_values(1)
expected = Index([1, np.nan, 2])
tm.assert_index_equal(result, expected)
arrays = [['a', 'b', 'b'], [np.nan, np.nan, 2]]
index = pd.MultiIndex.from_arrays(arrays)
result = index.get_level_values(1)
expected = Index([np.nan, np.nan, 2])
tm.assert_index_equal(result, expected)
def test_get_level_values_na(self):
arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]]
index = pd.MultiIndex.from_arrays(arrays)
result = index.get_level_values(0)
expected = pd.Index([np.nan, np.nan, np.nan])
tm.assert_index_equal(result, expected)
result = index.get_level_values(1)
expected = pd.Index(['a', np.nan, 1])
tm.assert_index_equal(result, expected)
arrays = [['a', 'b', 'b'], pd.DatetimeIndex([0, 1, pd.NaT])]
index = pd.MultiIndex.from_arrays(arrays)
result = index.get_level_values(1)
expected = pd.DatetimeIndex([0, 1, pd.NaT])
tm.assert_index_equal(result, expected)
arrays = [[], []]
index = pd.MultiIndex.from_arrays(arrays)
result = index.get_level_values(0)
expected = pd.Index([], dtype=object)
tm.assert_index_equal(result, expected)
def test_get_level_values_all_na(self):
# GH 17924 when level entirely consists of nan
arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]]
index = pd.MultiIndex.from_arrays(arrays)
result = index.get_level_values(0)
expected = pd.Index([np.nan, np.nan, np.nan], dtype=np.float64)
tm.assert_index_equal(result, expected)
result = index.get_level_values(1)
expected = pd.Index(['a', np.nan, 1], dtype=object)
tm.assert_index_equal(result, expected)
def test_reorder_levels(self):
        # reordering with more levels than exist should raise IndexError
tm.assert_raises_regex(IndexError, '^Too many levels',
self.index.reorder_levels, [2, 1, 0])
def test_nlevels(self):
assert self.index.nlevels == 2
def test_iter(self):
result = list(self.index)
expected = [('foo', 'one'), ('foo', 'two'), ('bar', 'one'),
('baz', 'two'), ('qux', 'one'), ('qux', 'two')]
assert result == expected
def test_legacy_pickle(self):
        if PY3:
            pytest.skip("testing for legacy pickles not "
                        "supported on py3")
path = tm.get_data_path('multiindex_v1.pickle')
obj = pd.read_pickle(path)
obj2 = MultiIndex.from_tuples(obj.values)
assert obj.equals(obj2)
res = obj.get_indexer(obj)
exp = np.arange(len(obj), dtype=np.intp)
assert_almost_equal(res, exp)
res = obj.get_indexer(obj2[::-1])
exp = obj.get_indexer(obj[::-1])
exp2 = obj2.get_indexer(obj2[::-1])
assert_almost_equal(res, exp)
assert_almost_equal(exp, exp2)
def test_legacy_v2_unpickle(self):
# 0.7.3 -> 0.8.0 format manage
path = tm.get_data_path('mindex_073.pickle')
obj = pd.read_pickle(path)
obj2 = MultiIndex.from_tuples(obj.values)
assert obj.equals(obj2)
res = obj.get_indexer(obj)
exp = np.arange(len(obj), dtype=np.intp)
assert_almost_equal(res, exp)
res = obj.get_indexer(obj2[::-1])
exp = obj.get_indexer(obj[::-1])
exp2 = obj2.get_indexer(obj2[::-1])
assert_almost_equal(res, exp)
assert_almost_equal(exp, exp2)
def test_roundtrip_pickle_with_tz(self):
# GH 8367
# round-trip of timezone
index = MultiIndex.from_product(
[[1, 2], ['a', 'b'], date_range('20130101', periods=3,
tz='US/Eastern')
], names=['one', 'two', 'three'])
unpickled = tm.round_trip_pickle(index)
assert index.equal_levels(unpickled)
def test_from_tuples_index_values(self):
result = MultiIndex.from_tuples(self.index)
assert (result.values == self.index.values).all()
def test_contains(self):
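        # membership is checked against complete key tuples here;
        # partial (top-level) keys are covered by the next test.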
assert ('foo', 'two') in self.index
assert ('bar', 'two') not in self.index
assert None not in self.index
def test_contains_top_level(self):
midx = MultiIndex.from_product([['A', 'B'], [1, 2]])
assert 'A' in midx
assert 'A' not in midx._engine
def test_contains_with_nat(self):
# MI with a NaT
mi = MultiIndex(levels=[['C'],
pd.date_range('2012-01-01', periods=5)],
labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]],
names=[None, 'B'])
assert ('C', pd.Timestamp('2012-01-01')) in mi
for val in mi.values:
assert val in mi
def test_is_all_dates(self):
assert not self.index.is_all_dates
def test_is_numeric(self):
# MultiIndex is never numeric
assert not self.index.is_numeric()
def test_getitem(self):
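        # scalar, slice and boolean-mask indexing on a MultiIndex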
# scalar
assert self.index[2] == ('bar', 'one')
# slice
result = self.index[2:5]
expected = self.index[[2, 3, 4]]
assert result.equals(expected)
# boolean
result = self.index[[True, False, True, False, True, True]]
result2 = self.index[np.array([True, False, True, False, True, True])]
expected = self.index[[0, 2, 4, 5]]
assert result.equals(expected)
assert result2.equals(expected)
def test_getitem_group_select(self):
sorted_idx, _ = self.index.sortlevel(0)
assert sorted_idx.get_loc('baz') == slice(3, 4)
assert sorted_idx.get_loc('foo') == slice(0, 2)
def test_get_loc(self):
assert self.index.get_loc(('foo', 'two')) == 1
assert self.index.get_loc(('baz', 'two')) == 3
pytest.raises(KeyError, self.index.get_loc, ('bar', 'two'))
pytest.raises(KeyError, self.index.get_loc, 'quux')
pytest.raises(NotImplementedError, self.index.get_loc, 'foo',
method='nearest')
# 3 levels
index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
pytest.raises(KeyError, index.get_loc, (1, 1))
assert index.get_loc((2, 0)) == slice(3, 5)
def test_get_loc_duplicates(self):
index = Index([2, 2, 2, 2])
result = index.get_loc(2)
expected = slice(0, 4)
assert result == expected
# pytest.raises(Exception, index.get_loc, 2)
index = Index(['c', 'a', 'a', 'b', 'b'])
rs = index.get_loc('c')
xp = 0
assert rs == xp
def test_get_value_duplicates(self):
index = MultiIndex(levels=[['D', 'B', 'C'],
[0, 26, 27, 37, 57, 67, 75, 82]],
labels=[[0, 0, 0, 1, 2, 2, 2, 2, 2, 2],
[1, 3, 4, 6, 0, 2, 2, 3, 5, 7]],
names=['tag', 'day'])
assert index.get_loc('D') == slice(0, 3)
with pytest.raises(KeyError):
index._engine.get_value(np.array([]), 'D')
def test_get_loc_level(self):
index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
loc, new_index = index.get_loc_level((0, 1))
expected = slice(1, 2)
exp_index = index[expected].droplevel(0).droplevel(0)
assert loc == expected
assert new_index.equals(exp_index)
loc, new_index = index.get_loc_level((0, 1, 0))
expected = 1
assert loc == expected
assert new_index is None
pytest.raises(KeyError, index.get_loc_level, (2, 2))
index = MultiIndex(levels=[[2000], lrange(4)], labels=[np.array(
[0, 0, 0, 0]), np.array([0, 1, 2, 3])])
result, new_index = index.get_loc_level((2000, slice(None, None)))
expected = slice(None, None)
assert result == expected
assert new_index.equals(index.droplevel(0))
@pytest.mark.parametrize('level', [0, 1])
@pytest.mark.parametrize('null_val', [np.nan, pd.NaT, None])
def test_get_loc_nan(self, level, null_val):
# GH 18485 : NaN in MultiIndex
levels = [['a', 'b'], ['c', 'd']]
key = ['b', 'd']
levels[level] = np.array([0, null_val], dtype=type(null_val))
key[level] = null_val
idx = MultiIndex.from_product(levels)
assert idx.get_loc(tuple(key)) == 3
def test_get_loc_missing_nan(self):
# GH 8569
idx = MultiIndex.from_arrays([[1.0, 2.0], [3.0, 4.0]])
assert isinstance(idx.get_loc(1), slice)
pytest.raises(KeyError, idx.get_loc, 3)
pytest.raises(KeyError, idx.get_loc, np.nan)
pytest.raises(KeyError, idx.get_loc, [np.nan])
@pytest.mark.parametrize('dtype1', [int, float, bool, str])
@pytest.mark.parametrize('dtype2', [int, float, bool, str])
def test_get_loc_multiple_dtypes(self, dtype1, dtype2):
# GH 18520
levels = [np.array([0, 1]).astype(dtype1),
np.array([0, 1]).astype(dtype2)]
idx = pd.MultiIndex.from_product(levels)
assert idx.get_loc(idx[2]) == 2
@pytest.mark.parametrize('level', [0, 1])
@pytest.mark.parametrize('dtypes', [[int, float], [float, int]])
def test_get_loc_implicit_cast(self, level, dtypes):
# GH 18818, GH 15994 : as flat index, cast int to float and vice-versa
levels = [['a', 'b'], ['c', 'd']]
key = ['b', 'd']
lev_dtype, key_dtype = dtypes
levels[level] = np.array([0, 1], dtype=lev_dtype)
key[level] = key_dtype(1)
idx = MultiIndex.from_product(levels)
assert idx.get_loc(tuple(key)) == 3
def test_get_loc_cast_bool(self):
# GH 19086 : int is casted to bool, but not vice-versa
levels = [[False, True], np.arange(2, dtype='int64')]
idx = MultiIndex.from_product(levels)
assert idx.get_loc((0, 1)) == 1
assert idx.get_loc((1, 0)) == 2
pytest.raises(KeyError, idx.get_loc, (False, True))
pytest.raises(KeyError, idx.get_loc, (True, False))
def test_slice_locs(self):
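        # slice_locs maps label bounds (here, outer-level timestamps of a
        # stacked frame) to positional bounds usable in a plain slice.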
df = tm.makeTimeDataFrame()
stacked = df.stack()
idx = stacked.index
slob = slice(*idx.slice_locs(df.index[5], df.index[15]))
sliced = stacked[slob]
expected = df[5:16].stack()
tm.assert_almost_equal(sliced.values, expected.values)
slob = slice(*idx.slice_locs(df.index[5] + timedelta(seconds=30),
df.index[15] - timedelta(seconds=30)))
sliced = stacked[slob]
expected = df[6:15].stack()
tm.assert_almost_equal(sliced.values, expected.values)
def test_slice_locs_with_type_mismatch(self):
df = tm.makeTimeDataFrame()
stacked = df.stack()
idx = stacked.index
tm.assert_raises_regex(TypeError, '^Level type mismatch',
idx.slice_locs, (1, 3))
tm.assert_raises_regex(TypeError, '^Level type mismatch',
idx.slice_locs,
df.index[5] + timedelta(
seconds=30), (5, 2))
df = tm.makeCustomDataframe(5, 5)
stacked = df.stack()
idx = stacked.index
with tm.assert_raises_regex(TypeError, '^Level type mismatch'):
idx.slice_locs(timedelta(seconds=30))
# TODO: Try creating a UnicodeDecodeError in exception message
with tm.assert_raises_regex(TypeError, '^Level type mismatch'):
idx.slice_locs(df.index[1], (16, "a"))
def test_slice_locs_not_sorted(self):
index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
tm.assert_raises_regex(KeyError, "[Kk]ey length.*greater than "
"MultiIndex lexsort depth",
index.slice_locs, (1, 0, 1), (2, 1, 0))
# works
sorted_index, _ = index.sortlevel(0)
        # smoke test only: slice_locs should no longer raise once sorted
sorted_index.slice_locs((1, 0, 1), (2, 1, 0))
def test_slice_locs_partial(self):
sorted_idx, _ = self.index.sortlevel(0)
result = sorted_idx.slice_locs(('foo', 'two'), ('qux', 'one'))
assert result == (1, 5)
result = sorted_idx.slice_locs(None, ('qux', 'one'))
assert result == (0, 5)
result = sorted_idx.slice_locs(('foo', 'two'), None)
assert result == (1, len(sorted_idx))
result = sorted_idx.slice_locs('bar', 'baz')
assert result == (2, 4)
def test_slice_locs_not_contained(self):
        # keys not present in the index fall back to searchsorted positions
index = MultiIndex(levels=[[0, 2, 4, 6], [0, 2, 4]],
labels=[[0, 0, 0, 1, 1, 2, 3, 3, 3],
[0, 1, 2, 1, 2, 2, 0, 1, 2]], sortorder=0)
result = index.slice_locs((1, 0), (5, 2))
assert result == (3, 6)
result = index.slice_locs(1, 5)
assert result == (3, 6)
result = index.slice_locs((2, 2), (5, 2))
assert result == (3, 6)
result = index.slice_locs(2, 5)
assert result == (3, 6)
result = index.slice_locs((1, 0), (6, 3))
assert result == (3, 8)
result = index.slice_locs(-1, 10)
assert result == (0, len(index))
def test_consistency(self):
# need to construct an overflow
major_axis = lrange(70000)
minor_axis = lrange(10)
major_labels = np.arange(70000)
minor_labels = np.repeat(lrange(10), 7000)
        # the fact that it works means it's consistent
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
# inconsistent
major_labels = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3])
minor_labels = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1])
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
assert not index.is_unique
def test_truncate(self):
major_axis = Index(lrange(4))
minor_axis = Index(lrange(2))
major_labels = np.array([0, 0, 1, 2, 3, 3])
minor_labels = np.array([0, 1, 0, 1, 0, 1])
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
result = index.truncate(before=1)
        assert 0 not in result.levels[0]
assert 1 in result.levels[0]
result = index.truncate(after=1)
assert 2 not in result.levels[0]
assert 1 in result.levels[0]
result = index.truncate(before=1, after=2)
assert len(result.levels[0]) == 2
# after < before
pytest.raises(ValueError, index.truncate, 3, 1)
def test_get_indexer(self):
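        # get_indexer against another (Multi)Index, including the
        # 'pad'/'ffill' and 'backfill'/'bfill' fill methods.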
major_axis = Index(lrange(4))
minor_axis = Index(lrange(2))
major_labels = np.array([0, 0, 1, 2, 2, 3, 3], dtype=np.intp)
minor_labels = np.array([0, 1, 0, 0, 1, 0, 1], dtype=np.intp)
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
idx1 = index[:5]
idx2 = index[[1, 3, 5]]
r1 = idx1.get_indexer(idx2)
assert_almost_equal(r1, np.array([1, 3, -1], dtype=np.intp))
r1 = idx2.get_indexer(idx1, method='pad')
e1 = np.array([-1, 0, 0, 1, 1], dtype=np.intp)
assert_almost_equal(r1, e1)
r2 = idx2.get_indexer(idx1[::-1], method='pad')
assert_almost_equal(r2, e1[::-1])
rffill1 = idx2.get_indexer(idx1, method='ffill')
assert_almost_equal(r1, rffill1)
r1 = idx2.get_indexer(idx1, method='backfill')
e1 = np.array([0, 0, 1, 1, 2], dtype=np.intp)
assert_almost_equal(r1, e1)
r2 = idx2.get_indexer(idx1[::-1], method='backfill')
assert_almost_equal(r2, e1[::-1])
rbfill1 = idx2.get_indexer(idx1, method='bfill')
assert_almost_equal(r1, rbfill1)
# pass non-MultiIndex
r1 = idx1.get_indexer(idx2.values)
rexp1 = idx1.get_indexer(idx2)
assert_almost_equal(r1, rexp1)
r1 = idx1.get_indexer([1, 2, 3])
assert (r1 == [-1, -1, -1]).all()
# create index with duplicates
idx1 = Index(lrange(10) + lrange(10))
idx2 = Index(lrange(20))
msg = "Reindexing only valid with uniquely valued Index objects"
with tm.assert_raises_regex(InvalidIndexError, msg):
idx1.get_indexer(idx2)
def test_get_indexer_nearest(self):
midx = MultiIndex.from_tuples([('a', 1), ('b', 2)])
with pytest.raises(NotImplementedError):
midx.get_indexer(['a'], method='nearest')
with pytest.raises(NotImplementedError):
midx.get_indexer(['a'], method='pad', tolerance=2)
def test_hash_collisions(self):
# non-smoke test that we don't get hash collisions
index = MultiIndex.from_product([np.arange(1000), np.arange(1000)],
names=['one', 'two'])
result = index.get_indexer(index.values)
tm.assert_numpy_array_equal(result, np.arange(
len(index), dtype='intp'))
for i in [0, 1, len(index) - 2, len(index) - 1]:
result = index.get_loc(index[i])
assert result == i
def test_format(self):
self.index.format()
self.index[:0].format()
def test_format_integer_names(self):
index = MultiIndex(levels=[[0, 1], [0, 1]],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]], names=[0, 1])
index.format(names=True)
def test_format_sparse_display(self):
index = MultiIndex(levels=[[0, 1], [0, 1], [0, 1], [0]],
labels=[[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1],
[0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]])
result = index.format()
assert result[3] == '1 0 0 0'
def test_format_sparse_config(self):
warn_filters = warnings.filters
warnings.filterwarnings('ignore', category=FutureWarning,
module=".*format")
# GH1538
pd.set_option('display.multi_sparse', False)
result = self.index.format()
assert result[1] == 'foo two'
tm.reset_display_options()
warnings.filters = warn_filters
def test_to_frame(self):
tuples = [(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')]
index = MultiIndex.from_tuples(tuples)
result = index.to_frame(index=False)
expected = DataFrame(tuples)
tm.assert_frame_equal(result, expected)
result = index.to_frame()
expected.index = index
tm.assert_frame_equal(result, expected)
tuples = [(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')]
index = MultiIndex.from_tuples(tuples, names=['first', 'second'])
result = index.to_frame(index=False)
expected = DataFrame(tuples)
expected.columns = ['first', 'second']
tm.assert_frame_equal(result, expected)
result = index.to_frame()
expected.index = index
tm.assert_frame_equal(result, expected)
index = MultiIndex.from_product([range(5),
pd.date_range('20130101', periods=3)])
result = index.to_frame(index=False)
expected = DataFrame(
{0: np.repeat(np.arange(5, dtype='int64'), 3),
1: np.tile(pd.date_range('20130101', periods=3), 5)})
tm.assert_frame_equal(result, expected)
index = MultiIndex.from_product([range(5),
pd.date_range('20130101', periods=3)])
result = index.to_frame()
expected.index = index
tm.assert_frame_equal(result, expected)
def test_to_hierarchical(self):
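        # to_hierarchical(n_repeat[, n_shuffle]) repeats/reshuffles the
        # labels; the expected label arrays below spell out the exact layout.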
index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), (
2, 'two')])
result = index.to_hierarchical(3)
expected = MultiIndex(levels=[[1, 2], ['one', 'two']],
labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]])
tm.assert_index_equal(result, expected)
assert result.names == index.names
# K > 1
result = index.to_hierarchical(3, 2)
expected = MultiIndex(levels=[[1, 2], ['one', 'two']],
labels=[[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
tm.assert_index_equal(result, expected)
assert result.names == index.names
# non-sorted
index = MultiIndex.from_tuples([(2, 'c'), (1, 'b'),
(2, 'a'), (2, 'b')],
names=['N1', 'N2'])
result = index.to_hierarchical(2)
expected = MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'),
(1, 'b'),
(2, 'a'), (2, 'a'),
(2, 'b'), (2, 'b')],
names=['N1', 'N2'])
tm.assert_index_equal(result, expected)
assert result.names == index.names
def test_bounds(self):
self.index._bounds
def test_equals_multi(self):
assert self.index.equals(self.index)
assert not self.index.equals(self.index.values)
assert self.index.equals(Index(self.index.values))
assert self.index.equal_levels(self.index)
assert not self.index.equals(self.index[:-1])
assert not self.index.equals(self.index[-1])
# different number of levels
index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
index2 = MultiIndex(levels=index.levels[:-1], labels=index.labels[:-1])
assert not index.equals(index2)
assert not index.equal_levels(index2)
# levels are different
major_axis = Index(lrange(4))
minor_axis = Index(lrange(2))
major_labels = np.array([0, 0, 1, 2, 2, 3])
minor_labels = np.array([0, 1, 0, 0, 1, 0])
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
assert not self.index.equals(index)
assert not self.index.equal_levels(index)
# some of the labels are different
major_axis = Index(['foo', 'bar', 'baz', 'qux'])
minor_axis = Index(['one', 'two'])
major_labels = np.array([0, 0, 2, 2, 3, 3])
minor_labels = np.array([0, 1, 0, 1, 0, 1])
index = MultiIndex(levels=[major_axis, minor_axis],
labels=[major_labels, minor_labels])
assert not self.index.equals(index)
def test_equals_missing_values(self):
# make sure take is not using -1
i = pd.MultiIndex.from_tuples([(0, pd.NaT),
(0, pd.Timestamp('20130101'))])
result = i[0:1].equals(i[0])
assert not result
result = i[1:2].equals(i[1])
assert not result
def test_identical(self):
mi = self.index.copy()
mi2 = self.index.copy()
assert mi.identical(mi2)
mi = mi.set_names(['new1', 'new2'])
assert mi.equals(mi2)
assert not mi.identical(mi2)
mi2 = mi2.set_names(['new1', 'new2'])
assert mi.identical(mi2)
mi3 = Index(mi.tolist(), names=mi.names)
mi4 = Index(mi.tolist(), names=mi.names, tupleize_cols=False)
assert mi.identical(mi3)
assert not mi.identical(mi4)
assert mi.equals(mi4)
def test_is_(self):
mi = MultiIndex.from_tuples(lzip(range(10), range(10)))
assert mi.is_(mi)
assert mi.is_(mi.view())
assert mi.is_(mi.view().view().view().view())
mi2 = mi.view()
# names are metadata, they don't change id
mi2.names = ["A", "B"]
assert mi2.is_(mi)
assert mi.is_(mi2)
assert mi.is_(mi.set_names(["C", "D"]))
mi2 = mi.view()
mi2.set_names(["E", "F"], inplace=True)
assert mi.is_(mi2)
# levels are inherent properties, they change identity
mi3 = mi2.set_levels([lrange(10), lrange(10)])
assert not mi3.is_(mi2)
# shouldn't change
assert mi2.is_(mi)
mi4 = mi3.view()
# GH 17464 - Remove duplicate MultiIndex levels
mi4.set_levels([lrange(10), lrange(10)], inplace=True)
assert not mi4.is_(mi3)
mi5 = mi.view()
mi5.set_levels(mi5.levels, inplace=True)
assert not mi5.is_(mi)
def test_union(self):
piece1 = self.index[:5][::-1]
piece2 = self.index[3:]
the_union = piece1 | piece2
tups = sorted(self.index.values)
expected = MultiIndex.from_tuples(tups)
assert the_union.equals(expected)
# corner case, pass self or empty thing:
the_union = self.index.union(self.index)
assert the_union is self.index
the_union = self.index.union(self.index[:0])
assert the_union is self.index
# won't work in python 3
# tuples = self.index.values
# result = self.index[:4] | tuples[4:]
# assert result.equals(tuples)
# not valid for python 3
# def test_union_with_regular_index(self):
# other = Index(['A', 'B', 'C'])
# result = other.union(self.index)
# assert ('foo', 'one') in result
# assert 'B' in result
# result2 = self.index.union(other)
# assert result.equals(result2)
def test_intersection(self):
piece1 = self.index[:5][::-1]
piece2 = self.index[3:]
the_int = piece1 & piece2
tups = sorted(self.index[3:5].values)
expected = MultiIndex.from_tuples(tups)
assert the_int.equals(expected)
# corner case, pass self
the_int = self.index.intersection(self.index)
assert the_int is self.index
# empty intersection: disjoint
empty = self.index[:2] & self.index[2:]
expected = self.index[:0]
assert empty.equals(expected)
# can't do in python 3
# tuples = self.index.values
# result = self.index & tuples
# assert result.equals(tuples)
def test_sub(self):
first = self.index
# - now raises (previously was set op difference)
with pytest.raises(TypeError):
first - self.index[-3:]
with pytest.raises(TypeError):
self.index[-3:] - first
with pytest.raises(TypeError):
self.index[-3:] - first.tolist()
with pytest.raises(TypeError):
first.tolist() - self.index[-3:]
def test_difference(self):
first = self.index
result = first.difference(self.index[-3:])
expected = MultiIndex.from_tuples(sorted(self.index[:-3].values),
sortorder=0,
names=self.index.names)
assert isinstance(result, MultiIndex)
assert result.equals(expected)
assert result.names == self.index.names
# empty difference: reflexive
result = self.index.difference(self.index)
expected = self.index[:0]
assert result.equals(expected)
assert result.names == self.index.names
# empty difference: superset
result = self.index[-3:].difference(self.index)
expected = self.index[:0]
assert result.equals(expected)
assert result.names == self.index.names
# empty difference: degenerate
result = self.index[:0].difference(self.index)
expected = self.index[:0]
assert result.equals(expected)
assert result.names == self.index.names
# names not the same
chunklet = self.index[-3:]
chunklet.names = ['foo', 'baz']
result = first.difference(chunklet)
assert result.names == (None, None)
# empty, but non-equal
result = self.index.difference(self.index.sortlevel(1)[0])
assert len(result) == 0
# raise Exception called with non-MultiIndex
result = first.difference(first.values)
assert result.equals(first[:0])
# name from empty array
result = first.difference([])
assert first.equals(result)
assert first.names == result.names
# name from non-empty array
result = first.difference([('foo', 'one')])
expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), (
'foo', 'two'), ('qux', 'one'), ('qux', 'two')])
        expected.names = first.names
        tm.assert_index_equal(result, expected)
        assert first.names == result.names
tm.assert_raises_regex(TypeError, "other must be a MultiIndex "
"or a list of tuples",
first.difference, [1, 2, 3, 4, 5])
def test_from_tuples(self):
tm.assert_raises_regex(TypeError, 'Cannot infer number of levels '
'from empty list',
MultiIndex.from_tuples, [])
expected = MultiIndex(levels=[[1, 3], [2, 4]],
labels=[[0, 1], [0, 1]],
names=['a', 'b'])
# input tuples
result = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b'])
tm.assert_index_equal(result, expected)
def test_from_tuples_iterator(self):
# GH 18434
# input iterator for tuples
expected = MultiIndex(levels=[[1, 3], [2, 4]],
labels=[[0, 1], [0, 1]],
names=['a', 'b'])
result = MultiIndex.from_tuples(zip([1, 3], [2, 4]), names=['a', 'b'])
tm.assert_index_equal(result, expected)
# input non-iterables
with tm.assert_raises_regex(
TypeError, 'Input must be a list / sequence of tuple-likes.'):
MultiIndex.from_tuples(0)
def test_from_tuples_empty(self):
# GH 16777
result = MultiIndex.from_tuples([], names=['a', 'b'])
expected = MultiIndex.from_arrays(arrays=[[], []],
names=['a', 'b'])
tm.assert_index_equal(result, expected)
def test_argsort(self):
result = self.index.argsort()
expected = self.index.values.argsort()
tm.assert_numpy_array_equal(result, expected)
def test_sortlevel(self):
import random
tuples = list(self.index)
random.shuffle(tuples)
index = MultiIndex.from_tuples(tuples)
sorted_idx, _ = index.sortlevel(0)
expected = MultiIndex.from_tuples(sorted(tuples))
assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(0, ascending=False)
assert sorted_idx.equals(expected[::-1])
sorted_idx, _ = index.sortlevel(1)
by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
expected = MultiIndex.from_tuples(by1)
assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(1, ascending=False)
assert sorted_idx.equals(expected[::-1])
def test_sortlevel_not_sort_remaining(self):
mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
sorted_idx, _ = mi.sortlevel('A', sort_remaining=False)
assert sorted_idx.equals(mi)
def test_sortlevel_deterministic(self):
tuples = [('bar', 'one'), ('foo', 'two'), ('qux', 'two'),
('foo', 'one'), ('baz', 'two'), ('qux', 'one')]
index = MultiIndex.from_tuples(tuples)
sorted_idx, _ = index.sortlevel(0)
expected = MultiIndex.from_tuples(sorted(tuples))
assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(0, ascending=False)
assert sorted_idx.equals(expected[::-1])
sorted_idx, _ = index.sortlevel(1)
by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
expected = MultiIndex.from_tuples(by1)
assert sorted_idx.equals(expected)
sorted_idx, _ = index.sortlevel(1, ascending=False)
assert sorted_idx.equals(expected[::-1])
def test_dims(self):
pass
def test_drop(self):
dropped = self.index.drop([('foo', 'two'), ('qux', 'one')])
index = MultiIndex.from_tuples([('foo', 'two'), ('qux', 'one')])
dropped2 = self.index.drop(index)
expected = self.index[[0, 2, 3, 5]]
tm.assert_index_equal(dropped, expected)
tm.assert_index_equal(dropped2, expected)
dropped = self.index.drop(['bar'])
expected = self.index[[0, 1, 3, 4, 5]]
tm.assert_index_equal(dropped, expected)
dropped = self.index.drop('foo')
expected = self.index[[2, 3, 4, 5]]
tm.assert_index_equal(dropped, expected)
index = MultiIndex.from_tuples([('bar', 'two')])
pytest.raises(KeyError, self.index.drop, [('bar', 'two')])
pytest.raises(KeyError, self.index.drop, index)
pytest.raises(KeyError, self.index.drop, ['foo', 'two'])
# partially correct argument
mixed_index = MultiIndex.from_tuples([('qux', 'one'), ('bar', 'two')])
pytest.raises(KeyError, self.index.drop, mixed_index)
        # errors='ignore'
dropped = self.index.drop(index, errors='ignore')
expected = self.index[[0, 1, 2, 3, 4, 5]]
tm.assert_index_equal(dropped, expected)
dropped = self.index.drop(mixed_index, errors='ignore')
expected = self.index[[0, 1, 2, 3, 5]]
tm.assert_index_equal(dropped, expected)
dropped = self.index.drop(['foo', 'two'], errors='ignore')
expected = self.index[[2, 3, 4, 5]]
tm.assert_index_equal(dropped, expected)
# mixed partial / full drop
dropped = self.index.drop(['foo', ('qux', 'one')])
expected = self.index[[2, 3, 5]]
tm.assert_index_equal(dropped, expected)
        # mixed partial / full drop / errors='ignore'
mixed_index = ['foo', ('qux', 'one'), 'two']
pytest.raises(KeyError, self.index.drop, mixed_index)
dropped = self.index.drop(mixed_index, errors='ignore')
expected = self.index[[2, 3, 5]]
tm.assert_index_equal(dropped, expected)
def test_droplevel_with_names(self):
index = self.index[self.index.get_loc('foo')]
dropped = index.droplevel(0)
assert dropped.name == 'second'
index = MultiIndex(
levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))],
labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])],
names=['one', 'two', 'three'])
dropped = index.droplevel(0)
assert dropped.names == ('two', 'three')
dropped = index.droplevel('two')
expected = index.droplevel(1)
assert dropped.equals(expected)
def test_droplevel_list(self):
index = MultiIndex(
levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))],
labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])],
names=['one', 'two', 'three'])
dropped = index[:2].droplevel(['three', 'one'])
expected = index[:2].droplevel(2).droplevel(0)
assert dropped.equals(expected)
dropped = index[:2].droplevel([])
expected = index[:2]
assert dropped.equals(expected)
with pytest.raises(ValueError):
index[:2].droplevel(['one', 'two', 'three'])
with pytest.raises(KeyError):
index[:2].droplevel(['one', 'four'])
def test_drop_not_lexsorted(self):
# GH 12078
# define the lexsorted version of the multi-index
tuples = [('a', ''), ('b1', 'c1'), ('b2', 'c2')]
lexsorted_mi = MultiIndex.from_tuples(tuples, names=['b', 'c'])
assert lexsorted_mi.is_lexsorted()
# and the not-lexsorted version
df = pd.DataFrame(columns=['a', 'b', 'c', 'd'],
data=[[1, 'b1', 'c1', 3], [1, 'b2', 'c2', 4]])
df = df.pivot_table(index='a', columns=['b', 'c'], values='d')
df = df.reset_index()
not_lexsorted_mi = df.columns
assert not not_lexsorted_mi.is_lexsorted()
# compare the results
tm.assert_index_equal(lexsorted_mi, not_lexsorted_mi)
with tm.assert_produces_warning(PerformanceWarning):
tm.assert_index_equal(lexsorted_mi.drop('a'),
not_lexsorted_mi.drop('a'))
def test_insert(self):
# key contained in all levels
new_index = self.index.insert(0, ('bar', 'two'))
assert new_index.equal_levels(self.index)
assert new_index[0] == ('bar', 'two')
# key not contained in all levels
new_index = self.index.insert(0, ('abc', 'three'))
exp0 = Index(list(self.index.levels[0]) + ['abc'], name='first')
tm.assert_index_equal(new_index.levels[0], exp0)
exp1 = Index(list(self.index.levels[1]) + ['three'], name='second')
tm.assert_index_equal(new_index.levels[1], exp1)
assert new_index[0] == ('abc', 'three')
# key wrong length
msg = "Item must have length equal to number of levels"
with tm.assert_raises_regex(ValueError, msg):
self.index.insert(0, ('foo2',))
left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]],
columns=['1st', '2nd', '3rd'])
left.set_index(['1st', '2nd'], inplace=True)
ts = left['3rd'].copy(deep=True)
left.loc[('b', 'x'), '3rd'] = 2
left.loc[('b', 'a'), '3rd'] = -1
left.loc[('b', 'b'), '3rd'] = 3
left.loc[('a', 'x'), '3rd'] = 4
left.loc[('a', 'w'), '3rd'] = 5
left.loc[('a', 'a'), '3rd'] = 6
ts.loc[('b', 'x')] = 2
ts.loc['b', 'a'] = -1
ts.loc[('b', 'b')] = 3
ts.loc['a', 'x'] = 4
ts.loc[('a', 'w')] = 5
ts.loc['a', 'a'] = 6
right = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1], ['b', 'x', 2],
['b', 'a', -1], ['b', 'b', 3], ['a', 'x', 4],
['a', 'w', 5], ['a', 'a', 6]],
columns=['1st', '2nd', '3rd'])
right.set_index(['1st', '2nd'], inplace=True)
        # FIXME: data types change to float because
        # of intermediate NaN insertion;
tm.assert_frame_equal(left, right, check_dtype=False)
tm.assert_series_equal(ts, right['3rd'])
# GH9250
idx = [('test1', i) for i in range(5)] + \
[('test2', i) for i in range(6)] + \
[('test', 17), ('test', 18)]
left = pd.Series(np.linspace(0, 10, 11),
pd.MultiIndex.from_tuples(idx[:-2]))
left.loc[('test', 17)] = 11
left.loc[('test', 18)] = 12
right = pd.Series(np.linspace(0, 12, 13),
pd.MultiIndex.from_tuples(idx))
tm.assert_series_equal(left, right)
def test_take_preserve_name(self):
taken = self.index.take([3, 0, 1])
assert taken.names == self.index.names
def test_take_fill_value(self):
# GH 12631
vals = [['A', 'B'],
[pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]]
idx = pd.MultiIndex.from_product(vals, names=['str', 'dt'])
result = idx.take(np.array([1, 0, -1]))
exp_vals = [('A', pd.Timestamp('2011-01-02')),
('A', pd.Timestamp('2011-01-01')),
('B', pd.Timestamp('2011-01-02'))]
expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt'])
tm.assert_index_equal(result, expected)
# fill_value
result = idx.take(np.array([1, 0, -1]), fill_value=True)
exp_vals = [('A', pd.Timestamp('2011-01-02')),
('A', pd.Timestamp('2011-01-01')),
(np.nan, pd.NaT)]
expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt'])
tm.assert_index_equal(result, expected)
# allow_fill=False
result = idx.take(np.array([1, 0, -1]), allow_fill=False,
fill_value=True)
exp_vals = [('A', pd.Timestamp('2011-01-02')),
('A', pd.Timestamp('2011-01-01')),
('B', pd.Timestamp('2011-01-02'))]
expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt'])
tm.assert_index_equal(result, expected)
msg = ('When allow_fill=True and fill_value is not None, '
'all indices must be >= -1')
with tm.assert_raises_regex(ValueError, msg):
idx.take(np.array([1, 0, -2]), fill_value=True)
with tm.assert_raises_regex(ValueError, msg):
idx.take(np.array([1, 0, -5]), fill_value=True)
with pytest.raises(IndexError):
idx.take(np.array([1, -5]))
    def test_take_invalid_kwargs(self):
vals = [['A', 'B'],
[pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]]
idx = pd.MultiIndex.from_product(vals, names=['str', 'dt'])
indices = [1, 2]
msg = r"take\(\) got an unexpected keyword argument 'foo'"
tm.assert_raises_regex(TypeError, msg, idx.take,
indices, foo=2)
msg = "the 'out' parameter is not supported"
tm.assert_raises_regex(ValueError, msg, idx.take,
indices, out=indices)
msg = "the 'mode' parameter is not supported"
tm.assert_raises_regex(ValueError, msg, idx.take,
indices, mode='clip')
@pytest.mark.parametrize('other',
[Index(['three', 'one', 'two']),
Index(['one']),
Index(['one', 'three'])])
def test_join_level(self, other, join_type):
join_index, lidx, ridx = other.join(self.index, how=join_type,
level='second',
return_indexers=True)
exp_level = other.join(self.index.levels[1], how=join_type)
assert join_index.levels[0].equals(self.index.levels[0])
assert join_index.levels[1].equals(exp_level)
# pare down levels
mask = np.array(
[x[1] in exp_level for x in self.index], dtype=bool)
exp_values = self.index.values[mask]
tm.assert_numpy_array_equal(join_index.values, exp_values)
if join_type in ('outer', 'inner'):
join_index2, ridx2, lidx2 = \
self.index.join(other, how=join_type, level='second',
return_indexers=True)
assert join_index.equals(join_index2)
tm.assert_numpy_array_equal(lidx, lidx2)
tm.assert_numpy_array_equal(ridx, ridx2)
tm.assert_numpy_array_equal(join_index2.values, exp_values)
def test_join_level_corner_case(self):
# some corner cases
idx = Index(['three', 'one', 'two'])
result = idx.join(self.index, level='second')
assert isinstance(result, MultiIndex)
tm.assert_raises_regex(TypeError, "Join.*MultiIndex.*ambiguous",
self.index.join, self.index, level=1)
def test_join_self(self, join_type):
res = self.index
joined = res.join(res, how=join_type)
assert res is joined
def test_join_multi(self):
# GH 10665
midx = pd.MultiIndex.from_product(
[np.arange(4), np.arange(4)], names=['a', 'b'])
idx = pd.Index([1, 2, 5], name='b')
# inner
jidx, lidx, ridx = midx.join(idx, how='inner', return_indexers=True)
exp_idx = pd.MultiIndex.from_product(
[np.arange(4), [1, 2]], names=['a', 'b'])
exp_lidx = np.array([1, 2, 5, 6, 9, 10, 13, 14], dtype=np.intp)
exp_ridx = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=np.intp)
tm.assert_index_equal(jidx, exp_idx)
tm.assert_numpy_array_equal(lidx, exp_lidx)
tm.assert_numpy_array_equal(ridx, exp_ridx)
# flip
jidx, ridx, lidx = idx.join(midx, how='inner', return_indexers=True)
tm.assert_index_equal(jidx, exp_idx)
tm.assert_numpy_array_equal(lidx, exp_lidx)
tm.assert_numpy_array_equal(ridx, exp_ridx)
# keep MultiIndex
jidx, lidx, ridx = midx.join(idx, how='left', return_indexers=True)
exp_ridx = np.array([-1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1, -1, 0,
1, -1], dtype=np.intp)
tm.assert_index_equal(jidx, midx)
assert lidx is None
tm.assert_numpy_array_equal(ridx, exp_ridx)
# flip
jidx, ridx, lidx = idx.join(midx, how='right', return_indexers=True)
tm.assert_index_equal(jidx, midx)
assert lidx is None
tm.assert_numpy_array_equal(ridx, exp_ridx)
def test_reindex(self):
result, indexer = self.index.reindex(list(self.index[:4]))
assert isinstance(result, MultiIndex)
self.check_level_names(result, self.index[:4].names)
result, indexer = self.index.reindex(list(self.index))
assert isinstance(result, MultiIndex)
assert indexer is None
self.check_level_names(result, self.index.names)
def test_reindex_level(self):
idx = Index(['one'])
target, indexer = self.index.reindex(idx, level='second')
target2, indexer2 = idx.reindex(self.index, level='second')
exp_index = self.index.join(idx, level='second', how='right')
exp_index2 = self.index.join(idx, level='second', how='left')
assert target.equals(exp_index)
exp_indexer = np.array([0, 2, 4])
tm.assert_numpy_array_equal(indexer, exp_indexer, check_dtype=False)
assert target2.equals(exp_index2)
exp_indexer2 = np.array([0, -1, 0, -1, 0, -1])
tm.assert_numpy_array_equal(indexer2, exp_indexer2, check_dtype=False)
tm.assert_raises_regex(TypeError, "Fill method not supported",
self.index.reindex, self.index,
method='pad', level='second')
tm.assert_raises_regex(TypeError, "Fill method not supported",
idx.reindex, idx, method='bfill',
level='first')
def test_duplicates(self):
assert not self.index.has_duplicates
assert self.index.append(self.index).has_duplicates
index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[
[0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]])
assert index.has_duplicates
# GH 9075
t = [(u('x'), u('out'), u('z'), 5, u('y'), u('in'), u('z'), 169),
(u('x'), u('out'), u('z'), 7, u('y'), u('in'), u('z'), 119),
(u('x'), u('out'), u('z'), 9, u('y'), u('in'), u('z'), 135),
(u('x'), u('out'), u('z'), 13, u('y'), u('in'), u('z'), 145),
(u('x'), u('out'), u('z'), 14, u('y'), u('in'), u('z'), 158),
(u('x'), u('out'), u('z'), 16, u('y'), u('in'), u('z'), 122),
(u('x'), u('out'), u('z'), 17, u('y'), u('in'), u('z'), 160),
(u('x'), u('out'), u('z'), 18, u('y'), u('in'), u('z'), 180),
(u('x'), u('out'), u('z'), 20, u('y'), u('in'), u('z'), 143),
(u('x'), u('out'), u('z'), 21, u('y'), u('in'), u('z'), 128),
(u('x'), u('out'), u('z'), 22, u('y'), u('in'), u('z'), 129),
(u('x'), u('out'), u('z'), 25, u('y'), u('in'), u('z'), 111),
(u('x'), u('out'), u('z'), 28, u('y'), u('in'), u('z'), 114),
(u('x'), u('out'), u('z'), 29, u('y'), u('in'), u('z'), 121),
(u('x'), u('out'), u('z'), 31, u('y'), u('in'), u('z'), 126),
(u('x'), u('out'), u('z'), 32, u('y'), u('in'), u('z'), 155),
(u('x'), u('out'), u('z'), 33, u('y'), u('in'), u('z'), 123),
(u('x'), u('out'), u('z'), 12, u('y'), u('in'), u('z'), 144)]
index = pd.MultiIndex.from_tuples(t)
assert not index.has_duplicates
# handle int64 overflow if possible
def check(nlevels, with_nulls):
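            # build an (nlevels + 1)-level index; for the larger nlevels the
            # product of level sizes overflows int64, exercising the
            # hash-based duplicate detection; with_nulls injects -1 labels.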
labels = np.tile(np.arange(500), 2)
level = np.arange(500)
if with_nulls: # inject some null values
labels[500] = -1 # common nan value
labels = [labels.copy() for i in range(nlevels)]
for i in range(nlevels):
labels[i][500 + i - nlevels // 2] = -1
labels += [np.array([-1, 1]).repeat(500)]
else:
labels = [labels] * nlevels + [np.arange(2).repeat(500)]
levels = [level] * nlevels + [[0, 1]]
# no dups
index = MultiIndex(levels=levels, labels=labels)
assert not index.has_duplicates
# with a dup
if with_nulls:
def f(a):
return np.insert(a, 1000, a[0])
labels = list(map(f, labels))
index = MultiIndex(levels=levels, labels=labels)
else:
values = index.values.tolist()
index = MultiIndex.from_tuples(values + [values[0]])
assert index.has_duplicates
# no overflow
check(4, False)
check(4, True)
# overflow possible
check(8, False)
check(8, True)
# GH 9125
n, k = 200, 5000
levels = [np.arange(n), tm.makeStringIndex(n), 1000 + np.arange(n)]
labels = [np.random.choice(n, k * n) for lev in levels]
mi = MultiIndex(levels=levels, labels=labels)
for keep in ['first', 'last', False]:
left = mi.duplicated(keep=keep)
right = pd._libs.hashtable.duplicated_object(mi.values, keep=keep)
tm.assert_numpy_array_equal(left, right)
# GH5873
for a in [101, 102]:
mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]])
assert not mi.has_duplicates
with warnings.catch_warnings(record=True):
# Deprecated - see GH20239
assert mi.get_duplicates().equals(MultiIndex.from_arrays(
[[], []]))
tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
2, dtype='bool'))
for n in range(1, 6): # 1st level shape
for m in range(1, 5): # 2nd level shape
# all possible unique combinations, including nan
lab = product(range(-1, n), range(-1, m))
mi = MultiIndex(levels=[list('abcde')[:n], list('WXYZ')[:m]],
labels=np.random.permutation(list(lab)).T)
assert len(mi) == (n + 1) * (m + 1)
assert not mi.has_duplicates
with warnings.catch_warnings(record=True):
# Deprecated - see GH20239
assert mi.get_duplicates().equals(MultiIndex.from_arrays(
[[], []]))
tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
len(mi), dtype='bool'))
def test_duplicate_meta_data(self):
# GH 10115
index = MultiIndex(
levels=[[0, 1], [0, 1, 2]],
labels=[[0, 0, 0, 0, 1, 1, 1],
[0, 1, 2, 0, 0, 1, 2]])
for idx in [index,
index.set_names([None, None]),
index.set_names([None, 'Num']),
index.set_names(['Upper', 'Num']), ]:
assert idx.has_duplicates
assert idx.drop_duplicates().names == idx.names
def test_get_unique_index(self):
idx = self.index[[0, 1, 0, 1, 1, 0, 0]]
expected = self.index._shallow_copy(idx[[0, 1]])
for dropna in [False, True]:
result = idx._get_unique_index(dropna=dropna)
            assert result.is_unique
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize('names', [None, ['first', 'second']])
def test_unique(self, names):
mi = pd.MultiIndex.from_arrays([[1, 2, 1, 2], [1, 1, 1, 2]],
names=names)
res = mi.unique()
exp = pd.MultiIndex.from_arrays([[1, 2, 2], [1, 1, 2]], names=mi.names)
tm.assert_index_equal(res, exp)
mi = pd.MultiIndex.from_arrays([list('aaaa'), list('abab')],
names=names)
res = mi.unique()
exp = pd.MultiIndex.from_arrays([list('aa'), list('ab')],
names=mi.names)
tm.assert_index_equal(res, exp)
mi = pd.MultiIndex.from_arrays([list('aaaa'), list('aaaa')],
names=names)
res = mi.unique()
exp = pd.MultiIndex.from_arrays([['a'], ['a']], names=mi.names)
tm.assert_index_equal(res, exp)
# GH #20568 - empty MI
mi = pd.MultiIndex.from_arrays([[], []], names=names)
res = mi.unique()
tm.assert_index_equal(mi, res)
@pytest.mark.parametrize('level', [0, 'first', 1, 'second'])
def test_unique_level(self, level):
# GH #17896 - with level= argument
result = self.index.unique(level=level)
expected = self.index.get_level_values(level).unique()
tm.assert_index_equal(result, expected)
# With already unique level
mi = pd.MultiIndex.from_arrays([[1, 3, 2, 4], [1, 3, 2, 5]],
names=['first', 'second'])
result = mi.unique(level=level)
expected = mi.get_level_values(level)
tm.assert_index_equal(result, expected)
# With empty MI
mi = pd.MultiIndex.from_arrays([[], []], names=['first', 'second'])
result = mi.unique(level=level)
        expected = mi.get_level_values(level)
        tm.assert_index_equal(result, expected)
def test_unique_datetimelike(self):
idx1 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', '2015-01-01',
'2015-01-01', 'NaT', 'NaT'])
idx2 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', '2015-01-02',
'2015-01-02', 'NaT', '2015-01-01'],
tz='Asia/Tokyo')
result = pd.MultiIndex.from_arrays([idx1, idx2]).unique()
eidx1 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', 'NaT', 'NaT'])
eidx2 = pd.DatetimeIndex(['2015-01-01', '2015-01-02',
'NaT', '2015-01-01'],
tz='Asia/Tokyo')
exp = pd.MultiIndex.from_arrays([eidx1, eidx2])
tm.assert_index_equal(result, exp)
def test_tolist(self):
result = self.index.tolist()
exp = list(self.index.values)
assert result == exp
def test_repr_with_unicode_data(self):
with pd.core.config.option_context("display.encoding", 'UTF-8'):
d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
index = pd.DataFrame(d).set_index(["a", "b"]).index
assert "\\u" not in repr(index) # we don't want unicode-escaped
def test_repr_roundtrip(self):
mi = MultiIndex.from_product([list('ab'), range(3)],
names=['first', 'second'])
str(mi)
if PY3:
tm.assert_index_equal(eval(repr(mi)), mi, exact=True)
else:
result = eval(repr(mi))
# string coerces to unicode
tm.assert_index_equal(result, mi, exact=False)
assert mi.get_level_values('first').inferred_type == 'string'
assert result.get_level_values('first').inferred_type == 'unicode'
mi_u = MultiIndex.from_product(
[list(u'ab'), range(3)], names=['first', 'second'])
result = eval(repr(mi_u))
tm.assert_index_equal(result, mi_u, exact=True)
# formatting
if PY3:
str(mi)
else:
compat.text_type(mi)
# long format
mi = MultiIndex.from_product([list('abcdefg'), range(10)],
names=['first', 'second'])
if PY3:
tm.assert_index_equal(eval(repr(mi)), mi, exact=True)
else:
result = eval(repr(mi))
# string coerces to unicode
tm.assert_index_equal(result, mi, exact=False)
assert mi.get_level_values('first').inferred_type == 'string'
assert result.get_level_values('first').inferred_type == 'unicode'
result = eval(repr(mi_u))
tm.assert_index_equal(result, mi_u, exact=True)
def test_str(self):
# tested elsewhere
pass
def test_unicode_string_with_unicode(self):
d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
idx = pd.DataFrame(d).set_index(["a", "b"]).index
if PY3:
str(idx)
else:
compat.text_type(idx)
def test_bytestring_with_unicode(self):
d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
idx = pd.DataFrame(d).set_index(["a", "b"]).index
if PY3:
bytes(idx)
else:
str(idx)
def test_slice_keep_name(self):
x = MultiIndex.from_tuples([('a', 'b'), (1, 2), ('c', 'd')],
names=['x', 'y'])
assert x[1:].names == x.names
def test_isna_behavior(self):
# should not segfault GH5123
# NOTE: if MI representation changes, may make sense to allow
# isna(MI)
with pytest.raises(NotImplementedError):
pd.isna(self.index)
def test_level_setting_resets_attributes(self):
ind = pd.MultiIndex.from_arrays([
['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3]
])
assert ind.is_monotonic
ind.set_levels([['A', 'B'], [1, 3, 2]], inplace=True)
# if this fails, probably didn't reset the cache correctly.
assert not ind.is_monotonic
def test_is_monotonic_increasing(self):
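        # compare against the flattened tuple Index where possible; NaN
        # levels and unorderable mixed levels are not monotonic.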
i = MultiIndex.from_product([np.arange(10),
np.arange(10)], names=['one', 'two'])
assert i.is_monotonic
assert i._is_strictly_monotonic_increasing
assert Index(i.values).is_monotonic
assert i._is_strictly_monotonic_increasing
i = MultiIndex.from_product([np.arange(10, 0, -1),
np.arange(10)], names=['one', 'two'])
assert not i.is_monotonic
assert not i._is_strictly_monotonic_increasing
assert not Index(i.values).is_monotonic
assert not Index(i.values)._is_strictly_monotonic_increasing
i = MultiIndex.from_product([np.arange(10),
np.arange(10, 0, -1)],
names=['one', 'two'])
assert not i.is_monotonic
assert not i._is_strictly_monotonic_increasing
assert not Index(i.values).is_monotonic
assert not Index(i.values)._is_strictly_monotonic_increasing
i = MultiIndex.from_product([[1.0, np.nan, 2.0], ['a', 'b', 'c']])
assert not i.is_monotonic
assert not i._is_strictly_monotonic_increasing
assert not Index(i.values).is_monotonic
assert not Index(i.values)._is_strictly_monotonic_increasing
# string ordering
i = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
['one', 'two', 'three']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
assert not i.is_monotonic
assert not Index(i.values).is_monotonic
assert not i._is_strictly_monotonic_increasing
assert not Index(i.values)._is_strictly_monotonic_increasing
i = MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'],
['mom', 'next', 'zenith']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
assert i.is_monotonic
assert Index(i.values).is_monotonic
assert i._is_strictly_monotonic_increasing
assert Index(i.values)._is_strictly_monotonic_increasing
# mixed levels, hits the TypeError
i = MultiIndex(
levels=[[1, 2, 3, 4], ['gb00b03mlx29', 'lu0197800237',
'nl0000289783',
'nl0000289965', 'nl0000301109']],
labels=[[0, 1, 1, 2, 2, 2, 3], [4, 2, 0, 0, 1, 3, -1]],
names=['household_id', 'asset_id'])
assert not i.is_monotonic
assert not i._is_strictly_monotonic_increasing
# empty
i = MultiIndex.from_arrays([[], []])
assert i.is_monotonic
assert Index(i.values).is_monotonic
assert i._is_strictly_monotonic_increasing
assert Index(i.values)._is_strictly_monotonic_increasing
def test_is_monotonic_decreasing(self):
i = MultiIndex.from_product([np.arange(9, -1, -1),
np.arange(9, -1, -1)],
names=['one', 'two'])
assert i.is_monotonic_decreasing
assert i._is_strictly_monotonic_decreasing
assert Index(i.values).is_monotonic_decreasing
assert i._is_strictly_monotonic_decreasing
i = MultiIndex.from_product([np.arange(10),
np.arange(10, 0, -1)],
names=['one', 'two'])
assert not i.is_monotonic_decreasing
assert not i._is_strictly_monotonic_decreasing
assert not Index(i.values).is_monotonic_decreasing
assert not Index(i.values)._is_strictly_monotonic_decreasing
i = MultiIndex.from_product([np.arange(10, 0, -1),
np.arange(10)], names=['one', 'two'])
assert not i.is_monotonic_decreasing
assert not i._is_strictly_monotonic_decreasing
assert not Index(i.values).is_monotonic_decreasing
assert not Index(i.values)._is_strictly_monotonic_decreasing
i = MultiIndex.from_product([[2.0, np.nan, 1.0], ['c', 'b', 'a']])
assert not i.is_monotonic_decreasing
assert not i._is_strictly_monotonic_decreasing
assert not Index(i.values).is_monotonic_decreasing
assert not Index(i.values)._is_strictly_monotonic_decreasing
# string ordering
i = MultiIndex(levels=[['qux', 'foo', 'baz', 'bar'],
['three', 'two', 'one']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
assert not i.is_monotonic_decreasing
assert not Index(i.values).is_monotonic_decreasing
assert not i._is_strictly_monotonic_decreasing
assert not Index(i.values)._is_strictly_monotonic_decreasing
i = MultiIndex(levels=[['qux', 'foo', 'baz', 'bar'],
['zenith', 'next', 'mom']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
names=['first', 'second'])
assert i.is_monotonic_decreasing
assert Index(i.values).is_monotonic_decreasing
assert i._is_strictly_monotonic_decreasing
assert Index(i.values)._is_strictly_monotonic_decreasing
# mixed levels, hits the TypeError
i = MultiIndex(
levels=[[4, 3, 2, 1], ['nl0000301109', 'nl0000289965',
'nl0000289783', 'lu0197800237',
'gb00b03mlx29']],
labels=[[0, 1, 1, 2, 2, 2, 3], [4, 2, 0, 0, 1, 3, -1]],
names=['household_id', 'asset_id'])
assert not i.is_monotonic_decreasing
assert not i._is_strictly_monotonic_decreasing
# empty
i = MultiIndex.from_arrays([[], []])
assert i.is_monotonic_decreasing
assert Index(i.values).is_monotonic_decreasing
assert i._is_strictly_monotonic_decreasing
assert Index(i.values)._is_strictly_monotonic_decreasing
def test_is_strictly_monotonic_increasing(self):
idx = pd.MultiIndex(levels=[['bar', 'baz'], ['mom', 'next']],
labels=[[0, 0, 1, 1], [0, 0, 0, 1]])
assert idx.is_monotonic_increasing
assert not idx._is_strictly_monotonic_increasing
def test_is_strictly_monotonic_decreasing(self):
idx = pd.MultiIndex(levels=[['baz', 'bar'], ['next', 'mom']],
labels=[[0, 0, 1, 1], [0, 0, 0, 1]])
assert idx.is_monotonic_decreasing
assert not idx._is_strictly_monotonic_decreasing
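    # Note (added): is_monotonic_increasing/decreasing tolerate repeated values,
    # whereas the private _is_strictly_monotonic_* checks also reject ties; the
    # two indexes above each contain a duplicated tuple, so they are monotonic
    # but not strictly monotonic.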
def test_reconstruct_sort(self):
# starts off lexsorted & monotonic
mi = MultiIndex.from_arrays([
['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3]
])
assert mi.is_lexsorted()
assert mi.is_monotonic
recons = mi._sort_levels_monotonic()
assert recons.is_lexsorted()
assert recons.is_monotonic
assert mi is recons
assert mi.equals(recons)
assert Index(mi.values).equals(Index(recons.values))
# cannot convert to lexsorted
mi = pd.MultiIndex.from_tuples([('z', 'a'), ('x', 'a'), ('y', 'b'),
('x', 'b'), ('y', 'a'), ('z', 'b')],
names=['one', 'two'])
assert not mi.is_lexsorted()
assert not mi.is_monotonic
recons = mi._sort_levels_monotonic()
assert not recons.is_lexsorted()
assert not recons.is_monotonic
assert mi.equals(recons)
assert Index(mi.values).equals(Index(recons.values))
# cannot convert to lexsorted
mi = MultiIndex(levels=[['b', 'd', 'a'], [1, 2, 3]],
labels=[[0, 1, 0, 2], [2, 0, 0, 1]],
names=['col1', 'col2'])
assert not mi.is_lexsorted()
assert not mi.is_monotonic
recons = mi._sort_levels_monotonic()
assert not recons.is_lexsorted()
assert not recons.is_monotonic
assert mi.equals(recons)
assert Index(mi.values).equals(Index(recons.values))
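    # Note (added): _sort_levels_monotonic() only sorts the level values and
    # remaps the labels accordingly; it never reorders the rows themselves,
    # which is why the last two cases stay non-lexsorted and non-monotonic
    # after reconstruction while still comparing equal to the original index.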
def test_reconstruct_remove_unused(self):
# xref to GH 2770
df = DataFrame([['deleteMe', 1, 9],
['keepMe', 2, 9],
['keepMeToo', 3, 9]],
columns=['first', 'second', 'third'])
df2 = df.set_index(['first', 'second'], drop=False)
df2 = df2[df2['first'] != 'deleteMe']
# removed levels are there
expected = MultiIndex(levels=[['deleteMe', 'keepMe', 'keepMeToo'],
[1, 2, 3]],
labels=[[1, 2], [1, 2]],
names=['first', 'second'])
result = df2.index
tm.assert_index_equal(result, expected)
expected = MultiIndex(levels=[['keepMe', 'keepMeToo'],
[2, 3]],
labels=[[0, 1], [0, 1]],
names=['first', 'second'])
result = df2.index.remove_unused_levels()
tm.assert_index_equal(result, expected)
# idempotent
result2 = result.remove_unused_levels()
tm.assert_index_equal(result2, expected)
assert result2.is_(result)
@pytest.mark.parametrize('level0', [['a', 'd', 'b'],
['a', 'd', 'b', 'unused']])
@pytest.mark.parametrize('level1', [['w', 'x', 'y', 'z'],
['w', 'x', 'y', 'z', 'unused']])
def test_remove_unused_nan(self, level0, level1):
# GH 18417
mi = pd.MultiIndex(levels=[level0, level1],
labels=[[0, 2, -1, 1, -1], [0, 1, 2, 3, 2]])
result = mi.remove_unused_levels()
tm.assert_index_equal(result, mi)
for level in 0, 1:
assert('unused' not in result.levels[level])
@pytest.mark.parametrize('first_type,second_type', [
('int64', 'int64'),
('datetime64[D]', 'str')])
def test_remove_unused_levels_large(self, first_type, second_type):
# GH16556
# because tests should be deterministic (and this test in particular
# checks that levels are removed, which is not the case for every
# random input):
rng = np.random.RandomState(4) # seed is arbitrary value that works
size = 1 << 16
df = DataFrame(dict(
first=rng.randint(0, 1 << 13, size).astype(first_type),
second=rng.randint(0, 1 << 10, size).astype(second_type),
third=rng.rand(size)))
df = df.groupby(['first', 'second']).sum()
df = df[df.third < 0.1]
result = df.index.remove_unused_levels()
assert len(result.levels[0]) < len(df.index.levels[0])
assert len(result.levels[1]) < len(df.index.levels[1])
assert result.equals(df.index)
expected = df.reset_index().set_index(['first', 'second']).index
tm.assert_index_equal(result, expected)
def test_isin(self):
values = [('foo', 2), ('bar', 3), ('quux', 4)]
idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange(
4)])
result = idx.isin(values)
expected = np.array([False, False, True, True])
tm.assert_numpy_array_equal(result, expected)
# empty, return dtype bool
idx = MultiIndex.from_arrays([[], []])
result = idx.isin(values)
assert len(result) == 0
assert result.dtype == np.bool_
@pytest.mark.skipif(PYPY, reason="tuples cmp recursively on PyPy")
def test_isin_nan_not_pypy(self):
idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]])
tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]),
np.array([False, False]))
tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]),
np.array([False, False]))
@pytest.mark.skipif(not PYPY, reason="tuples cmp recursively on PyPy")
def test_isin_nan_pypy(self):
idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]])
tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]),
np.array([False, True]))
tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]),
np.array([False, True]))
def test_isin_level_kwarg(self):
idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange(
4)])
vals_0 = ['foo', 'bar', 'quux']
vals_1 = [2, 3, 10]
expected = np.array([False, False, True, True])
tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=0))
tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=-2))
tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=1))
tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=-1))
pytest.raises(IndexError, idx.isin, vals_0, level=5)
pytest.raises(IndexError, idx.isin, vals_0, level=-5)
pytest.raises(KeyError, idx.isin, vals_0, level=1.0)
pytest.raises(KeyError, idx.isin, vals_1, level=-1.0)
pytest.raises(KeyError, idx.isin, vals_1, level='A')
idx.names = ['A', 'B']
tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level='A'))
tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level='B'))
pytest.raises(KeyError, idx.isin, vals_1, level='C')
def test_reindex_preserves_names_when_target_is_list_or_ndarray(self):
# GH6552
idx = self.index.copy()
target = idx.copy()
idx.names = target.names = [None, None]
other_dtype = pd.MultiIndex.from_product([[1, 2], [3, 4]])
# list & ndarray cases
assert idx.reindex([])[0].names == [None, None]
assert idx.reindex(np.array([]))[0].names == [None, None]
assert idx.reindex(target.tolist())[0].names == [None, None]
assert idx.reindex(target.values)[0].names == [None, None]
assert idx.reindex(other_dtype.tolist())[0].names == [None, None]
assert idx.reindex(other_dtype.values)[0].names == [None, None]
idx.names = ['foo', 'bar']
assert idx.reindex([])[0].names == ['foo', 'bar']
assert idx.reindex(np.array([]))[0].names == ['foo', 'bar']
assert idx.reindex(target.tolist())[0].names == ['foo', 'bar']
assert idx.reindex(target.values)[0].names == ['foo', 'bar']
assert idx.reindex(other_dtype.tolist())[0].names == ['foo', 'bar']
assert idx.reindex(other_dtype.values)[0].names == ['foo', 'bar']
def test_reindex_lvl_preserves_names_when_target_is_list_or_array(self):
# GH7774
idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']],
names=['foo', 'bar'])
assert idx.reindex([], level=0)[0].names == ['foo', 'bar']
assert idx.reindex([], level=1)[0].names == ['foo', 'bar']
def test_reindex_lvl_preserves_type_if_target_is_empty_list_or_array(self):
# GH7774
idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']])
assert idx.reindex([], level=0)[0].levels[0].dtype.type == np.int64
assert idx.reindex([], level=1)[0].levels[1].dtype.type == np.object_
def test_groupby(self):
groups = self.index.groupby(np.array([1, 1, 1, 2, 2, 2]))
labels = self.index.get_values().tolist()
exp = {1: labels[:3], 2: labels[3:]}
tm.assert_dict_equal(groups, exp)
# GH5620
groups = self.index.groupby(self.index)
exp = {key: [key] for key in self.index}
tm.assert_dict_equal(groups, exp)
def test_index_name_retained(self):
# GH9857
result = pd.DataFrame({'x': [1, 2, 6],
'y': [2, 2, 8],
'z': [-5, 0, 5]})
result = result.set_index('z')
result.loc[10] = [9, 10]
df_expected = pd.DataFrame({'x': [1, 2, 6, 9],
'y': [2, 2, 8, 10],
'z': [-5, 0, 5, 10]})
df_expected = df_expected.set_index('z')
tm.assert_frame_equal(result, df_expected)
def test_equals_operator(self):
# GH9785
assert (self.index == self.index).all()
def test_large_multiindex_error(self):
# GH12527
df_below_1000000 = pd.DataFrame(
1, index=pd.MultiIndex.from_product([[1, 2], | range(499999) | pandas.compat.range |
#--- Import Libraries ---#
import pandas as pd
from sklearn import linear_model
from sklearn import preprocessing
#--- Check Trip_type, Companion ---#
def check_input(trip_type, companion):
# default = leisure
check_type = ["business", "leisure", None]
check_com = ["solo", "couple", "friend", "family", None]
ok = True
try:
check_type.remove(trip_type)
    except ValueError:
ok = False
print("Trip_type is not valid")
try:
check_com.remove(companion)
    except ValueError:
ok = False
print("Companion is not valid")
return ok
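# Hedged usage sketch (added; values illustrative only):
# check_input("business", "solo")  -> True
# check_input("weekend", "solo")   -> prints "Trip_type is not valid", returns False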
#--- Part Preprocessing ---#
def check_case(target_id, trip_type, companion, file_loc):
user_data = pd.read_csv(file_loc)
group_user = user_data.drop('hotel_name', axis=1)
target_user = group_user.loc[group_user['user_id'] == target_id]
if len(target_user) == 0:
print("Target_user is not valid")
#--- Check Target_user data ---#
else:
other_user = group_user.loc[group_user['user_id'] != target_id]
#--- Case No Context ---#
        if trip_type is None and companion is None:
status = [0, "case_none"]
data_out = get_main(target_id, target_user,
other_user, trip_type, companion, status[0])
del data_out['others']
return data_out, status[1]
else:
num_target = len(target_user.loc[(target_user['trip_type'] == trip_type) & (
target_user['companion'] == companion)])
#--- Case Pass Regression ---#
if num_target >= 6:
status = [1, "case_pass_regr"]
data_out = get_main(target_id, target_user,
other_user, trip_type, companion, status[0])
del data_out['others']
return data_out, status[1]
#--- Case Not Pass Regression ---#
else:
status = [-1, "case_not_regr"]
data_out = get_main(target_id, target_user,
other_user, trip_type, companion, status[0])
data_out, buff_data = re_get_main(
data_out, target_id, user_data, trip_type, companion)
del data_out['others']
return data_out, status[1]
def get_main(target_id, target, other, trip_type, companion, status):
#--- Cal Filter By trip_type, companion ---#
if status == 1:
target = target.loc[(target['trip_type'] == trip_type)
& (target['companion'] == companion)]
#--- Cal Not Filter By trip_type, companion ---#
    elif status in (-1, 0):
pass
target_weight = get_weight(target)
target_rank = get_rank_weight(target_weight)
group_user = {}
result = {}
other_user = other['user_id'].drop_duplicates(keep='first')
other_user = other_user.values.tolist()
#--- Cal Weight Into Rank All User ---#
for data in other_user:
user = other.loc[other['user_id'] == data]
other_weight = get_weight(user)
other_rank = get_rank_weight(other_weight)
result.update({data: [other_weight, other_rank]})
group_user.update({'target': {target_id: [target_weight, target_rank]}})
group_user.update({'others': result})
#--- Make Neighbors ---#
group_user_rank = cal_corr(target_id, group_user)
data_out = get_neighbor(group_user_rank)
return data_out
# {
# 'target': {'user_id': [weight, rank_f]},
# 'others': {'user_id': [weight, rank_f]},
# 'neighbors': [[user_id], [corr]]
# }
#--- Cal Regression ---#
def get_weight(df):
x = df[['price', 'near_station', 'restaurant', 'entertain',
'shopping_mall', 'convenience_store']].values.tolist()
y = df['rating'].values.tolist()
# Create linear regression object
regr = linear_model.LinearRegression(fit_intercept=False)
regr.fit(x, y)
return regr.coef_
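# Note (added): the fit deliberately omits the intercept, so regr.coef_ acts as a
# per-attribute weight vector over the six hotel features. Hedged sketch, assuming
# a frame with those feature columns plus 'rating':
# weights = get_weight(user_df)   # -> array of 6 coefficients, one per feature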
#--- Ordered Weight ---#
def get_rank_weight(weight):
rank = pd.Series(list(weight)).rank(ascending=False)
rank_f = rank.values.tolist()
return rank_f
#--- Cal Spearman's Rank ---#
def cal_corr(target_id, group_user_rank):
target = pd.Series(group_user_rank['target'][target_id][1])
others = sorted(list(group_user_rank['others'].keys()))
list_user = []
list_corr = []
for user in others:
other = pd.Series(group_user_rank['others'][user][1])
corr = target.corr(other, method='spearman')
list_user.append(user)
list_corr.append(corr)
result = [list_user, list_corr]
group_user_rank.update({'neighbors': result})
return group_user_rank
#--- Before Recommendation ---#
def get_neighbor(group_user_rank):
corr = pd.Series(group_user_rank['neighbors'][1]).rank(ascending=True)
neighbor = corr.values.tolist()
group_user_rank['neighbors'].append(neighbor)
return group_user_rank
#--- Solve Case Not Regression ---#
def re_get_main(data_main, target_id, user_data, trip_type, companion):
target_data = user_data.loc[(user_data['user_id'] == target_id) & (
user_data['trip_type'] == trip_type) & (user_data['companion'] == companion)]
other_id = data_main['neighbors'][0]
other_rank = data_main['neighbors'][2]
neighbors = | pd.DataFrame({'user_id': other_id, 'rank': other_rank}) | pandas.DataFrame |
import glob
import os
import sys
from pprint import pprint
import pandas as pd
from ..constants import (DATA_DIR, DTYPES, RAW_DATA_DIR, USE_VAR_LIST,
USE_VAR_LIST_DICT, USE_VAR_LIST_DICT_REVERSE)
from ..download.nppes import nppes_month_list
from ..utils.utils import coerce_dtypes, month_name_to_month_num
def get_filepaths_from_dissemination_zips(folder):
'''
Each dissemination folder contains a large / bulk data file of the format
npidata_20050523-yearmonthday.csv, sometimes
deep in a subdirectory. This identifies the likeliest candidate and maps
in a dictionary to the main zip folder
'''
zip_paths = os.path.join(folder, 'NPPES_Data_Dissemination*')
stub = os.path.join(folder, 'NPPES_Data_Dissemination_')
folders = [x for x
in glob.glob(zip_paths)
if not x.endswith('.zip')]
folders = [x for x in folders if 'Weekly' not in x]
possbl = list(set(glob.glob(zip_paths + '/**/*npidata_*', recursive=True)))
possbl = [x for x in possbl if 'Weekly' not in x]
paths = {(x.partition(stub)[2].split('/')[0].split('_')[1],
str(month_name_to_month_num(
x.partition(stub)[2].split('/')[0].split('_')[0]))): x
for x in possbl if 'eader' not in x}
assert len(folders) == len(paths)
return paths
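# Hedged sketch of the returned mapping (path shown is illustrative, not real):
# {('2010', '5'): '.../NPPES_Data_Dissemination_May_2010/npidata_....csv', ...}
# i.e. keys are (year, month-number-as-string) tuples, values are bulk-file paths.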
def get_weekly_dissemination_zips(folder):
'''
Each weekly update folder contains a large / bulk data file of the format
npidata_pfile_20200323-20200329, representing the week covered
Will need to later add functionality for weekly updates for ploc2 files
'''
zip_paths = os.path.join(folder, 'NPPES_Data_Dissemination*')
stub = os.path.join(folder, 'NPPES_Data_Dissemination_')
folders = [x for x
in glob.glob(zip_paths)
if not x.endswith('.zip')]
folders = [x for x in folders if 'Weekly' in x]
possbl = list(set(glob.glob(zip_paths + '/**/*npidata_*', recursive=True)))
possbl = [x for x in possbl if 'Weekly' in x]
paths = {(x.partition(stub)[2].split('/')[0].split('_')[0],
x.partition(stub)[2].split('/')[0].split('_')[1]): x
for x in possbl if 'eader' not in x}
assert len(folders) == len(paths)
return paths
def which_weekly_dissemination_zips_are_updates(folder):
"""
Will need to later add functionality for weekly updates for ploc2 files
"""
last_monthly = max([pd.to_datetime(val.split('-')[1]
.split('.csv')[0]
.replace(' Jan 2013/', '')
.replace('npidata_', ''))
for key, val in
get_filepaths_from_dissemination_zips(folder).items()])
updates = [(x, val) for x, val
in get_weekly_dissemination_zips(folder).items()
if pd.to_datetime(x[1]) > last_monthly]
return updates
def get_secondary_loc_filepaths_from_dissemination_zips(folder):
zip_paths = os.path.join(folder, 'NPPES_Data_Dissemination*')
stub = os.path.join(folder, 'NPPES_Data_Dissemination_')
possbl = list(set(glob.glob(zip_paths + '/**/pl_pfile_*', recursive=True)))
possbl = [x for x in possbl if 'Weekly' not in x]
paths = {(x.partition(stub)[2].split('/')[0].split('_')[1],
str(month_name_to_month_num(
x.partition(stub)[2].split('/')[0].split('_')[0]))): x
for x in possbl if 'eader' not in x}
return paths
def get_filepaths_from_single_variable_files(variable, folder, noisily=True):
'''
Returns a dictionary of the path to each single variable file, for
each month and year
'''
files = glob.glob(os.path.join(folder, '%s*' % variable))
file_dict = {(x.split(variable)[1].split('.')[0][:4],
x.split(variable)[1].split('.')[0][4:]): x
for x in files}
if noisily:
print('For variable %s, there are %s files:'
% (variable, len(file_dict)))
pprint(sorted(list(file_dict.keys())))
return file_dict
def convert_dtypes(df):
'''
Note: should move to generic version found in utils
'''
# weird fix required for bug in select_dtypes
ints_init = (df.dtypes
.reset_index()[df.dtypes.reset_index()[0] == int]['index']
.values.tolist())
current_dtypes = {x: 'int' for x in ints_init}
for t in ['int', 'object', ['float32', 'float64'], 'datetime', 'string']:
current_dtypes.update({x: t for x in df.select_dtypes(t).columns})
dissem_file = (not set(current_dtypes.keys()).issubset(DTYPES.keys()))
for col in df.columns:
final_dtype = (DTYPES[col] if not dissem_file
else DTYPES[{**USE_VAR_LIST_DICT_REVERSE,
**{'seq': 'seq'}}[col]])
if (current_dtypes[col] != final_dtype and
final_dtype not in current_dtypes[col]):
try:
df = df.assign(**{col: coerce_dtypes(df[col],
current_dtypes[col],
final_dtype)})
except ValueError as err:
if final_dtype == 'string':
newcol = coerce_dtypes(df[col], current_dtypes[col], 'str')
newcol = coerce_dtypes(newcol, 'str', 'string')
else:
raise ValueError("{0}".format(err))
return df
def column_details(variable, dissem_file, dta_file):
'''
Generates column list to get from the raw data; dissem files
have long string names and are wide, whereas NBER files have
short names and are long
'''
diss_var = USE_VAR_LIST_DICT[variable]
multi = True if isinstance(diss_var, list) else False
tvar = ['npi', 'seq']
if not dissem_file:
if multi:
if str.isupper(variable) and not dta_file:
def collist(col): return col.upper() == variable or col in tvar
elif str.isupper(variable) and dta_file:
collist = tvar + [variable.lower()]
else:
collist = tvar + [variable]
else:
collist = ['npi', variable]
d_use = {} if not variable == 'ploczip' else {'ploczip': str}
else:
diss_vars = diss_var if multi else [diss_var]
collist = (['NPI'] + diss_var if multi else ['NPI'] + [diss_var])
d_use = {x: object for x in diss_vars if DTYPES[variable] == 'string'}
return collist, d_use
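# Note (added): for dissemination files the column list is the raw 'NPI' header
# plus the long dissemination column name(s) mapped from USE_VAR_LIST_DICT; for
# the pre-built single-variable (NBER-style) files it is ['npi', variable], or
# ['npi', 'seq', variable] when the variable spans multiple columns.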
def locate_file(folder, year, month, variable):
    '''
    Locate the source file for (year, month, variable): prefer a pre-built
    single-variable file, fall back to the dissemination zip contents (or the
    secondary-location files for ploc2* variables), and return None otherwise.
    '''
paths1 = get_filepaths_from_single_variable_files(variable, folder, False)
if not variable.startswith('ploc2'):
paths2 = get_filepaths_from_dissemination_zips(folder)
else:
paths2 = get_secondary_loc_filepaths_from_dissemination_zips(folder)
try:
return paths1[(year, month)]
except KeyError:
try:
return paths2[(year, month)]
except KeyError:
return None
def read_and_process_df(folder, year, month, variable):
'''
Locates and reads in year-month-variable df from disk,
checks and converts dtypes, makes consistent variable names,
and adds a standardized month column
'''
file_path = locate_file(folder, '%s' % year, '%s' % month, variable)
if file_path:
df = process_filepath_to_df(file_path, variable)
df['month'] = pd.to_datetime('%s-%s' % (year, month))
return df
def read_and_process_weekly_updates(folder, variable):
"""
"""
filepaths = which_weekly_dissemination_zips_are_updates(folder)
if filepaths:
updates = pd.concat(
[process_filepath_to_df(f[1], variable).assign(
week=pd.to_datetime(f[0][0]))
for f in filepaths])
updates['month'] = (pd.to_datetime(updates.week.dt.year.astype(str)
+ '-'
+ updates.week.dt.month.astype(str) + '-' + '1'))
updates = (updates.dropna()
.groupby(['npi', 'month'])
.max()
.reset_index()
.merge(updates)
.drop(columns='week'))
return updates
def process_filepath_to_df(file_path, variable):
"""
"""
is_dissem_file = len(file_path.split('/')) > 6
is_dta_file = os.path.splitext(file_path)[1] == '.dta'
is_pl_file = ('pl_pfile_' in file_path) and is_dissem_file
collist, d_use = column_details(variable, is_dissem_file, is_dta_file)
df = (pd.read_csv(file_path, usecols=collist, dtype=d_use)
if file_path.endswith('.csv')
else pd.read_stata(file_path, columns=collist))
if is_pl_file:
df = (pd.concat([df, df.groupby('NPI').cumcount() + 1], axis=1)
.rename(columns={0: 'seq'}))
if (not is_dissem_file
and variable not in df.columns
and variable.lower() in df.columns):
df = df.rename(columns={variable.lower(): variable})
df = convert_dtypes(df)
df = reformat(df, variable, is_dissem_file)
return df
def reformat(df, variable, is_dissem_file):
    '''
    Reshapes dissemination files to long format (one row per NPI and sequence
    number) for multi-column variables, or renames the long dissemination
    headers to their short equivalents.
    '''
multi = True if isinstance(USE_VAR_LIST_DICT[variable], list) else False
if is_dissem_file and multi:
stb = list(set([x.split('_')[0] for x in USE_VAR_LIST_DICT[variable]]))
assert len(stb) == 1
stb = stb[0] + '_'
df = pd.wide_to_long(df, [stb], i="NPI", j="seq").dropna()
df = df.reset_index().rename(columns={'NPI': 'npi', stb: variable})
elif is_dissem_file:
df = df.rename(columns={x: {**USE_VAR_LIST_DICT_REVERSE,
**{'seq': 'seq'}}[x]
for x in df.columns})
return df
def process_variable(folder, variable, searchlist, final_weekly_updates=True):
    '''
    Builds the full panel for `variable`: reads each (year, month) in
    `searchlist`, concatenates the results, and optionally folds in the weekly
    update files so the panel is unique on (npi, month).
    '''
# searchlist = [x for x in searchlist if x != (2011, 3)]
df_list = []
for (year, month) in searchlist:
print(year, month)
if variable == "PTAXGROUP":
try:
df = read_and_process_df(folder, year, month, variable)
except ValueError as err:
assert year < 2012
else:
df = read_and_process_df(folder, year, month, variable)
df_list.append(df)
df = pd.concat(df_list, axis=0) if df_list else None
if df_list and final_weekly_updates:
u = read_and_process_weekly_updates(folder, variable)
if isinstance(u, pd.DataFrame):
df = df.merge(u, on=['npi', 'month'], how='outer', indicator=True)
if (df._merge == "right_only").sum() != 0:
df.loc[df._merge == "right_only",
'%s_x' % variable] = df['%s_y' % variable]
if (df._merge == "both").sum() != 0:
df.loc[df._merge == "both",
'%s_x' % variable] = df['%s_y' % variable]
df = (df.drop(columns=['_merge', '%s_y' % variable])
.rename(columns={'%s_x' % variable: variable}))
assert (df[['npi', 'month']].drop_duplicates().shape[0]
== df.shape[0])
return df
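# Note (added): the weekly-update merge above uses indicator=True so that rows
# found only in the weekly files ("right_only") and rows present in both frames
# take the weekly value before the helper columns are dropped, leaving exactly
# one row per (npi, month) as asserted.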
def sanitize_csv_for_update(df, variable):
df['month'] = pd.to_datetime(df.month)
if DTYPES[variable] == 'datetime64[ns]':
df[variable] = pd.to_datetime(df[variable])
elif variable == 'ploctel':
# Still not sure the ploctel update is working
df[variable] = df[variable].astype(str)
df.loc[df.ploctel.str.endswith('.0'),
'ploctel'] = df.ploctel.str[:-2]
df[variable] = df[variable].astype(DTYPES[variable])
else:
df[variable] = df[variable].astype(DTYPES[variable])
return df
def main_process_variable(variable, update):
# Should figure out NPPES_Data_Dissemination_March_2011 because it's weird;
# deleting for now
if not update:
print(f'Making new: {variable}')
searchlist = [x for x in nppes_month_list() if x != (2011, 3)]
df = process_variable(RAW_DATA_DIR, variable, searchlist)
df.to_csv(os.path.join(DATA_DIR, '%s.csv' % variable),
index=False)
else:
print(f'Updating: {variable}')
df = pd.read_csv(os.path.join(DATA_DIR, '%s.csv' % variable))
df = sanitize_csv_for_update(df, variable)
last_month = max(list(df.month.value_counts().index))
searchlist = [x for x in nppes_month_list() if
( | pd.to_datetime('%s-%s-01' % (x[0], x[1])) | pandas.to_datetime |
import copy
import importlib
import itertools
import os
import sys
import warnings
import numpy as np
import pandas as pd
try:
import ixmp
has_ix = True
except ImportError:
has_ix = False
from pyam import plotting
from pyam.logger import logger
from pyam.run_control import run_control
from pyam.utils import (
write_sheet,
read_ix,
read_files,
read_pandas,
format_data,
pattern_match,
years_match,
isstr,
islistable,
cast_years_to_int,
META_IDX,
YEAR_IDX,
REGION_IDX,
IAMC_IDX,
SORT_IDX,
LONG_IDX,
)
from pyam.timeseries import fill_series
class IamDataFrame(object):
"""This class is a wrapper for dataframes following the IAMC format.
It provides a number of diagnostic features (including validation of data,
completeness of variables provided) as well as a number of visualization
and plotting tools.
"""
def __init__(self, data, **kwargs):
"""Initialize an instance of an IamDataFrame
Parameters
----------
data: ixmp.TimeSeries, ixmp.Scenario, pd.DataFrame or data file
an instance of an TimeSeries or Scenario (requires `ixmp`),
or pd.DataFrame or data file with IAMC-format data columns.
A pd.DataFrame can have the required data as columns or index.
Special support is provided for data files downloaded directly from
IIASA SSP and RCP databases. If you run into any problems loading
data, please make an issue at:
https://github.com/IAMconsortium/pyam/issues
"""
# import data from pd.DataFrame or read from source
if isinstance(data, pd.DataFrame):
self.data = format_data(data.copy())
elif has_ix and isinstance(data, ixmp.TimeSeries):
self.data = read_ix(data, **kwargs)
else:
self.data = read_files(data, **kwargs)
# cast year column to `int` if necessary
if not self.data.year.dtype == 'int64':
self.data.year = cast_years_to_int(self.data.year)
# define a dataframe for categorization and other metadata indicators
self.meta = self.data[META_IDX].drop_duplicates().set_index(META_IDX)
self.reset_exclude()
# execute user-defined code
if 'exec' in run_control():
self._execute_run_control()
def __getitem__(self, key):
_key_check = [key] if isstr(key) else key
if set(_key_check).issubset(self.meta.columns):
return self.meta.__getitem__(key)
else:
return self.data.__getitem__(key)
def __setitem__(self, key, value):
_key_check = [key] if isstr(key) else key
if set(_key_check).issubset(self.meta.columns):
return self.meta.__setitem__(key, value)
else:
return self.data.__setitem__(key, value)
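    # Note (added): item access is routed by column name; keys that all belong
    # to meta columns (e.g. the 'exclude' indicator set up in the constructor)
    # read and write self.meta, while anything else falls through to the
    # timeseries dataframe, e.g. df['year'].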
def __len__(self):
return self.data.__len__()
def _execute_run_control(self):
for module_block in run_control()['exec']:
fname = module_block['file']
functions = module_block['functions']
dirname = os.path.dirname(fname)
if dirname:
sys.path.append(dirname)
module = os.path.basename(fname).split('.')[0]
mod = importlib.import_module(module)
for func in functions:
f = getattr(mod, func)
f(self)
def head(self, *args, **kwargs):
"""Identical to pd.DataFrame.head() operating on data"""
return self.data.head(*args, **kwargs)
def tail(self, *args, **kwargs):
"""Identical to pd.DataFrame.tail() operating on data"""
return self.data.tail(*args, **kwargs)
def models(self):
"""Get a list of models"""
return pd.Series(self.meta.index.levels[0])
def scenarios(self):
"""Get a list of scenarios"""
return | pd.Series(self.meta.index.levels[1]) | pandas.Series |
from flask import Flask, jsonify, request
from flask_cors import CORS
# configuration
DEBUG = True
# instantiate the app
app = Flask(__name__)
app.config.from_object(__name__)
# enable CORS
CORS(app, resources={r'/*': {'origins': '*'}})
@app.route('/', methods = ['GET'])
def get_articles():
    #Run here: return a fixed JSON payload as a basic health check
return jsonify({"Hello":"World"})
#Set the recommended program here
recommendedProgram = "NA"
@app.route('/get-population', methods = ['GET', 'POST'])
def recommend_program():
if request.method == 'POST':
#Run here recommend program
#import libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
year = request.form.get('year')
rooms = request.form.get('rooms')
fulltime = request.form.get('fullTime')
parttime = request.form.get('partTime')
DATA_CSV_FILE = | pd.read_csv('population.csv') | pandas.read_csv |
# -*- coding: utf-8 -*-
"""
Created on Wed Nov 3 15:28:09 2021
@author: <NAME>
"""
import pandas as pd
from collections import OrderedDict
import numpy as np
from numpy import linalg as LA
import math
from scipy.integrate import quad
import time
import copy
import itertools
def preprocessor(DataFrame):
"""
Function responsible for early preprocessing of the input data frames
- creates a list of body parts labeled by the neural net
- creates a trimmed frame, such that only relevant numerical data is included (i.e.: x, y coords and p-vals)
    Parameters
    ----------
    DataFrame : raw tracking output frame, header rows included
    Returns
    -------
    A tuple of (trimmed frame containing only the x, y and p-value columns,
    list of unique body-part names).
"""
ResetColNames = {
DataFrame.columns.values[Ind]:Ind for Ind in range(len(DataFrame.columns.values))
}
ProcessedFrame = DataFrame.rename(columns=ResetColNames).drop([0], axis = 1)
BodyParts = list(OrderedDict.fromkeys(list(ProcessedFrame.iloc[0,])))
BodyParts = [Names for Names in BodyParts if Names != "bodyparts"]
TrimmedFrame = ProcessedFrame.iloc[2:,]
TrimmedFrame = TrimmedFrame.reset_index(drop=True)
return(TrimmedFrame, BodyParts)
def checkPVals(DataFrame, CutOff):
"""
Function responsible for processing p-values, namely omitting pvalues and their associated
coordinates by forward filling the last valid observation as defined by the cutoff limit (user defined)
    Parameters
    ----------
    DataFrame : preprocessed coordinate frame
    CutOff : user-defined likelihood threshold below which values are masked
    The three columns associated with each label (x, y, p-val) are handled
    together in the while loop and forward-filled from the last valid
    observation whenever the p-value falls below the cutoff.
    Returns
    -------
    The DataFrame with low-likelihood coordinates replaced by the last valid
    observation.
"""
#This loop assigns the first p-values in all label columns = 1, serve as reference point.
for Cols in DataFrame.columns.values:
if Cols % 3 == 0:
if float(DataFrame[Cols][0]) < CutOff:
DataFrame.loc[0, Cols] = 1.0
Cols = 3
#While loop iterating through every 3rd column (p-val column)
#ffill = forward fill, propagates last valid observation forward.
#Values with p-val < cutoff masked.
while Cols <= max(DataFrame.columns.values):
Query = [i for i in range(Cols-2, Cols+1)]
DataFrame[Query] = DataFrame[Query].mask(pd.to_numeric(DataFrame[Cols], downcast="float") < CutOff).ffill()
Cols += 3
return(DataFrame)
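# Hedged illustration (added): with CutOff = 0.9, a label whose columns hold
#   x      y      p-val
#   10.0   12.0   0.95
#   11.0   13.0   0.40   <- masked; all three values forward-filled to 10.0, 12.0, 0.95
# keeps the last confident coordinate until tracking recovers.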
def predictLabelLocation(DataFrame, CutOff, LabelsFrom, colNames, PredictLabel):
"""
    Function responsible for estimating the position of a poorly tracked label
    (p-value below the cutoff) from the first available adjacent label, using
    the previous frame's direction vector and the parallelogram law.
    Parameters
    ----------
    DataFrame : preprocessed coordinate frame
    CutOff : likelihood threshold
    LabelsFrom : adjacent labels the prediction may be anchored to
    colNames : body-part names used to build the temporary column headers
    PredictLabel : the label whose position is being predicted
    Returns
    -------
    A tuple of (DataFrame with predicted coordinates filled in, list of the
    predicted label's p-values).
"""
OldColumns = list(DataFrame.columns.values)
FeatureList = ["_x", "_y", "_p-val"]
NewCols = [f"{ColName}{Feature}" for ColName, Feature in itertools.product(colNames, FeatureList)]
DataFrame = DataFrame.rename(columns={DataFrame.columns.values[Ind]:NewCols[Ind] for Ind in range(len(NewCols))})
NewColumns = list(DataFrame.columns.values)
for Cols in DataFrame.columns.values:
DataFrame[Cols] = pd.to_numeric(DataFrame[Cols], downcast="float")
#If the direction vector remains the same, there will be an over-estimation in the new position of the head if you continue to reuse
#the same direction vector. Maybe scale the direction vector.
ReferenceDirection = []
ScaledVec = []
BodyPart = []
for Ind, PVals in enumerate(DataFrame[f"{PredictLabel}_p-val"]):
if (PVals < CutOff):
##############
#Choose surrounding label
##############
AdjacentLabel = [Label for Label in LabelsFrom if DataFrame[f"{Label}_p-val"][Ind] >= CutOff]
if ((len(AdjacentLabel) != 0) and (Ind != 0)):
if (DataFrame[f"{PredictLabel}_p-val"][Ind - 1] >= CutOff):
##########
#Create Direction Vectors between the first adjacent label available in the list
#Parallelogram law
##########
DirectionVec = [DataFrame[f"{PredictLabel}_x"][Ind - 1] - DataFrame[f"{AdjacentLabel[0]}_x"][Ind - 1],
DataFrame[f"{PredictLabel}_y"][Ind - 1] - DataFrame[f"{AdjacentLabel[0]}_y"][Ind - 1]]
ReferenceDirection = DirectionVec
elif ((DataFrame[f"{PredictLabel}_p-val"][Ind - 1] < CutOff) and (len(ReferenceDirection) == 0)):
ReferenceDirection = [0, 0]
###########
#Compute the displacement between available surronding label
#Compute the vector addition (parallelogram law) and scale the Ind - 1 first available adjacent label by it
###########
Displacement = [DataFrame[f"{AdjacentLabel[0]}_x"][Ind] - DataFrame[f"{AdjacentLabel[0]}_x"][Ind - 1],
DataFrame[f"{AdjacentLabel[0]}_y"][Ind] - DataFrame[f"{AdjacentLabel[0]}_y"][Ind - 1]]
Scale = np.add(ReferenceDirection, Displacement)
DataFrame[f"{PredictLabel}_x"][Ind] = DataFrame[f"{AdjacentLabel[0]}_x"][Ind - 1] + Scale[0]
DataFrame[f"{PredictLabel}_y"][Ind] = DataFrame[f"{AdjacentLabel[0]}_y"][Ind - 1] + Scale[1]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 4.5
Norm = LA.norm(Scale)
# elif (len(AdjacentLabel) == 0):
# print(AdjacentLabel)
# print(max(ScaledVec), min(ScaledVec), np.std(ScaledVec), np.average(ScaledVec))
# print(BodyPart[ScaledVec.index(max(ScaledVec))])
PVAL_PREDICTEDLABEL = list(DataFrame[f"{PredictLabel}_p-val"])
DataFrame = DataFrame.rename(columns={NewColumns[Ind]: OldColumns[Ind] for Ind in range(len(OldColumns))})
return(DataFrame, PVAL_PREDICTEDLABEL)
def predictLabel_RotationMatrix(DataFrame, CutOff, LabelsFrom, colNames, PredictLabel):
OldColumns = list(DataFrame.columns.values)
FeatureList = ["_x", "_y", "_p-val"]
NewCols = [f"{ColName}{Feature}" for ColName, Feature in itertools.product(colNames, FeatureList)]
DataFrame = DataFrame.rename(columns={DataFrame.columns.values[Ind]:NewCols[Ind] for Ind in range(len(NewCols))})
NewColumns = list(DataFrame.columns.values)
for Cols in DataFrame.columns.values:
DataFrame[Cols] = pd.to_numeric(DataFrame[Cols], downcast="float")
ReferenceMid = []
FactorDict = {"Angle_Right":0, "Angle_Left":0}
VectorAngle = lambda V1, V2: math.acos((np.dot(V2, V1))/((np.linalg.norm(V2))*(np.linalg.norm(V1))))
RotationMatrixCW = lambda Theta: np.array(
[[math.cos(Theta), math.sin(Theta)],
[-1*math.sin(Theta), math.cos(Theta)]]
)
RotationMatrixCCW = lambda Theta: np.array(
[[math.cos(Theta), -1*math.sin(Theta)],
[math.sin(Theta), math.cos(Theta)]]
)
for Ind, PVals in enumerate(DataFrame[f"{PredictLabel}_p-val"]):
AdjacentLabel = [Label for Label in LabelsFrom if DataFrame[f"{Label}_p-val"][Ind] >= CutOff]
#If the Head label is poorly track initiate this statement
if (PVals < CutOff):
if ((DataFrame[f"{LabelsFrom[0]}_p-val"][Ind] >= CutOff) and (DataFrame[f"{LabelsFrom[1]}_p-val"][Ind] >= CutOff)):
MidPoint = [(DataFrame[f"{LabelsFrom[0]}_x"][Ind] + DataFrame[f"{LabelsFrom[1]}_x"][Ind])/2,
(DataFrame[f"{LabelsFrom[0]}_y"][Ind] + DataFrame[f"{LabelsFrom[1]}_y"][Ind])/2]
ReferenceMid = MidPoint
DataFrame[f"{PredictLabel}_x"][Ind] = ReferenceMid[0]
DataFrame[f"{PredictLabel}_y"][Ind] = ReferenceMid[1]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 3.5
elif (((DataFrame[f"{LabelsFrom[0]}_p-val"][Ind] or DataFrame[f"{LabelsFrom[1]}_p-val"][Ind]) < CutOff)
and (len(AdjacentLabel) != 0) and (DataFrame["Body_p-val"][Ind] >= CutOff)):
#Right
if ((DataFrame[f"{LabelsFrom[0]}_p-val"][Ind]) >= CutOff):
DVec_Right = [DataFrame[f"{LabelsFrom[0]}_x"][Ind] - DataFrame["Body_x"][Ind],
DataFrame[f"{LabelsFrom[0]}_y"][Ind] - DataFrame["Body_y"][Ind]]
ScaleRoation = np.dot(RotationMatrixCW(Theta=FactorDict["Angle_Right"]), DVec_Right)
DataFrame[f"{PredictLabel}_x"][Ind] = ScaleRoation[0] + DataFrame["Body_x"][Ind]
DataFrame[f"{PredictLabel}_y"][Ind] = ScaleRoation[1] + DataFrame["Body_y"][Ind]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 4.5
#Left
elif ((DataFrame[f"{LabelsFrom[1]}_p-val"][Ind]) >= CutOff):
DVec_Left = [DataFrame[f"{LabelsFrom[1]}_x"][Ind] - DataFrame["Body_x"][Ind],
DataFrame[f"{LabelsFrom[1]}_y"][Ind] - DataFrame["Body_y"][Ind]]
ScaleRoation = np.dot(RotationMatrixCCW(Theta=FactorDict["Angle_Left"]), DVec_Left)
DataFrame[f"{PredictLabel}_x"][Ind] = ScaleRoation[0] + DataFrame["Body_x"][Ind]
DataFrame[f"{PredictLabel}_y"][Ind] = ScaleRoation[1] + DataFrame["Body_y"][Ind]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 2.5
PVAL_PREDICTEDLABEL = list(DataFrame[f"{PredictLabel}_p-val"])
DataFrame = DataFrame.rename(columns={NewColumns[Ind]: OldColumns[Ind] for Ind in range(len(OldColumns))})
return(DataFrame, PVAL_PREDICTEDLABEL)
def predictLabel_MidpointAdjacent(DataFrame, CutOff, LabelsFrom, colNames, PredictLabel):
OldColumns = list(DataFrame.columns.values)
FeatureList = ["_x", "_y", "_p-val"]
NewCols = [f"{ColName}{Feature}" for ColName, Feature in itertools.product(colNames, FeatureList)]
DataFrame = DataFrame.rename(columns={DataFrame.columns.values[Ind]:NewCols[Ind] for Ind in range(len(NewCols))})
NewColumns = list(DataFrame.columns.values)
for Cols in DataFrame.columns.values:
DataFrame[Cols] = pd.to_numeric(DataFrame[Cols], downcast="float")
ReferenceMid = []
AngleDict = {"Angle_Right":0, "Angle_Left":0}
VectorAngle = lambda V1, V2: math.acos((np.dot(V2, V1))/((np.linalg.norm(V2))*(np.linalg.norm(V1))))
RotationMatrixCW = lambda Theta: np.array(
[[np.cos(Theta), np.sin(Theta)],
[-1*np.sin(Theta), np.cos(Theta)]]
)
RotationMatrixCCW = lambda Theta: np.array(
[[math.cos(Theta), -1*math.sin(Theta)],
[math.sin(Theta), math.cos(Theta)]]
)
# print(RotationMatrixCW(Theta = np.pi/2))
# print(RotationMatrixCCW(Theta = np.pi/2))
# breakpoint()
AngleList_Right = []
AngleList_Left = []
for Ind, PVals in enumerate(DataFrame[f"{PredictLabel}_p-val"]):
if ((PVals >= CutOff) and (DataFrame["Body_p-val"][Ind] >= CutOff)
and (DataFrame[f"{LabelsFrom[0]}_p-val"][Ind] >= CutOff) and (DataFrame[f"{LabelsFrom[1]}_p-val"][Ind] >= CutOff)):
DirectionVectorBody_Head = [DataFrame[f"{PredictLabel}_x"][Ind] - DataFrame["Body_x"][Ind],
DataFrame[f"{PredictLabel}_y"][Ind] - DataFrame["Body_y"][Ind]]
DirectionVectorR_Ear = [DataFrame[f"{LabelsFrom[0]}_x"][Ind] - DataFrame["Body_x"][Ind],
DataFrame[f"{LabelsFrom[0]}_y"][Ind] - DataFrame["Body_y"][Ind]]
DirectionVectorL_Ear = [DataFrame[f"{LabelsFrom[1]}_x"][Ind] - DataFrame["Body_x"][Ind],
DataFrame[f"{LabelsFrom[1]}_y"][Ind] - DataFrame["Body_y"][Ind]]
ThetaRight = VectorAngle(DirectionVectorBody_Head, DirectionVectorR_Ear)
ThetaLeft = VectorAngle(DirectionVectorBody_Head, DirectionVectorL_Ear)
AngleList_Right.append(ThetaRight)
AngleList_Left.append(ThetaLeft)
Theta_Right = np.average(AngleList_Right)
Theta_Left = np.average(AngleList_Left)
Theta_Right_std = np.std(AngleList_Right)
Theta_Left_std = np.std(AngleList_Left)
Counter = 0
C1 = 0
C2 = 0
for Ind, PVals in enumerate(DataFrame[f"{PredictLabel}_p-val"]):
# AdjacentLabel = [Label for Label in LabelsFrom if DataFrame[f"{Label}_p-val"][Ind] >= CutOff]
#If the Head label is poorly track initiate this statement
if (PVals < CutOff):
if ((DataFrame[f"{LabelsFrom[0]}_p-val"][Ind] >= CutOff) and (DataFrame[f"{LabelsFrom[1]}_p-val"][Ind] >= CutOff)):
MidPoint = [(DataFrame[f"{LabelsFrom[0]}_x"][Ind] + DataFrame[f"{LabelsFrom[1]}_x"][Ind])/2,
(DataFrame[f"{LabelsFrom[0]}_y"][Ind] + DataFrame[f"{LabelsFrom[1]}_y"][Ind])/2]
ReferenceMid = MidPoint
elif ((DataFrame[f"{LabelsFrom[0]}_p-val"][Ind] < CutOff) or (DataFrame[f"{LabelsFrom[1]}_p-val"][Ind] < CutOff)):
##############
#Choose surrounding label
##############
AdjacentLabel = [Label for Label in LabelsFrom if DataFrame[f"{Label}_p-val"][Ind] >= CutOff]
if ((len(AdjacentLabel) != 0) and (Ind != 0)):
if (DataFrame[f"{PredictLabel}_p-val"][Ind - 1] >= CutOff):
##########
#Create Direction Vectors between the first adjacent label available in the list
#Parallelogram law
##########
DirectionVec = [DataFrame[f"{PredictLabel}_x"][Ind - 1] - DataFrame[f"{AdjacentLabel[0]}_x"][Ind - 1],
DataFrame[f"{PredictLabel}_y"][Ind - 1] - DataFrame[f"{AdjacentLabel[0]}_y"][Ind - 1]]
ReferenceDirection = DirectionVec
elif ((DataFrame[f"{PredictLabel}_p-val"][Ind - 1] < CutOff) and (len(ReferenceDirection) == 0)):
ReferenceDirection = [0, 0]
###########
#Compute the displacement between available surronding label
#Compute the vector addition (parallelogram law) and scale the Ind - 1 first available adjacent label by it
###########
Displacement = [DataFrame[f"{AdjacentLabel[0]}_x"][Ind] - DataFrame[f"{AdjacentLabel[0]}_x"][Ind - 1],
DataFrame[f"{AdjacentLabel[0]}_y"][Ind] - DataFrame[f"{AdjacentLabel[0]}_y"][Ind - 1]]
Scale = np.add(ReferenceDirection, Displacement)
#Reference Mid is a 2D coordinate
ReferenceMid = [DataFrame[f"{AdjacentLabel[0]}_x"][Ind - 1] + Scale[0], DataFrame[f"{AdjacentLabel[0]}_y"][Ind - 1] + Scale[1]]
if DataFrame["Body_p-val"][Ind] >= CutOff:
BodyAdjacent_Vector = [DataFrame[f"{AdjacentLabel[0]}_x"][Ind] - DataFrame["Body_x"][Ind],
DataFrame[f"{AdjacentLabel[0]}_y"][Ind] - DataFrame["Body_y"][Ind]]
#Create a vector from the body label to the midpoint
ScaledVec = [ReferenceMid[0] - DataFrame["Body_x"][Ind],
ReferenceMid[1] - DataFrame["Body_y"][Ind]]
Theta = VectorAngle(BodyAdjacent_Vector, ScaledVec)
VectorPosition = np.cross(BodyAdjacent_Vector, ScaledVec)
if AdjacentLabel[0] == "Left_Ear":
if (VectorPosition < 0):
C1 += 1
RotateAngle = Theta_Left + Theta
RotateVector = RotationMatrixCCW(RotateAngle).dot(ScaledVec)
ReferenceMid = [DataFrame["Body_x"][Ind] + RotateVector[0], DataFrame["Body_y"][Ind] + RotateVector[1]]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 2.5
#Correct Position
elif (VectorPosition > 0):
C2 += 1
if (Theta < (Theta_Left - (2 * Theta_Left_std))):
RotateAngle = Theta_Left - Theta
RotateVector = RotationMatrixCCW(RotateAngle).dot(ScaledVec)
ReferenceMid = [DataFrame["Body_x"][Ind] + RotateVector[0], DataFrame["Body_y"][Ind] + RotateVector[1]]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 2.5
elif (Theta > (Theta_Left + (2 * Theta_Left_std))):
RotateAngle = Theta - Theta_Left
RotateVector = RotationMatrixCW(RotateAngle).dot(ScaledVec)
ReferenceMid = [DataFrame["Body_x"][Ind] + RotateVector[0], DataFrame["Body_y"][Ind] + RotateVector[1]]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 2.5
else:
DataFrame[f"{PredictLabel}_p-val"][Ind] = 4.0
elif AdjacentLabel[0] == "Right_Ear":
if VectorPosition < 0:
if (Theta < (Theta_Right - (2 * Theta_Right_std))):
RotateAngle = Theta_Right - Theta
RotateVector = RotationMatrixCW(RotateAngle).dot(ScaledVec)
ReferenceMid = [DataFrame["Body_x"][Ind] + RotateVector[0], DataFrame["Body_y"][Ind] + RotateVector[1]]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 1.5
elif (Theta > (Theta_Right + (2 * Theta_Right_std))):
RotateAngle = Theta - Theta_Right
RotateVector = RotationMatrixCCW(RotateAngle).dot(ScaledVec)
ReferenceMid = [DataFrame["Body_x"][Ind] + RotateVector[0], DataFrame["Body_y"][Ind] + RotateVector[1]]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 1.5
else:
DataFrame[f"{PredictLabel}_p-val"][Ind] = 4.0
elif VectorPosition > 0:
RotateAngle = Theta_Right + Theta
RotateVector = np.dot(RotationMatrixCW(RotateAngle), ScaledVec)
ReferenceMid = [DataFrame["Body_x"][Ind] + RotateVector[0], DataFrame["Body_y"][Ind] + RotateVector[1]]
DataFrame[f"{PredictLabel}_p-val"][Ind] = 1.5
# DataFrame[f"{PredictLabel}_p-val"][Ind] = 4.0
elif ((len(AdjacentLabel) == 0)):
Counter += 1
try:
DataFrame[f"{PredictLabel}_x"][Ind] = ReferenceMid[0]
DataFrame[f"{PredictLabel}_y"][Ind] = ReferenceMid[1]
except IndexError:
pass
if DataFrame[f"{PredictLabel}_p-val"][Ind] < 1.0:
DataFrame[f"{PredictLabel}_p-val"][Ind] = 3.5
# print(C1, C2)
# breakpoint()
PVAL_PREDICTEDLABEL = list(DataFrame[f"{PredictLabel}_p-val"])
#DataFrame.to_csv(r"F:\WorkFiles_XCELLeration\Video\DifferentApproaches\Combined_PgramRotation.csv")
DataFrame = DataFrame.rename(columns={NewColumns[Ind]: OldColumns[Ind] for Ind in range(len(OldColumns))})
return(DataFrame, PVAL_PREDICTEDLABEL)
def computeEuclideanDistance(DataFrame, BodyParts):
"""
Function responsible for computing the interframe Euclidean Distance
Applies the 2D Euclidean distance formula between frames on the coordinates of each tracked
label from DLC.
    d(p, q) = sqrt(sum((q_i - p_i) ** 2))
    - where p and q are the 2D cartesian coordinates of the same label in
    sequential frames.
    Parameters
    ----------
    DataFrame : coordinate frame; BodyParts : list of body-part names
    Returns
    -------
    A DataFrame with one column per body part holding the interframe distances.
"""
DistanceVectors = []
ColsToDrop = [Cols for Cols in DataFrame if Cols % 3 == 0]
DataFrame = DataFrame.drop(ColsToDrop, axis = 1)
CreateDirectionalVectors = lambda Vec1, Vec2: [Vals2 - Vals1 for Vals1, Vals2 in zip(Vec1, Vec2)]
ComputeNorm = lambda Vec: np.sqrt(sum(x ** 2 for x in Vec))
for Cols1, Cols2 in zip(DataFrame.columns.values[:-1], DataFrame.columns.values[1:]):
if Cols2 - Cols1 == 1:
VectorizedFrame = list(zip(pd.to_numeric(DataFrame[Cols1], downcast="float"), pd.to_numeric(DataFrame[Cols2], downcast="float")))
DirectionalVectors = list(map(CreateDirectionalVectors, VectorizedFrame[:-1], VectorizedFrame[1:]))
Norm = list(map(ComputeNorm, DirectionalVectors))
DistanceVectors.append(Norm)
EDFrame = pd.DataFrame(data={BodyParts[Ind]: DistanceVectors[Ind] for Ind in range(len(DistanceVectors))})
return(EDFrame)
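# Worked example (added): a label moving from (0, 0) in one frame to (3, 4) in the
# next yields the direction vector [3, 4] and an interframe distance of
# sqrt(3**2 + 4**2) = 5 pixels; one such distance is produced per consecutive
# frame pair and per body part.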
def computeHourlySums(DataFrameList):
"""
Function responsible for creating hourly sums, that is, the summed Euclidean
Distance for that hour (or .csv input). This represents the total motility of the
animal in the given time frame.
Parameters
----------
Data frame list as input
Returns
-------
A single dataframe containing the sums for that hour (or .csv input). The index will
act as the hour or timescale for that particular .csv, therefore it is important to ensure
that .csv files are in order.
"""
SumLists = []
for Frames in range(len(DataFrameList)):
SumFunction = DataFrameList[Frames].apply(np.sum, axis=0)
SummedFrame = | pd.DataFrame(SumFunction) | pandas.DataFrame |
import verboselogs, logging
logger = verboselogs.VerboseLogger(__name__)
import math
import numpy as np
from sqlalchemy.orm import sessionmaker
from sqlalchemy import or_
from .utils import db_retry, retrieve_model_hash_from_id, save_db_objects, sort_predictions_and_labels, AVAILABLE_TIEBREAKERS
from triage.component.results_schema import Model
from triage.util.db import scoped_session
from triage.util.random import generate_python_random_seed
import ohio.ext.pandas
import pandas as pd
class Predictor:
expected_matrix_ts_format = "%Y-%m-%d %H:%M:%S"
available_tiebreakers = AVAILABLE_TIEBREAKERS
def __init__(
self,
model_storage_engine,
db_engine,
rank_order,
replace=True,
save_predictions=True
):
"""Encapsulates the task of generating predictions on an arbitrary
dataset and storing the results
Args:
model_storage_engine (catwalk.storage.ModelStorageEngine)
db_engine (sqlalchemy.engine)
rank_order
"""
self.model_storage_engine = model_storage_engine
self.db_engine = db_engine
self.rank_order = rank_order
self.replace = replace
self.save_predictions = save_predictions
@property
def sessionmaker(self):
return sessionmaker(bind=self.db_engine)
@db_retry
def load_model(self, model_id):
"""Downloads the cached model associated with a given model id
Args:
model_id (int) The id of a given model in the database
Returns:
A python object which implements .predict()
"""
model_hash = retrieve_model_hash_from_id(self.db_engine, model_id)
logger.spam(f"Checking for model_hash {model_hash} in store")
if self.model_storage_engine.exists(model_hash):
return self.model_storage_engine.load(model_hash)
@db_retry
def delete_model(self, model_id):
"""Deletes the cached model associated with a given model id
Args:
model_id (int) The id of a given model in the database
"""
model_hash = retrieve_model_hash_from_id(self.db_engine, model_id)
self.model_storage_engine.delete(model_hash)
@db_retry
def _existing_predictions(self, Prediction_obj, session, model_id, matrix_store):
logger.debug(f"Looking for existing predictions for model {model_id} on {matrix_store.matrix_type.string_name} matrix [{matrix_store.uuid}]")
return (
session.query(Prediction_obj)
.filter_by(model_id=model_id)
.filter(Prediction_obj.as_of_date.in_(matrix_store.as_of_dates))
)
@db_retry
def needs_predictions(self, matrix_store, model_id):
"""Returns whether or not the given matrix and model are lacking any predictions
Args:
matrix_store (triage.component.catwalk.storage.MatrixStore) A matrix with metadata
model_id (int) A database ID of a model
The way we check is by grabbing all the distinct as-of-dates in the predictions table
for this model and matrix. If there are more as-of-dates defined in the matrix's metadata
than are in the table, we need predictions
"""
if not self.save_predictions:
return False
session = self.sessionmaker()
prediction_obj = matrix_store.matrix_type.prediction_obj
as_of_dates_in_db = set(
as_of_date.date()
for (as_of_date,) in session.query(prediction_obj).filter_by(
model_id=model_id,
matrix_uuid=matrix_store.uuid
).distinct(prediction_obj.as_of_date).values("as_of_date")
)
as_of_dates_needed = set(matrix_store.as_of_dates)
needed = bool(as_of_dates_needed - as_of_dates_in_db)
session.close()
return needed
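    # Note (added): needs_predictions() compares the distinct as-of-dates already
    # stored for this (model, matrix) pair against the matrix metadata; predictions
    # are regenerated only when at least one as-of-date is missing and
    # save_predictions is enabled.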
@db_retry
def _load_saved_predictions(self, existing_predictions, matrix_store):
index = matrix_store.index
score_lookup = {}
for prediction in existing_predictions:
score_lookup[
(prediction.entity_id, prediction.as_of_date.date())
] = prediction.score
score_iterator = (
score_lookup[(entity_id, dt.date())] for (entity_id, dt) in index
)
return np.fromiter(score_iterator, float)
@db_retry
def _write_predictions_to_db(
self,
model_id,
matrix_store,
df,
misc_db_parameters,
Prediction_obj,
):
"""Writes given predictions to database
entity_ids, predictions, labels are expected to be in the same order
Args:
model_id (int) the id of the model associated with the given predictions
matrix_store (catwalk.storage.MatrixStore) the matrix and metadata
df (pd.DataFrame) with the following columns entity_id, as_of_date, score, label_value and rank_abs_no_ties, rank_abs_with_ties, rank_pct_no_ties, rank_pct_with_ties
predictions (iterable) predicted values
Prediction_obj (TrainPrediction or TestPrediction) table to store predictions to
"""
try:
session = self.sessionmaker()
existing_predictions = self._existing_predictions(
Prediction_obj, session, model_id, matrix_store
)
if existing_predictions.count() > 0:
existing_predictions.delete(synchronize_session=False)
logger.info(f"Found old predictions for model {model_id} on {matrix_store.matrix_type.string_name} matrix {matrix_store.uuid}. Those predictions were deleted.")
session.expire_all()
session.commit()
finally:
session.close()
test_label_timespan = matrix_store.metadata["label_timespan"]
record_stream = (
Prediction_obj(
model_id=int(model_id),
entity_id=int(row.entity_id),
as_of_date=row.as_of_date,
score=float(row.score),
label_value=int(row.label_value) if not math.isnan(row.label_value) else None,
rank_abs_no_ties = int(row.rank_abs_no_ties),
rank_abs_with_ties = int(row.rank_abs_with_ties),
rank_pct_no_ties = row.rank_pct_no_ties,
rank_pct_with_ties = row.rank_pct_with_ties,
matrix_uuid=matrix_store.uuid,
test_label_timespan=test_label_timespan,
**misc_db_parameters
) for row in df.itertuples()
)
save_db_objects(self.db_engine, record_stream)
def _write_metadata_to_db(self, model_id, matrix_uuid, matrix_type, random_seed):
orm_obj = matrix_type.prediction_metadata_obj(
model_id=model_id,
matrix_uuid=matrix_uuid,
tiebreaker_ordering=self.rank_order,
random_seed=random_seed,
predictions_saved=self.save_predictions,
)
session = self.sessionmaker()
session.merge(orm_obj)
session.commit()
session.close()
def _needs_ranks(self, model_id, matrix_uuid, matrix_type):
if self.replace:
logger.info("Replace flag set, will compute and store ranks regardless")
return True
with scoped_session(self.db_engine) as session:
# if the metadata is different (e.g. they changed the rank order)
# or there are any null ranks we need to rank
metadata_matches = session.query(session.query(matrix_type.prediction_metadata_obj).filter_by(
model_id=model_id,
matrix_uuid=matrix_uuid,
tiebreaker_ordering=self.rank_order,
).exists()).scalar()
if not metadata_matches:
logger.debug("Prediction metadata does not match what is in configuration"
", will compute and store ranks")
return True
any_nulls_in_ranks = session.query(session.query(matrix_type.prediction_obj)\
.filter(
matrix_type.prediction_obj.model_id == model_id,
matrix_type.prediction_obj.matrix_uuid == matrix_uuid,
or_(
matrix_type.prediction_obj.rank_abs_no_ties == None,
matrix_type.prediction_obj.rank_abs_with_ties == None,
matrix_type.prediction_obj.rank_pct_no_ties == None,
matrix_type.prediction_obj.rank_pct_with_ties == None,
)
).exists()).scalar()
if any_nulls_in_ranks:
logger.debug("At least one null in rankings in predictions table",
", will compute and store ranks")
return True
logger.debug("No need to recompute prediction ranks")
return False
def predict(self, model_id, matrix_store, misc_db_parameters, train_matrix_columns):
"""Generate predictions and store them in the database
Args:
model_id (int) the id of the trained model to predict based off of
matrix_store (catwalk.storage.MatrixStore) a wrapper for the
prediction matrix and metadata
misc_db_parameters (dict): attributes and values to add to each
TrainPrediction or TestPrediction object in the results schema
train_matrix_columns (list): The order of columns that the model
was trained on
Returns:
(np.Array) the generated prediction values
"""
# Setting the Prediction object type - TrainPrediction or TestPrediction
matrix_type = matrix_store.matrix_type
if not self.replace:
logger.info(
f"Replace flag not set, looking for old predictions for model id {model_id} "
f"on {matrix_store.matrix_type.string_name} matrix {matrix_store.uuid}"
)
try:
session = self.sessionmaker()
existing_predictions = self._existing_predictions(
matrix_type.prediction_obj, session, model_id, matrix_store
)
logger.spam(f"Existing predictions length: {existing_predictions.count()}, Length of matrix: {len(matrix_store.index)}")
if existing_predictions.count() == len(matrix_store.index):
logger.info(
f"Found old predictions for model id {model_id}, matrix {matrix_store.uuid}, returning saved versions"
)
return self._load_saved_predictions(existing_predictions, matrix_store)
finally:
session.close()
model = self.load_model(model_id)
if not model:
raise ValueError(f"Model id {model_id} not found")
logger.spam(f"Loaded model {model_id}")
# Labels are popped from matrix (i.e. they are removed and returned)
labels = matrix_store.labels
predictions = model.predict_proba(
matrix_store.matrix_with_sorted_columns(train_matrix_columns)
)[:, 1] # Returning only the scores for the label == 1
logger.debug(
f"Generated predictions for model {model_id} on {matrix_store.matrix_type.string_name} matrix {matrix_store.uuid}"
)
if self.save_predictions:
df = | pd.DataFrame(data=None, columns=None, index=matrix_store.index) | pandas.DataFrame |
import pandas as pd
from pandas import Period, offsets
from pandas.util import testing as tm
from pandas.tseries.frequencies import _period_code_map
class TestFreqConversion(tm.TestCase):
"Test frequency conversion of date objects"
def test_asfreq_corner(self):
val = Period(freq='A', year=2007)
result1 = val.asfreq('5t')
result2 = val.asfreq('t')
expected = Period('2007-12-31 23:59', freq='t')
self.assertEqual(result1.ordinal, expected.ordinal)
self.assertEqual(result1.freqstr, '5T')
self.assertEqual(result2.ordinal, expected.ordinal)
self.assertEqual(result2.freqstr, 'T')
def test_conv_annual(self):
# frequency conversion tests: from Annual Frequency
ival_A = Period(freq='A', year=2007)
ival_AJAN = Period(freq="A-JAN", year=2007)
ival_AJUN = Period(freq="A-JUN", year=2007)
ival_ANOV = Period(freq="A-NOV", year=2007)
ival_A_to_Q_start = Period(freq='Q', year=2007, quarter=1)
ival_A_to_Q_end = Period(freq='Q', year=2007, quarter=4)
ival_A_to_M_start = Period(freq='M', year=2007, month=1)
ival_A_to_M_end = Period(freq='M', year=2007, month=12)
ival_A_to_W_start = Period(freq='W', year=2007, month=1, day=1)
ival_A_to_W_end = Period(freq='W', year=2007, month=12, day=31)
ival_A_to_B_start = Period(freq='B', year=2007, month=1, day=1)
ival_A_to_B_end = Period(freq='B', year=2007, month=12, day=31)
ival_A_to_D_start = Period(freq='D', year=2007, month=1, day=1)
ival_A_to_D_end = | Period(freq='D', year=2007, month=12, day=31) | pandas.Period |
# coding: utf-8
# In[1]:
import pandas as pd
import numpy as np
edata = pd.read_csv("docword.enron.txt", skiprows=3, sep = ' ', header=None)
evocab = pd.read_csv("vocab.enron.txt", header=None)
print (evocab)
evocab.columns = ['word']
edata.columns = ['docid','wordid','freq']
# Taking a sample data set
edata = edata.iloc[:100,:]
evocab.index = evocab.index + 1
wc = edata.groupby('wordid')['freq'].sum()
#wc = egrouped['freq'].agg(np.sum)
print (wc)
in_list = wc
# In[ ]:
m = 1
# Input parameter 'm'
# while(True):
# m = input("Enter no. of randomized iterations required: ")
# try:
# m = int(m)
# break
# except:
# print("Enter valid number.")
# continue
# Calculating the dissimilarities and smoothing factors.
# Set the value of parameter m = the no. of iterations you require
Card = pd.Series(np.NAN)
DS=pd.Series(np.NAN)
idx_added = | pd.Series(np.NAN) | pandas.Series |
"""
This module handles the preprocessing steps
"""
import math
from typing import Dict, List, Tuple
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
import config as cnf
#pd.set_option('display.max_columns', None)
def generate_lags(df: pd.DataFrame,
lags: List[int]) -> pd.DataFrame:
"""
generate lags for passed time series
removes all rows with NaNs
assumes column V1 as timeseries identifier
assumes df is sorted
assumes value as column from which lags are computed
Params:
------
df : dataframe for which lags to be generated
lags : list of lags to be generated
Returns:
--------
df : dataframe with added lags
--------
"""
print('#### generating lags ####')
for lag in lags:
lagged_series = df.groupby('V1').apply(lambda x: x['value'].shift(lag))
df['lag_{}'.format(lag)]\
= lagged_series.T.squeeze().reset_index(drop=True)
return df
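# Usage sketch (illustrative): for a frame with columns 'V1' and 'value',
# generate_lags(df, lags=[1, 7]) appends columns 'lag_1' and 'lag_7' holding the
# value shifted by 1 and 7 steps within each series identified by 'V1'.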
def generate_steps_ahead(df: pd.DataFrame,
steps_ahead: int = 7) -> pd.DataFrame:
"""
generates the y ahead steps for corresponding dataframe
assumes column V1 as time series identifier
assumes df is sorted
assumes value as column from which steps ahead are computed
Params:
-------
df : dataframe for which y_train is to be generated
steps_ahead : steps ahead to be created / defaults to 7
Returns:
df : dataframe with added step_1 ... step_n target columns
"""
print('#### Setting up y_train ####')
for i in range(1, steps_ahead + 1):
step_ahead = df.groupby('V1')\
.apply(lambda x: x['value'].shift(-i))
df['step_{}'.format(i)]\
= step_ahead.T.squeeze().reset_index(drop=True)
return df
def melt_time_series(df: pd.DataFrame) -> pd.DataFrame:
"""
shorten time series to shortest series selected
to match required shape for NN for forecasting
assumes column V1 is available as time series identifier
assume timestamp is stored as V2, V3, V4, ..., Vn
Params:
-------
df : dataframe to be shortened
Returns:
-------
sorted_df : shortened and sorted dataframe
"""
print('#### Melting time series ####')
melted = df.melt(id_vars=['V1'])
# add timestep that allows for sorting
melted['timestamp'] = melted['variable'].str[1:].astype(int)
# remove all rows with NaNs
melted.dropna(inplace=True)
# sort each series by timestamp and
sorted_df = melted.sort_values(by=['V1', 'timestamp'])
sorted_df.drop('variable', inplace=True, axis=1)
sorted_df.reset_index(drop=True, inplace=True)
sorted_df = sorted_df[['V1', 'timestamp', 'value']]
return sorted_df
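# Shape sketch: a wide frame with columns V1, V2, ..., Vn (V1 = series id,
# V2..Vn = observations) becomes a long frame with columns
# ['V1', 'timestamp', 'value'], where timestamp is parsed from the column name
# (V2 -> 2, V3 -> 3, ...) and rows are sorted by series id and timestamp.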
def standardize(df: pd.DataFrame,
scaler: MinMaxScaler = None,
tgt_col: str = 'value') -> Tuple[pd.DataFrame, MinMaxScaler]:
"""
scales all value of time series between 0 - 1
Params:
-------
df : dataframe to be scaled
Returns:
--------
(df, min_max_scaler) : transformed dataframe and corresponding scaler
"""
if isinstance(scaler, MinMaxScaler):
x = df[tgt_col].to_numpy().reshape(-1, 1)
x_scaled = scaler.transform(x)
df[tgt_col] = x_scaled
return df
else:
# standardize
x = df[tgt_col].to_numpy().reshape(-1, 1) # returns a numpy array
min_max_scaler = MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
df[tgt_col] = x_scaled
return df, min_max_scaler
def destandardize(df: pd.DataFrame,
scaler: MinMaxScaler,
tgt_col: str = 'value',
) -> Tuple[pd.DataFrame, MinMaxScaler]:
"""
reverses the scaling process
Params:
-------
df : dataframe to be scaled
scaler : MinMaxScaler that allows for the inverse transformation
Returns:
--------
df : inverse transformed dataframe
"""
# inverse transform
cols = df.columns
x = df[tgt_col].to_numpy().reshape(-1, 1) # returns a numpy array
x_descaled = scaler.inverse_transform(x)
df[tgt_col] = x_descaled
return df
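# Round-trip sketch (illustrative): scaling and then unscaling with the returned
# scaler recovers the original values up to floating point error:
# scaled, scaler = standardize(df.copy())
# restored = destandardize(scaled, scaler)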
def normalize_data(df: pd.DataFrame,
standardizers: Dict[str, MinMaxScaler]=None) -> pd.DataFrame:
"""
normalize value column of the passed dataframe
assumes dataframe column value requires normalization
assumes column V1 is identifier for time series
Params:
------
df : dataframe which is to be normalized
Returns:
(df_res, standardizer) : dataframe with normalized data and dict
with scaler objects
"""
l_df_tmp = [] # keep temp dataframe
df_res = pd.DataFrame()
import pandas as pd
import numpy as np
import asyncio
import time
import colorcet as cc
import geopandas as gpd
import datetime as datet
from shapely import wkt
from cartopy import crs
from SubwayMapModel import CurrentTransitTimeDelays
from models import Line
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from tornado import gen
# homemade modules
import sys
# sys.path.append('/home/tbartsch/source/repos')
sys.path.append('../../')
import mtatracking.MTAdatamodel as MTAdatamodel
from mtatracking.MTAgeo import addStationID_andNameToGeoPandas
from mtatracking.MTAdatamine import MTAdatamine
from mtatracking.MTAdatamodel import SubwaySystem
from mtatracking.utils import utils
_executor = ThreadPoolExecutor(1)
class SubwayMapData():
'''class to encapsulate stations and lines dataframes.
Implements observer pattern. Register callbacks in
self._stations_observers and self._lines_observers.
'''
def __init__(self, session):
'''initialize a new SubwayMapData object
Args:
session: sqlalchemy session to subway system database
'''
self._stations_observers = []
self._lines_observers = []
self.stationsdf = initializeStations(session)
self.linesdf = initializeLines(session)
self._selected_dir = 'N'
self._selected_line = 'Q'
self.session = session
self._line_ids = sorted(
list(set([l.name for l in session.query(Line).all()])))
@property
def line_ids(self):
'''list of all line ids in the system'''
return self._line_ids
@property
def selected_dir(self):
'''the direction selected in the view'''
return self._selected_dir
@selected_dir.setter
def selected_dir(self, v):
self._selected_dir = v[:1] # only save first letter (North = N)
@property
def selected_line(self):
'''the line selected in the view'''
return self._selected_line
@selected_line.setter
def selected_line(self, v):
self._selected_line = v
print('highlighting line ', v)
if v == 'All':
self.linesdf = colorizeAllLines(self.linesdf)
else:
self.linesdf = highlightOneLine(self.linesdf, v)
@property
def stationsdf(self):
return self._stationsdf
@stationsdf.setter
def stationsdf(self, v):
self._stationsdf = v
for callback in self._stations_observers:
callback(self._stationsdf)
@property
def linesdf(self):
return self._linesdf
@linesdf.setter
def linesdf(self, v):
self._linesdf = v
for callback in self._lines_observers:
callback(self._linesdf)
def bind_to_stationsdf(self, callback):
print('bound')
self._stations_observers.append(callback)
def bind_to_linesdf(self, callback):
print('bound')
self._lines_observers.append(callback)
def PushStationsDF(self):
'''await functions that update the stationsdf, then await this function.
This triggers push of the dataframe to the view.
'''
print('hello from the push callback')
for callback in self._stations_observers:
callback(self._stationsdf)
async def queryDB_async(self, loop):
'''get information about the current position of trains
in the subway system, track their position and update the
probability of train delays for traversed segments of the system.
This in turn should then trigger callbacks in the setter of the
stationsdf property.
Args:
loop: IOLoop for async execution
'''
# reset all stations to grey.
self.stationsdf.loc[:, 'color'] = '#585858'
self.stationsgeo.loc[:, 'displaysize'] = 4
self.stationsgeo.loc[:, 'MTAdelay'] = False
self.stationsgeo.loc[:, 'waittimecolor'] = '#585858'
self.stationsgeo.loc[:, 'delay_prob'] = np.nan
self.stationsgeo.loc[:, 'waittime_str'] = 'unknown'
self.stationsgeo.loc[:, 'inboundtrain'] = 'N/A'
self.stationsgeo.loc[:, 'inbound_from'] = 'N/A'
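# NOTE: `delays`, `trains`, `stations`, `current_time` and `myRTsys` below are
# assumed to come from the real-time feed-tracking code that is not shown in
# this fragment.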
await self._updateStationsDfDelayInfo(delays, trains, stations, current_time, loop)
await self._updateStationsDfWaitTime(myRTsys, stations, current_time, self.selected_dir, self.selected_line)
self.stationsdf = stations
print('done with iteration')
#delays_filename = 'delays' + datetime.today().strftime('%Y-%m-%d') + '.pkl'
#utils.write(delays, delays_filename)
@gen.coroutine
def update(self, stations):
self.stationsdf = stations
async def _getdata(self, dmine, feed_id, waittime):
tracking_results = dmine.TrackTrains(feed_id)
await asyncio.sleep(waittime)
return tracking_results
async def _updateStationsDfDelayInfo(self, delays, trains, stations, current_time, loop):
'''update 'color' and 'displaysize' columns in the data frame, reflecting the probability that a subway will reach a station with a delay
Args:
delays: dictionary of delay objects
trains: the trains we are currently tracking
stations: stations data frame
current_time: current time stamp
loop: IOLoop for async execution
'''
ids = np.asarray([(train.route_id, train.direction) for train in trains])
for line_id, delay in delays.items():
line = line_id[:-1]
direction = line_id[-1:]
these_trains = trains[np.bitwise_and(ids[:,0] == line, ids[:,1] == direction)]
#print('updating line ' + line_id)
await loop.run_in_executor(_executor, delay.updateDelayProbs, these_trains, current_time)
for train in these_trains:
#get the MTA delay info and populate df with that
MTADelayMessages = train.MTADelayMessages
if len(MTADelayMessages) > 0:
if(np.abs(current_time - np.max(MTADelayMessages))) < 40:
arr_station = train.arrival_station_id[:-1]
stations.loc[stations['stop_id']==arr_station, 'MTAdelay']=True
if (line == self.selected_line or self.selected_line == 'All') and direction == self.selected_dir:
for key, val in delay.delayProbs.items():
k = key.split()
if not np.isnan(val):
col = cc.CET_D4[int(np.floor(val*255))]
size = 5 + 3 * val
else:
# col = cc.CET_D4[0]
col = '#585858'
size = 5
stations.loc[stations['stop_id']==k[2][:-1], 'color']=col
stations.loc[stations['stop_id']==k[2][:-1], 'displaysize']=size
stations.loc[stations['stop_id']==k[2][:-1], 'delay_prob']=val
stations.loc[stations['stop_id']==k[2][:-1], 'inboundtrain']=delay.train_ids[key]
stations.loc[stations['stop_id']==k[2][:-1], 'inbound_from']=k[0][:-1]
async def _updateStationsDfWaitTime(self, subwaysys, stationsdf, currenttime, selected_dir, selected_line):
'''update "waittime", "waittimedisplaysize", and "waittimecolor" column in data frame, reflecting the time (in seconds) that has passed since the last train visited this station.
This is trivial if we are only interested in trains of a particular line, but gets more tricky if the user selected to view "All" lines
Args:
subwaysys: subway system object containing the most recent tracking data
stationsdf: stations data frame
'''
for station_id, station in subwaysys.stations.items():
if station_id is not None and len(station_id) > 1:
station_dir = station_id[-1:]
s_id = station_id[:-1]
wait_time = None
if station_dir == selected_dir and selected_line != 'All': # make sure we are performing this update according to the direction selected by the user
wait_time = station.timeSinceLastTrainOfLineStoppedHere(selected_line, selected_dir, currenttime)
elif station_dir == selected_dir and selected_line == 'All':
wait_times = []
#iterate over all lines that stop here
lines_this_station = list(station.trains_stopped.keys()) #contains direction (i.e. QN instead of Q)
lines_this_station = list(set([ele[:-1] for ele in lines_this_station]))
for line in lines_this_station:
wait_times.append(station.timeSinceLastTrainOfLineStoppedHere(line, selected_dir, currenttime))
wait_times = np.array(wait_times)
wts = wait_times[wait_times != None]
if len(wts) > 0:
wait_time = np.min(wait_times[wait_times != None])
else:
wait_time = None
if(wait_time is not None):
stationsdf.loc[stationsdf['stop_id']==s_id, 'waittime']=wait_time #str(datet.timedelta(seconds=wait_time))
stationsdf.loc[stationsdf['stop_id']==s_id, 'waittime_str'] = timedispstring(wait_time)
#spread colors over 30 min. We want to eventually replace this with a scaling by sdev
if(int(np.floor(wait_time/(30*60)*255)) < 255):
col = cc.fire[int(np.floor(wait_time/(30*60)*255))]
else:
col = cc.fire[255]
stationsdf.loc[stationsdf['stop_id']==s_id, 'waittimecolor']=col
stationsdf.loc[stationsdf['stop_id']==s_id, 'waittimedisplaysize']=5 #constant size in this display mode
def initializeStations(session):
''' load the locations of stations in the NYC subway system.
Args:
session (string): sqlalchemy database session
Returns:
stations: dataframe including "geometry" columns for
plotting with geoviews
'''
engine = session.get_bind()
df = pd.read_sql_table("Stop_geojson", engine, schema="public")
df['geometry'] = df['geometry'].apply(wkt.loads)
stationsgeo = gpd.GeoDataFrame(df, geometry='geometry')
stationsgeo['color'] = '#585858'
stationsgeo['displaysize'] = 3
stationsgeo['delay_prob'] = np.nan
stationsgeo['MTAdelay'] = False
stationsgeo['inboundtrain'] = 'N/A'
stationsgeo['inbound_from'] = 'N/A'
stationsgeo['waittime'] = np.nan
stationsgeo['waittime_str'] = 'unknown'
stationsgeo['waittimedisplaysize'] = 3
stationsgeo['waittimecolor'] = '#585858'
return stationsgeo
def initializeLines(session):
''' load the locations of lines in the NYC subway system.
Args:
session (string): sqlalchemy database session
Returns:
lines: dataframe including "geometry" columns for
plotting with geoviews
'''
engine = session.get_bind()
df = pd.read_sql_table("Line_geojson", engine, schema="public")
df['geometry'] = df['geometry'].apply(wkt.loads)
linesgeo = gpd.GeoDataFrame(df, geometry='geometry')
linesgeo['color'] = cc.blues[1]
linesgeo = colorizeAllLines(linesgeo)
return linesgeo
def colorizeAllLines(linesdf):
''' set all lines in the linesdf to their respective colors.
Args:
linesdf: the lines dataframe
Returns:
linesdf: lines dataframe with modified colors column
'''
for line_id in linesdf.name.unique():
linesdf.loc[
linesdf['name'].str.contains(line_id), 'color'
] = LineColor(line_id)
return linesdf
def highlightOneLine(linesdf, lineid):
''' set a single line in the linesdf to its respective color.
All others are set to grey.
Args:
linesdf: the lines dataframe
lineid: id of the line to colorize.
This can be either with or without its direction
('Q' and 'QN' produce the same result)
Returns:
linesdf: lines dataframe with modified colors column
'''
lineid = lineid[0]
linesdf['color'] = '#484848'
linesdf.loc[
linesdf['name'].str.contains(lineid), 'color'
] = LineColor(lineid)
return linesdf
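# Usage sketch: highlightOneLine(linesdf, 'QN') keeps the Q line in its MTA color
# and greys out all other lines; colorizeAllLines(linesdf) restores every line's color.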
def LineColor(lineid):
'''return the color of line lineid
Args:
lineid: id of the line to colorize.
This can be either with or without its direction
('Q' and 'QN' produce the same result)
Returns:
color
'''
lineid = lineid[0]
colors = [
'#2850ad',
'#ff6319',
'#6cbe45',
'#a7a9ac',
'#996633',
'#fccc0a',
'#ee352e',
'#00933c',
'#b933ad',
'#00add0',
'#808183']
lines_ids = [
['A', 'C', 'E'],
['B', 'D', 'F', 'M'],
['G'],
['L'],
['J', 'Z'],
['N', 'Q', 'R', 'W'],
['1', '2', '3'],
['4', '5', '6'],
['7'],
['T'],
['S']]
c = pd.Series(colors)
ids = pd.DataFrame(lines_ids)
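# The function body is truncated here; the original return statement is not shown.
# A plausible completion (sketch only, not the author's code) would look up the
# row of `ids` that contains `lineid` and return the matching entry of `c`:
# match = ids.apply(lambda row: lineid in row.values, axis=1)
# return c[match].iloc[0] if match.any() else '#808183'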
# pandastrick1.py
import pandas as pd
weekly_data = {'day':['Monday','Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday', 'Sunday'],
'temp':[40, 33, 42, 31, 41, 40, 30],
'condition':['Sunny','Cloudy','Sunny','Rainy','Sunny',
'Cloudy','Rainy']
}
df = pd.DataFrame(weekly_data)
"""
July 2021
This code retrieves the calculation of sand use for concrete and glass production in the building sector in 26 global regions. For the original code & latest updates, see: https://github.com/
The dynamic material model is based on the BUMA model developed by <NAME>, Leiden University, the Netherlands. For the original code & latest updates, see: https://github.com/SPDeetman/BUMA
The dynamic stock model is based on the ODYM model developed by <NAME>, Uni Freiburg, Germany. For the original code & latest updates, see: https://github.com/IndEcol/ODYM
*NOTE: Insert location of GloBus-main folder in 'dir_path' (line 23) before running the code
Software version: Python 3.7
"""
#%% GENERAL SETTING & STATEMENTS
import pandas as pd
import numpy as np
import os
import ctypes
import math
# set current directory
dir_path = ""
os.chdir(dir_path)
# Set general constants
regions = 26 #26 IMAGE regions
building_types = 4 #4 building types: detached, semi-detached, appartments & high-rise
area = 2 #2 areas: rural & urban
materials = 2 #2 materials: Concrete, Glass
inflation = 1.2423 #gdp/cap inflation correction between 2005 (IMAGE data) & 2016 (commercial calibration) according to https://www.bls.gov/data/inflation_calculator.htm
# Set Flags for sensitivity analysis
flag_alpha = 0 # switch for the sensitivity analysis on alpha, if 1 the maximum alpha is 10% above the maximum found in the data
flag_ExpDec = 0 # switch to choose between Gompertz and Exponential Decay function for commercial floorspace demand (0 = Gompertz, 1 = Expdec)
flag_Normal = 0 # switch to choose between Weibull and Normal lifetime distributions (0 = Weibull, 1 = Normal)
flag_Mean = 0 # switch to choose between material intensity settings (0 = regular regional, 1 = mean, 2 = high, 3 = low, 4 = median)
#%%Load files & arrange tables ----------------------------------------------------
if flag_Mean == 0:
file_addition = ''
elif flag_Mean == 1:
file_addition = '_mean'
elif flag_Mean ==2:
file_addition = '_high'
elif flag_Mean ==3:
file_addition = '_low'
else:
file_addition = '_median'
# Load Population, Floor area, and Service value added (SVA) Database csv-files
pop = pd.read_csv('files_population/pop.csv', index_col = [0]) # Pop; unit: million of people; meaning: global population (over time, by region)
rurpop = pd.read_csv('files_population/rurpop.csv', index_col = [0]) # rurpop; unit: %; meaning: the share of people living in rural areas (over time, by region)
housing_type = pd.read_csv('files_population\Housing_type.csv') # Housing_type; unit: %; meaning: the share of the NUMBER OF PEOPLE living in a particular building type (by region & by area)
floorspace = pd.read_csv('files_floor_area/res_Floorspace.csv') # Floorspace; unit: m2/capita; meaning: the average m2 per capita (over time, by region & area)
floorspace = floorspace[floorspace.Region != regions + 1] # Remove empty region 27
avg_m2_cap = pd.read_csv('files_floor_area\Average_m2_per_cap.csv') # Avg_m2_cap; unit: m2/capita; meaning: average square meters per person (by region & area (rural/urban) & building type)
sva_pc_2005 = pd.read_csv('files_GDP/sva_pc.csv', index_col = [0])
sva_pc = sva_pc_2005 * inflation # we use the inflation corrected SVA to adjust for the fact that IMAGE provides gdp/cap in 2005 US$
# load material density data csv-files
building_materials_concrete = pd.read_csv('files_material_density\Building_materials_concrete' + file_addition + '.csv') # Building_materials; unit: kg/m2; meaning: the average material use per square meter (by building type, by region & by area)
building_materials_glass = pd.read_csv('files_material_density\Building_materials_glass' + file_addition + '.csv') # Building_materials; unit: kg/m2; meaning: the average material use per square meter (by building type, by region & by area)
materials_commercial_concrete = pd.read_csv('files_material_density\materials_commercial_concrete' + file_addition + '.csv', index_col = [0]) # 7 building materials in 4 commercial building types; unit: kg/m2; meaning: the average material use per square meter (by commercial building type)
materials_commercial_glass = pd.read_csv('files_material_density\materials_commercial_glass' + file_addition + '.csv', index_col = [0]) # 7 building materials in 4 commercial building types; unit: kg/m2; meaning: the average material use per square meter (by commercial building type)
# Load fitted regression parameters for comercial floor area estimate
if flag_alpha == 0:
gompertz = pd.read_csv('files_floor_area//files_commercial/Gompertz_parameters.csv', index_col = [0])
else:
gompertz = pd.read_csv('files_floor_area//files_commercial/Gompertz_parameters_alpha.csv', index_col = [0])
# Ensure full time series for pop & rurpop (interpolation, some years are missing)
rurpop2 = rurpop.reindex(list(range(1970,2061,1))).interpolate()
pop2 = pop.reindex(list(range(1970,2061,1))).interpolate()
# Remove 1st year, to ensure same Table size as floorspace data (from 1971)
pop2 = pop2.iloc[1:]
rurpop2 = rurpop2.iloc[1:]
#pre-calculate urban population
urbpop = 1 - rurpop2 # urban population is 1 - the fraction of people living in rural areas (rurpop)
# Restructure the tables to regions as columns; for floorspace
floorspace_rur = floorspace.pivot(index="t", columns="Region", values="Rural")
floorspace_urb = floorspace.pivot(index="t", columns="Region", values="Urban")
# Restructuring for square meters (m2/cap)
avg_m2_cap_urb = avg_m2_cap.loc[avg_m2_cap['Area'] == 'Urban'].drop('Area', 1).T # Remove area column & Transpose
avg_m2_cap_urb.columns = list(map(int,avg_m2_cap_urb.iloc[0])) # name columns according to the row containing the region-labels
avg_m2_cap_urb2 = avg_m2_cap_urb.drop(['Region']) # Remove idle row
avg_m2_cap_rur = avg_m2_cap.loc[avg_m2_cap['Area'] == 'Rural'].drop('Area', 1).T # Remove area column & Transpose
avg_m2_cap_rur.columns = list(map(int,avg_m2_cap_rur.iloc[0])) # name columns according to the row containing the region-labels
avg_m2_cap_rur2 = avg_m2_cap_rur.drop(['Region']) # Remove idle row
# Restructuring for the Housing types (% of population living in them)
housing_type_urb = housing_type.loc[housing_type['Area'] == 'Urban'].drop('Area', 1).T # Remove area column & Transpose
housing_type_urb.columns = list(map(int,housing_type_urb.iloc[0])) # name columns according to the row containing the region-labels
housing_type_urb2 = housing_type_urb.drop(['Region']) # Remove idle row
housing_type_rur = housing_type.loc[housing_type['Area'] == 'Rural'].drop('Area', 1).T # Remove area column & Transpose
housing_type_rur.columns = list(map(int,housing_type_rur.iloc[0])) # name columns according to the row containing the region-labels
housing_type_rur2 = housing_type_rur.drop(['Region']) # Remove idle row
#%% COMMERCIAL building space demand (stock) calculated from Gomperz curve (fitted, using separate regression model)
# Select gompertz curve paramaters for the total commercial m2 demand (stock)
alpha = gompertz['All']['a'] if flag_ExpDec == 0 else 25.601
beta = gompertz['All']['b'] if flag_ExpDec == 0 else 28.431
gamma = gompertz['All']['c'] if flag_ExpDec == 0 else 0.0415
# find the total commercial m2 stock (in Millions of m2)
commercial_m2_cap = pd.DataFrame(index=range(1971,2061), columns=range(1,27))
for year in range(1971,2061):
for region in range(1,27):
if flag_ExpDec == 0:
commercial_m2_cap[region][year] = alpha * math.exp(-beta * math.exp((-gamma/1000) * sva_pc[str(region)][year]))
else:
commercial_m2_cap[region][year] = max(0.542, alpha - beta * math.exp((-gamma/1000) * sva_pc[str(region)][year]))
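# Functional form used above (Gompertz): m2/cap = a * exp(-b * exp(-c/1000 * SVA_pc)),
# which saturates at `a` as service value added per capita grows. The exponential-decay
# alternative is a - b * exp(-c/1000 * SVA_pc), floored at 0.542 m2/cap.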
# Subdivide the total across Offices, Retail+, Govt+ & Hotels+
commercial_m2_cap_office = pd.DataFrame(index=range(1971,2061), columns=range(1,27)) # Offices
commercial_m2_cap_retail = pd.DataFrame(index=range(1971,2061), columns=range(1,27)) # Retail & Warehouses
commercial_m2_cap_hotels = pd.DataFrame(index=range(1971,2061), columns=range(1,27)) # Hotels & Restaurants
commercial_m2_cap_govern = pd.DataFrame(index=range(1971,2061), columns=range(1,27)) # Hospitals, Education, Government & Transportation
minimum_com_office = 25
minimum_com_retail = 25
minimum_com_hotels = 25
minimum_com_govern = 25
for year in range(1971,2061):
for region in range(1,27):
# get the square meter per capita floorspace for 4 commercial applications
office = gompertz['Office']['a'] * math.exp(-gompertz['Office']['b'] * math.exp((-gompertz['Office']['c']/1000) * sva_pc[str(region)][year]))
retail = gompertz['Retail+']['a'] * math.exp(-gompertz['Retail+']['b'] * math.exp((-gompertz['Retail+']['c']/1000) * sva_pc[str(region)][year]))
hotels = gompertz['Hotels+']['a'] * math.exp(-gompertz['Hotels+']['b'] * math.exp((-gompertz['Hotels+']['c']/1000) * sva_pc[str(region)][year]))
govern = gompertz['Govt+']['a'] * math.exp(-gompertz['Govt+']['b'] * math.exp((-gompertz['Govt+']['c']/1000) * sva_pc[str(region)][year]))
#calculate minimum values for later use in historic tail(Region 20: China @ 134 $/cap SVA)
minimum_com_office = office if office < minimum_com_office else minimum_com_office
minimum_com_retail = retail if retail < minimum_com_retail else minimum_com_retail
minimum_com_hotels = hotels if hotels < minimum_com_hotels else minimum_com_hotels
minimum_com_govern = govern if govern < minimum_com_govern else minimum_com_govern
# Then use the ratio's to subdivide the total commercial floorspace into 4 categories
commercial_sum = office + retail + hotels + govern
commercial_m2_cap_office[region][year] = commercial_m2_cap[region][year] * (office/commercial_sum)
commercial_m2_cap_retail[region][year] = commercial_m2_cap[region][year] * (retail/commercial_sum)
commercial_m2_cap_hotels[region][year] = commercial_m2_cap[region][year] * (hotels/commercial_sum)
commercial_m2_cap_govern[region][year] = commercial_m2_cap[region][year] * (govern/commercial_sum)
#%% Add historic tail (1720-1970) + 100 yr initial --------------------------------------------
# load historic population development
hist_pop = pd.read_csv('files_initial_stock\hist_pop.csv', index_col = [0]) # initial population as a percentage of the 1970 population; unit: %; according to the Maddison Project Database (MPD) 2018 (Groningen University)
# Determine the historical average global trend in floorspace/cap & the regional rural population share based on the last 10 years of IMAGE data
floorspace_urb_trend_by_region = [0 for j in range(0,26)]
floorspace_rur_trend_by_region = [0 for j in range(0,26)]
rurpop_trend_by_region = [0 for j in range(0,26)]
commercial_m2_cap_office_trend = [0 for j in range(0,26)]
commercial_m2_cap_retail_trend = [0 for j in range(0,26)]
commercial_m2_cap_hotels_trend = [0 for j in range(0,26)]
commercial_m2_cap_govern_trend = [0 for j in range(0,26)]
# For the RESIDENTIAL & COMMERCIAL floorspace: Derive the annual trend (in m2/cap) over the initial 10 years of IMAGE data
for region in range(1,27):
floorspace_urb_trend_by_year = [0 for i in range(0,10)]
floorspace_rur_trend_by_year = [0 for i in range(0,10)]
commercial_m2_cap_office_trend_by_year = [0 for j in range(0,10)]
commercial_m2_cap_retail_trend_by_year = [0 for i in range(0,10)]
commercial_m2_cap_hotels_trend_by_year = [0 for j in range(0,10)]
commercial_m2_cap_govern_trend_by_year = [0 for i in range(0,10)]
# Get the growth by year (for the first 10 years)
for year in range(1970,1980):
floorspace_urb_trend_by_year[year-1970] = floorspace_urb[region][year+1]/floorspace_urb[region][year+2]
floorspace_rur_trend_by_year[year-1970] = floorspace_rur[region][year+1]/floorspace_rur[region][year+2]
commercial_m2_cap_office_trend_by_year[year-1970] = commercial_m2_cap_office[region][year+1]/commercial_m2_cap_office[region][year+2]
commercial_m2_cap_retail_trend_by_year[year-1970] = commercial_m2_cap_retail[region][year+1]/commercial_m2_cap_retail[region][year+2]
commercial_m2_cap_hotels_trend_by_year[year-1970] = commercial_m2_cap_hotels[region][year+1]/commercial_m2_cap_hotels[region][year+2]
commercial_m2_cap_govern_trend_by_year[year-1970] = commercial_m2_cap_govern[region][year+1]/commercial_m2_cap_govern[region][year+2]
rurpop_trend_by_region[region-1] = ((1-(rurpop[str(region)][1980]/rurpop[str(region)][1970]))/10)*100
floorspace_urb_trend_by_region[region-1] = sum(floorspace_urb_trend_by_year)/10
floorspace_rur_trend_by_region[region-1] = sum(floorspace_rur_trend_by_year)/10
commercial_m2_cap_office_trend[region-1] = sum(commercial_m2_cap_office_trend_by_year)/10
commercial_m2_cap_retail_trend[region-1] = sum(commercial_m2_cap_retail_trend_by_year)/10
commercial_m2_cap_hotels_trend[region-1] = sum(commercial_m2_cap_hotels_trend_by_year)/10
commercial_m2_cap_govern_trend[region-1] = sum(commercial_m2_cap_govern_trend_by_year)/10
# Average global annual decline in floorspace/cap in %, rural: 1%; urban 1.2%; commercial: 1.26-2.18% /yr
floorspace_urb_trend_global = (1-(sum(floorspace_urb_trend_by_region)/26))*100 # in % decrease per annum
floorspace_rur_trend_global = (1-(sum(floorspace_rur_trend_by_region)/26))*100 # in % decrease per annum
commercial_m2_cap_office_trend_global = (1-(sum(commercial_m2_cap_office_trend)/26))*100 # in % decrease per annum
commercial_m2_cap_retail_trend_global = (1-(sum(commercial_m2_cap_retail_trend)/26))*100 # in % decrease per annum
commercial_m2_cap_hotels_trend_global = (1-(sum(commercial_m2_cap_hotels_trend)/26))*100 # in % decrease per annum
commercial_m2_cap_govern_trend_global = (1-(sum(commercial_m2_cap_govern_trend)/26))*100 # in % decrease per annum
# define historic floorspace (1820-1970) in m2/cap
floorspace_urb_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=floorspace_urb.columns)
floorspace_rur_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=floorspace_rur.columns)
rurpop_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=rurpop.columns)
pop_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=pop2.columns)
commercial_m2_cap_office_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=commercial_m2_cap_office.columns)
commercial_m2_cap_retail_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=commercial_m2_cap_retail.columns)
commercial_m2_cap_hotels_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=commercial_m2_cap_hotels.columns)
commercial_m2_cap_govern_1820_1970 = pd.DataFrame(index=range(1820,1971), columns=commercial_m2_cap_govern.columns)
# Find minimum or maximum values in the original IMAGE data (just for residential; commercial minimum values have been calculated above)
minimum_urb_fs = floorspace_urb.values.min() # Region 20: China
minimum_rur_fs = floorspace_rur.values.min() # Region 20: China
maximum_rurpop = rurpop.values.max() # Region 9 : Eastern Africa
# Calculate the actual values used between 1820 & 1970, given the trends & the min/max values
for region in range(1,regions+1):
for year in range(1820,1971):
# MAX of 1) the MINimum value & 2) the calculated value
floorspace_urb_1820_1970[region][year] = max(minimum_urb_fs, floorspace_urb[region][1971] * ((100-floorspace_urb_trend_global)/100)**(1971-year)) # single global value for average annual Decrease
floorspace_rur_1820_1970[region][year] = max(minimum_rur_fs, floorspace_rur[region][1971] * ((100-floorspace_rur_trend_global)/100)**(1971-year)) # single global value for average annual Decrease
commercial_m2_cap_office_1820_1970[region][year] = max(minimum_com_office, commercial_m2_cap_office[region][1971] * ((100-commercial_m2_cap_office_trend_global)/100)**(1971-year)) # single global value for average annual Decrease
commercial_m2_cap_retail_1820_1970[region][year] = max(minimum_com_retail, commercial_m2_cap_retail[region][1971] * ((100-commercial_m2_cap_retail_trend_global)/100)**(1971-year)) # single global value for average annual Decrease
commercial_m2_cap_hotels_1820_1970[region][year] = max(minimum_com_hotels, commercial_m2_cap_hotels[region][1971] * ((100-commercial_m2_cap_hotels_trend_global)/100)**(1971-year)) # single global value for average annual Decrease
commercial_m2_cap_govern_1820_1970[region][year] = max(minimum_com_govern, commercial_m2_cap_govern[region][1971] * ((100-commercial_m2_cap_govern_trend_global)/100)**(1971-year)) # single global value for average annual Decrease
# MIN of 1) the MAXimum value & 2) the calculated value
rurpop_1820_1970[str(region)][year] = min(maximum_rurpop, rurpop[str(region)][1970] * ((100+rurpop_trend_by_region[region-1])/100)**(1970-year)) # average annual INcrease by region
# just add the tail to the population (no min/max & trend is pre-calculated in hist_pop)
pop_1820_1970[str(region)][year] = hist_pop[str(region)][year] * pop[str(region)][1970]
urbpop_1820_1970 = 1 - rurpop_1820_1970
# To avoid full model setup in 1820 (all required stock gets built in yr 1) we assume another tail that linearly increases to the 1820 value over a 100 year time period, so 1720 = 0
floorspace_urb_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=floorspace_urb.columns)
floorspace_rur_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=floorspace_rur.columns)
rurpop_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=rurpop.columns)
urbpop_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=urbpop.columns)
pop_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=pop2.columns)
commercial_m2_cap_office_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=commercial_m2_cap_office.columns)
commercial_m2_cap_retail_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=commercial_m2_cap_retail.columns)
commercial_m2_cap_hotels_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=commercial_m2_cap_hotels.columns)
commercial_m2_cap_govern_1721_1820 = pd.DataFrame(index=range(1721,1820), columns=commercial_m2_cap_govern.columns)
for region in range(1,27):
for time in range(1721,1820):
# MAX(0,...) Because of floating point deviations, leading to negative stock in some cases
floorspace_urb_1721_1820[int(region)][time] = max(0.0, floorspace_urb_1820_1970[int(region)][1820] - (floorspace_urb_1820_1970[int(region)][1820]/100)*(1820-time))
floorspace_rur_1721_1820[int(region)][time] = max(0.0, floorspace_rur_1820_1970[int(region)][1820] - (floorspace_rur_1820_1970[int(region)][1820]/100)*(1820-time))
rurpop_1721_1820[str(region)][time] = max(0.0, rurpop_1820_1970[str(region)][1820] - (rurpop_1820_1970[str(region)][1820]/100)*(1820-time))
urbpop_1721_1820[str(region)][time] = max(0.0, urbpop_1820_1970[str(region)][1820] - (urbpop_1820_1970[str(region)][1820]/100)*(1820-time))
pop_1721_1820[str(region)][time] = max(0.0, pop_1820_1970[str(region)][1820] - (pop_1820_1970[str(region)][1820]/100)*(1820-time))
commercial_m2_cap_office_1721_1820[int(region)][time] = max(0.0, commercial_m2_cap_office_1820_1970[region][1820] - (commercial_m2_cap_office_1820_1970[region][1820]/100)*(1820-time))
commercial_m2_cap_retail_1721_1820[int(region)][time] = max(0.0, commercial_m2_cap_retail_1820_1970[region][1820] - (commercial_m2_cap_retail_1820_1970[region][1820]/100)*(1820-time))
commercial_m2_cap_hotels_1721_1820[int(region)][time] = max(0.0, commercial_m2_cap_hotels_1820_1970[region][1820] - (commercial_m2_cap_hotels_1820_1970[region][1820]/100)*(1820-time))
commercial_m2_cap_govern_1721_1820[int(region)][time] = max(0.0, commercial_m2_cap_govern_1820_1970[region][1820] - (commercial_m2_cap_govern_1820_1970[region][1820]/100)*(1820-time))
# combine historic with IMAGE data here
rurpop_tail = rurpop_1820_1970.append(rurpop2, ignore_index=False)
urbpop_tail = urbpop_1820_1970.append(urbpop, ignore_index=False)
pop_tail = pop_1820_1970.append(pop2, ignore_index=False)
floorspace_urb_tail = floorspace_urb_1820_1970.append(floorspace_urb, ignore_index=False)
floorspace_rur_tail = floorspace_rur_1820_1970.append(floorspace_rur, ignore_index=False)
commercial_m2_cap_office_tail = commercial_m2_cap_office_1820_1970.append(commercial_m2_cap_office, ignore_index=False)
commercial_m2_cap_retail_tail = commercial_m2_cap_retail_1820_1970.append(commercial_m2_cap_retail, ignore_index=False)
commercial_m2_cap_hotels_tail = commercial_m2_cap_hotels_1820_1970.append(commercial_m2_cap_hotels, ignore_index=False)
commercial_m2_cap_govern_tail = commercial_m2_cap_govern_1820_1970.append(commercial_m2_cap_govern, ignore_index=False)
rurpop_tail = rurpop_1721_1820.append(rurpop_1820_1970.append(rurpop2, ignore_index=False), ignore_index=False)
urbpop_tail = urbpop_1721_1820.append(urbpop_1820_1970.append(urbpop, ignore_index=False), ignore_index=False)
pop_tail = pop_1721_1820.append(pop_1820_1970.append(pop2, ignore_index=False), ignore_index=False)
floorspace_urb_tail = floorspace_urb_1721_1820.append(floorspace_urb_1820_1970.append(floorspace_urb, ignore_index=False), ignore_index=False)
floorspace_rur_tail = floorspace_rur_1721_1820.append(floorspace_rur_1820_1970.append(floorspace_rur, ignore_index=False), ignore_index=False)
commercial_m2_cap_office_tail = commercial_m2_cap_office_1721_1820.append(commercial_m2_cap_office_1820_1970.append(commercial_m2_cap_office, ignore_index=False), ignore_index=False)
commercial_m2_cap_retail_tail = commercial_m2_cap_retail_1721_1820.append(commercial_m2_cap_retail_1820_1970.append(commercial_m2_cap_retail, ignore_index=False), ignore_index=False)
commercial_m2_cap_hotels_tail = commercial_m2_cap_hotels_1721_1820.append(commercial_m2_cap_hotels_1820_1970.append(commercial_m2_cap_hotels, ignore_index=False), ignore_index=False)
commercial_m2_cap_govern_tail = commercial_m2_cap_govern_1721_1820.append(commercial_m2_cap_govern_1820_1970.append(commercial_m2_cap_govern, ignore_index=False), ignore_index=False)
#%% SQUARE METER Calculations -----------------------------------------------------------
# adjust the share for urban/rural only (shares in csv are as percentage of the total (Rur + Urb); we needed to adjust the urban shares to add up to 1, same for rural)
housing_type_rur3 = housing_type_rur2/housing_type_rur2.sum()
housing_type_urb3 = housing_type_urb2/housing_type_urb2.sum()
# calculate the total rural/urban population (pop2 = millions of people, rurpop2 = % of people living in rural areas)
people_rur = pd.DataFrame(rurpop_tail.values*pop_tail.values, columns=pop_tail.columns, index=pop_tail.index)
people_urb = pd.DataFrame(urbpop_tail.values*pop_tail.values, columns=pop_tail.columns, index=pop_tail.index)
# calculate the total number of people (urban/rural) BY HOUSING TYPE (the sum of det,sem,app & hig equals the total population e.g. people_rur)
people_det_rur = pd.DataFrame(housing_type_rur3.iloc[0].values*people_rur.values, columns=people_rur.columns, index=people_rur.index)
people_sem_rur = pd.DataFrame(housing_type_rur3.iloc[1].values*people_rur.values, columns=people_rur.columns, index=people_rur.index)
people_app_rur = pd.DataFrame(housing_type_rur3.iloc[2].values*people_rur.values, columns=people_rur.columns, index=people_rur.index)
people_hig_rur = pd.DataFrame(housing_type_rur3.iloc[3].values*people_rur.values, columns=people_rur.columns, index=people_rur.index)
people_det_urb = pd.DataFrame(housing_type_urb3.iloc[0].values*people_urb.values, columns=people_urb.columns, index=people_urb.index)
people_sem_urb = pd.DataFrame(housing_type_urb3.iloc[1].values*people_urb.values, columns=people_urb.columns, index=people_urb.index)
people_app_urb = pd.DataFrame(housing_type_urb3.iloc[2].values*people_urb.values, columns=people_urb.columns, index=people_urb.index)
people_hig_urb = pd.DataFrame(housing_type_urb3.iloc[3].values*people_urb.values, columns=people_urb.columns, index=people_urb.index)
# calculate the total m2 (urban/rural) BY HOUSING TYPE (= nr. of people * OWN avg m2, so not based on IMAGE)
m2_unadjusted_det_rur = pd.DataFrame(avg_m2_cap_rur2.iloc[0].values * people_det_rur.values, columns=people_det_rur.columns, index=people_det_rur.index)
m2_unadjusted_sem_rur = pd.DataFrame(avg_m2_cap_rur2.iloc[1].values * people_sem_rur.values, columns=people_sem_rur.columns, index=people_sem_rur.index)
m2_unadjusted_app_rur = pd.DataFrame(avg_m2_cap_rur2.iloc[2].values * people_app_rur.values, columns=people_app_rur.columns, index=people_app_rur.index)
m2_unadjusted_hig_rur = pd.DataFrame(avg_m2_cap_rur2.iloc[3].values * people_hig_rur.values, columns=people_hig_rur.columns, index=people_hig_rur.index)
m2_unadjusted_det_urb = pd.DataFrame(avg_m2_cap_urb2.iloc[0].values * people_det_urb.values, columns=people_det_urb.columns, index=people_det_urb.index)
m2_unadjusted_sem_urb = pd.DataFrame(avg_m2_cap_urb2.iloc[1].values * people_sem_urb.values, columns=people_sem_urb.columns, index=people_sem_urb.index)
m2_unadjusted_app_urb = pd.DataFrame(avg_m2_cap_urb2.iloc[2].values * people_app_urb.values, columns=people_app_urb.columns, index=people_app_urb.index)
m2_unadjusted_hig_urb = pd.DataFrame(avg_m2_cap_urb2.iloc[3].values * people_hig_urb.values, columns=people_hig_urb.columns, index=people_hig_urb.index)
# Define empty dataframes for m2 adjustments
total_m2_adj_rur = pd.DataFrame(index=m2_unadjusted_det_rur.index, columns=m2_unadjusted_det_rur.columns)
total_m2_adj_urb = pd.DataFrame(index=m2_unadjusted_det_urb.index, columns=m2_unadjusted_det_urb.columns)
# Sum all square meters in Rural area
for j in range(1721,2061,1):
for i in range(1,27,1):
total_m2_adj_rur.loc[j,str(i)] = m2_unadjusted_det_rur.loc[j,str(i)] + m2_unadjusted_sem_rur.loc[j,str(i)] + m2_unadjusted_app_rur.loc[j,str(i)] + m2_unadjusted_hig_rur.loc[j,str(i)]
# Sum all square meters in Urban area
for j in range(1721,2061,1):
for i in range(1,27,1):
total_m2_adj_urb.loc[j,str(i)] = m2_unadjusted_det_urb.loc[j,str(i)] + m2_unadjusted_sem_urb.loc[j,str(i)] + m2_unadjusted_app_urb.loc[j,str(i)] + m2_unadjusted_hig_urb.loc[j,str(i)]
# average square meter per person implied by our OWN data
avg_m2_cap_adj_rur = pd.DataFrame(total_m2_adj_rur.values / people_rur.values, columns=people_rur.columns, index=people_rur.index)
avg_m2_cap_adj_urb = pd.DataFrame(total_m2_adj_urb.values / people_urb.values, columns=people_urb.columns, index=people_urb.index)
# factor to correct square meters per capita so that we respect the IMAGE data in terms of total m2, but we use our own distinction between Building types
m2_cap_adj_fact_rur = pd.DataFrame(floorspace_rur_tail.values / avg_m2_cap_adj_rur.values, columns=floorspace_rur_tail.columns, index=floorspace_rur_tail.index)
m2_cap_adj_fact_urb = pd.DataFrame(floorspace_urb_tail.values / avg_m2_cap_adj_urb.values, columns=floorspace_urb_tail.columns, index=floorspace_urb_tail.index)
# All m2 by region (in millions), Building_type & year (using the correction factor, to comply with IMAGE avg m2/cap)
m2_det_rur = pd.DataFrame(m2_unadjusted_det_rur.values * m2_cap_adj_fact_rur.values, columns=m2_cap_adj_fact_rur.columns, index=m2_cap_adj_fact_rur.index)
m2_sem_rur = pd.DataFrame(m2_unadjusted_sem_rur.values * m2_cap_adj_fact_rur.values, columns=m2_cap_adj_fact_rur.columns, index=m2_cap_adj_fact_rur.index)
m2_app_rur = pd.DataFrame(m2_unadjusted_app_rur.values * m2_cap_adj_fact_rur.values, columns=m2_cap_adj_fact_rur.columns, index=m2_cap_adj_fact_rur.index)
m2_hig_rur = pd.DataFrame(m2_unadjusted_hig_rur.values * m2_cap_adj_fact_rur.values, columns=m2_cap_adj_fact_rur.columns, index=m2_cap_adj_fact_rur.index)
m2_det_urb = pd.DataFrame(m2_unadjusted_det_urb.values * m2_cap_adj_fact_urb.values, columns=m2_cap_adj_fact_urb.columns, index=m2_cap_adj_fact_urb.index)
m2_sem_urb = pd.DataFrame(m2_unadjusted_sem_urb.values * m2_cap_adj_fact_urb.values, columns=m2_cap_adj_fact_urb.columns, index=m2_cap_adj_fact_urb.index)
# -*- coding: utf-8 -*-
# Copyright (c) Hebes Intelligence Private Company
# This source code is licensed under the Apache License, Version 2.0 found in the
# LICENSE file in the root directory of this source tree.
import logging
import warnings
import numpy as np
import pandas as pd
from pandas.api.types import is_bool_dtype as is_bool
from pandas.api.types import is_categorical_dtype as is_category
from pandas.api.types import is_integer_dtype as is_integer
from pandas.api.types import is_object_dtype as is_object
from scipy.stats import skew, wasserstein_distance
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import (
FunctionTransformer,
OneHotEncoder,
OrdinalEncoder,
SplineTransformer,
StandardScaler,
)
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils.validation import check_is_fitted
from ..utils import add_constant, as_list, check_X, check_y, maybe_reshape_2d
logger = logging.getLogger("feature-encoding")
UNKNOWN_VALUE = -1
#####################################################################################
# Encode features
#
# All encoders generate numpy arrays
#####################################################################################
class IdentityEncoder(TransformerMixin, BaseEstimator):
"""Create an encoder that returns what it is fed.
This encoder can act as a linear feature encoder.
Args:
feature (str or list of str, optional): The name(s) of the input dataframe's
column(s) to return. If None, the whole input dataframe will be returned.
Defaults to None.
as_filter (bool, optional): If True, the encoder will return all feature labels
for which "feature in label == True". Defaults to False.
include_bias (bool, optional): If True, a column of ones is added to the output.
Defaults to False.
Raises:
ValueError: If `as_filter` is True, `feature` cannot include multiple feature names.
"""
def __init__(self, feature=None, as_filter=False, include_bias=False):
if as_filter and isinstance(feature, list):
raise ValueError(
"If `as_filter` is True, `feature` cannot include multiple feature names"
)
self.feature = feature
self.as_filter = as_filter
self.include_bias = include_bias
self.features_ = as_list(feature)
def fit(self, X: pd.DataFrame, y=None):
"""Fit the encoder on the available data.
Args:
X (pandas.DataFrame of shape (n_samples, n_features)): The input dataframe.
y (None, optional): Ignored.
Defaults to None.
Raises:
ValueError: If the input data does not pass the checks of `utils.check_X`.
Returns:
IdentityEncoder: Fitted encoder.
"""
X = check_X(X)
if self.feature is None:
n_features_out_ = X.shape[1]
elif (self.feature is not None) and not self.as_filter:
n_features_out_ = len(self.features_)
else:
n_features_out_ = X.filter(like=self.feature, axis=1).shape[1]
self.n_features_out_ = int(self.include_bias) + n_features_out_
self.fitted_ = True
return self
def transform(self, X: pd.DataFrame):
"""Apply the encoder.
Args:
X (pandas.DataFrame of shape (n_samples, n_features)): The input
dataframe.
Raises:
ValueError: If the input data does not pass the checks of `utils.check_X`.
ValueError: If `include_bias` is True and a column with constant values
already exists in the returned columns.
Returns:
numpy array of shape: The selected column subset as a numpy array.
"""
check_is_fitted(self, "fitted_")
X = check_X(X)
if (self.feature is not None) and not self.as_filter:
X = X[self.features_]
elif self.feature is not None:
X = X.filter(like=self.feature, axis=1)
if self.include_bias:
X = add_constant(X, has_constant="raise")
return np.array(X)
class SafeOrdinalEncoder(TransformerMixin, BaseEstimator):
"""Encode categorical features as an integer array.
The encoder converts the features into ordinal integers. This results
in a single column of integers (0 to n_categories - 1) per feature.
Args:
feature (str or list of str, optional): The names of the columns to
encode. If None, all categorical columns will be encoded. Defaults
to None.
unknown_value (int, optional): This parameter will set the encoded value
for unknown categories. It has to be distinct from the values used to
encode any of the categories in `fit`. If None, the value `-1` is used.
During `transform`, unknown categories will be replaced using the most
frequent value along each column. Defaults to None.
"""
def __init__(self, feature=None, unknown_value=None):
self.feature = feature
self.unknown_value = unknown_value
self.features_ = as_list(feature)
def fit(self, X: pd.DataFrame, y=None):
"""Fit the encoder on the available data.
Args:
X (pandas.DataFrame of shape (n_samples, n_features)): The input dataframe.
y (None, optional): Ignored. Defaults to None.
Returns:
SafeOrdinalEncoder: Fitted encoder.
Raises:
ValueError: If the input data does not pass the checks of `utils.check_X`.
"""
X, categorical_cols, _ = check_X(X, exists=self.features_, return_col_info=True)
if not self.features_:
self.features_ = categorical_cols
else:
for name in self.features_:
if pd.api.types.is_float_dtype(X[name]):
raise ValueError("The encoder is applied on numerical data")
self.feature_pipeline_ = Pipeline(
[
(
"select",
ColumnTransformer(
[("select", "passthrough", self.features_)], remainder="drop"
),
),
(
"encode_ordinal",
OrdinalEncoder(
handle_unknown="use_encoded_value",
unknown_value=self.unknown_value or UNKNOWN_VALUE,
dtype=np.int16,
),
),
(
"impute_unknown",
SimpleImputer(
missing_values=self.unknown_value or UNKNOWN_VALUE,
strategy="most_frequent",
),
),
]
)
# Fit the pipeline
self.feature_pipeline_.fit(X)
self.n_features_out_ = len(self.features_)
self.fitted_ = True
return self
def transform(self, X: pd.DataFrame):
"""Apply the encoder.
Args:
X (pandas.DataFrame of shape (n_samples, n_features)): The input
dataframe.
Raises:
ValueError: If the input data does not pass the checks of `utils.check_X`.
Returns:
numpy array of shape: The encoded column subset as a numpy array.
"""
check_is_fitted(self, "fitted_")
X = check_X(X, exists=self.features_)
return self.feature_pipeline_.transform(X)
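# Usage sketch (illustrative; the column name 'day_type' is an assumption):
# enc = SafeOrdinalEncoder(feature='day_type').fit(X_train)
# codes = enc.transform(X_test)  # integer codes; categories unseen during fit
#                                # are imputed with the most frequent code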
class SafeOneHotEncoder(TransformerMixin, BaseEstimator):
"""Encode categorical features in a one-hot form.
The encoder uses a `SafeOrdinalEncoder`to first encode the feature as an
integer array and then a `sklearn.preprocessing.OneHotEncoder` to encode
the features as an one-hot array.
Args:
feature (str or list of str, optional): The names of the columns to
encode. If None, all categorical columns will be encoded. Defaults
to None.
unknown_value (int, optional): This parameter will set the encoded value
of unknown categories. It has to be distinct from the values used to
encode any of the categories in `fit`. If None, the value `-1` is used.
During `transform`, unknown categories will be replaced using the most
frequent value along each column. Defaults to None.
"""
def __init__(self, feature=None, unknown_value=None):
self.feature = feature
self.unknown_value = unknown_value
self.features_ = as_list(feature)
def fit(self, X: pd.DataFrame, y=None):
"""Fit the encoder on the available data.
Args:
X (pandas.DataFrame of shape (n_samples, n_features)): The input dataframe.
y (None, optional): Ignored. Defaults to None.
Returns:
SafeOneHotEncoder: Fitted encoder.
Raises:
ValueError: If the input data does not pass the checks of `utils.check_X`.
ValueError: If the encoder is applied on numerical (float) data.
"""
X, categorical_cols, _ = check_X(X, exists=self.features_, return_col_info=True)
if not self.features_:
self.features_ = categorical_cols
else:
for name in self.features_:
if pd.api.types.is_float_dtype(X[name]):
raise ValueError("The encoder is applied on numerical data")
self.feature_pipeline_ = Pipeline(
[
(
"encode_ordinal",
SafeOrdinalEncoder(
feature=self.features_,
unknown_value=self.unknown_value or UNKNOWN_VALUE,
),
),
("one_hot", OneHotEncoder(drop=None, sparse=False)),
]
)
# Fit the pipeline
self.feature_pipeline_.fit(X)
self.n_features_out_ = 0
for category in self.feature_pipeline_["one_hot"].categories_:
self.n_features_out_ += len(category)
self.fitted_ = True
return self
def transform(self, X: pd.DataFrame):
"""Apply the encoder.
Args:
X (pandas.DataFrame of shape (n_samples, n_features)): The input
dataframe.
Raises:
ValueError: If the input data does not pass the checks of `utils.check_X`.
Returns:
numpy array of shape: The encoded column subset as a numpy array.
"""
check_is_fitted(self, "fitted_")
X = check_X(X, exists=self.features_)
return self.feature_pipeline_.transform(X)
class TargetClusterEncoder(TransformerMixin, BaseEstimator):
"""Encode a categorical feature as clusters of the target's values.
The purpose of this encoder is to reduce the cardinality of a categorical
feature. This encoder does not replace unknown values with the most frequent
one during `transform`. It just assigns them the value of `unknown_value`.
Args:
feature (str): The name of the categorical feature to transform. This
encoder operates on a single feature.
max_n_categories (int, optional): The maximum number of categories to
produce. Defaults to None.
stratify_by (str or list of str, optional): If not None, the encoder
will first stratify the categorical feature into groups that have
similar values of the features in `stratify_by`, and then cluster
based on the relationship between the categorical feature and the
target. It is used only if the number of unique categories minus
the `excluded_categories` is larger than `max_n_categories`.
Defaults to None.
excluded_categories (str or list of str, optional): The names of the
categories to be excluded from the clustering process. These categories
will stay intact by the encoding process, so they cannot have the
same values as the encoder's results (the encoder acts as an
``OrdinalEncoder`` in the sense that the feature is converted into
a column of integers 0 to n_categories - 1). Defaults to None.
unknown_value (int, optional): This parameter will set the encoded value of
unknown categories. It has to be distinct from the values used to encode
any of the categories in `fit`. If None, the value `-1` is used. Defaults
to None.
min_samples_leaf (int, optional): The minimum number of samples required to be
at a leaf node of the decision tree model that is used for stratifying the
categorical feature if `stratify_by` is not None. The actual number that will
be passed to the tree model is `min_samples_leaf` multiplied by the number of
unique values in the categorical feature to transform. Defaults to 1.
max_features (int, float or {"auto", "sqrt", "log2"}, optional): The number of
features that the decision tree considers when looking for the best split:
- If int, then consider `max_features` features at each split of the decision
tree
- If float, then `max_features` is a fraction and `int(max_features * n_features)`
features are considered at each split
- If "auto", then `max_features=n_features`
- If "sqrt", then `max_features=sqrt(n_features)`
- If "log2", then `max_features=log2(n_features)`
- If None, then `max_features=n_features`
Defaults to "auto".
random_state (int or RandomState instance, optional): Controls the randomness of
the decision tree estimator. To obtain a deterministic behaviour during its
fitting, ``random_state`` has to be fixed to an integer. Defaults to None.
"""
def __init__(
self,
*,
feature,
max_n_categories,
stratify_by=None,
excluded_categories=None,
unknown_value=None,
min_samples_leaf=5,
max_features="auto",
random_state=None,
):
self.feature = feature
self.max_n_categories = max_n_categories
self.stratify_by = stratify_by
self.excluded_categories = excluded_categories
self.unknown_value = unknown_value
self.min_samples_leaf = min_samples_leaf
self.max_features = max_features
self.random_state = random_state
self.stratify_by_ = as_list(stratify_by)
self.excluded_categories_ = as_list(excluded_categories)
def fit(self, X: pd.DataFrame, y: pd.DataFrame):
"""Fit the encoder on the available data.
Args:
X (pandas.DataFrame of shape (n_samples, n_features)): The input dataframe.
y (pandas.DataFrame of shape (n_samples, 1)): The target dataframe.
Returns:
TargetClusterEncoder: Fitted encoder.
Raises:
ValueError: If the input data does not pass the checks of `utils.check_X`.
ValueError: If the encoder is applied on numerical (float) data.
ValueError: If any of the values in `excluded_categories` is not found in
the input data.
ValueError: If the number of categories left after removing all in
`excluded_categories` is not larger than `max_n_categories`.
"""
X = check_X(X, exists=[self.feature] + self.stratify_by_)
if pd.api.types.is_float_dtype(X[self.feature]):
raise ValueError("The encoder is applied on numerical data")
y = check_y(y, index=X.index)
self.target_name_ = y.columns[0]
X = X.merge(y, left_index=True, right_index=True)
if self.excluded_categories_:
unique_vals = X[self.feature].unique()
for value in self.excluded_categories_:
if value not in unique_vals:
raise ValueError(
f"Value {value} of `excluded_categories` not found "
f"in the {self.feature} data."
)
mask = X[self.feature].isin(self.excluded_categories_)
X = X.loc[~mask]
if len(X) == 0:
raise ValueError(
"No categories left after removing all in `excluded_categories`."
)
if X[self.feature].nunique() <= self.max_n_categories:
raise ValueError(
"The number of categories left after removing all in `excluded_categories` "
"must be larger than `max_n_categories`."
)
if not self.stratify_by_:
self.mapping_ = self._cluster_without_stratify(X)
else:
self.mapping_ = self._cluster_with_stratify(X)
if self.excluded_categories_:
for i, cat in enumerate(self.excluded_categories_):
self.mapping_.update({cat: self.max_n_categories + i})
self.n_features_out_ = 1
self.fitted_ = True
return self
def _cluster_without_stratify(self, X):
reference = np.array(X[self.target_name_])
X = X.groupby(self.feature)[self.target_name_].agg(
["mean", "std", skew, lambda x: wasserstein_distance(x, reference)]
)
X.fillna(value=1, inplace=True)
X_to_cluster = StandardScaler().fit_transform(X)
n_clusters = min(X_to_cluster.shape[0], self.max_n_categories)
clusterer = KMeans(n_clusters=n_clusters)
with warnings.catch_warnings(record=True) as warning:
cluster_labels = pd.Series(
data=clusterer.fit_predict(X_to_cluster), index=X.index
)
for w in warning:
logger.warning(str(w))
return cluster_labels.to_dict()
def _cluster_with_stratify(self, X):
X_train = None
for col in self.stratify_by_:
if (
is_bool(X[col])
or is_object(X[col])
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ========================================================================
# hyperparameter tuning, model training and validation.
# (the hyperparameters are tuned with the hyperopt package.)
# ========================================================================
import os, sys, glob, pickle, gc
import argparse
import numpy as np
from numpy.random import RandomState
import pandas as pd
import xgboost as xgb
from sklearn import preprocessing
from sklearn.metrics import roc_curve, auc, make_scorer
from sklearn import metrics
from sklearn.model_selection import cross_val_score, cross_validate
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import VarianceThreshold
##os.environ['CUDA_VISIBLE_DEVICES'] = "3"
def get_auc(y_test, y_pred):
##if higher prediction values indicate the positive class, use pos_label=1; otherwise use pos_label=0
fpr, tpr, thresholds = roc_curve(y_test, y_pred, pos_label=1)
myauc = auc(fpr, tpr)
return myauc
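# Sanity check (illustrative): with y_test = [0, 1, 1, 0] and y_pred = [0.1, 0.8, 0.7, 0.3],
# every positive is ranked above every negative, so get_auc returns 1.0.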
auroc_scorer = make_scorer(get_auc,greater_is_better=True, needs_proba=True)
class BuildingModel:
def __init__(self, args, X_train, y_train, X_test=None, y_test=None):
#def __init__(self, args, dtrain, dtest=None):
self.skf = StratifiedKFold(n_splits=5)
self.X_train = X_train
self.y_train = y_train
self.X_test = X_test
self.y_test = y_test
#self.dtrain = dtrain
#self.dtest = dtest
self.args = args
self.space = {"eta": hp.loguniform("eta", np.log(0.005), np.log(0.5)),
#"n_estimators": hp.randint("n_estimators", 300),
"gamma": hp.uniform("gamma", 0, 1.0),
"max_depth": hp.randint("max_depth", 15),
"min_child_weight": hp.randint("min_child_weight", 10),
"subsample": hp.randint("subsample", 10),
"colsample_bytree": hp.randint("colsample_bytree", 10),
"colsample_bylevel": hp.randint("colsample_bylevel", 10),
"colsample_bynode": hp.randint("colsample_bynode", 10),
"lambda": hp.loguniform("lambda", np.log(0.001), np.log(1)),
"alpha": hp.loguniform("alpha", np.log(0.001), np.log(1)),
}
def obtain_clf(self, params):
if self.args.gpu is None:
clf = xgb.XGBClassifier(objective='binary:logistic',
eval_metric='logloss',
silent=1,
seed=self.args.random_state2,
nthread=-1,
**params
)
else:
clf = xgb.XGBClassifier(objective='binary:logistic',
eval_metric='logloss',
tree_method='gpu_hist',
silent=1,
seed=self.args.random_state2,
nthread=-1,
**params
)
return clf
def params_tranform(self, params):
params["max_depth"] = params["max_depth"] + 5
#params['n_estimators'] = params['n_estimators'] * 10 + 50
params["subsample"] = params["subsample"] * 0.05 + 0.5
params["colsample_bytree"] = params["colsample_bytree"] * 0.05 + 0.5
params["colsample_bylevel"] = params["colsample_bylevel"] * 0.05 + 0.5
params["colsample_bynode"] = params["colsample_bynode"] * 0.05 + 0.5
params["min_child_weight"] = params["min_child_weight"] + 1
return params
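# Note: hp.randint draws integers starting at 0, so the transforms above map the raw draws
# onto the intended ranges, e.g. "max_depth" 0..14 -> 5..19 and "subsample" 0..9 -> 0.50..0.95
# in steps of 0.05 (the colsample_* parameters use the same mapping).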
def f(self, params):
auroc = self.hyperopt_train_test(params)
return {'loss': -auroc, 'status': STATUS_OK}
def hyperopt_train_test(self, params):
params = self.params_tranform(params)
clf = self.obtain_clf(params)
auroc_list = []
for k, (train_index, test_index) in enumerate(self.skf.split(self.X_train, self.y_train)):
X_tr, X_te = self.X_train[train_index], self.X_train[test_index]
y_tr, y_te = self.y_train[train_index], self.y_train[test_index]
dtr = xgb.DMatrix(X_tr, label=y_tr)
dval = xgb.DMatrix(X_te, label=y_te)
res = xgb.train(clf.get_params(), dtr, evals=[(dval,'val')], num_boost_round=5000, early_stopping_rounds=50, verbose_eval=False)
y_pred_proba = res.predict(dval)
auroc = get_auc(y_te, y_pred_proba)
auroc_list.append(auroc)
return np.mean(np.array(auroc_list))
def best_params_save(self, best):
'''save the best parameters'''
#best_params = self.params_tranform(best)
s = pickle.dumps(best)
with open('best_params.pkl', 'wb') as f:
f.write(s)
def best_params_load(self, tuned=True):
if tuned:
best_params = self.hyperparams_tuning()
else:
if not os.path.exists('./best_params.pkl'):
print('the file "best_params.pkl" does not exist!')
sys.exit(1)
else:
with open('best_params.pkl','rb') as f:
best_params = pickle.loads(f.read())
return best_params
def hyperparams_tuning(self):
trials = Trials()
best = fmin(self.f, self.space, algo=tpe.suggest, max_evals=self.args.max_evals, trials=trials)
best_params = self.params_tranform(best)
print('best: %s'%best_params)
print('loss: %s'%min(trials.losses()))
self.best_params_save(best_params)
return best_params
def obtain_best_cv_scores(self, tuned=True):
'''obtain the 5-fold CV results of the training set'''
best_params = self.best_params_load(tuned)
clf = self.obtain_clf(best_params)
best_iteration = 0
auroc_list = []
for k, (train_index, test_index) in enumerate(self.skf.split(self.X_train, self.y_train)):
X_tr, X_te = self.X_train[train_index], self.X_train[test_index]
y_tr, y_te = self.y_train[train_index], self.y_train[test_index]
dtr = xgb.DMatrix(X_tr, label=y_tr)
dval = xgb.DMatrix(X_te, label=y_te)
res = xgb.train(clf.get_params(), dtr, evals=[(dval,'val')], num_boost_round=5000, early_stopping_rounds=50, verbose_eval=False)
best_iteration = max([best_iteration, res.best_iteration])
y_pred_proba = res.predict(dval)
auroc = get_auc(y_te, y_pred_proba)
auroc_list.append(auroc)
return np.array(auroc_list), best_iteration  # the caller unpacks (cv_scores, best_iteration)
def train_and_predict(self, vtrans, scaler, tuned=False, predict=True):
'''use the best parameters to train the model, and then predict the test set'''
best_params = self.best_params_load(tuned=True)
cv_scores, best_iteration = self.obtain_best_cv_scores(tuned=True)
export_cv_results(self.args, cv_scores)
clf = self.obtain_clf(best_params)
dtrain = xgb.DMatrix(self.X_train, label=self.y_train)
res = xgb.train(clf.get_params(), dtrain, num_boost_round=best_iteration)
#clf.fit(self.X_train, self.y_train)
#model = pickle.dumps(clf)
model = pickle.dumps((res, vtrans, scaler))
with open('final_best_model.pkl', 'wb') as f:
f.write(model)
if predict:
if self.X_test is None or self.y_test is None:
print('the X and y of the test set should be provided!')
sys.exit(1)
dtest = xgb.DMatrix(self.X_test, label=self.y_test)
y_pred_proba = res.predict(dtest)
#y_pred_ = clf.predict_proba(self.X_test)
#y_pred = np.array([_[-1] for _ in y_pred_])
return y_pred_proba
else:
return None
def prepare_data(args, predict=True, variance_filter=True, standard_scaler=True):
###load the data
df_ref = pd.read_csv(args.ref_file, header=0, index_col=0)
if args.features in ['ecif+vina','ecif+vina+rank']:
df1 = pd.read_csv(args.input_file, header=0, index_col=0)
df2 = pd.read_csv(args.input_file2, header=0, index_col=0)
if ('vina_affinity' not in df1.columns) and ('vina_affinity' in df2.columns):
df_temp = df1
df1 = df2
df2 = df_temp
del df_temp
df_desc = df2['desc'].apply(lambda xs: pd.Series([int(x.strip()) for x in xs.strip('[]').split(',')]))
if 'rank' in args.features:
df1['rank'] = df1.lig_id.apply(lambda x: int(x.split('_')[-1])+1 if x != '1_000x' else 0)
vina_columns = ['pdb_id', 'lig_id', 'rank','vina_affinity','vina_gauss_1','vina_gauss_2','vina_repulsion','vina_hydrophobic','vina_hydrogen']
df1 = df1[vina_columns]
else:
vina_columns = ['pdb_id', 'lig_id', 'vina_affinity','vina_gauss_1','vina_gauss_2','vina_repulsion','vina_hydrophobic','vina_hydrogen']
df1 = df1[vina_columns]
df = pd.concat([df1, df_desc, df_ref['label']], axis=1)
del df_ref, df1, df2, df_desc
else:
df = pd.read_csv(args.input_file, header=0, index_col=0)
if args.features == 'nnscore':
df = pd.concat([df, df_ref['label']], axis=1)
del df_ref
elif args.features == 'nnscore-vina':
df = pd.concat([df, df_ref['label']], axis=1)
vina_columns = ['vina_affinity','vina_gauss_1','vina_gauss_2','vina_repulsion','vina_hydrophobic','vina_hydrogen']
df.drop(vina_columns, axis=1, inplace=True)
del df_ref
elif args.features == 'vina':
df = pd.concat([df, df_ref['label']], axis=1)
# (in the case of target axis AB)
# Calculate only the A evidence for A (input A for classifier AC and AD) compared
# with A evidence for B (input B for classifier AC and AD) ;
# B evidence for B (input B for classifier BC and BD) compared with B
# evidence for A (input A for classifier BC and BD)
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
import joblib
import nibabel as nib
import itertools
from sklearn.linear_model import LogisticRegression
from IPython.display import clear_output
import sys
from subprocess import call
import pickle
import pdb
def save_obj(obj, name):
with open(name + '.pkl', 'wb') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def load_obj(name):
with open(name + '.pkl', 'rb') as f:
return pickle.load(f)
def normalize(X):
X = X - X.mean(0)
return X
def jitter(size,const=0):
jit = np.random.normal(0+const, 0.05, size)
X = np.zeros((size))
X = X + jit
return X
def other(target):
other_objs = [i for i in ['bed', 'bench', 'chair', 'table'] if i not in target]
return other_objs
def red_vox(n_vox, prop=0.1):
return int(np.ceil(n_vox * prop))
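# e.g. red_vox(100, prop=0.1) -> 10, and red_vox(33, prop=0.1) -> 4 (ceil of 3.3)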
def classifierEvidence(clf,X,Y): # X shape is [trials, voxelNumber]; Y is e.g. ['bed', 'bed']; returns a list of probabilities
# This function takes the data X and the labels Y whose evidence we want, and outputs the trained model's evidence for each trial.
targetID=[np.where((clf.classes_==i)==True)[0][0] for i in Y]
# print('targetID=', targetID)
Evidence=[clf.predict_proba(X[i].reshape(1,-1))[0][j] for i,j in enumerate(targetID)]
# print('Evidence=',Evidence)
return Evidence
def get_inds(X, Y, pair, testRun=None):
inds = {}
# return relative indices
if testRun:
trainIX = Y.index[(Y['label'].isin(pair)) & (Y['run_num'] != int(testRun))]
else:
trainIX = Y.index[(Y['label'].isin(pair))]
# pull training and test data
trainX = X[trainIX]
trainY = Y.iloc[trainIX].label
# Main classifier on 5 runs, testing on 6th
clf = LogisticRegression(penalty='l2',C=1, solver='lbfgs', max_iter=1000,
multi_class='multinomial').fit(trainX, trainY)
B = clf.coef_[0] # pull betas
# retrieve only the first object, then only the second object
if testRun:
obj1IX = Y.index[(Y['label'] == pair[0]) & (Y['run_num'] != int(testRun))]
obj2IX = Y.index[(Y['label'] == pair[1]) & (Y['run_num'] != int(testRun))]
else:
obj1IX = Y.index[(Y['label'] == pair[0])]
obj2IX = Y.index[(Y['label'] == pair[1])]
# Get the average of the first object, then the second object
obj1X = np.mean(X[obj1IX], 0)
obj2X = np.mean(X[obj2IX], 0)
# Build the importance map
mult1X = obj1X * B
mult2X = obj2X * B
# Sort these so that they are from least to most important for a given category.
sortmult1X = mult1X.argsort()[::-1]
sortmult2X = mult2X.argsort()
# add to a dictionary for later use
inds[clf.classes_[0]] = sortmult1X
inds[clf.classes_[1]] = sortmult2X
return inds
def getEvidence(sub,testEvidence,METADICT=None,FEATDICT=None,filterType=None,roi="V1",include=1,testRun=6):
# each testRun, each subject, each target axis, each target obj would generate one.
META = METADICT[sub]
print('META.shape=',META.shape)
FEAT = FEATDICT[sub]
# Using the trained model, get the evidence
objects=['bed', 'bench', 'chair', 'table']
allpairs = itertools.combinations(objects,2)
for pair in allpairs: #pair=('bed', 'bench')
# Find the control (remaining) objects for this pair
altpair = other(pair) #altpair=('chair', 'table')
for obj in pair: #obj='bed'
# in the current target axis pair=('bed', 'bench') altpair=('chair', 'table'), display image obj='bed'
# find the evidence for bed from the (bed chair) and (bed table) classifier
# get the test data and seperate the test data into category obj and category other
otherObj=[i for i in pair if i!=obj][0]
print('otherObj=',otherObj)
objID = META.index[(META['label'].isin([obj])) & (META['run_num'] == int(testRun))]
otherObjID = META.index[(META['label'].isin([otherObj])) & (META['run_num'] == int(testRun))]
obj_X=FEAT[objID]
obj_Y=META.iloc[objID].label
otherObj_X=FEAT[otherObjID]
otherObj_Y=META.iloc[otherObjID].label
model_folder = f'/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/clf/{include}/{roi}/{filterType}/'
print(f'loading {model_folder}{sub}_{pair[0]}{pair[1]}_{obj}{altpair[0]}.joblib')
print(f'loading {model_folder}{sub}_{pair[0]}{pair[1]}_{obj}{altpair[1]}.joblib')
clf1 = joblib.load(f'{model_folder}{sub}_{pair[0]}{pair[1]}_{obj}{altpair[0]}.joblib')
clf2 = joblib.load(f'{model_folder}{sub}_{pair[0]}{pair[1]}_{obj}{altpair[1]}.joblib')
if include < 1:
selectedFeatures=load_obj(f'{model_folder}{sub}_{pair[0]}{pair[1]}_{obj}{altpair[0]}.selectedFeatures')
obj_X=obj_X[:,selectedFeatures]
otherObj_X=otherObj_X[:,selectedFeatures]
# s1 = clf1.score(obj_X, obj_Y)
# s2 = clf2.score(obj_X, obj_Y)
# s1 = clf1.score(obj_X, [obj]*obj_X.shape[0])
# s2 = clf2.score(obj_X, [obj]*obj_X.shape[0])
s1 = classifierEvidence(clf1,obj_X,[obj] * obj_X.shape[0])
s2 = classifierEvidence(clf2,obj_X,[obj] * obj_X.shape[0])
obj_evidence = np.mean([s1, s2],axis=0)
print('obj_evidence=',obj_evidence)
# s1 = clf1.score(otherObj_X, otherObj_Y)
# s2 = clf2.score(otherObj_X, otherObj_Y)
# s1 = clf1.score(otherObj_X, [obj]*obj_X.shape[0])
# s2 = clf2.score(otherObj_X, [obj]*obj_X.shape[0])
s1 = classifierEvidence(clf1,otherObj_X,[obj] * otherObj_X.shape[0])
s2 = classifierEvidence(clf2,otherObj_X,[obj] * otherObj_X.shape[0])
otherObj_evidence = np.mean([s1, s2],axis=0)
print('otherObj_evidence=',otherObj_evidence)
testEvidence = testEvidence.append({
'sub':sub,
'testRun':testRun,
'targetAxis':pair,
'obj':obj,
'obj_evidence':obj_evidence,
'otherObj_evidence':otherObj_evidence,
'filterType':filterType,
'include':include,
'roi':roi
},
ignore_index=True)
return testEvidence
def minimalClass(filterType = 'noFilter',testRun = 6, roi="V1",include = 1): #include is the proportion of features selected
accuracyContainer = pd.DataFrame(columns=['sub','testRun','targetAxis','obj','altobj','acc','filterType','roi'])
testEvidence = pd.DataFrame(columns=['sub','testRun','targetAxis','obj','obj_evidence','otherObj_evidence','filterType','roi'])
working_dir='/gpfs/milgram/project/turk-browne/projects/rtcloud_kp/FilterTesting/neurosketch_realtime_preprocess/'
os.chdir(working_dir)
data_dir=f'/gpfs/milgram/project/turk-browne/jukebox/ntb/projects/sketchloop02/features/{filterType}/recognition/'
files = os.listdir(data_dir)
feats = [i for i in files if 'metadata' not in i]
subjects = np.unique([i.split('_')[0] for i in feats if i.split('_')[0] not in ['1121161','0112174']]) # 1121161 has a grid spacing issue and 0112174 lacks one of regressor file
# If you want to reduce the number of subjects used for testing purposes
subs=len(subjects)
# subs=1
subjects = subjects[:subs]
print(subjects)
objects = ['bed', 'bench', 'chair', 'table']
phases = ['12', '34', '56']
# THIS CELL READS IN ALL OF THE PARTICIPANTS' DATA and fills into dictionary
FEATDICT = {}
METADICT = {}
subjects_new=[]
for si, sub in enumerate(subjects[:]):
try:
print('{}/{}'.format(si+1, len(subjects)))
for phase in phases:
_feat = np.load(data_dir+'/{}_{}_{}_featurematrix.npy'.format(sub, roi, phase))
_feat = normalize(_feat)
_meta = pd.read_csv(data_dir+'/metadata_{}_{}_{}.csv'.format(sub, roi, phase))
if phase!='12':
assert _feat.shape[1]==FEAT.shape[1], 'feat shape not matched'
FEAT = _feat if phase == "12" else np.vstack((FEAT, _feat))
META = _meta if phase == "12" else pd.concat((META, _meta))
import numpy as np
import pandas as pd
import datetime
import math
def combine_data(sales_df, weather_df):
'''
Combines sales data with weather data to fit models and get forecasts and predictions
Input: Cleaned sales pandas data frame, cleaned weather pandas data frame
Output: Pandas data frame
'''
# combine the sales and weather dataframes
combined_df = sales_df.merge(weather_df, left_index=True, right_index=True)
# create the sine and cosine vectors for each day of the year
sin_vect = pd.Series(combined_df.index).apply(lambda x: assign_sine_vector(x))
cos_vect = pd.Series(combined_df.index).apply(lambda x: assign_cosine_vector(x))
sin_vect.index = combined_df.index
cos_vect.index = combined_df.index
# add sine vector series to the dataframe
combo_with_sin_df = pd.concat([combined_df, sin_vect], axis=1)
combo_with_sin_df.rename(columns={'date': 'sin_vect'}, inplace=True)
# add cosine vector series to the dataframe
combo_with_cos_df = pd.concat([combo_with_sin_df, cos_vect], axis=1)
combo_with_cos_df.rename(columns={'date': 'cos_vect'}, inplace=True)
# create dummies out of the days of the week, and add dummies to the dataframe
day_of_week = combined_df['day_of_week']
days = pd.get_dummies(day_of_week)
combo_dummy_days_df = pd.concat([combo_with_cos_df, days], axis=1)
transformed_df = combo_dummy_days_df.drop(columns=['day_of_week'], axis=1)
# create columns for 2 week rolling averages to include recent sales
# as features
wd = combo_with_cos_df['day_of_week'].nunique()
transformed_df['rolling_{}'.format(str(2 * wd))] = transformed_df['net_sales'].rolling(2*wd).mean()
# drop the rows that have rolling averages with NaNs
return transformed_df[(2 * wd - 1):]
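# Rough usage sketch (assumes both frames share a date index, sales_df has 'net_sales' and
# 'day_of_week' columns, and assign_sine_vector/assign_cosine_vector are defined elsewhere
# in this module):
#
#   model_df = combine_data(sales_df, weather_df)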
def date_to_nth_day(date):
'''
Function to convert a datetime day to a number
Input: datetime date ie datetime(y, m, d)
Output: int
'''
date = pd.to_datetime(date)
new_year_day = pd.Timestamp(year=date.year, month=1, day=1)
#! /usr/bin/env python
import khmer
import numpy as np
from khmer.khmer_args import optimal_size
import os
import sys
# The following is for ease of development (so I don't need to keep re-installing the tool)
try:
from CMash import MinHash as MH
except ImportError:
try:
import MinHash as MH
except ImportError:
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from CMash import MinHash as MH
import pandas as pd
from multiprocessing import Pool # Much faster without dummy (threading)
import multiprocessing
import threading
from itertools import *
import argparse
import screed
# Helper function that uses equations (2.1) and (2.7) that tells you where you need
# to set the threshold to ensure (with confidence 1-t) that you got all organisms
# with coverage >= c
def threshold_calc(k, c, p, confidence):
delta = c*(1-np.sqrt(-2*np.log(1 - confidence)*(c+p) / float(c**2 * k)))
if delta < 0:
delta = 0
return delta
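# Worked example (illustrative numbers): threshold_calc(k=1000, c=0.1, p=0.01, confidence=0.95)
# gives delta = 0.1 * (1 - sqrt(-2*ln(0.05) * 0.11 / (0.01 * 1000))), which is roughly 0.074.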
# This will calculate the similarity indicies between one sketch and the sample NodeGraph
def compute_indicies(sketch, num_sample_kmers, true_fprate):
#global sample_kmers
num_hashes = len(sketch._kmers)
num_training_kmers = sketch._true_num_kmers
count = 0
adjust = 0
for kmer in sketch._kmers:
if kmer != '':
count += sample_kmers.get(kmer)
else:
adjust += 1 # Hash wasn't full, adjust accordingly
count -= int(np.round(true_fprate*num_hashes)) # adjust by the FP rate of the bloom filter
intersection_cardinality = count
containment_index = count / float(num_hashes - adjust)
jaccard_index = num_training_kmers * containment_index / float(num_training_kmers + num_sample_kmers - num_training_kmers * containment_index)
#print("Train %d sample %d" % (num_training_kmers, num_sample_kmers))
# It can happen that the query file has fewer k-mers in it than the training file, so just clamp to the nearest reasonable value
if containment_index > 1:
containment_index = 1
elif containment_index < 0:
containment_index = 0
if jaccard_index > 1:
jaccard_index = 1
elif jaccard_index < 0:
jaccard_index = 0
return intersection_cardinality, containment_index, jaccard_index
def unwrap_compute_indicies(arg):
return compute_indicies(*arg)
def restricted_float(x):
x = float(x)
if x < 0.0 or x >= 1.0:
raise argparse.ArgumentTypeError("%r not in range [0.0, 1.0)" % (x,))
return x
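# e.g. restricted_float("0.25") -> 0.25, while restricted_float("1.0") raises
# argparse.ArgumentTypeError because values must lie in [0.0, 1.0).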
# Can read in with: test=pd.read_csv(os.path.abspath('../data/results.csv'),index_col=0)
def main():
parser = argparse.ArgumentParser(description="This script creates a CSV file of similarity indices between the"
" input file and each of the sketches in the training/reference file.",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('-t', '--threads', type=int, help="Number of threads to use", default=multiprocessing.cpu_count())
parser.add_argument('-f', '--force', action="store_true", help="Force creation of new NodeGraph.")
parser.add_argument('-fp', '--fp_rate', type=restricted_float, help="False positive rate.", default=0.0001)
parser.add_argument('-ct', '--containment_threshold', type=restricted_float,
help="Only return results with containment index above this value", default=0.02)
parser.add_argument('-c', '--confidence', type=restricted_float,
help="Desired probability that all results were returned with containment index above threshold [-ct]",
default=0.95)
parser.add_argument('-ng', '--node_graph', help="NodeGraph/bloom filter location. Used if it exists; if not, one "
"will be created and put in the same directory as the specified "
"output CSV file.", default=None)
parser.add_argument('-b', '--base_name', action="store_true",
help="Flag to indicate that only the base names (not the full path) should be saved in the output CSV file")
parser.add_argument('-i', '--intersect_nodegraph', action="store_true",
help="Option to only insert query k-mers in bloom filter if they appear anywhere in the training"
" database. Note that the Jaccard estimates will now be "
"J(query intersect union_i training_i, training_i) instead of J(query, training_i), "
"but will use significantly less space.")
parser.add_argument('in_file', help="Input file: FASTQ/A file (can be gzipped).")
parser.add_argument('training_data', help="Training/reference data (HDF5 file created by MakeTrainingDatabase.py)")
parser.add_argument('out_csv', help='Output CSV file')
# Parse and check args
args = parser.parse_args()
base_name = args.base_name
training_data = os.path.abspath(args.training_data)
if not os.path.exists(training_data):
raise Exception("Training/reference file %s does not exist." % training_data)
# Let's get the k-mer sizes in the training database
ksizes = set()
# Import all the training data
sketches = MH.import_multiple_from_single_hdf5(training_data)
# Check for issues with the sketches (can also check if all the kmers make sense (i.e. no '' or non-ACTG characters))
if sketches[0]._kmers is None:
raise Exception("For some reason, the k-mers were not saved when the database was created. Try running MakeDNADatabase.py again.")
num_hashes = len(sketches[0]._kmers)
for i in range(len(sketches)):
sketch = sketches[i]
if sketch._kmers is None:
raise Exception(
"For some reason, the k-mers were not saved when the database was created. Try running MakeDNADatabase.py again.")
if len(sketch._kmers) != num_hashes:
raise Exception("Unequal number of hashes for sketch of %s" % sketch.input_file_name)
ksizes.add(sketch.ksize)
if len(ksizes) > 1:
raise Exception("Training/reference data uses different k-mer sizes. Culprit was %s." % (sketch.input_file_name))
# Get the appropriate k-mer size
ksize = ksizes.pop()
# Get number of threads to use
num_threads = args.threads
# Check and parse the query file
query_file = os.path.abspath(args.in_file)
if not os.path.exists(query_file):
raise Exception("Query file %s does not exist." % query_file)
# Node graph is stored in the output folder with name <InputFASTQ/A>.NodeGraph.K<k_size>
if args.node_graph is None: # If no node graph is specified, create one
node_graph_out = os.path.join(os.path.dirname(os.path.abspath(args.out_csv)),
os.path.basename(query_file) + ".NodeGraph.K" + str(ksize))
if not os.path.exists(node_graph_out): # Don't complain if the default location works
print("Node graph not provided (via -ng). Creating one at: %s" % node_graph_out)
elif os.path.exists(args.node_graph): # If one is specified and it exists, use it
node_graph_out = args.node_graph
else: # Otherwise, the specified one doesn't exist
raise Exception("Provided NodeGraph %s does not exist." % args.node_graph)
# import and check the intersect nodegraph
if args.intersect_nodegraph is True:
intersect_nodegraph_file = os.path.splitext(training_data)[0] + ".intersect.Nodegraph"
else:
intersect_nodegraph_file = None
intersect_nodegraph = None
if intersect_nodegraph_file is not None:
if not os.path.exists(intersect_nodegraph_file):
raise Exception("Intersection nodegraph does not exist. Please re-run MakeDNADatabase.py with the -i flag.")
try:
intersect_nodegraph = khmer.load_nodegraph(intersect_nodegraph_file)
if intersect_nodegraph.ksize() != ksize:
raise Exception("Given intersect nodegraph %s has K-mer size %d while the database K-mer size is %d"
% (intersect_nodegraph_file, intersect_nodegraph.ksize(), ksize))
except:
raise Exception("Could not load given intersect nodegraph %s" % intersect_nodegraph_file)
results_file = os.path.abspath(args.out_csv)
force = args.force
fprate = args.fp_rate
coverage_threshold = args.containment_threshold # desired coverage cutoff
confidence = args.confidence # desired confidence that you got all the organisms with coverage >= desired coverage
# Get names of training files for use as rows in returned tabular data
training_file_names = []
for i in range(len(sketches)):
training_file_names.append(sketches[i].input_file_name)
# Only form the Nodegraph if we need to
global sample_kmers
if not os.path.exists(node_graph_out) or force is True:
hll = khmer.HLLCounter(0.01, ksize)
hll.consume_seqfile(query_file)
full_kmer_count_estimate = hll.estimate_cardinality()
res = optimal_size(full_kmer_count_estimate, fp_rate=fprate)
if intersect_nodegraph is None: # If no intersect list was given, just populate the bloom filter
sample_kmers = khmer.Nodegraph(ksize, res.htable_size, res.num_htables)
#sample_kmers.consume_seqfile(query_file)
rparser = khmer.ReadParser(query_file)
threads = []
for _ in range(num_threads):
cur_thrd = threading.Thread(target=sample_kmers.consume_seqfile_with_reads_parser, args=(rparser,))
threads.append(cur_thrd)
cur_thrd.start()
for thread in threads:
thread.join()
else: # Otherwise, only put a k-mer in the bloom filter if it's in the intersect list
# (WARNING: this will cause the Jaccard index to be calculated in terms of J(query\intersect hash_list, training)
# instead of J(query, training)
# (TODO: fix this after khmer is updated)
#intersect_nodegraph_kmer_count = intersect_nodegraph.n_unique_kmers() # Doesnt work due to khmer bug
intersect_nodegraph_kmer_count = intersect_nodegraph.n_occupied() # Not technically correct, but I need to wait until khmer is updated
if intersect_nodegraph_kmer_count < full_kmer_count_estimate: # At max, we have as many k-mers as in the union of the training database (But makes this always return 0)
res = optimal_size(intersect_nodegraph_kmer_count, fp_rate=fprate)
sample_kmers = khmer.Nodegraph(ksize, res.htable_size, res.num_htables)
else:
sample_kmers = khmer.Nodegraph(ksize, res.htable_size, res.num_htables)
for record in screed.open(query_file):
seq = record.sequence
for i in range(len(seq) - ksize + 1):
kmer = seq[i:i + ksize]
if intersect_nodegraph.get(kmer) > 0:
sample_kmers.add(kmer)
# Save the sample_kmers
sample_kmers.save(node_graph_out)
true_fprate = khmer.calc_expected_collisions(sample_kmers, max_false_pos=0.99)
else:
sample_kmers = khmer.load_nodegraph(node_graph_out)
node_ksize = sample_kmers.ksize()
if node_ksize != ksize:
raise Exception("Node graph %s has wrong k-mer size of %d (input was %d). Try --force or change -k." % (
node_graph_out, node_ksize, ksize))
true_fprate = khmer.calc_expected_collisions(sample_kmers, max_false_pos=0.99)
#num_sample_kmers = sample_kmers.n_unique_kmers() # For some reason this only works when creating a new node graph, use the following instead
num_sample_kmers = sample_kmers.n_occupied()
# Compute all the indices for all the training data
pool = Pool(processes=num_threads)
res = pool.map(unwrap_compute_indicies, zip(sketches, repeat(num_sample_kmers), repeat(true_fprate)))
# Gather up the results in a nice form
intersection_cardinalities = np.zeros(len(sketches))
containment_indexes = np.zeros(len(sketches))
jaccard_indexes = np.zeros(len(sketches))
for i in range(len(res)):
(intersection_cardinality, containment_index, jaccard_index) = res[i]
intersection_cardinalities[i] = intersection_cardinality
containment_indexes[i] = containment_index
jaccard_indexes[i] = jaccard_index
d = {'intersection': intersection_cardinalities, 'containment index': containment_indexes, 'jaccard index': jaccard_indexes}
# Use only the basenames to label the rows (if requested)
if base_name is True:
df = pd.DataFrame(d, map(os.path.basename, training_file_names))
else:
df = | pd.DataFrame(d, training_file_names) | pandas.DataFrame |
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
# Read original data from Pandas
training_data_df = pd.read_csv('data/boxing_data_training.csv')
import pandas as pd
import os
from pathlib import Path
from sklearn.model_selection import train_test_split
from organize.templates import concat_templates
from train.utils import *
def add_png_path(df, run_params):
# Load DataFrame of relation between Original Filename and ID (IMG_XXX)
relation_df = pd.read_csv(os.path.join(run_params["PATH_PREFIX"], "relation.csv"))
relation_df = relation_df.set_index("Filename")
# Merge data to be able to load directly from preprocessed PNG file
final_df = df.set_index("ID").merge(relation_df, left_index=True, right_index=True)
final_df["ID"] = final_df.index.values
final_df = final_df.reset_index(drop=True)
final_df["Raw_preprocess"] = final_df["Original_Filename"].apply(
lambda filename: os.path.join(
run_params["RAW_PREPROCESS_FOLDER"], filename + ".png"
)
)
return final_df
def filter_centers_data(df, run_params):
# Read all the sources
metadata_save_path = os.path.join(run_params["PATH_PREFIX"], "metadata_raw.csv")
metadata_df = pd.read_csv(metadata_save_path)
# Filter metadata to only sent images fulfiling condition
filter_metadata_df = metadata_df[
(
metadata_df.InstitutionName.str.lower().str.contains("coslada").astype(bool)
| metadata_df.InstitutionName.str.lower().str.contains("cugat").astype(bool)
)
& (metadata_df.InstitutionName.notnull())
| (
metadata_df.AccessionNumber.astype("str").str.startswith("885")
# | metadata_df.AccessionNumber.astype('str').str.startswith('4104')
)
]
# Create DataFrame only with the Filename
filter_df = pd.DataFrame(
index=filter_metadata_df.fname.apply(lambda x: Path(x).name)
)
filter_df["check_condition"] = True
# Filter data to only the ones from desired centers
final_df = df.merge(
filter_df, left_on="Original_Filename", right_index=True, how="left"
)
final_df = final_df[
(final_df["check_condition"] == True) | (final_df["Target"] != "0")
]
return final_df
def robust_split_data(df, test_size, target_col, seed=None):
"""Split stratified data, in case of failing due to minor class too low, move it to test"""
filter_mask = pd.Series(
[
True,
]
* len(df),
index=df.index,
)
done = False
while not done:
# Try a stratified split; if it errors because the minority class has too few samples, that class goes to test
try:
train_df, test_df = train_test_split(
df[filter_mask],
test_size=test_size,
shuffle=True,
stratify=df.loc[filter_mask, target_col],
random_state=seed,
)
done = True
except ValueError as e:
if str(e).startswith("The least populated class"):
minor_class = df.loc[filter_mask, target_col].value_counts().index[-1]
filter_mask = (filter_mask) & (df[target_col] != minor_class)
else:
print("Test size is too low to use stratified, then split shuffling")
train_df, test_df = train_test_split(
df[filter_mask],
test_size=test_size,
shuffle=True,
random_state=seed,
)
done = True
# Add minor classes which have not been initially included due to the error on train_test_split
test_df = pd.concat([test_df, df[~filter_mask]], axis=0).sample(
frac=1, random_state=seed
)
return train_df, test_df
def imbalance_robust_split_data(
df, positive_df, test_size, positive_test_size, target_col, seed=None
):
"""Split between train and test according with the proportion of specified positives"""
# First split positive examples
pos_train_df, pos_test_df = robust_split_data(
positive_df, positive_test_size, target_col, seed=seed
)
# Identify as negative examples the ones from `df` which are not in `positive_df`
negative_df = df.merge(
positive_df,
left_index=True,
right_index=True,
how="left",
indicator=True,
suffixes=("", "_"),
)
negative_df = negative_df[negative_df["_merge"] == "left_only"][list(df.columns)]
# Split negative examples
neg_test_size = (len(df) * test_size - len(pos_test_df)) / (
len(df) - len(pos_train_df)
)
neg_train_df, neg_test_df = train_test_split(
negative_df, test_size=neg_test_size, shuffle=True, random_state=seed
)
# Join positive with negative examples and shuffle them
train_df = pd.concat([pos_train_df, neg_train_df]).sample(frac=1, random_state=seed)
test_df = pd.concat([pos_test_df, neg_test_df]).sample(frac=1, random_state=seed)
return train_df, test_df
def get_ratio(df, target_col="Target"):
targets = (df[target_col] != "0").sum()
non_targets = (df[target_col] == "0").sum()
ratio = targets / non_targets
return ratio
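# e.g. with 20 rows whose Target != "0" and 80 rows with Target == "0", get_ratio returns 20 / 80 = 0.25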
def rebalance_equal_to_target_df(df, target_df, target_col="Target", seed=42):
dataset_ratio = get_ratio(df, target_col=target_col)
target_dataset_ratio = get_ratio(target_df, target_col=target_col)
negative_oversampling_ratio = dataset_ratio / target_dataset_ratio
rebalanced_df = (
pd.concat(
[
df[df[target_col] == "0"].sample(
frac=negative_oversampling_ratio, replace=True, random_state=seed
),
df[df[target_col] != "0"],
]
)
.reset_index(drop=True)
.sample(frac=1, random_state=seed)
)
return rebalanced_df
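# Illustrative check of the oversampling factor: if df has ratio 0.5 (1 positive per 2 negatives)
# and target_df has ratio 0.25, negatives are resampled with frac = 0.5 / 0.25 = 2 (with
# replacement), so the rebalanced frame ends up with ratio 0.5 / 2 = 0.25, matching the target.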
def split_by_labelled_data(df, run_params):
# Load DataFrame containing labels of OOS classifier ('ap', 'other')
metadata_labels_path = os.path.join(
run_params["PATH_PREFIX"], "metadata_labels.csv"
)
metadata_labels = pd.read_csv(metadata_labels_path)
metadata_labels["Original_Filename"] = metadata_labels["Path"].apply(
lambda path: Path(path).stem
)
metadata_labels = metadata_labels.set_index("Original_Filename")
# Merge all the data we have with the labelling in order to split correctly according to OOS classifier
unlabel_all_df = metadata_labels.merge(
df.set_index("Original_Filename"), how="left", left_index=True, right_index=True
)
unlabel_all_df = unlabel_all_df[unlabel_all_df.Target.isnull()]
unlabel_all_df["Original_Filename"] = unlabel_all_df.index.values
unlabel_all_df["Raw_preprocess"] = unlabel_all_df["Original_Filename"].apply(
lambda filename: os.path.join(
run_params["RAW_PREPROCESS_FOLDER"], filename + ".png"
)
)
# Define which column to use as the prediction
if "Final_pred" in unlabel_all_df.columns:
pred_col = "Final_pred"
else:
pred_col = "Pred"
# Conditions for AP radiographies on unlabel data
ap_match = (unlabel_all_df[pred_col] == "ap") & (
unlabel_all_df.Incorrect_image.isnull()
)
# Split between label_df (labelled data), `unlabel_df` (containing only AP) and `unlabel_not_ap_df` (with the rest of unlabel data)
label_df = df[df["Target"].notnull()].reset_index(drop=True)
unlabel_df = unlabel_all_df[ap_match].reset_index(drop=True)
unlabel_not_ap_df = unlabel_all_df[~ap_match].reset_index(drop=True)
return label_df, unlabel_df, unlabel_not_ap_df
def split_datasets(label_df, run_params):
"""Split between train, valid and test according with the proportion of specified positives"""
if run_params["TEST_SIZE"] != 0:
if run_params["POSITIVES_ON_TRAIN"]:
positive_test_size = (1 - run_params["POSITIVES_ON_TRAIN"]) * (
run_params["TEST_SIZE"]
/ (run_params["VALID_SIZE"] + run_params["TEST_SIZE"])
)
train_df, test_df = imbalance_robust_split_data(
label_df,
label_df[label_df["Target"] != "0"],
test_size=run_params["TEST_SIZE"],
positive_test_size=positive_test_size,
target_col="Target",
seed=run_params["DATA_SEED"],
)
else:
train_df, test_df = robust_split_data(
label_df,
run_params["TEST_SIZE"],
"Target",
seed=run_params["DATA_SEED"],
)
else:
test_df = pd.DataFrame([], columns=label_df.columns)
train_df = label_df
if run_params["VALID_SIZE"] != 0:
if run_params["POSITIVES_ON_TRAIN"]:
positive_test_size = (1 - run_params["POSITIVES_ON_TRAIN"]) * (
run_params["TEST_SIZE"]
/ (run_params["VALID_SIZE"] + run_params["TEST_SIZE"])
)
positive_valid_size = (
1 - run_params["POSITIVES_ON_TRAIN"] - positive_test_size
) / (1 - positive_test_size)
train_df, val_df = imbalance_robust_split_data(
train_df,
train_df[train_df["Target"] != "0"],
test_size=run_params["VALID_SIZE"] / (1 - run_params["TEST_SIZE"]),
positive_test_size=positive_valid_size,
target_col="Target",
seed=run_params["DATA_SEED"],
)
else:
train_df, val_df = robust_split_data(
train_df,
run_params["VALID_SIZE"],
"Target",
seed=run_params["DATA_SEED"],
)
else:
val_df = pd.DataFrame([], columns=label_df.columns)
train_df["Dataset"] = "train"
val_df["Dataset"] = "valid"
test_df["Dataset"] = "test"
label_df = pd.concat(
[
train_df,
val_df,
test_df,
]
).reset_index(drop=True)
return label_df
def rebalance_datasets(label_df, run_params):
"""Modify positive-negative proportion on each dataset to meet specification of positives.
Test with same proportion as the initial dataset, Dev same as train and Valid as initial dataset."""
if run_params["POSITIVES_ON_TRAIN"]:
if run_params["TEST_SIZE"] != 0:
# Test set should mantain balance equal to the original data
test_df = rebalance_equal_to_target_df(
label_df[label_df["Dataset"] == "test"],
label_df,
target_col="Target",
seed=run_params["DATA_SEED"],
)
else:
test_df = pd.DataFrame([], columns=label_df.columns)
if run_params["VALID_SIZE"] != 0:
# Dev set is use to track overfitting so it better to be proportional to Training set
dev_df = rebalance_equal_to_target_df(
label_df[label_df["Dataset"] == "valid"],
label_df[label_df["Dataset"] == "train"],
target_col="Target",
seed=run_params["DATA_SEED"],
)
# Validation set is used to represent the real proportion
val_df = rebalance_equal_to_target_df(
label_df[label_df["Dataset"] == "valid"],
label_df,
target_col="Target",
seed=run_params["DATA_SEED"],
)
else:
dev_df = pd.DataFrame([], columns=label_df.columns)
val_df = pd.DataFrame([], columns=label_df.columns)
else:
dev_df = pd.DataFrame([], columns=label_df.columns)
val_df = pd.DataFrame([], columns=label_df.columns)
test_df = pd.DataFrame([], columns=label_df.columns)
from enum import auto
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from tkinter import *
import tkinter as tk
from tkinter.ttk import *
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import requests
import json
import locale
locale.setlocale(locale.LC_ALL, 'en_IN.utf8')
from bs4 import BeautifulSoup as soup
from datetime import date, datetime
from urllib.request import Request, urlopen
import warnings
warnings.filterwarnings("ignore")
# import matplotlib.pyplot as plt
# from pandas_profiling import ProfileReport
# import seaborn as sns
# import gc
# import plotly.graph_objs as gob
# import geopandas as gpd
# import plotly.offline as py
# import os
####################################################################
vaccine = pd.read_csv('country_vaccinations.csv')
vaccine['date_new'] = pd.to_datetime(vaccine['date'])
covid=pd.read_csv('worldometer_coronavirus_daily_data.csv')
covid_cum = pd.read_csv('worldometer_coronavirus_summary_data.csv')
import pandas
import numpy as np
from typing import List, Callable
from scipy.stats import mannwhitneyu, kruskal
from sources.experiment.experiment_loader import ExperimentLoader
def save_mni_table_csv(csv_path: str, experiments_list: List[ExperimentLoader], labels: List[str]):
dict_list = {}
n_snp = experiments_list[0].dataset['snapshot_count']
for i_exp, exp in enumerate(experiments_list):
snp_values = []
mni_matrix = exp.get_mni_matrix()
for i in range(n_snp):
mean = np.mean(mni_matrix[:, i])
std = np.std(mni_matrix[:, i])
snp_values.append("{0: .4f} +/- {1: .4f}".format(mean, std))
dict_list[labels[i_exp]] = snp_values
df = pandas.DataFrame.from_dict(dict_list, orient='index')
df.to_csv(csv_path)
print(df)
def save_mni_kruskall_table_csv(csv_path: str, experiments_list: List[ExperimentLoader], alpha=0.01):
n_snp = experiments_list[0].dataset['snapshot_count']
mni_exp_list = [exp.get_mni_matrix() for exp in experiments_list]
empty_str = ""
for i in range(n_snp):
data = [nmi[:, i] for nmi in mni_exp_list]
try:
_, p = kruskal(*data)
if p <= alpha:
# reject the null hypothesis, are not the same
empty_str += "\u2714"
else:
# cannot reject the null hypothesis
empty_str += "\u2716"
except ValueError:
# if all the values are the same then don't reject the null hypothesis
empty_str += "\u2592"
with open(csv_path, 'w') as f:
f.write(empty_str)
print(empty_str)
# --------------------------------------------------------------------------------------------------------------------
# AVAILABLE DATA
# ---------------------------------------------------------------------------------------------------------------------
def _get_max_sum_hv_data(exp_list: List[ExperimentLoader]) -> List[np.array]:
hv_matrix = [exp.get_hypervolume_matrix() for exp in exp_list]
hv_matrix = [hv[:, :, -1] for hv in hv_matrix]
hv_sum = [np.sum(hv, axis=1) for hv in hv_matrix]
return hv_sum
def _get_avg_sqr_error_data(exp_list: List[ExperimentLoader]) -> List[np.array]:
mni_list = [exp.get_mni_matrix() for exp in exp_list]
sqr_error = [((mni - 1.0) ** 2).mean(axis=1) for mni in mni_list]
return sqr_error
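# e.g. an NMI row of [0.9, 1.0, 0.8] contributes mean((NMI - 1)^2) = (0.01 + 0.0 + 0.04) / 3, about 0.0167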
# --------------------------------------------------------------------------------------------------------------------
# GENERIC TABLE GENERATION
# ---------------------------------------------------------------------------------------------------------------------
def _save_default_matrix_csv(csv_path: str, experiments_matrix: List[List[ExperimentLoader]], labels: List[str],
datasets: List[str], data_method: Callable[[List[ExperimentLoader]], List[np.array]],
prec: int):
assert len(experiments_matrix) == len(labels), "first dimension of experiment matrix must have the same length as labels"
assert len(experiments_matrix[0]) == len(datasets), "second dimension of experiment matrix must have the same length as datasets"
dict_list = {}
n_exp = len(datasets)
for i_exp, exp_list in enumerate(experiments_matrix):
hv_sum = data_method(exp_list)
mean = [np.mean(hv) for hv in hv_sum]
std = [np.std(hv) for hv in hv_sum]
snp_values = ["{: .{}f} +/- {: .{}f}".format(mean[i], prec, std[i], prec) for i in range(n_exp)]
dict_list[labels[i_exp]] = snp_values
df = pandas.DataFrame.from_dict(dict_list, orient='index', columns=datasets)
# Module: Preprocess
# Author: <NAME> <<EMAIL>>
# License: MIT
import pandas as pd
import numpy as np
import ipywidgets as wg
from IPython.display import display
from ipywidgets import Layout
from sklearn.base import BaseEstimator, TransformerMixin, ClassifierMixin, clone
from sklearn.impute._base import _BaseImputer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import MaxAbsScaler
from sklearn.preprocessing import PowerTransformer
from sklearn.preprocessing import QuantileTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.manifold import TSNE
from sklearn.decomposition import IncrementalPCA
from sklearn.preprocessing import KBinsDiscretizer
from pyod.models.knn import KNN
from pyod.models.iforest import IForest
from pyod.models.pca import PCA as PCA_od
from sklearn import cluster
from scipy import stats
from sklearn.ensemble import RandomForestClassifier as rfc
from sklearn.ensemble import RandomForestRegressor as rfr
from lightgbm import LGBMClassifier as lgbmc
from lightgbm import LGBMRegressor as lgbmr
import sys
import gc
from sklearn.pipeline import Pipeline
from sklearn import metrics
from datetime import datetime
import calendar
from sklearn.preprocessing import LabelEncoder
from collections import defaultdict
from typing import Optional, Union
from pycaret.internal.logging import get_logger
from pycaret.internal.utils import infer_ml_usecase
from sklearn.utils.validation import check_is_fitted, check_X_y, check_random_state
from sklearn.utils.validation import _deprecate_positional_args
from sklearn.utils import _safe_indexing
from sklearn.exceptions import NotFittedError
pd.set_option("display.max_columns", 500)
pd.set_option("display.max_rows", 500)
SKLEARN_EMPTY_STEP = "passthrough"
# _____________________________________________________________________________________________________________________________
def str_if_not_null(x):
if pd.isnull(x) or (x is None) or pd.isna(x) or (x is not x):
return x
return str(x)
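# e.g. str_if_not_null(3.0) -> '3.0', while NaN/None values are passed through unchanged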
def find_id_columns(data, target, numerical_features):
# sometimes we have an id column in the data set; we will try to find it and then drop it if found
len_samples = len(data)
id_columns = []
for i in data.select_dtypes(
include=["object", "int64", "float64", "float32"]
).columns:
col = data[i]
if i not in numerical_features and i != target:
if sum(col.isnull()) == 0:
try:
col = col.astype("int64")
except:
continue
if col.nunique() == len_samples:
# we extract column and sort it
features = col.sort_values()
# now we subtract the i-th value from the (i+1)-th (calculating increments)
increments = features.diff()[1:]
# if all increments are 1 (with float tolerance), then the column is ID column
if sum(np.abs(increments - 1) < 1e-7) == len_samples - 1:
id_columns.append(i)
return id_columns
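# e.g. an int-like column holding 1, 2, ..., n (one distinct consecutive value per row, no nulls)
# is flagged as an ID column, because after sorting every increment equals 1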
class DataTypes_Auto_infer(BaseEstimator, TransformerMixin):
"""
- This will try to infer data types automatically; the option to override the learned data types is also available.
- It also automatically deletes duplicate columns (same values or same column name), removes rows where the target variable is null, and
removes columns and rows where all the records are null
"""
def __init__(
self,
target,
ml_usecase,
categorical_features=[],
numerical_features=[],
time_features=[],
features_todrop=[],
id_columns=[],
display_types=True,
float_dtype="float32",
): # nothing to define
"""
User to define the target (y) variable
args:
target: string, name of the target variable
ml_usecase: string , 'regression' or 'classification'. For now, only two-class classification is supported
- this is useful in case the target variable is an object / string: it will replace the strings with integers
categorical_features: list of categorical features, default None, when None best guess will be used to identify categorical features
numerical_features: list of numerical features, default None, when None best guess will be used to identify numerical features
time_features: list of date/time features, default None, when None best guess will be used to identify date/time features
"""
self.target = target
self.ml_usecase = ml_usecase
self.features_todrop = [str(x) for x in features_todrop]
self.categorical_features = [
x for x in categorical_features if x not in self.features_todrop
]
self.numerical_features = [
x for x in numerical_features if x not in self.features_todrop
]
self.time_features = [x for x in time_features if x not in self.features_todrop]
self.display_types = display_types
self.id_columns = id_columns
self.float_dtype = float_dtype
def fit(self, dataset, y=None): # learning data types of all the columns
"""
Args:
dataset: accepts a pandas DataFrame
Returns:
Pandas DataFrame
"""
data = dataset.copy()
# also make sure that all the column names are string
data.columns = [str(i) for i in data.columns]
# drop any columns that were asked to drop
data.drop(columns=self.features_todrop, errors="ignore", inplace=True)
# remove special characters from column names
# data.columns= data.columns.str.replace('[,]','')
# we will take float as numeric and object as categorical from the beginning
# for int64, we will check the proportion of unique counts to the total length of the data
# if the proportion is low, then the column is probably categorical
# however, the proportion can be low / distorted due to a smaller denominator (total length / number of samples)
# so we will use the following chart
# 0-50 samples, threshold is 24%
# 50-100 samples, th is 12%
# 50-250 samples , th is 4.8%
# 250-500 samples, th is 2.4%
# 500 and above, 2% or below
# if there are inf or -inf then replace them with NaN
data.replace([np.inf, -np.inf], np.NaN, inplace=True)
# in case columns were wrongly read in as object, we can try converting them to int64
for i in data.select_dtypes(include=["object"]).columns:
try:
data[i] = data[i].astype("int64")
except:
None
for i in (
data.select_dtypes(include=["object"])
.drop(self.target, axis=1, errors="ignore")
.columns
):
try:
data[i] = pd.to_datetime(
data[i], infer_datetime_format=True, utc=False, errors="raise"
)
except:
continue
# if data type is bool or pandas Categorical , convert it to object
for i in data.select_dtypes(include=["bool", "category"]).columns:
data[i] = data[i].astype("object")
# with csv, if we have any null in a column that was int, pandas will read it as float.
# so first we convert back any such float columns that have NaNs, no real decimal values, and 20 or fewer unique values
for i in data.select_dtypes(include=["float64"]).columns:
data[i] = data[i].astype(self.float_dtype)
# count how many Nas are there
na_count = sum(data[i].isnull())
# count how many values have a decimal part
count_float = np.nansum(
[False if r.is_integer() else True for r in data[i]]
)
# total values with a decimal part
count_float = (
count_float - na_count
) # reducing it because we know NaN is counted as a float digit
# now if there is no value with a decimal part, unique levels are 20 or fewer, and there are NaNs, then convert it to object
if (count_float == 0) & (data[i].nunique() <= 20) & (na_count > 0):
data[i] = data[i].astype("object")
# should really be an absolute number say 20
# length = len(data.iloc[:,0])
# if length in range(0,51):
# th=.25
# elif length in range(51,101):
# th=.12
# elif length in range(101,251):
# th=.048
# elif length in range(251,501):
# th=.024
# elif length > 500:
# th=.02
# for int64 columns (excluding the target): treat as categorical if there are 20 or fewer unique values, otherwise cast to float
for i in data.select_dtypes(include=["int64"]).columns:
if i != self.target:
if data[i].nunique() <= 20: # hard coded
data[i] = data[i].apply(str_if_not_null)
else:
data[i] = data[i].astype(self.float_dtype)
# # if a column is float and only has two unique values, it is probably one-hot encoded
# # make it object
for i in data.select_dtypes(include=[self.float_dtype]).columns:
if data[i].nunique() == 2:
data[i] = data[i].apply(str_if_not_null)
# for time & dates
# self.drop_time = [] # for now we are deleting time columns
# now in case we were given any specific column dtypes in advance, we will override those
for i in self.categorical_features:
try:
data[i] = data[i].apply(str_if_not_null)
except:
data[i] = dataset[i].apply(str_if_not_null)
for i in self.numerical_features:
try:
data[i] = data[i].astype(self.float_dtype)
except:
data[i] = dataset[i].astype(self.float_dtype)
for i in self.time_features:
try:
data[i] = pd.to_datetime(
data[i], infer_datetime_format=True, utc=False, errors="raise"
)
except:
data[i] = pd.to_datetime(
dataset[i], infer_datetime_format=True, utc=False, errors="raise"
)
for i in data.select_dtypes(
include=["datetime64", "datetime64[ns, UTC]"]
).columns:
data[i] = data[i].astype("datetime64[ns]")
# table of learned types
self.learned_dtypes = data.dtypes
# self.training_columns = data.drop(self.target,axis=1).columns
# if there are inf or -inf then replace them with NaN
data = data.replace([np.inf, -np.inf], np.NaN).astype(self.learned_dtypes)
# lets remove duplicates
# remove duplicate columns (columns with same values)
# (too expensive on bigger data sets)
# data_c = data.T.drop_duplicates()
# data = data_c.T
# remove columns with duplicate name
data = data.loc[:, ~data.columns.duplicated()]
# Remove NAs
data.dropna(axis=0, how="all", inplace=True)
data.dropna(axis=1, how="all", inplace=True)
# remove the row if target column has NA
try:
data.dropna(subset=[self.target], inplace=True)
except KeyError:
pass
# self.training_columns = data.drop(self.target,axis=1).columns
# since the transpose changed all data types, we used to restore the original dtypes here ---- not required any more since we no longer transpose
# for i in data.columns: # we are taking all the columns in test , so we dot have to worry about droping target column
# data[i] = data[i].astype(self.learned_dtypes[self.learned_dtypes.index==i])
if self.display_types == True:
display(
wg.Text(
value="Following data types have been inferred automatically, if they are correct press enter to continue or type 'quit' otherwise.",
layout=Layout(width="100%"),
),
display_id="m1",
)
dt_print_out = pd.DataFrame(
self.learned_dtypes, columns=["Feature_Type"]
).drop("UNSUPERVISED_DUMMY_TARGET", errors="ignore")
dt_print_out["Data Type"] = ""
for i in dt_print_out.index:
if i != self.target:
if i in self.id_columns:
dt_print_out.loc[i, "Data Type"] = "ID Column"
elif dt_print_out.loc[i, "Feature_Type"] == "object":
dt_print_out.loc[i, "Data Type"] = "Categorical"
elif dt_print_out.loc[i, "Feature_Type"] == self.float_dtype:
dt_print_out.loc[i, "Data Type"] = "Numeric"
elif dt_print_out.loc[i, "Feature_Type"] == "datetime64[ns]":
dt_print_out.loc[i, "Data Type"] = "Date"
# elif dt_print_out.loc[i,'Feature_Type'] == 'int64':
# dt_print_out.loc[i,'Data Type'] = 'Categorical'
else:
dt_print_out.loc[i, "Data Type"] = "Label"
# if we added the dummy target column , then drop it
dt_print_out.drop(index="dummy_target", errors="ignore", inplace=True)
display(dt_print_out[["Data Type"]])
self.response = input()
if self.response in [
"quit",
"Quit",
"exit",
"EXIT",
"q",
"Q",
"e",
"E",
"QUIT",
"Exit",
]:
sys.exit(
"Read the documentation of setup to learn how to overwrite data types over the inferred types. setup function must run again before you continue modeling."
)
# drop time columns
# data.drop(self.drop_time,axis=1,errors='ignore',inplace=True)
# drop id columns
data.drop(self.id_columns, axis=1, errors="ignore", inplace=True)
return data
def transform(self, dataset, y=None):
"""
Args:
dataset: accepts a pandas DataFrame
Returns:
Pandas DataFrame
"""
data = dataset.copy()
# also make sure that all the column names are string
data.columns = [str(i) for i in data.columns]
# drop any columns that were asked to drop
data.drop(columns=self.features_todrop, errors="ignore", inplace=True)
data = data[self.final_training_columns]
# also make sure that all the column names are string
data.columns = [str(i) for i in data.columns]
# if there are inf or -inf then replace them with NaN
data.replace([np.inf, -np.inf], np.NaN, inplace=True)
try:
data.dropna(subset=[self.target], inplace=True)
except KeyError:
pass
# remove special characters from column names
# data.columns= data.columns.str.replace('[,]','')
# the very first thing we need to do is to check that the training and test data have the same columns
for i in self.final_training_columns:
if i not in data.columns:
raise TypeError(
f"test data does not have column {i} which was used for training."
)
# just keep picking the data and keep applying to the test data set (be mindful of target variable)
for i in data.columns:  # we are taking all the columns in test, so we don't have to worry about dropping the target column
if i == self.target and (
(self.ml_usecase == "classification")
and (self.learned_dtypes[self.target] == "object")
):
data[i] = self.le.transform(data[i].apply(str).astype("object"))
data[i] = data[i].astype("int64")
else:
if self.learned_dtypes[i].name == "datetime64[ns]":
data[i] = pd.to_datetime(
data[i], infer_datetime_format=True, utc=False, errors="coerce"
)
data[i] = data[i].astype(self.learned_dtypes[i])
# drop time columns
# data.drop(self.drop_time,axis=1,errors='ignore',inplace=True)
# drop id columns
data.drop(self.id_columns, axis=1, errors="ignore", inplace=True)
return data
# fit_transform
def fit_transform(self, dataset, y=None):
data = dataset
# since this is for training, we don't need any transformation here since it has already been transformed in fit
data = self.fit(data)
# additionally we just need to treat the target variable
# for ml use case
if (self.ml_usecase == "classification") & (
data[self.target].dtype == "object"
):
self.le = LabelEncoder()
data[self.target] = self.le.fit_transform(
data[self.target].apply(str).astype("object")
)
self.replacement = _get_labelencoder_reverse_dict(self.le)
# self.u = list(pd.unique(data[self.target]))
# self.replacement = np.arange(0,len(self.u))
# data[self.target]= data[self.target].replace(self.u,self.replacement)
# data[self.target] = data[self.target].astype('int64')
# self.replacement = pd.DataFrame(dict(target_variable=self.u,replaced_with=self.replacement))
# drop time columns
# data.drop(self.drop_time,axis=1,errors='ignore',inplace=True)
# drop id columns
data.drop(self.id_columns, axis=1, errors="ignore", inplace=True)
# finally save a list of columns that we would need from test data set
self.final_training_columns = data.columns.to_list()
self.final_training_columns.remove(self.target)
return data
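# Usage sketch (illustrative only, not part of the original pipeline): the dtype-definition
# step above is fitted on the training frame and then re-applied to unseen data with the same
# learned dtypes and label encoding. The variable names below are assumptions; the constructor
# arguments are defined earlier in this module and are not shown here.
#
#   dtypes_step = Define_dataTypes(...)              # configured with target / ml_usecase elsewhere
#   train_prepared = dtypes_step.fit_transform(train_df)
#   test_prepared = dtypes_step.transform(test_df)   # raises if a training column is missing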
# _______________________________________________________________________________________________________________________
# Imputation
class Simple_Imputer(_BaseImputer):
"""
Imputes all type of data (numerical,categorical & Time).
Highly recommended to run Define_dataTypes class first
Numerical values can be imputed with mean or median or filled with zeros
categorical missing values will be replaced with "Other"
Time values are imputed with the most frequesnt value
Ignores target (y) variable
Args:
Numeric_strategy: string , all possible values {'mean','median','zero'}
categorical_strategy: string , all possible values {'not_available','most frequent'}
target: string , name of the target variable
fill_value_numerical: number, value for filling missing values of numeric columns
fill_value_categorical: string, value for filling missing values of categorical columns
"""
_numeric_strategies = {
"mean": "mean",
"median": "median",
"most frequent": "most_frequent",
"zero": "constant",
}
_categorical_strategies = {
"most frequent": "most_frequent",
"not_available": "constant",
}
_time_strategies = {
"mean": "mean",
"median": "median",
"most frequent": "most_frequent",
}
def __init__(
self,
numeric_strategy,
categorical_strategy,
time_strategy,
target,
fill_value_numerical=0,
fill_value_categorical="not_available",
):
# Set the target variable, which we don't want to impute
self.target = target
if numeric_strategy not in self._numeric_strategies:
numeric_strategy = "zero"
self.numeric_strategy = numeric_strategy
if categorical_strategy not in self._categorical_strategies:
categorical_strategy = "most frequent"
self.categorical_strategy = categorical_strategy
if time_strategy not in self._time_strategies:
time_strategy = "most frequent"
self.time_strategy = time_strategy
self.fill_value_numerical = fill_value_numerical
self.fill_value_categorical = fill_value_categorical
# self.most_frequent_time = []
self.numeric_imputer = SimpleImputer(
strategy=self._numeric_strategies[self.numeric_strategy],
fill_value=fill_value_numerical,
)
self.categorical_imputer = SimpleImputer(
strategy=self._categorical_strategies[self.categorical_strategy],
fill_value=fill_value_categorical,
)
self.time_imputer = SimpleImputer(
strategy=self._time_strategies[self.time_strategy],
)
def fit(self, X, y=None):
"""
Fit the imputer on dataset.
Args:
X : pd.DataFrame, the dataset to be imputed
Returns:
self : Simple_Imputer
"""
try:
data = X.drop(self.target, axis=1)
except:
data = X
self.numeric_columns = data.select_dtypes(
include=["float32", "float64", "int32", "int64"]
).columns
self.categorical_columns = data.select_dtypes(
include=["object", "bool", "string", "category"]
).columns
self.time_columns = data.select_dtypes(
include=["datetime64[ns]", "timedelta64[ns]"]
).columns
statistics = []
if not self.numeric_columns.empty:
self.numeric_imputer.fit(data[self.numeric_columns])
statistics.append((self.numeric_imputer.statistics_, self.numeric_columns))
if not self.categorical_columns.empty:
self.categorical_imputer.fit(data[self.categorical_columns])
statistics.append(
(self.categorical_imputer.statistics_, self.categorical_columns)
)
if not self.time_columns.empty:
for col in self.time_columns:
data[col] = data[col][data[col].notnull()].astype(np.int64)
self.time_imputer.fit(data[self.time_columns])
statistics.append((self.time_imputer.statistics_, self.time_columns))
self.statistics_ = np.zeros(shape=len(data.columns), dtype=object)
columns = list(data.columns)
for s, index in statistics:
for i, j in enumerate(index):
self.statistics_[columns.index(j)] = s[i]
return self
def transform(self, X, y=None):
"""
Impute all missing values in dataset.
Args:
X: pd.DataFrame, the dataset to be imputed
Returns:
data: pd.DataFrame, the imputed dataset
"""
data = X
imputed_data = []
if not self.numeric_columns.empty:
numeric_data = pd.DataFrame(
self.numeric_imputer.transform(data[self.numeric_columns]),
columns=self.numeric_columns,
index=data.index,
)
imputed_data.append(numeric_data)
if not self.categorical_columns.empty:
categorical_data = pd.DataFrame(
self.categorical_imputer.transform(data[self.categorical_columns]),
columns=self.categorical_columns,
index=data.index,
)
for col in categorical_data.columns:
categorical_data[col] = categorical_data[col].apply(str)
imputed_data.append(categorical_data)
if not self.time_columns.empty:
datetime_columns = data.select_dtypes(include=["datetime"]).columns
timedelta_columns = data.select_dtypes(include=["timedelta"]).columns
timedata_copy = data[self.time_columns].copy()
for col in self.time_columns:
timedata_copy[col] = timedata_copy[col][
timedata_copy[col].notnull()
].astype(np.int64)
time_data = pd.DataFrame(
self.time_imputer.transform(timedata_copy),
columns=self.time_columns,
index=data.index,
)
for col in datetime_columns:
# keep the original non-missing timestamps and cast the imputed integer fills back to Timestamp
time_data.loc[data[col].notnull(), col] = data.loc[data[col].notnull(), col]
time_data[col] = time_data[col].apply(pd.Timestamp)
for col in timedelta_columns:
time_data.loc[data[col].notnull(), col] = data.loc[data[col].notnull(), col]
time_data[col] = time_data[col].apply(pd.Timedelta)
imputed_data.append(time_data)
if imputed_data:
data.update(pd.concat(imputed_data, axis=1))
data.astype(X.dtypes)
return data
def fit_transform(self, X, y=None):
"""
Fit and impute on dataset.
Args:
X: pd.DataFrame, the dataset to be fitted and imputed
Returns:
pd.DataFrame, the imputed dataset
"""
data = X
self.fit(data)
return self.transform(data)
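# Usage sketch (illustrative only; "train_df", "test_df" and the target name "target" are
# assumptions): numeric, categorical and datetime columns are imputed with the strategies
# mapped above, while the target column itself is never imputed.
#
#   imputer = Simple_Imputer(
#       numeric_strategy="median",
#       categorical_strategy="most frequent",
#       time_strategy="most frequent",
#       target="target",
#   )
#   train_imputed = imputer.fit_transform(train_df)
#   test_imputed = imputer.transform(test_df)        # reuses the statistics learned on train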
# _______________________________________________________________________________________________________________________
# Imputation with surrogate columns
class Surrogate_Imputer(_BaseImputer):
"""
Imputes feature with surrogate column (numerical,categorical & Time).
- Highly recommended to run Define_dataTypes class first
- it is also recommended to only apply this to features where it makes business sense to creat surrogate column
- feature name has to be provided
- only able to handle one feature at a time
- Numerical values can be imputed with mean or median or filled with zeros
- categorical missing values will be replaced with "Other"
- Time values are imputed with the most frequesnt value
- Ignores target (y) variable
Args:
feature_name: string, provide features name
feature_type: string , all possible values {'numeric','categorical','date'}
strategy: string ,all possible values {'mean','median','zero','not_available','most frequent'}
target: string , name of the target variable
"""
def __init__(self, numeric_strategy, categorical_strategy, target):
self.numeric_strategy = numeric_strategy
self.target = target
self.categorical_strategy = categorical_strategy
def fit(self, dataset, y=None): #
def zeros(x):
return 0
data = dataset
# make a table for numerical variable with strategy stats
if self.numeric_strategy == "mean":
self.numeric_stats = (
data.drop(self.target, axis=1)
.select_dtypes(include=["float32", "float64", "int64"])
.apply(np.nanmean)
)
elif self.numeric_strategy == "median":
self.numeric_stats = (
data.drop(self.target, axis=1)
.select_dtypes(include=["float32", "float64", "int64"])
.apply(np.nanmedian)
)
else:
self.numeric_stats = (
data.drop(self.target, axis=1)
.select_dtypes(include=["float32", "float64", "int64"])
.apply(zeros)
)
self.numeric_columns = (
data.drop(self.target, axis=1)
.select_dtypes(include=["float32", "float64", "int64"])
.columns
)
# also need to learn if any columns had NA in training
self.numeric_na = pd.DataFrame(columns=self.numeric_columns)
for i in self.numeric_columns:
if data[i].isnull().any() == True:
self.numeric_na.loc[0, i] = True
else:
self.numeric_na.loc[0, i] = False
# for Categorical columns,
if self.categorical_strategy == "most frequent":
self.categorical_columns = (
data.drop(self.target, axis=1).select_dtypes(include=["object"]).columns
)
self.categorical_stats = pd.DataFrame(
columns=self.categorical_columns
) # place holder
for i in self.categorical_stats.columns:
self.categorical_stats.loc[0, i] = data[i].value_counts().index[0]
# also need to learn if any columns had NA in training, but this is only valid if strategy is "most frequent"
self.categorical_na = pd.DataFrame(columns=self.categorical_columns)
for i in self.categorical_columns:
if sum(data[i].isnull()) > 0:
self.categorical_na.loc[0, i] = True
else:
self.categorical_na.loc[0, i] = False
else:
self.categorical_columns = (
data.drop(self.target, axis=1).select_dtypes(include=["object"]).columns
)
self.categorical_na = pd.DataFrame(columns=self.categorical_columns)
self.categorical_na.loc[
0, :
] = False # (in this situation we are not making any surrogate column)
# for time, there is only one way, pick up the most frequent one
self.time_columns = (
data.drop(self.target, axis=1)
.select_dtypes(include=["datetime64[ns]"])
.columns
)
self.time_stats = pd.DataFrame(columns=self.time_columns) # place holder
self.time_na = pd.DataFrame(columns=self.time_columns)
for i in self.time_columns:
self.time_stats.loc[0, i] = data[i].value_counts().index[0]
# learn if time columns were NA
for i in self.time_columns:
if data[i].isnull().any() == True:
self.time_na.loc[0, i] = True
else:
self.time_na.loc[0, i] = False
return data # nothing to return
def transform(self, dataset, y=None):
data = dataset
# for numeric columns
for i, s in zip(data[self.numeric_columns].columns, self.numeric_stats):
array = data[i].isnull()
data[i].fillna(s, inplace=True)
# make a surrogate column if there was any
if self.numeric_na.loc[0, i] == True:
data[i + "_surrogate"] = array
# make it string
data[i + "_surrogate"] = data[i + "_surrogate"].apply(str)
# for categorical columns
if self.categorical_strategy == "most frequent":
for i in self.categorical_stats.columns:
# data[i].fillna(self.categorical_stats.loc[0,i],inplace=True)
array = data[i].isnull()
data[i] = data[i].fillna(self.categorical_stats.loc[0, i])
data[i] = data[i].apply(str)
# make surrogate column
if self.categorical_na.loc[0, i] == True:
data[i + "_surrogate"] = array
# make it string
data[i + "_surrogate"] = data[i + "_surrogate"].apply(str)
else: # this means replace na with "not_available"
for i in self.categorical_columns:
data[i].fillna("not_available", inplace=True)
data[i] = data[i].apply(str)
# no need to make a surrogate column since "not_available" itself flags the missing values
# for time
for i in self.time_stats.columns:
array = data[i].isnull()
data[i].fillna(self.time_stats.loc[0, i], inplace=True)
# make surrogate column
if self.time_na.loc[0, i] == True:
data[i + "_surrogate"] = array
# make it string
data[i + "_surrogate"] = data[i + "_surrogate"].apply(str)
return data
def fit_transform(self, dataset, y=None):
data = dataset
data = self.fit(data)
return self.transform(data)
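# Usage sketch (illustrative only; variable names are assumptions): besides filling missing
# values, this imputer appends "<column>_surrogate" string columns that flag which rows were
# missing, for every column that had NAs during training.
#
#   surrogate = Surrogate_Imputer(
#       numeric_strategy="mean",
#       categorical_strategy="most frequent",
#       target="target",
#   )
#   train_filled = surrogate.fit_transform(train_df)
#   test_filled = surrogate.transform(test_df)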
class Iterative_Imputer(_BaseImputer):
def __init__(
self,
regressor: BaseEstimator,
classifier: BaseEstimator,
*,
target=None,
missing_values=np.nan,
initial_strategy_numeric: str = "mean",
initial_strategy_categorical: str = "most frequent",
initial_strategy_time: str = "most frequent",
ordinal_columns: Optional[list] = None,
max_iter: int = 10,
warm_start: bool = False,
imputation_order: str = "ascending",
verbose: int = 0,
random_state: int = None,
add_indicator: bool = False,
):
super().__init__(missing_values=missing_values, add_indicator=add_indicator)
self.regressor = regressor
self.classifier = classifier
self.initial_strategy_numeric = initial_strategy_numeric
self.initial_strategy_categorical = initial_strategy_categorical
self.initial_strategy_time = initial_strategy_time
self.max_iter = max_iter
self.warm_start = warm_start
self.imputation_order = imputation_order
self.verbose = verbose
self.random_state = random_state
self.target = target
if ordinal_columns is None:
ordinal_columns = []
self.ordinal_columns = list(ordinal_columns)
self._column_cleaner = Clean_Colum_Names()
def _initial_imputation(self, X):
if self.initial_imputer_ is None:
self.initial_imputer_ = Simple_Imputer(
target="__TARGET__", # dummy value, we don't actually want to drop anything
numeric_strategy=self.initial_strategy_numeric,
categorical_strategy=self.initial_strategy_categorical,
time_strategy=self.initial_strategy_time,
)
X_filled = self.initial_imputer_.fit_transform(X)
else:
X_filled = self.initial_imputer_.transform(X)
return X_filled
def _impute_one_feature(self, X, column, X_na_mask, fit):
if not fit:
check_is_fitted(self)
is_classification = (
X[column].dtype.name == "object" or column in self.ordinal_columns
)
if is_classification:
if column in self.classifiers_:
time, dummy, le, estimator = self.classifiers_[column]
elif not fit:
return X
else:
estimator = clone(self._classifier)
time = Make_Time_Features()
dummy = Dummify(column)
le = LabelEncoder()
else:
if column in self.regressors_:
time, dummy, le, estimator = self.regressors_[column]
elif not fit:
return X
else:
estimator = clone(self._regressor)
time = Make_Time_Features()
dummy = Dummify(column)
le = None
if fit:
fit_kwargs = {}
X_train = X[~X_na_mask[column]]
y_train = X_train[column]
# catboost handles categoricals itself
if "catboost" not in str(type(estimator)).lower():
X_train = time.fit_transform(X_train)
X_train = dummy.fit_transform(X_train)
X_train.drop(column, axis=1, inplace=True)
else:
X_train.drop(column, axis=1, inplace=True)
fit_kwargs["cat_features"] = []
for i, col in enumerate(X_train.columns):
if X_train[col].dtype.name == "object":
X_train[col] = pd.Categorical(
X_train[col], ordered=column in self.ordinal_columns
)
fit_kwargs["cat_features"].append(i)
fit_kwargs["cat_features"] = np.array(
fit_kwargs["cat_features"], dtype=int
)
X_train = self._column_cleaner.fit_transform(X_train)
if le:
y_train = le.fit_transform(y_train)
try:
assert self.warm_start
estimator.partial_fit(X_train, y_train)
except:
estimator.fit(X_train, y_train, **fit_kwargs)
X_test = X.drop(column, axis=1)[X_na_mask[column]]
X_test = time.transform(X_test)
# catboost handles categoricals itself
if "catboost" not in str(type(estimator)).lower():
X_test = dummy.transform(X_test)
else:
for col in X_test.select_dtypes("object").columns:
X_test[col] = pd.Categorical(
X_test[col], ordered=column in self.ordinal_columns
)
result = estimator.predict(X_test)
if le:
result = le.inverse_transform(result)
if fit:
if is_classification:
self.classifiers_[column] = (time, dummy, le, estimator)
else:
self.regressors_[column] = (time, dummy, le, estimator)
if result.dtype.name == "float64":
result = result.astype("float32")
X_test[column] = result
X.update(X_test[column])
gc.collect()
return X
def _impute(self, X, fit: bool):
if self.target in X.columns:
target_column = X[self.target]
X = X.drop(self.target, axis=1)
else:
target_column = None
original_columns = X.columns
original_index = X.index
X = X.reset_index(drop=True)
X = self._column_cleaner.fit_transform(X)
self.imputation_sequence_ = (
X.isnull().sum().sort_values(ascending=self.imputation_order == "ascending")
)
self.imputation_sequence_ = [
col
for col in self.imputation_sequence_[self.imputation_sequence_ > 0].index
if X[col].dtype.name != "datetime64[ns]"
]
X_na_mask = X.isnull()
X_imputed = self._initial_imputation(X.copy())
for i in range(self.max_iter if fit else 1):
for feature in self.imputation_sequence_:
get_logger().info(f"Iterative Imputation: {i+1} cycle | {feature}")
X_imputed = self._impute_one_feature(X_imputed, feature, X_na_mask, fit)
X_imputed.columns = original_columns
X_imputed.index = original_index
if target_column is not None:
X_imputed[self.target] = target_column
return X_imputed
def transform(self, X, y=None, **fit_params):
return self._impute(X, fit=False)
def fit_transform(self, X, y=None, **fit_params):
self.random_state_ = getattr(
self, "random_state_", check_random_state(self.random_state)
)
if self.regressor is None:
raise ValueError("No regressor provided")
else:
self._regressor = clone(self.regressor)
try:
self._regressor.set_params(random_state=self.random_state_)
except:
pass
if self.classifier is None:
raise ValueError("No classifier provided")
else:
self._classifier = clone(self.classifier)
try:
self._classifier.set_params(random_state=self.random_state_)
except:
pass
self.classifiers_ = {}
self.regressors_ = {}
self.initial_imputer_ = None
return self._impute(X, fit=True)
def fit(self, X, y=None, **fit_params):
self.fit_transform(X, y=y, **fit_params)
return self
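# Usage sketch (illustrative only): any scikit-learn compatible estimators can be plugged in;
# the RandomForest models below are an assumption chosen for illustration, not a requirement
# of this class.
#
#   from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
#
#   it_imputer = Iterative_Imputer(
#       regressor=RandomForestRegressor(),
#       classifier=RandomForestClassifier(),
#       target="target",
#       max_iter=5,
#   )
#   train_imputed = it_imputer.fit_transform(train_df)   # models one incomplete feature at a time
#   test_imputed = it_imputer.transform(test_df)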
# _______________________________________________________________________________________________________________________
# Zero and Near Zero Variance
class Zroe_NearZero_Variance(BaseEstimator, TransformerMixin):
"""
- it eliminates the features having zero variance
- it eliminates the features haveing near zero variance
- Near zero variance is determined by
-1) Count of unique points divided by the total length of the feature has to be lower than a pre sepcified threshold
-2) Most common point(count) divided by the second most common point(count) in the feature is greater than a pre specified threshold
Once both conditions are met , the feature is dropped
-Ignores target variable
Args:
threshold_1: float (between 0.0 to 1.0) , default is .10
threshold_2: int (between 1 to 100), default is 20
tatget variable : string, name of the target variable
"""
def __init__(self, target, threshold_1=0.1, threshold_2=20):
self.threshold_1 = threshold_1
self.threshold_2 = threshold_2
self.target = target
def fit(
self, dataset, y=None
): # from training data set we are going to learn what columns to drop
data = dataset
self.to_drop = []
sampl_len = len(data[self.target])
for i in data.drop(self.target, axis=1).columns:
# get the number of unique counts
u = pd.DataFrame(data[i].value_counts()).sort_values(
by=i, ascending=False, inplace=False
)
# take the number of unique values and divide it by the total sample count; this checks the 1st rule (the ratio has to be low, e.g. 10%)
# import pdb; pdb.set_trace()
first = len(u) / sampl_len
# then check if the count of the most common value divided by the count of the 2nd most common value meets the second threshold
if (
len(u[i]) == 1
):  # if the column has no variance at all, make the ratio large enough to guarantee it is dropped
second = 100
else:
second = u.iloc[0, 0] / u.iloc[1, 0]
# if both conditions are true then drop the column; however, we don't want to alter columns that indicate NAs
if (first <= self.threshold_1) and (second >= self.threshold_2) and (i[-10:] != "_surrogate"):
self.to_drop.append(i)
# also drop if the column has zero variance
if (second == 100) and (i[-10:] != "_surrogate"):
self.to_drop.append(i)
def transform(
self, dataset, y=None
):  # drop the columns that were learned during fit
data = dataset.drop(self.to_drop, axis=1)
return data
def fit_transform(self, dataset, y=None):
data = dataset
self.fit(data)
return self.transform(data)
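# Worked illustration (hypothetical numbers): for a feature with 1,000 rows, 50 unique values
# and counts of 40 vs 1 for the two most common values,
# first = 50 / 1000 = 0.05 <= threshold_1 and second = 40 / 1 = 40 >= threshold_2,
# so the column would be dropped.
#
#   nzv = Zroe_NearZero_Variance(target="target", threshold_1=0.1, threshold_2=20)
#   train_reduced = nzv.fit_transform(train_df)
#   test_reduced = nzv.transform(test_df)    # drops the same columns that were learned on train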
# ____________________________________________________________________________________________________________________________
# rare catagorical variables
class Catagorical_variables_With_Rare_levels(BaseEstimator, TransformerMixin):
"""
-Merges levels in catagorical features with more frequent level if they appear less than a threshold count
e.g. Col=[a,a,a,a,b,b,c,c]
if threshold is set to 2 , then c will be mrged with b because both are below threshold
There has to be atleast two levels belwo threshold for this to work
the process will keep going until all the levels have atleast 2(threshold) counts
-Only handles catagorical features
-It is recommended to run the Zroe_NearZero_Variance and Define_dataTypes first
-Ignores target variable
Args:
threshold: int , default 10
target: string , name of the target variable
new_level_name: string , name given to the new level generated, default 'others'
"""
def __init__(self, target, new_level_name="others_infrequent", threshold=0.05):
self.threshold = threshold
self.target = target
self.new_level_name = new_level_name
def fit(
self, dataset, y=None
):  # we will learn, for each column, which levels to merge into the new level
# every level of the categorical feature has to occur more often than the threshold count; otherwise the levels are clubbed together as "others"
# in order to apply this, there should be at least two levels below the threshold!
# create a place holder
data = dataset
self.ph = pd.DataFrame(
columns=data.drop(self.target, axis=1)
.select_dtypes(include="object")
.columns
)
# ph.columns = df.columns# catagorical only
for i in data[self.ph.columns].columns:
# determine the infrequent count threshold
v_c = data[i].value_counts()
count_th = round(v_c.quantile(self.threshold))
a = np.sum(
pd.DataFrame(data[i].value_counts().sort_values())[i] <= count_th
)
if a >= 2:  # there have to be at least two rare levels
count = pd.DataFrame(data[i].value_counts().sort_values())
count.columns = ["fre"]
count = count[count["fre"] <= count_th]
to_club = list(count.index)
self.ph.loc[0, i] = to_club
else:
self.ph.loc[0, i] = []
# # also need to make a place holder that keep records of all the levels , and in case a new level appears in test we will change it to others
# self.ph_level = pd.DataFrame(columns=data.drop(self.target,axis=1).select_dtypes(include="object").columns)
# for i in self.ph_level.columns:
# self.ph_level.loc[0,i] = list(data[i].value_counts().sort_values().index)
def transform(self, dataset, y=None):
# transform
data = dataset
for i in data[self.ph.columns].columns:
t_replace = self.ph.loc[0, i]
data[i].replace(
to_replace=t_replace, value=self.new_level_name, inplace=True
)
return data
def fit_transform(self, dataset, y=None):
data = dataset
self.fit(data)
return self.transform(data)
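# Usage sketch (illustrative only): with the default threshold of 0.05, levels whose counts
# fall at or below the 5th percentile of level counts are merged into "others_infrequent",
# but only when at least two such levels exist in a column.
#
#   rare = Catagorical_variables_With_Rare_levels(target="target", threshold=0.05)
#   train_merged = rare.fit_transform(train_df)
#   test_merged = rare.transform(test_df)    # applies the level mapping learned on train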
# _______________________________________________________________________________________________________________________
# new catagorical level in test
class New_Catagorical_Levels_in_TestData(BaseEstimator, TransformerMixin):
"""
-This treats if a new level appears in the test dataset catagorical's feature (i.e a level on whihc model was not trained previously)
-It simply replaces the new level in test data set with the most frequent or least frequent level in the same feature in the training data set
-It is recommended to run the Zroe_NearZero_Variance and Define_dataTypes first
-Ignores target variable
Args:
target: string , name of the target variable
replacement_strategy:string , 'raise exception', 'least frequent' or 'most frequent' (default 'most frequent' )
"""
def __init__(self, target, replacement_strategy="most frequent"):
self.target = target
self.replacement_strategy = replacement_strategy
def fit(self, data, y=None):
# need to make a place holder that keeps a record of all the training levels; if a new level appears in test we will replace it
self.ph_train_level = pd.DataFrame(
columns=data.drop(self.target, axis=1)
.select_dtypes(include="object")
.columns
)
for i in self.ph_train_level.columns:
if self.replacement_strategy == "least frequent":
self.ph_train_level.loc[0, i] = list(
data[i].value_counts().sort_values().index
)
else:
self.ph_train_level.loc[0, i] = list(data[i].value_counts().index)
def transform(self, data, y=None):
# transform
# we need to learn the same for the test data, and then compare to check which levels are new there
self.ph_test_level = pd.DataFrame(
columns=data.drop(self.target, axis=1, errors="ignore")
.select_dtypes(include="object")
.columns
)
for i in self.ph_test_level.columns:
self.ph_test_level.loc[0, i] = list(
data[i].value_counts().sort_values().index
)
# now we have levels for both test and train; we will start comparing and replacing levels in the test set (only if the test set has new levels)
for i in self.ph_test_level.columns:
new = list(
(set(self.ph_test_level.loc[0, i]) - set(self.ph_train_level.loc[0, i]))
)
# now if there is a difference , only then replace it
if len(new) > 0:
if self.replacement_strategy == "raise exception":
raise ValueError(
f"Column '{i}' contains levels '{new}' which were not present in train data."
)
data[i].replace(new, self.ph_train_level.loc[0, i][0], inplace=True)
return data
def fit_transform(
self, data, y=None
):  # there is no transformation happening on the training data set; it's all about the test set
self.fit(data)
return data
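# Usage sketch (illustrative only): unseen test levels are replaced with the most (or least)
# frequent training level of the same column, or an exception is raised, depending on
# replacement_strategy.
#
#   new_levels = New_Catagorical_Levels_in_TestData(
#       target="target", replacement_strategy="most frequent"
#   )
#   new_levels.fit_transform(train_df)          # only records the training levels
#   test_safe = new_levels.transform(test_df)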
# _______________________________________________________________________________________________________________________
# Group akin features
class Group_Similar_Features(BaseEstimator, TransformerMixin):
"""
- Given a list of features , it creates aggregate features
- features created are Min, Max, Mean, Median, Mode & Std
- Only works on numerical features
Args:
list_of_similar_features: list of list, string , e.g. [['col',col2],['col3','col4']]
group_name: list, group name/names to be added as prefix to aggregate features, e.g ['gorup1','group2']
"""
def __init__(self, group_name=[], list_of_grouped_features=[[]]):
self.list_of_similar_features = list_of_grouped_features
self.group_name = group_name
# if list of list not given
try:
np.array(self.list_of_similar_features).shape[0]
except:
raise TypeError(
"Group_Similar_Features: list_of_grouped_features is not provided as a list of lists"
)
def fit(self, data, y=None):
# nothing to learn
return self
def transform(self, dataset, y=None):
data = dataset
# only process if grouped features were actually provided
if len(self.list_of_similar_features) > 0:
for f, g in zip(self.list_of_similar_features, self.group_name):
data[g + "_Min"] = data[f].apply(np.min, 1)
data[g + "_Max"] = data[f].apply(np.max, 1)
data[g + "_Mean"] = data[f].apply(np.mean, 1)
data[g + "_Median"] = data[f].apply(np.median, 1)
data[g + "_Mode"] = stats.mode(data[f], 1)[0]
data[g + "_Std"] = data[f].apply(np.std, 1)
return data
else:
return data
def fit_transform(self, data, y=None):
return self.transform(data)
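# Usage sketch (illustrative only; column names are assumptions): each group of related
# numeric columns gets Min/Max/Mean/Median/Mode/Std aggregate features prefixed with its
# group name.
#
#   grouper = Group_Similar_Features(
#       group_name=["sales"],
#       list_of_grouped_features=[["sales_q1", "sales_q2", "sales_q3", "sales_q4"]],
#   )
#   df_grouped = grouper.fit_transform(df)      # adds sales_Min, sales_Max, ... columns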
# ____________________________________________________________________________________________________________________________________________________________________
# Binning for Continious
class Binning(BaseEstimator, TransformerMixin):
"""
- Converts numerical variables to catagorical variable through binning
- Number of binns are automitically determined through Sturges method
- Once discretize, original feature will be dropped
Args:
features_to_discretize: list of featur names to be binned
"""
def __init__(self, features_to_discretize):
self.features_to_discretize = features_to_discretize
def fit(self, data, y=None):
self.fit_transform(data, y=y)
return self
def transform(self, dataset, y=None):
data = dataset
# only do if features are provided
if len(self.features_to_discretize) > 0:
data_t = self.disc.transform(
np.array(data[self.features_to_discretize]).reshape(
-1, self.len_columns
)
)
# make pandas data frame
data_t = pd.DataFrame(
data_t, columns=self.features_to_discretize, index=data.index
)
# all these columns are categorical
data_t = data_t.astype(str)
# drop original columns
data.drop(self.features_to_discretize, axis=1, inplace=True)
# add newly created columns
data = pd.concat((data, data_t), axis=1)
return data
def fit_transform(self, dataset, y=None):
data = dataset
# only do if features are given
if len(self.features_to_discretize) > 0:
# place holder for the number of bins of every feature
self.binns = []
for i in self.features_to_discretize:
# get the number of bins (Sturges rule)
hist, _ = np.histogram(data[i], bins="sturges")
self.binns.append(len(hist))
# how many columns to deal with
self.len_columns = len(self.features_to_discretize)
# now do fit transform
self.disc = KBinsDiscretizer(
n_bins=self.binns, encode="ordinal", strategy="kmeans"
)
data_t = self.disc.fit_transform(
np.array(data[self.features_to_discretize]).reshape(
-1, self.len_columns
)
)
# make pandas data frame
data_t = pd.DataFrame(
data_t, columns=self.features_to_discretize, index=data.index
)
# all these columns are categorical
data_t = data_t.astype(str)
# drop original columns
data.drop(self.features_to_discretize, axis=1, inplace=True)
# add newly created columns
data = pd.concat((data, data_t), axis=1)
return data
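# Usage sketch (illustrative only; column names are assumptions): each listed numeric feature
# is discretized into a Sturges-determined number of k-means bins, and the resulting
# string-valued columns replace the originals.
#
#   binner = Binning(features_to_discretize=["age", "income"])
#   df_binned = binner.fit_transform(df)
#   new_df_binned = binner.transform(new_df)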
""" Test cases for DataFrame.plot """
import string
import warnings
import numpy as np
import pytest
import pandas.util._test_decorators as td
import pandas as pd
from pandas import (
DataFrame,
Series,
date_range,
)
import pandas._testing as tm
from pandas.tests.plotting.common import TestPlotBase
from pandas.io.formats.printing import pprint_thing
pytestmark = pytest.mark.slow
@td.skip_if_no_mpl
class TestDataFramePlotsSubplots(TestPlotBase):
def test_subplots(self):
df = DataFrame(np.random.rand(10, 3), index=list(string.ascii_letters[:10]))
for kind in ["bar", "barh", "line", "area"]:
axes = df.plot(kind=kind, subplots=True, sharex=True, legend=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
assert axes.shape == (3,)
for ax, column in zip(axes, df.columns):
self._check_legend_labels(ax, labels=[pprint_thing(column)])
for ax in axes[:-2]:
self._check_visible(ax.xaxis) # xaxis must be visible for grid
self._check_visible(ax.get_xticklabels(), visible=False)
if kind != "bar":
# change https://github.com/pandas-dev/pandas/issues/26714
self._check_visible(ax.get_xticklabels(minor=True), visible=False)
self._check_visible(ax.xaxis.get_label(), visible=False)
self._check_visible(ax.get_yticklabels())
self._check_visible(axes[-1].xaxis)
self._check_visible(axes[-1].get_xticklabels())
self._check_visible(axes[-1].get_xticklabels(minor=True))
self._check_visible(axes[-1].xaxis.get_label())
self._check_visible(axes[-1].get_yticklabels())
axes = df.plot(kind=kind, subplots=True, sharex=False)
for ax in axes:
self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible(ax.get_xticklabels(minor=True))
self._check_visible(ax.xaxis.get_label())
self._check_visible(ax.get_yticklabels())
axes = df.plot(kind=kind, subplots=True, legend=False)
for ax in axes:
assert ax.get_legend() is None
def test_subplots_timeseries(self):
idx = date_range(start="2014-07-01", freq="M", periods=10)
df = DataFrame(np.random.rand(10, 3), index=idx)
for kind in ["line", "area"]:
axes = df.plot(kind=kind, subplots=True, sharex=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
for ax in axes[:-2]:
# GH 7801
self._check_visible(ax.xaxis) # xaxis must be visible for grid
self._check_visible(ax.get_xticklabels(), visible=False)
self._check_visible(ax.get_xticklabels(minor=True), visible=False)
self._check_visible(ax.xaxis.get_label(), visible=False)
self._check_visible(ax.get_yticklabels())
self._check_visible(axes[-1].xaxis)
self._check_visible(axes[-1].get_xticklabels())
self._check_visible(axes[-1].get_xticklabels(minor=True))
self._check_visible(axes[-1].xaxis.get_label())
self._check_visible(axes[-1].get_yticklabels())
self._check_ticks_props(axes, xrot=0)
axes = df.plot(kind=kind, subplots=True, sharex=False, rot=45, fontsize=7)
for ax in axes:
self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible(ax.get_xticklabels(minor=True))
self._check_visible(ax.xaxis.get_label())
self._check_visible(ax.get_yticklabels())
self._check_ticks_props(ax, xlabelsize=7, xrot=45, ylabelsize=7)
def test_subplots_timeseries_y_axis(self):
# GH16953
data = {
"numeric": np.array([1, 2, 5]),
"timedelta": [
pd.Timedelta(-10, unit="s"),
pd.Timedelta(10, unit="m"),
pd.Timedelta(10, unit="h"),
],
"datetime_no_tz": [
pd.to_datetime("2017-08-01 00:00:00"),
pd.to_datetime("2017-08-01 02:00:00"),
pd.to_datetime("2017-08-02 00:00:00"),
],
"datetime_all_tz": [
pd.to_datetime("2017-08-01 00:00:00", utc=True),
pd.to_datetime("2017-08-01 02:00:00", utc=True),
pd.to_datetime("2017-08-02 00:00:00", utc=True),
import pandas as pd
from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from sqlalchemy import create_engine
import os
user=os.getenv('MYSQL_user')
pw=os.getenv('MYSQL')
str_sql = 'mysql+mysqlconnector://' + user + ':' + pw + '@localhost'
m_engine = create_engine(str_sql)
root_in = '~/Documents/PythonProjects/Airflow_Project/airflow-pipeline-amanda-wink/input_files/'
root_out = '~/Documents/PythonProjects/Airflow_Project/airflow-pipeline-amanda-wink/output_files/'
def NWHL_2017_format(root_in=root_in, root_out=root_out):
NWHL_2017 = pd.read_excel(root_in + 'NWHL Skater and Team Stats 2017-18.xlsx', sheet_name='NWHL Skaters 201718')
NWHL_2017.drop(columns=['SOG', 'Sh%', 'SOG/G'], inplace=True)
NWHL_2017.rename(columns={'Tm': 'Team', 'Name': 'Player', 'P': 'Pos', 'P.1':'P', 'EN': 'ENG', 'Pts/G': 'Pts/GP'}, inplace=True)
NWHL_2017.insert(1, 'League', ['NWHL'] * len(NWHL_2017))
NWHL_2017['Rk'].fillna('N', inplace=True)
keys = NWHL_2017.columns[15:33]
fill_dict = {key: 0 for key in keys}
NWHL_2017.fillna(fill_dict, inplace=True)
NWHL_2017['GP'].fillna('0', inplace=True)
NWHL_2017['Pl/Mi'] = [None] * len(NWHL_2017)
NWHL_2017.to_csv(root_out + '/NWHL_2017.csv', index=False)
def CWHL_2017_format(root_in=root_in, root_out=root_out):
CWHL_2017 = pd.read_excel(root_in + 'CWHL Skater and Team Stats 2017-18.xlsx', sheet_name='Skaters')
import gc
import glob
import logging
import os
import numpy as np
import pandas as pd
import pretrainedmodels
import torch
import torchvision
from models_pytorch import *
from torch import nn
from torch.nn import functional as F
from torch.utils import data
from torch_dataset import BenchmarkDataset
from torchvision import models
from utils_pytorch import *
FIT_MAX_BATCH = True
NUM_SAMPLES = 10000
SIZE = 299
NUM_CHANNELS = 3
EPOCHS = 5
NUM_RUNS = 3
all_models = [
# 'DenseNet121',
# 'DenseNet169',
# 'Inception3',
'InceptionResNetV2',
# 'NASNet',
# 'PNASNet',
'ResNet50'
]
input_dim = (NUM_SAMPLES, SIZE, SIZE, NUM_CHANNELS)
print('\ninput dim: {}'.format(input_dim))
X_train = np.random.randint(
0, 255, input_dim, dtype=np.uint8).astype(np.float32)
y_train = np.expand_dims((np.random.rand(NUM_SAMPLES) > 0.5).astype(
np.uint8), axis=-1).astype(np.float32)
print('X: {}'.format(X_train.shape))
print('y: {}'.format(y_train.shape))
model_parameters = {
'num_classes': 1,
'pretrained': False,
'num_channels': 3,
'pooling_output_dim': 1,
'model_name': '',
}
for m in all_models:
print('running: {}'.format(m))
MODEL_NAME = m
if FIT_MAX_BATCH:
if MODEL_NAME == 'DenseNet121':
MAX_BATCH_SIZE = 12
if MODEL_NAME == 'DenseNet169':
MAX_BATCH_SIZE = 8
if MODEL_NAME == 'Inception3':
MAX_BATCH_SIZE = 32
if MODEL_NAME == 'InceptionResNetV2':
MAX_BATCH_SIZE = 12 # was 16
if MODEL_NAME == 'NASNet':
MAX_BATCH_SIZE = 4
if MODEL_NAME == 'PNASNet':
MAX_BATCH_SIZE = 4
if MODEL_NAME == 'ResNet50':
MAX_BATCH_SIZE = 16 # was 24
parameters_dict = {
'MODEL_NAME': MODEL_NAME,
'NUM_SAMPLES': NUM_SAMPLES,
'NUM_CHANNELS': NUM_CHANNELS,
'SIZE': SIZE,
'EPOCHS': EPOCHS,
'BATCH_SIZE': MAX_BATCH_SIZE,
'NUM_RUNS': NUM_RUNS,
}
print('parameters:\n{}'.format(parameters_dict))
if FIT_MAX_BATCH:
parameters_dict['BATCH_SIZE'] = MAX_BATCH_SIZE
print('Fit max batch size into memory.')
BATCH_SIZE = parameters_dict['BATCH_SIZE']
train_dataset = BenchmarkDataset(
X_train, y_train)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
pin_memory=True,
shuffle=True)
loss = torch.nn.BCELoss().cuda(0)
for i in range(NUM_RUNS):
run_name = '{}_size{}_batch{}_trial_{}'.format(
MODEL_NAME, SIZE, BATCH_SIZE, i)
print('Running: {}\n'.format(run_name))
if not os.path.isdir('./logs/{}'.format(run_name)):
os.makedirs('./logs/{}'.format(run_name))
pd.DataFrame.from_dict(parameters_dict, orient='index')
import gc
print("############################################")
print("## 4.1. 결합, 마스터 테이블에서 정보 얻기 ")
print("############################################")
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
reserve_tb=pd.read_csv('./data/reserve.csv', encoding='UTF-8')
hotel_tb=pd.read_csv('./data/hotel.csv', encoding='UTF-8')
result=pd.merge(reserve_tb, hotel_tb, on='hotel_id', how='inner')\
.query('people_num == 1 & is_business')
print(hotel_tb.head())
print(reserve_tb.head())
print('------------------')
print(result)
result=pd.merge(reserve_tb.query('people_num == 1'),
hotel_tb.query('is_business'),
on='hotel_id', how='inner')
print('------------------')
print(result)
print("############################################")
print("## 4.2. 결합, 조건에 따라 결합할 마스터 테이블 변경하기")
print("############################################")
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
reserve_tb=pd.read_csv('./data/reserve.csv', encoding='UTF-8')
hotel_tb=pd.read_csv('./data/hotel.csv', encoding='UTF-8')
print(hotel_tb.head())
small_area_mst=hotel_tb\
.groupby(['big_area_name', 'small_area_name'], as_index=False)\
.size().reset_index()
small_area_mst.columns=['index','big_area_name', 'small_area_name', 'hotel_cnt']
print(small_area_mst.head())
small_area_mst['join_area_id']=\
np.where(small_area_mst['hotel_cnt']-1>=20,
small_area_mst['small_area_name'],
small_area_mst['big_area_name'])
small_area_mst.drop(['hotel_cnt', 'big_area_name'], axis=1, inplace=True)
print('-------------------------')
print(small_area_mst.head())
base_hotel_mst=pd.merge(hotel_tb, small_area_mst, on='small_area_name')\
.loc[:, ['hotel_id', 'join_area_id']]
print('-------------------------')
print(base_hotel_mst.head())
del small_area_mst
gc.collect()
print('1------------------------')
recommend_hotel_mst=pd.concat([\
hotel_tb[['small_area_name', 'hotel_id']]\
.rename(columns={'small_area_name': 'join_area_id'}, inplace=False),
hotel_tb[['big_area_name', 'hotel_id']]\
.rename(columns={'big_area_name': 'join_area_id'}, inplace=False)\
])
print(recommend_hotel_mst.head())
print('2------------------------')
recommend_hotel_mst.rename(columns={'hotel_id':'rec_hotel_id'}, inplace=True)
print(recommend_hotel_mst.head())
print('3------------------------')
result=pd.merge(base_hotel_mst, recommend_hotel_mst, on='join_area_id')\
.loc[:,['hotel_id', 'rec_hotel_id']]\
.query('hotel_id != rec_hotel_id')
print('4------------------------')
print('-------------------------')
print(base_hotel_mst.head())
print('-------------------------')
print(recommend_hotel_mst.head())
print('-------------------------')
print(result)
print("############################################")
print("## 4.3. 과거의 데이터 정보 얻기 (n번 이전 까지의 데이터")
print("############################################")
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
reserve_tb=pd.read_csv('./data/reserve.csv', encoding='UTF-8')
hotel_tb=pd.read_csv('./data/hotel.csv', encoding='UTF-8')
print(hotel_tb.head())
result=reserve_tb.groupby('customer_id')\
.apply(lambda x: x.sort_values(by='reserve_datetime', ascending=True))\
.reset_index(drop=True)
print(result)
result['price_avg']=pd.Series(
result.groupby('customer_id')
['total_price'].rolling(center=False, window=3, min_periods=1).mean()
.reset_index(drop=True)
)
print('-----------------')
print(result)
result['price_avg']=\
result.groupby('customer_id')['price_avg'].shift(periods=1)
print('-----------------')
print(result)
print("############################################")
print("## 4.4. 과거의 데이터 정보 얻기 (과거 n일의 합계)")
print("############################################")
import pandas as pd
import numpy as np
import pandas.tseries.offsets as offsets
import operator
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
reserve_tb=pd.read_csv('./data/reserve.csv', encoding='UTF-8')
hotel_tb=pd.read_csv('./data/hotel.csv', encoding='UTF-8')
print(hotel_tb.head())
reserve_tb['reserve_datetime']=\
pd.to_datetime(reserve_tb['reserve_datetime'], format='%Y-%m-%d %H:%M:%S')
sum_table=pd.merge(
reserve_tb[['reserve_id', 'customer_id', 'reserve_datetime']],
reserve_tb[['customer_id', 'reserve_datetime', 'total_price']]
.rename(columns={'reserve_datetime':'reserve_datetime_before'}),
on='customer_id')
print('--------------')
print(sum_table)
print('--------------')
print(reserve_tb[['reserve_id', 'customer_id', 'reserve_datetime']])
print('--------------')
print(reserve_tb[['customer_id', 'reserve_datetime', 'total_price']])
sum_table=sum_table[operator.and_(
sum_table['reserve_datetime'] > sum_table['reserve_datetime_before'],
sum_table['reserve_datetime']+offsets.Day(-90) <= sum_table['reserve_datetime_before'])].groupby('reserve_id')['total_price'].sum().reset_index()
print('--------------')
print(sum_table)
sum_table.columns=['reserve_id','total_price_sum']
print('--------------')
print(sum_table)
result=pd.merge(reserve_tb, sum_table, on='reserve_id', how='left').fillna(0)
print('--------------')
print(result)
print("############################################")
print("## 4.5. 상호 결합")
print("############################################")
import pandas as pd
import numpy as np
import pandas.tseries.offsets as offsets
import operator
import datetime
from dateutil.relativedelta import relativedelta
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
reserve_tb=pd.read_csv('./data/reserve.csv', encoding='UTF-8')
customer_tb=pd.read_csv('./data/customer.csv', encoding='UTF-8')
"""
Author: <NAME>
Modified: <NAME>
"""
import os
import warnings
import numpy as np
import pandas as pd
import pytest
from numpy.testing import assert_almost_equal, assert_allclose
from statsmodels.tools.sm_exceptions import EstimationWarning
from statsmodels.tsa.holtwinters import (ExponentialSmoothing,
SimpleExpSmoothing, Holt, SMOOTHERS, PY_SMOOTHERS)
base, _ = os.path.split(os.path.abspath(__file__))
housing_data = pd.read_csv(os.path.join(base, 'results', 'housing-data.csv'))
housing_data = housing_data.set_index('DATE')
housing_data = housing_data.asfreq('MS')
SEASONALS = ('add', 'mul', None)
TRENDS = ('add', 'mul', None)
def _simple_dbl_exp_smoother(x, alpha, beta, l0, b0, nforecast=0):
"""
Simple, slow, direct implementation of double exp smoothing for testing
"""
n = x.shape[0]
l = np.zeros(n)
b = np.zeros(n)
xhat = np.zeros(n)
f = np.zeros(nforecast)
l[0] = l0
b[0] = b0
# Special case the 0 observations since index -1 is not available
xhat[0] = l0 + b0
l[0] = alpha * x[0] + (1 - alpha) * (l0 + b0)
b[0] = beta * (l[0] - l0) + (1 - beta) * b0
for t in range(1, n):
# Obs in index t is the time t forecast for t + 1
l[t] = alpha * x[t] + (1 - alpha) * (l[t - 1] + b[t - 1])
b[t] = beta * (l[t] - l[t - 1]) + (1 - beta) * b[t - 1]
xhat[1:] = l[0:-1] + b[0:-1]
f[:] = l[-1] + np.arange(1, nforecast + 1) * b[-1]
err = x - xhat
return l, b, f, err, xhat
class TestHoltWinters(object):
@classmethod
def setup_class(cls):
# Changed for backwards compatibility with pandas
# oildata_oil_json = '{"851990400000":446.6565229,"883526400000":454.4733065,"915062400000":455.662974,"946598400000":423.6322388,"978220800000":456.2713279,"1009756800000":440.5880501,"1041292800000":425.3325201,"1072828800000":485.1494479,"1104451200000":506.0481621,"1135987200000":526.7919833,"1167523200000":514.268889,"1199059200000":494.2110193}'
# oildata_oil = pd.read_json(oildata_oil_json, typ='Series').sort_index()
data = [446.65652290000003, 454.47330649999998, 455.66297400000002,
423.63223879999998, 456.27132790000002, 440.58805009999998,
425.33252010000001, 485.14944789999998, 506.04816210000001,
526.79198329999997, 514.26888899999994, 494.21101929999998]
index = ['1996-12-31 00:00:00', '1997-12-31 00:00:00', '1998-12-31 00:00:00',
'1999-12-31 00:00:00', '2000-12-31 00:00:00', '2001-12-31 00:00:00',
'2002-12-31 00:00:00', '2003-12-31 00:00:00', '2004-12-31 00:00:00',
'2005-12-31 00:00:00', '2006-12-31 00:00:00', '2007-12-31 00:00:00']
oildata_oil = pd.Series(data, index)
oildata_oil.index = pd.DatetimeIndex(oildata_oil.index,
freq=pd.infer_freq(oildata_oil.index))
cls.oildata_oil = oildata_oil
# air_ausair_json = '{"662601600000":17.5534,"694137600000":21.8601,"725760000000":23.8866,"757296000000":26.9293,"788832000000":26.8885,"820368000000":28.8314,"851990400000":30.0751,"883526400000":30.9535,"915062400000":30.1857,"946598400000":31.5797,"978220800000":32.577569,"1009756800000":33.477398,"1041292800000":39.021581,"1072828800000":41.386432,"1104451200000":41.596552}'
# air_ausair = pd.read_json(air_ausair_json, typ='Series').sort_index()
data = [17.5534, 21.860099999999999, 23.886600000000001,
26.929300000000001, 26.888500000000001, 28.831399999999999,
30.075099999999999, 30.953499999999998, 30.185700000000001,
31.579699999999999, 32.577568999999997, 33.477398000000001,
39.021580999999998, 41.386431999999999, 41.596552000000003]
index = ['1990-12-31 00:00:00', '1991-12-31 00:00:00', '1992-12-31 00:00:00',
'1993-12-31 00:00:00', '1994-12-31 00:00:00', '1995-12-31 00:00:00',
'1996-12-31 00:00:00', '1997-12-31 00:00:00', '1998-12-31 00:00:00',
'1999-12-31 00:00:00', '2000-12-31 00:00:00', '2001-12-31 00:00:00',
'2002-12-31 00:00:00', '2003-12-31 00:00:00', '2004-12-31 00:00:00']
air_ausair = pd.Series(data, index)
air_ausair.index = pd.DatetimeIndex(air_ausair.index,
freq=pd.infer_freq(air_ausair.index))
cls.air_ausair = air_ausair
# livestock2_livestock_json = '{"31449600000":263.917747,"62985600000":268.307222,"94608000000":260.662556,"126144000000":266.639419,"157680000000":277.515778,"189216000000":283.834045,"220838400000":290.309028,"252374400000":292.474198,"283910400000":300.830694,"315446400000":309.286657,"347068800000":318.331081,"378604800000":329.37239,"410140800000":338.883998,"441676800000":339.244126,"473299200000":328.600632,"504835200000":314.255385,"536371200000":314.459695,"567907200000":321.413779,"599529600000":329.789292,"631065600000":346.385165,"662601600000":352.297882,"694137600000":348.370515,"725760000000":417.562922,"757296000000":417.12357,"788832000000":417.749459,"820368000000":412.233904,"851990400000":411.946817,"883526400000":394.697075,"915062400000":401.49927,"946598400000":408.270468,"978220800000":414.2428}'
# livestock2_livestock = pd.read_json(livestock2_livestock_json, typ='Series').sort_index()
data = [263.91774700000002, 268.30722200000002, 260.662556,
266.63941899999998, 277.51577800000001, 283.834045,
290.30902800000001, 292.474198, 300.83069399999999,
309.28665699999999, 318.33108099999998, 329.37239,
338.88399800000002, 339.24412599999999, 328.60063200000002,
314.25538499999999, 314.45969500000001, 321.41377899999998,
329.78929199999999, 346.38516499999997, 352.29788200000002,
348.37051500000001, 417.56292200000001, 417.12356999999997,
417.749459, 412.233904, 411.94681700000001, 394.69707499999998,
401.49927000000002, 408.27046799999999, 414.24279999999999]
index = ['1970-12-31 00:00:00', '1971-12-31 00:00:00', '1972-12-31 00:00:00',
'1973-12-31 00:00:00', '1974-12-31 00:00:00', '1975-12-31 00:00:00',
'1976-12-31 00:00:00', '1977-12-31 00:00:00', '1978-12-31 00:00:00',
'1979-12-31 00:00:00', '1980-12-31 00:00:00', '1981-12-31 00:00:00',
'1982-12-31 00:00:00', '1983-12-31 00:00:00', '1984-12-31 00:00:00',
'1985-12-31 00:00:00', '1986-12-31 00:00:00', '1987-12-31 00:00:00',
'1988-12-31 00:00:00', '1989-12-31 00:00:00', '1990-12-31 00:00:00',
'1991-12-31 00:00:00', '1992-12-31 00:00:00', '1993-12-31 00:00:00',
'1994-12-31 00:00:00', '1995-12-31 00:00:00', '1996-12-31 00:00:00',
'1997-12-31 00:00:00', '1998-12-31 00:00:00', '1999-12-31 00:00:00',
'2000-12-31 00:00:00']
livestock2_livestock = pd.Series(data, index)
livestock2_livestock.index = pd.DatetimeIndex(
livestock2_livestock.index,
freq=pd.infer_freq(livestock2_livestock.index))
cls.livestock2_livestock = livestock2_livestock
# aust_json = '{"1104537600000":41.727458,"1112313600000":24.04185,"1120176000000":32.328103,"1128124800000":37.328708,"1136073600000":46.213153,"1143849600000":29.346326,"1151712000000":36.48291,"1159660800000":42.977719,"1167609600000":48.901525,"1175385600000":31.180221,"1183248000000":37.717881,"1191196800000":40.420211,"1199145600000":51.206863,"1207008000000":31.887228,"1214870400000":40.978263,"1222819200000":43.772491,"1230768000000":55.558567,"1238544000000":33.850915,"1246406400000":42.076383,"1254355200000":45.642292,"1262304000000":59.76678,"1270080000000":35.191877,"1277942400000":44.319737,"1285891200000":47.913736}'
# aust = pd.read_json(aust_json, typ='Series').sort_index()
data = [41.727457999999999, 24.04185, 32.328102999999999,
37.328707999999999, 46.213152999999998, 29.346326000000001,
36.482909999999997, 42.977719, 48.901524999999999,
31.180221, 37.717880999999998, 40.420211000000002,
51.206862999999998, 31.887228, 40.978262999999998,
43.772491000000002, 55.558566999999996, 33.850915000000001,
42.076383, 45.642291999999998, 59.766779999999997,
35.191876999999998, 44.319737000000003, 47.913736]
index = ['2005-03-01 00:00:00', '2005-06-01 00:00:00', '2005-09-01 00:00:00',
'2005-12-01 00:00:00', '2006-03-01 00:00:00', '2006-06-01 00:00:00',
'2006-09-01 00:00:00', '2006-12-01 00:00:00', '2007-03-01 00:00:00',
'2007-06-01 00:00:00', '2007-09-01 00:00:00', '2007-12-01 00:00:00',
'2008-03-01 00:00:00', '2008-06-01 00:00:00', '2008-09-01 00:00:00',
'2008-12-01 00:00:00', '2009-03-01 00:00:00', '2009-06-01 00:00:00',
'2009-09-01 00:00:00', '2009-12-01 00:00:00', '2010-03-01 00:00:00',
'2010-06-01 00:00:00', '2010-09-01 00:00:00', '2010-12-01 00:00:00']
aust = pd.Series(data, index)
from __future__ import division
from contextlib import contextmanager
from datetime import datetime
from functools import wraps
import locale
import os
import re
from shutil import rmtree
import string
import subprocess
import sys
import tempfile
import traceback
import warnings
import numpy as np
from numpy.random import rand, randn
from pandas._libs import testing as _testing
import pandas.compat as compat
from pandas.compat import (
PY2, PY3, Counter, StringIO, callable, filter, httplib, lmap, lrange, lzip,
map, raise_with_traceback, range, string_types, u, unichr, zip)
from pandas.core.dtypes.common import (
is_bool, is_categorical_dtype, is_datetime64_dtype, is_datetime64tz_dtype,
is_datetimelike_v_numeric, is_datetimelike_v_object,
is_extension_array_dtype, is_interval_dtype, is_list_like, is_number,
is_period_dtype, is_sequence, is_timedelta64_dtype, needs_i8_conversion)
from pandas.core.dtypes.missing import array_equivalent
import pandas as pd
from pandas import (
Categorical, CategoricalIndex, DataFrame, DatetimeIndex, Index,
IntervalIndex, MultiIndex, Panel, PeriodIndex, RangeIndex, Series,
bdate_range)
from pandas.core.algorithms import take_1d
from pandas.core.arrays import (
DatetimeArrayMixin as DatetimeArray, ExtensionArray, IntervalArray,
PeriodArray, TimedeltaArrayMixin as TimedeltaArray, period_array)
import pandas.core.common as com
from pandas.io.common import urlopen
from pandas.io.formats.printing import pprint_thing
N = 30
K = 4
_RAISE_NETWORK_ERROR_DEFAULT = False
# set testing_mode
_testing_mode_warnings = (DeprecationWarning, compat.ResourceWarning)
def set_testing_mode():
# set the testing mode filters
testing_mode = os.environ.get('PANDAS_TESTING_MODE', 'None')
if 'deprecate' in testing_mode:
warnings.simplefilter('always', _testing_mode_warnings)
def reset_testing_mode():
# reset the testing mode filters
testing_mode = os.environ.get('PANDAS_TESTING_MODE', 'None')
if 'deprecate' in testing_mode:
warnings.simplefilter('ignore', _testing_mode_warnings)
set_testing_mode()
def reset_display_options():
"""
Reset the display options for printing and representing objects.
"""
pd.reset_option('^display.', silent=True)
def round_trip_pickle(obj, path=None):
"""
Pickle an object and then read it again.
Parameters
----------
obj : pandas object
The object to pickle and then re-read.
path : str, default None
The path where the pickled object is written and then read.
Returns
-------
round_trip_pickled_object : pandas object
The original object that was pickled and then re-read.
"""
if path is None:
path = u('__{random_bytes}__.pickle'.format(random_bytes=rands(10)))
with ensure_clean(path) as path:
pd.to_pickle(obj, path)
return pd.read_pickle(path)
def round_trip_pathlib(writer, reader, path=None):
"""
Write an object to file specified by a pathlib.Path and read it back
Parameters
----------
writer : callable bound to pandas object
IO writing function (e.g. DataFrame.to_csv )
reader : callable
IO reading function (e.g. pd.read_csv )
path : str, default None
The path where the object is written and then read.
Returns
-------
round_trip_object : pandas object
The original object that was serialized and then re-read.
"""
import pytest
Path = pytest.importorskip('pathlib').Path
if path is None:
path = '___pathlib___'
with ensure_clean(path) as path:
writer(Path(path))
obj = reader(Path(path))
return obj
def round_trip_localpath(writer, reader, path=None):
"""
Write an object to file specified by a py.path LocalPath and read it back
Parameters
----------
writer : callable bound to pandas object
IO writing function (e.g. DataFrame.to_csv )
reader : callable
IO reading function (e.g. pd.read_csv )
path : str, default None
The path where the object is written and then read.
Returns
-------
round_trip_object : pandas object
The original object that was serialized and then re-read.
"""
import pytest
LocalPath = pytest.importorskip('py.path').local
if path is None:
path = '___localpath___'
with ensure_clean(path) as path:
writer(LocalPath(path))
obj = reader(LocalPath(path))
return obj
@contextmanager
def decompress_file(path, compression):
"""
Open a compressed file and return a file object
Parameters
----------
path : str
The path where the file is read from
compression : {'gzip', 'bz2', 'zip', 'xz', None}
Name of the decompression to use
Returns
-------
f : file object
"""
if compression is None:
f = open(path, 'rb')
elif compression == 'gzip':
import gzip
f = gzip.open(path, 'rb')
elif compression == 'bz2':
import bz2
f = bz2.BZ2File(path, 'rb')
elif compression == 'xz':
lzma = compat.import_lzma()
f = lzma.LZMAFile(path, 'rb')
elif compression == 'zip':
import zipfile
zip_file = zipfile.ZipFile(path)
zip_names = zip_file.namelist()
if len(zip_names) == 1:
f = zip_file.open(zip_names.pop())
else:
raise ValueError('ZIP file {} error. Only one file per ZIP.'
.format(path))
else:
msg = 'Unrecognized compression type: {}'.format(compression)
raise ValueError(msg)
try:
yield f
finally:
f.close()
if compression == "zip":
zip_file.close()
def assert_almost_equal(left, right, check_dtype="equiv",
check_less_precise=False, **kwargs):
"""
Check that the left and right objects are approximately equal.
By approximately equal, we refer to objects that are numbers or that
contain numbers which may be equivalent to specific levels of precision.
Parameters
----------
left : object
right : object
check_dtype : bool / string {'equiv'}, default 'equiv'
Check dtype if both a and b are the same type. If 'equiv' is passed in,
then `RangeIndex` and `Int64Index` are also considered equivalent
when doing type checking.
check_less_precise : bool or int, default False
Specify comparison precision. 5 digits (False) or 3 digits (True)
after decimal points are compared. If int, then specify the number
of digits to compare.
When comparing two numbers, if the first number has magnitude less
than 1e-5, we compare the two numbers directly and check whether
they are equivalent within the specified precision. Otherwise, we
compare the **ratio** of the second number to the first number and
check whether it is equivalent to 1 within the specified precision.
"""
if isinstance(left, pd.Index):
return assert_index_equal(left, right,
check_exact=False,
exact=check_dtype,
check_less_precise=check_less_precise,
**kwargs)
elif isinstance(left, pd.Series):
return assert_series_equal(left, right,
check_exact=False,
check_dtype=check_dtype,
check_less_precise=check_less_precise,
**kwargs)
elif isinstance(left, pd.DataFrame):
return assert_frame_equal(left, right,
check_exact=False,
check_dtype=check_dtype,
check_less_precise=check_less_precise,
**kwargs)
else:
# Other sequences.
if check_dtype:
if is_number(left) and is_number(right):
# Do not compare numeric classes, like np.float64 and float.
pass
elif is_bool(left) and is_bool(right):
# Do not compare bool classes, like np.bool_ and bool.
pass
else:
if (isinstance(left, np.ndarray) or
isinstance(right, np.ndarray)):
obj = "numpy array"
else:
obj = "Input"
assert_class_equal(left, right, obj=obj)
return _testing.assert_almost_equal(
left, right,
check_dtype=check_dtype,
check_less_precise=check_less_precise,
**kwargs)
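# Illustrative example (not part of the original module): with the default precision the
# ratio of the two values is compared to 1 at roughly five decimal places, so the following
# comparison would pass rather than raise:
#
#   assert_almost_equal(
#       pd.Series([1.000001, 2.0]),
#       pd.Series([1.000002, 2.0]),
#   )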
def _check_isinstance(left, right, cls):
"""
Helper method for our assert_* methods that ensures that
the two objects being compared have the right type before
proceeding with the comparison.
Parameters
----------
left : The first object being compared.
right : The second object being compared.
cls : The class type to check against.
Raises
------
AssertionError : Either `left` or `right` is not an instance of `cls`.
"""
err_msg = "{name} Expected type {exp_type}, found {act_type} instead"
cls_name = cls.__name__
if not isinstance(left, cls):
raise AssertionError(err_msg.format(name=cls_name, exp_type=cls,
act_type=type(left)))
if not isinstance(right, cls):
raise AssertionError(err_msg.format(name=cls_name, exp_type=cls,
act_type=type(right)))
def assert_dict_equal(left, right, compare_keys=True):
_check_isinstance(left, right, dict)
return _testing.assert_dict_equal(left, right, compare_keys=compare_keys)
def randbool(size=(), p=0.5):
return rand(*size) <= p
RANDS_CHARS = np.array(list(string.ascii_letters + string.digits),
dtype=(np.str_, 1))
RANDU_CHARS = np.array(list(u("").join(map(unichr, lrange(1488, 1488 + 26))) +
string.digits), dtype=(np.unicode_, 1))
def rands_array(nchars, size, dtype='O'):
"""Generate an array of byte strings."""
retval = (np.random.choice(RANDS_CHARS, size=nchars * np.prod(size))
.view((np.str_, nchars)).reshape(size))
if dtype is None:
return retval
else:
return retval.astype(dtype)
def randu_array(nchars, size, dtype='O'):
"""Generate an array of unicode strings."""
retval = (np.random.choice(RANDU_CHARS, size=nchars * np.prod(size))
.view((np.unicode_, nchars)).reshape(size))
if dtype is None:
return retval
else:
return retval.astype(dtype)
def rands(nchars):
"""
Generate one random byte string.
See `rands_array` if you want to create an array of random strings.
"""
return ''.join(np.random.choice(RANDS_CHARS, nchars))
def randu(nchars):
"""
Generate one random unicode string.
See `randu_array` if you want to create an array of random unicode strings.
"""
return ''.join(np.random.choice(RANDU_CHARS, nchars))
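# Illustrative sketch (not part of the original module): these helpers are typically
# used to build throwaway string data for tests.
def _demo_random_strings():
    return DataFrame({'strings': rands_array(10, 5), 'values': np.arange(5)})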
def close(fignum=None):
from matplotlib.pyplot import get_fignums, close as _close
if fignum is None:
for fignum in get_fignums():
_close(fignum)
else:
_close(fignum)
# -----------------------------------------------------------------------------
# locale utilities
def check_output(*popenargs, **kwargs):
# shamelessly taken from Python 2.7 source
r"""Run command with arguments and return its output as a byte string.
If the exit code was non-zero it raises a CalledProcessError. The
CalledProcessError object will have the return code in the returncode
attribute and output in the output attribute.
The arguments are the same as for the Popen constructor. Example:
>>> check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
The stdout argument is not allowed as it is used internally.
To capture standard error in the result, use stderr=STDOUT.
>>> check_output(["/bin/sh", "-c",
... "ls -l non_existent_file ; exit 0"],
... stderr=STDOUT)
'ls: non_existent_file: No such file or directory\n'
"""
if 'stdout' in kwargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
process = subprocess.Popen(stdout=subprocess.PIPE, stderr=subprocess.PIPE,
*popenargs, **kwargs)
output, unused_err = process.communicate()
retcode = process.poll()
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
raise subprocess.CalledProcessError(retcode, cmd, output=output)
return output
def _default_locale_getter():
try:
raw_locales = check_output(['locale -a'], shell=True)
except subprocess.CalledProcessError as e:
raise type(e)("{exception}, the 'locale -a' command cannot be found "
"on your system".format(exception=e))
return raw_locales
def get_locales(prefix=None, normalize=True,
locale_getter=_default_locale_getter):
"""Get all the locales that are available on the system.
Parameters
----------
prefix : str
If not ``None`` then return only those locales with the prefix
provided. For example to get all English language locales (those that
start with ``"en"``), pass ``prefix="en"``.
normalize : bool
Call ``locale.normalize`` on the resulting list of available locales.
If ``True``, only locales that can be set without throwing an
``Exception`` are returned.
locale_getter : callable
The function to use to retrieve the current locales. This should return
a string with each locale separated by a newline character.
Returns
-------
locales : list of strings
A list of locale strings that can be set with ``locale.setlocale()``.
For example::
locale.setlocale(locale.LC_ALL, locale_string)
On error will return None (no locale available, e.g. Windows)
"""
try:
raw_locales = locale_getter()
except Exception:
return None
try:
# raw_locales is "\n" separated list of locales
# it may contain non-decodable parts, so split
# extract what we can and then rejoin.
raw_locales = raw_locales.split(b'\n')
out_locales = []
for x in raw_locales:
if PY3:
out_locales.append(str(
x, encoding=pd.options.display.encoding))
else:
out_locales.append(str(x))
except TypeError:
pass
if prefix is None:
return _valid_locales(out_locales, normalize)
pattern = re.compile('{prefix}.*'.format(prefix=prefix))
found = pattern.findall('\n'.join(out_locales))
return _valid_locales(found, normalize)
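# Illustrative sketch (not part of the original module): list the English-language
# locales available on this system; returns None where `locale -a` is unavailable,
# e.g. on Windows.
def _demo_get_locales():
    return get_locales(prefix='en')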
@contextmanager
def set_locale(new_locale, lc_var=locale.LC_ALL):
"""Context manager for temporarily setting a locale.
Parameters
----------
new_locale : str or tuple
A string of the form <language_country>.<encoding>. For example to set
the current locale to US English with a UTF8 encoding, you would pass
"en_US.UTF-8".
lc_var : int, default `locale.LC_ALL`
The category of the locale being set.
Notes
-----
This is useful when you want to run a particular block of code under a
particular locale, without globally setting the locale. This probably isn't
thread-safe.
"""
current_locale = locale.getlocale()
try:
locale.setlocale(lc_var, new_locale)
normalized_locale = locale.getlocale()
if com._all_not_none(*normalized_locale):
yield '.'.join(normalized_locale)
else:
yield new_locale
finally:
locale.setlocale(lc_var, current_locale)
def can_set_locale(lc, lc_var=locale.LC_ALL):
"""
Check to see if we can set a locale, and subsequently get the locale,
without raising an Exception.
Parameters
----------
lc : str
The locale to attempt to set.
lc_var : int, default `locale.LC_ALL`
The category of the locale being set.
Returns
-------
is_valid : bool
Whether the passed locale can be set
"""
try:
with set_locale(lc, lc_var=lc_var):
pass
except (ValueError,
            locale.Error):  # horrible name for an Exception subclass
return False
else:
return True
def _valid_locales(locales, normalize):
"""Return a list of normalized locales that do not throw an ``Exception``
when set.
Parameters
----------
    locales : list of str
        The locale strings to validate.
normalize : bool
Whether to call ``locale.normalize`` on each locale.
Returns
-------
valid_locales : list
A list of valid locales.
"""
if normalize:
normalizer = lambda x: locale.normalize(x.strip())
else:
normalizer = lambda x: x.strip()
return list(filter(can_set_locale, map(normalizer, locales)))
# -----------------------------------------------------------------------------
# Stdout / stderr decorators
@contextmanager
def set_defaultencoding(encoding):
"""
Set default encoding (as given by sys.getdefaultencoding()) to the given
encoding; restore on exit.
Parameters
----------
encoding : str
"""
if not PY2:
raise ValueError("set_defaultencoding context is only available "
"in Python 2.")
orig = sys.getdefaultencoding()
reload(sys) # noqa:F821
sys.setdefaultencoding(encoding)
try:
yield
finally:
sys.setdefaultencoding(orig)
def capture_stdout(f):
r"""
Decorator to capture stdout in a buffer so that it can be checked
(or suppressed) during testing.
Parameters
----------
f : callable
The test that is capturing stdout.
Returns
-------
f : callable
The decorated test ``f``, which captures stdout.
Examples
--------
>>> from pandas.util.testing import capture_stdout
>>> import sys
>>>
>>> @capture_stdout
... def test_print_pass():
... print("foo")
... out = sys.stdout.getvalue()
... assert out == "foo\n"
>>>
>>> @capture_stdout
... def test_print_fail():
... print("foo")
... out = sys.stdout.getvalue()
... assert out == "bar\n"
...
AssertionError: assert 'foo\n' == 'bar\n'
"""
@compat.wraps(f)
def wrapper(*args, **kwargs):
try:
sys.stdout = StringIO()
f(*args, **kwargs)
finally:
sys.stdout = sys.__stdout__
return wrapper
def capture_stderr(f):
r"""
Decorator to capture stderr in a buffer so that it can be checked
(or suppressed) during testing.
Parameters
----------
f : callable
The test that is capturing stderr.
Returns
-------
f : callable
The decorated test ``f``, which captures stderr.
Examples
--------
>>> from pandas.util.testing import capture_stderr
>>> import sys
>>>
>>> @capture_stderr
... def test_stderr_pass():
... sys.stderr.write("foo")
... out = sys.stderr.getvalue()
... assert out == "foo\n"
>>>
>>> @capture_stderr
... def test_stderr_fail():
... sys.stderr.write("foo")
... out = sys.stderr.getvalue()
... assert out == "bar\n"
...
AssertionError: assert 'foo\n' == 'bar\n'
"""
@compat.wraps(f)
def wrapper(*args, **kwargs):
try:
sys.stderr = StringIO()
f(*args, **kwargs)
finally:
sys.stderr = sys.__stderr__
return wrapper
# -----------------------------------------------------------------------------
# Console debugging tools
def debug(f, *args, **kwargs):
from pdb import Pdb as OldPdb
try:
from IPython.core.debugger import Pdb
kw = dict(color_scheme='Linux')
except ImportError:
Pdb = OldPdb
kw = {}
pdb = Pdb(**kw)
return pdb.runcall(f, *args, **kwargs)
def pudebug(f, *args, **kwargs):
import pudb
return pudb.runcall(f, *args, **kwargs)
def set_trace():
from IPython.core.debugger import Pdb
try:
Pdb(color_scheme='Linux').set_trace(sys._getframe().f_back)
except Exception:
from pdb import Pdb as OldPdb
OldPdb().set_trace(sys._getframe().f_back)
# -----------------------------------------------------------------------------
# contextmanager to ensure the file cleanup
@contextmanager
def ensure_clean(filename=None, return_filelike=False):
"""Gets a temporary path and agrees to remove on close.
Parameters
----------
filename : str (optional)
if None, creates a temporary file which is then removed when out of
scope. if passed, creates temporary file with filename as ending.
return_filelike : bool (default False)
if True, returns a file-like which is *always* cleaned. Necessary for
savefig and other functions which want to append extensions.
"""
filename = filename or ''
fd = None
if return_filelike:
f = tempfile.TemporaryFile(suffix=filename)
try:
yield f
finally:
f.close()
else:
# don't generate tempfile if using a path with directory specified
if len(os.path.dirname(filename)):
raise ValueError("Can't pass a qualified name to ensure_clean()")
try:
fd, filename = tempfile.mkstemp(suffix=filename)
except UnicodeEncodeError:
import pytest
pytest.skip('no unicode file names on this system')
try:
yield filename
finally:
try:
os.close(fd)
except Exception:
print("Couldn't close file descriptor: {fdesc} (file: {fname})"
.format(fdesc=fd, fname=filename))
try:
if os.path.exists(filename):
os.remove(filename)
except Exception as e:
print("Exception on removing file: {error}".format(error=e))
@contextmanager
def ensure_clean_dir():
"""
    Gets a temporary directory path and removes the directory when the context exits.
Yields
------
Temporary directory path
"""
directory_name = tempfile.mkdtemp(suffix='')
try:
yield directory_name
finally:
try:
rmtree(directory_name)
except Exception:
pass
@contextmanager
def ensure_safe_environment_variables():
"""
Get a context manager to safely set environment variables
All changes will be undone on close, hence environment variables set
within this contextmanager will neither persist nor change global state.
"""
saved_environ = dict(os.environ)
try:
yield
finally:
os.environ.clear()
os.environ.update(saved_environ)
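# Illustrative sketch (not part of the original module): environment changes made
# inside the block do not leak out; 'HYPOTHETICAL_FLAG' is a made-up variable name.
def _demo_ensure_safe_environment_variables():
    with ensure_safe_environment_variables():
        os.environ['HYPOTHETICAL_FLAG'] = '1'
    return 'HYPOTHETICAL_FLAG' not in os.environ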
# -----------------------------------------------------------------------------
# Comparators
def equalContents(arr1, arr2):
"""Checks if the set of unique elements of arr1 and arr2 are equivalent.
"""
return frozenset(arr1) == frozenset(arr2)
def assert_index_equal(left, right, exact='equiv', check_names=True,
check_less_precise=False, check_exact=True,
check_categorical=True, obj='Index'):
"""Check that left and right Index are equal.
Parameters
----------
left : Index
right : Index
exact : bool / string {'equiv'}, default 'equiv'
Whether to check the Index class, dtype and inferred_type
are identical. If 'equiv', then RangeIndex can be substituted for
Int64Index as well.
check_names : bool, default True
Whether to check the names attribute.
check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
If int, then specify the digits to compare
check_exact : bool, default True
Whether to compare number exactly.
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
obj : str, default 'Index'
Specify object name being compared, internally used to show appropriate
assertion message
"""
__tracebackhide__ = True
def _check_types(l, r, obj='Index'):
if exact:
assert_class_equal(l, r, exact=exact, obj=obj)
# Skip exact dtype checking when `check_categorical` is False
if check_categorical:
assert_attr_equal('dtype', l, r, obj=obj)
# allow string-like to have different inferred_types
if l.inferred_type in ('string', 'unicode'):
assert r.inferred_type in ('string', 'unicode')
else:
assert_attr_equal('inferred_type', l, r, obj=obj)
def _get_ilevel_values(index, level):
# accept level number only
unique = index.levels[level]
labels = index.codes[level]
filled = take_1d(unique.values, labels, fill_value=unique._na_value)
values = unique._shallow_copy(filled, name=index.names[level])
return values
# instance validation
_check_isinstance(left, right, Index)
# class / dtype comparison
_check_types(left, right, obj=obj)
# level comparison
if left.nlevels != right.nlevels:
msg1 = '{obj} levels are different'.format(obj=obj)
msg2 = '{nlevels}, {left}'.format(nlevels=left.nlevels, left=left)
msg3 = '{nlevels}, {right}'.format(nlevels=right.nlevels, right=right)
raise_assert_detail(obj, msg1, msg2, msg3)
# length comparison
if len(left) != len(right):
msg1 = '{obj} length are different'.format(obj=obj)
msg2 = '{length}, {left}'.format(length=len(left), left=left)
msg3 = '{length}, {right}'.format(length=len(right), right=right)
raise_assert_detail(obj, msg1, msg2, msg3)
    # MultiIndex special comparison for more readable, per-level error messages
if left.nlevels > 1:
for level in range(left.nlevels):
# cannot use get_level_values here because it can change dtype
llevel = _get_ilevel_values(left, level)
rlevel = _get_ilevel_values(right, level)
lobj = 'MultiIndex level [{level}]'.format(level=level)
assert_index_equal(llevel, rlevel,
exact=exact, check_names=check_names,
check_less_precise=check_less_precise,
check_exact=check_exact, obj=lobj)
# get_level_values may change dtype
_check_types(left.levels[level], right.levels[level], obj=obj)
# skip exact index checking when `check_categorical` is False
if check_exact and check_categorical:
if not left.equals(right):
diff = np.sum((left.values != right.values)
.astype(int)) * 100.0 / len(left)
msg = '{obj} values are different ({pct} %)'.format(
obj=obj, pct=np.round(diff, 5))
raise_assert_detail(obj, msg, left, right)
else:
_testing.assert_almost_equal(left.values, right.values,
check_less_precise=check_less_precise,
check_dtype=exact,
obj=obj, lobj=left, robj=right)
# metadata comparison
if check_names:
assert_attr_equal('names', left, right, obj=obj)
if isinstance(left, pd.PeriodIndex) or isinstance(right, pd.PeriodIndex):
assert_attr_equal('freq', left, right, obj=obj)
if (isinstance(left, pd.IntervalIndex) or
isinstance(right, pd.IntervalIndex)):
assert_interval_array_equal(left.values, right.values)
if check_categorical:
if is_categorical_dtype(left) or is_categorical_dtype(right):
assert_categorical_equal(left.values, right.values,
obj='{obj} category'.format(obj=obj))
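# Illustrative sketch (not part of the original module): with the default
# exact='equiv', a RangeIndex and an Int64Index holding the same values compare equal.
def _demo_assert_index_equal():
    assert_index_equal(pd.RangeIndex(3), pd.Int64Index([0, 1, 2]))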
def assert_class_equal(left, right, exact=True, obj='Input'):
"""checks classes are equal."""
__tracebackhide__ = True
def repr_class(x):
if isinstance(x, Index):
# return Index as it is to include values in the error message
return x
try:
return x.__class__.__name__
except AttributeError:
return repr(type(x))
if exact == 'equiv':
if type(left) != type(right):
# allow equivalence of Int64Index/RangeIndex
types = {type(left).__name__, type(right).__name__}
if len(types - {'Int64Index', 'RangeIndex'}):
msg = '{obj} classes are not equivalent'.format(obj=obj)
raise_assert_detail(obj, msg, repr_class(left),
repr_class(right))
elif exact:
if type(left) != type(right):
msg = '{obj} classes are different'.format(obj=obj)
raise_assert_detail(obj, msg, repr_class(left),
repr_class(right))
def assert_attr_equal(attr, left, right, obj='Attributes'):
"""checks attributes are equal. Both objects must have attribute.
Parameters
----------
attr : str
Attribute name being compared.
left : object
right : object
obj : str, default 'Attributes'
Specify object name being compared, internally used to show appropriate
assertion message
"""
__tracebackhide__ = True
left_attr = getattr(left, attr)
right_attr = getattr(right, attr)
if left_attr is right_attr:
return True
elif (is_number(left_attr) and np.isnan(left_attr) and
is_number(right_attr) and np.isnan(right_attr)):
# np.nan
return True
try:
result = left_attr == right_attr
except TypeError:
# datetimetz on rhs may raise TypeError
result = False
if not isinstance(result, bool):
result = result.all()
if result:
return True
else:
msg = 'Attribute "{attr}" are different'.format(attr=attr)
raise_assert_detail(obj, msg, left_attr, right_attr)
def assert_is_valid_plot_return_object(objs):
import matplotlib.pyplot as plt
if isinstance(objs, (pd.Series, np.ndarray)):
for el in objs.ravel():
msg = ("one of 'objs' is not a matplotlib Axes instance, type "
"encountered {name!r}").format(name=el.__class__.__name__)
assert isinstance(el, (plt.Axes, dict)), msg
else:
assert isinstance(objs, (plt.Artist, tuple, dict)), (
'objs is neither an ndarray of Artist instances nor a '
'single Artist instance, tuple, or dict, "objs" is a {name!r}'
.format(name=objs.__class__.__name__))
def isiterable(obj):
return hasattr(obj, '__iter__')
def is_sorted(seq):
if isinstance(seq, (Index, Series)):
seq = seq.values
# sorting does not change precisions
return assert_numpy_array_equal(seq, np.sort(np.array(seq)))
def assert_categorical_equal(left, right, check_dtype=True,
check_category_order=True, obj='Categorical'):
"""Test that Categoricals are equivalent.
Parameters
----------
left : Categorical
right : Categorical
check_dtype : bool, default True
Check that integer dtype of the codes are the same
check_category_order : bool, default True
Whether the order of the categories should be compared, which
implies identical integer codes. If False, only the resulting
values are compared. The ordered attribute is
checked regardless.
obj : str, default 'Categorical'
Specify object name being compared, internally used to show appropriate
assertion message
"""
_check_isinstance(left, right, Categorical)
if check_category_order:
assert_index_equal(left.categories, right.categories,
obj='{obj}.categories'.format(obj=obj))
assert_numpy_array_equal(left.codes, right.codes,
check_dtype=check_dtype,
obj='{obj}.codes'.format(obj=obj))
else:
assert_index_equal(left.categories.sort_values(),
right.categories.sort_values(),
obj='{obj}.categories'.format(obj=obj))
assert_index_equal(left.categories.take(left.codes),
right.categories.take(right.codes),
obj='{obj}.values'.format(obj=obj))
assert_attr_equal('ordered', left, right, obj=obj)
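# Illustrative sketch (not part of the original module): identical values with
# differently ordered categories pass only when category order is ignored.
def _demo_assert_categorical_equal():
    left = pd.Categorical(['a', 'b'], categories=['a', 'b'])
    right = pd.Categorical(['a', 'b'], categories=['b', 'a'])
    assert_categorical_equal(left, right, check_category_order=False)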
def assert_interval_array_equal(left, right, exact='equiv',
obj='IntervalArray'):
"""Test that two IntervalArrays are equivalent.
Parameters
----------
left, right : IntervalArray
The IntervalArrays to compare.
exact : bool / string {'equiv'}, default 'equiv'
Whether to check the Index class, dtype and inferred_type
are identical. If 'equiv', then RangeIndex can be substituted for
Int64Index as well.
obj : str, default 'IntervalArray'
Specify object name being compared, internally used to show appropriate
assertion message
"""
_check_isinstance(left, right, IntervalArray)
assert_index_equal(left.left, right.left, exact=exact,
obj='{obj}.left'.format(obj=obj))
assert_index_equal(left.right, right.right, exact=exact,
                       obj='{obj}.right'.format(obj=obj))
assert_attr_equal('closed', left, right, obj=obj)
def assert_period_array_equal(left, right, obj='PeriodArray'):
_check_isinstance(left, right, PeriodArray)
assert_numpy_array_equal(left._data, right._data,
obj='{obj}.values'.format(obj=obj))
assert_attr_equal('freq', left, right, obj=obj)
def assert_datetime_array_equal(left, right, obj='DatetimeArray'):
__tracebackhide__ = True
_check_isinstance(left, right, DatetimeArray)
assert_numpy_array_equal(left._data, right._data,
obj='{obj}._data'.format(obj=obj))
assert_attr_equal('freq', left, right, obj=obj)
assert_attr_equal('tz', left, right, obj=obj)
def assert_timedelta_array_equal(left, right, obj='TimedeltaArray'):
__tracebackhide__ = True
_check_isinstance(left, right, TimedeltaArray)
assert_numpy_array_equal(left._data, right._data,
obj='{obj}._data'.format(obj=obj))
assert_attr_equal('freq', left, right, obj=obj)
def raise_assert_detail(obj, message, left, right, diff=None):
__tracebackhide__ = True
if isinstance(left, np.ndarray):
left = pprint_thing(left)
elif is_categorical_dtype(left):
left = repr(left)
if PY2 and isinstance(left, string_types):
# left needs to be printable in native text type in python2
left = left.encode('utf-8')
if isinstance(right, np.ndarray):
right = pprint_thing(right)
elif is_categorical_dtype(right):
right = repr(right)
if PY2 and isinstance(right, string_types):
# right needs to be printable in native text type in python2
right = right.encode('utf-8')
msg = """{obj} are different
{message}
[left]: {left}
[right]: {right}""".format(obj=obj, message=message, left=left, right=right)
if diff is not None:
msg += "\n[diff]: {diff}".format(diff=diff)
raise AssertionError(msg)
def assert_numpy_array_equal(left, right, strict_nan=False,
check_dtype=True, err_msg=None,
check_same=None, obj='numpy array'):
""" Checks that 'np.ndarray' is equivalent
Parameters
----------
left : np.ndarray or iterable
right : np.ndarray or iterable
strict_nan : bool, default False
If True, consider NaN and None to be different.
    check_dtype : bool, default True
        check dtype if both left and right are np.ndarray
err_msg : str, default None
If provided, used as assertion message
check_same : None|'copy'|'same', default None
Ensure left and right refer/do not refer to the same memory area
obj : str, default 'numpy array'
Specify object name being compared, internally used to show appropriate
assertion message
"""
__tracebackhide__ = True
# instance validation
# Show a detailed error message when classes are different
assert_class_equal(left, right, obj=obj)
# both classes must be an np.ndarray
_check_isinstance(left, right, np.ndarray)
def _get_base(obj):
return obj.base if getattr(obj, 'base', None) is not None else obj
left_base = _get_base(left)
right_base = _get_base(right)
if check_same == 'same':
if left_base is not right_base:
msg = "{left!r} is not {right!r}".format(
left=left_base, right=right_base)
raise AssertionError(msg)
elif check_same == 'copy':
if left_base is right_base:
msg = "{left!r} is {right!r}".format(
left=left_base, right=right_base)
raise AssertionError(msg)
def _raise(left, right, err_msg):
if err_msg is None:
if left.shape != right.shape:
raise_assert_detail(obj, '{obj} shapes are different'
.format(obj=obj), left.shape, right.shape)
diff = 0
for l, r in zip(left, right):
# count up differences
if not array_equivalent(l, r, strict_nan=strict_nan):
diff += 1
diff = diff * 100.0 / left.size
msg = '{obj} values are different ({pct} %)'.format(
obj=obj, pct=np.round(diff, 5))
raise_assert_detail(obj, msg, left, right)
raise AssertionError(err_msg)
# compare shape and values
if not array_equivalent(left, right, strict_nan=strict_nan):
_raise(left, right, err_msg)
if check_dtype:
if isinstance(left, np.ndarray) and isinstance(right, np.ndarray):
assert_attr_equal('dtype', left, right, obj=obj)
return True
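# Illustrative sketch (not part of the original module): check_same='same' also
# asserts that both arguments share the same underlying memory, which a view does.
def _demo_assert_numpy_array_equal():
    arr = np.array([1, 2, 3])
    assert_numpy_array_equal(arr, arr[:], check_same='same')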
def assert_extension_array_equal(left, right, check_dtype=True,
check_less_precise=False,
check_exact=False):
"""Check that left and right ExtensionArrays are equal.
Parameters
----------
left, right : ExtensionArray
The two arrays to compare
check_dtype : bool, default True
Whether to check if the ExtensionArray dtypes are identical.
check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
If int, then specify the digits to compare.
check_exact : bool, default False
Whether to compare number exactly.
Notes
-----
Missing values are checked separately from valid values.
A mask of missing values is computed for each and checked to match.
The remaining all-valid values are cast to object dtype and checked.
"""
assert isinstance(left, ExtensionArray), 'left is not an ExtensionArray'
assert isinstance(right, ExtensionArray), 'right is not an ExtensionArray'
if check_dtype:
assert_attr_equal('dtype', left, right, obj='ExtensionArray')
left_na = np.asarray(left.isna())
right_na = np.asarray(right.isna())
assert_numpy_array_equal(left_na, right_na, obj='ExtensionArray NA mask')
left_valid = np.asarray(left[~left_na].astype(object))
right_valid = np.asarray(right[~right_na].astype(object))
if check_exact:
assert_numpy_array_equal(left_valid, right_valid, obj='ExtensionArray')
else:
_testing.assert_almost_equal(left_valid, right_valid,
check_dtype=check_dtype,
check_less_precise=check_less_precise,
obj='ExtensionArray')
# This could be refactored to use the NDFrame.equals method
def assert_series_equal(left, right, check_dtype=True,
check_index_type='equiv',
check_series_type=True,
check_less_precise=False,
check_names=True,
check_exact=False,
check_datetimelike_compat=False,
check_categorical=True,
obj='Series'):
"""Check that left and right Series are equal.
Parameters
----------
left : Series
right : Series
check_dtype : bool, default True
Whether to check the Series dtype is identical.
check_index_type : bool / string {'equiv'}, default 'equiv'
Whether to check the Index class, dtype and inferred_type
are identical.
check_series_type : bool, default True
Whether to check the Series class is identical.
check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
If int, then specify the digits to compare.
check_names : bool, default True
Whether to check the Series and Index names attribute.
check_exact : bool, default False
Whether to compare number exactly.
check_datetimelike_compat : bool, default False
Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
obj : str, default 'Series'
Specify object name being compared, internally used to show appropriate
assertion message.
"""
__tracebackhide__ = True
# instance validation
_check_isinstance(left, right, Series)
if check_series_type:
        # ToDo: There are some tests where rhs is sparse and lhs is dense.
        # Should use assert_class_equal in the future
assert isinstance(left, type(right))
# assert_class_equal(left, right, obj=obj)
# length comparison
if len(left) != len(right):
msg1 = '{len}, {left}'.format(len=len(left), left=left.index)
msg2 = '{len}, {right}'.format(len=len(right), right=right.index)
raise_assert_detail(obj, 'Series length are different', msg1, msg2)
# index comparison
assert_index_equal(left.index, right.index, exact=check_index_type,
check_names=check_names,
check_less_precise=check_less_precise,
check_exact=check_exact,
check_categorical=check_categorical,
obj='{obj}.index'.format(obj=obj))
if check_dtype:
# We want to skip exact dtype checking when `check_categorical`
# is False. We'll still raise if only one is a `Categorical`,
# regardless of `check_categorical`
if (is_categorical_dtype(left) and is_categorical_dtype(right) and
not check_categorical):
pass
else:
assert_attr_equal('dtype', left, right)
if check_exact:
assert_numpy_array_equal(left.get_values(), right.get_values(),
check_dtype=check_dtype,
obj='{obj}'.format(obj=obj),)
elif check_datetimelike_compat:
# we want to check only if we have compat dtypes
# e.g. integer and M|m are NOT compat, but we can simply check
# the values in that case
if (is_datetimelike_v_numeric(left, right) or
is_datetimelike_v_object(left, right) or
needs_i8_conversion(left) or
needs_i8_conversion(right)):
# datetimelike may have different objects (e.g. datetime.datetime
# vs Timestamp) but will compare equal
if not Index(left.values).equals(Index(right.values)):
msg = ('[datetimelike_compat=True] {left} is not equal to '
'{right}.').format(left=left.values, right=right.values)
raise AssertionError(msg)
else:
assert_numpy_array_equal(left.get_values(), right.get_values(),
check_dtype=check_dtype)
elif is_interval_dtype(left) or is_interval_dtype(right):
assert_interval_array_equal(left.array, right.array)
elif (is_extension_array_dtype(left) and not is_categorical_dtype(left) and
is_extension_array_dtype(right) and not is_categorical_dtype(right)):
return assert_extension_array_equal(left.array, right.array)
else:
_testing.assert_almost_equal(left.get_values(), right.get_values(),
check_less_precise=check_less_precise,
check_dtype=check_dtype,
obj='{obj}'.format(obj=obj))
# metadata comparison
if check_names:
assert_attr_equal('name', left, right, obj=obj)
if check_categorical:
if is_categorical_dtype(left) or is_categorical_dtype(right):
assert_categorical_equal(left.values, right.values,
obj='{obj} category'.format(obj=obj))
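# Illustrative sketch (not part of the original module): equal values with different
# dtypes pass once dtype checking is disabled.
def _demo_assert_series_equal():
    assert_series_equal(Series([1, 2, 3]), Series([1.0, 2.0, 3.0]),
                        check_dtype=False)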
# This could be refactored to use the NDFrame.equals method
def assert_frame_equal(left, right, check_dtype=True,
check_index_type='equiv',
check_column_type='equiv',
check_frame_type=True,
check_less_precise=False,
check_names=True,
by_blocks=False,
check_exact=False,
check_datetimelike_compat=False,
check_categorical=True,
check_like=False,
obj='DataFrame'):
"""
Check that left and right DataFrame are equal.
This function is intended to compare two DataFrames and output any
    differences. It is mostly intended for use in unit tests.
Additional parameters allow varying the strictness of the
equality checks performed.
Parameters
----------
left : DataFrame
First DataFrame to compare.
right : DataFrame
Second DataFrame to compare.
check_dtype : bool, default True
Whether to check the DataFrame dtype is identical.
check_index_type : bool / string {'equiv'}, default 'equiv'
Whether to check the Index class, dtype and inferred_type
are identical.
check_column_type : bool / string {'equiv'}, default 'equiv'
Whether to check the columns class, dtype and inferred_type
are identical. Is passed as the ``exact`` argument of
:func:`assert_index_equal`.
check_frame_type : bool, default True
Whether to check the DataFrame class is identical.
check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
If int, then specify the digits to compare.
check_names : bool, default True
Whether to check that the `names` attribute for both the `index`
and `column` attributes of the DataFrame is identical, i.e.
* left.index.names == right.index.names
* left.columns.names == right.columns.names
by_blocks : bool, default False
Specify how to compare internal data. If False, compare by columns.
If True, compare by blocks.
check_exact : bool, default False
Whether to compare number exactly.
check_datetimelike_compat : bool, default False
Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
check_like : bool, default False
If True, ignore the order of index & columns.
        Note: index labels must match their respective rows
        (same as in columns) - the same labels must map to the same data.
obj : str, default 'DataFrame'
Specify object name being compared, internally used to show appropriate
assertion message.
See Also
--------
assert_series_equal : Equivalent method for asserting Series equality.
DataFrame.equals : Check DataFrame equality.
Examples
--------
This example shows comparing two DataFrames that are equal
but with columns of differing dtypes.
>>> from pandas.util.testing import assert_frame_equal
>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
df1 equals itself.
>>> assert_frame_equal(df1, df1)
df1 differs from df2 as column 'b' is of a different type.
>>> assert_frame_equal(df1, df2)
Traceback (most recent call last):
AssertionError: Attributes are different
Attribute "dtype" are different
[left]: int64
[right]: float64
Ignore differing dtypes in columns with check_dtype.
>>> assert_frame_equal(df1, df2, check_dtype=False)
"""
__tracebackhide__ = True
# instance validation
_check_isinstance(left, right, DataFrame)
if check_frame_type:
        # ToDo: There are some tests where rhs is a SparseDataFrame and lhs is a
        # DataFrame. Should use assert_class_equal in the future
assert isinstance(left, type(right))
# assert_class_equal(left, right, obj=obj)
# shape comparison
if left.shape != right.shape:
raise_assert_detail(obj,
'DataFrame shape mismatch',
'{shape!r}'.format(shape=left.shape),
'{shape!r}'.format(shape=right.shape))
if check_like:
left, right = left.reindex_like(right), right
# index comparison
assert_index_equal(left.index, right.index, exact=check_index_type,
check_names=check_names,
check_less_precise=check_less_precise,
check_exact=check_exact,
check_categorical=check_categorical,
obj='{obj}.index'.format(obj=obj))
# column comparison
assert_index_equal(left.columns, right.columns, exact=check_column_type,
check_names=check_names,
check_less_precise=check_less_precise,
check_exact=check_exact,
check_categorical=check_categorical,
obj='{obj}.columns'.format(obj=obj))
# compare by blocks
if by_blocks:
rblocks = right._to_dict_of_blocks()
lblocks = left._to_dict_of_blocks()
for dtype in list(set(list(lblocks.keys()) + list(rblocks.keys()))):
assert dtype in lblocks
assert dtype in rblocks
assert_frame_equal(lblocks[dtype], rblocks[dtype],
check_dtype=check_dtype, obj='DataFrame.blocks')
# compare by columns
else:
for i, col in enumerate(left.columns):
assert col in right
lcol = left.iloc[:, i]
rcol = right.iloc[:, i]
assert_series_equal(
lcol, rcol, check_dtype=check_dtype,
check_index_type=check_index_type,
check_less_precise=check_less_precise,
check_exact=check_exact, check_names=check_names,
check_datetimelike_compat=check_datetimelike_compat,
check_categorical=check_categorical,
obj='DataFrame.iloc[:, {idx}]'.format(idx=i))
def assert_panel_equal(left, right,
check_dtype=True,
check_panel_type=False,
check_less_precise=False,
check_names=False,
by_blocks=False,
obj='Panel'):
"""Check that left and right Panels are equal.
Parameters
----------
left : Panel (or nd)
right : Panel (or nd)
check_dtype : bool, default True
Whether to check the Panel dtype is identical.
check_panel_type : bool, default False
Whether to check the Panel class is identical.
check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
If int, then specify the digits to compare
check_names : bool, default True
Whether to check the Index names attribute.
by_blocks : bool, default False
Specify how to compare internal data. If False, compare by columns.
If True, compare by blocks.
obj : str, default 'Panel'
Specify the object name being compared, internally used to show
the appropriate assertion message.
"""
if check_panel_type:
assert_class_equal(left, right, obj=obj)
for axis in left._AXIS_ORDERS:
left_ind = getattr(left, axis)
right_ind = getattr(right, axis)
assert_index_equal(left_ind, right_ind, check_names=check_names)
if by_blocks:
rblocks = right._to_dict_of_blocks()
lblocks = left._to_dict_of_blocks()
for dtype in list(set(list(lblocks.keys()) + list(rblocks.keys()))):
assert dtype in lblocks
assert dtype in rblocks
array_equivalent(lblocks[dtype].values, rblocks[dtype].values)
else:
# can potentially be slow
for i, item in enumerate(left._get_axis(0)):
msg = "non-matching item (right) '{item}'".format(item=item)
assert item in right, msg
litem = left.iloc[i]
ritem = right.iloc[i]
assert_frame_equal(litem, ritem,
check_less_precise=check_less_precise,
check_names=check_names)
for i, item in enumerate(right._get_axis(0)):
msg = "non-matching item (left) '{item}'".format(item=item)
assert item in left, msg
def assert_equal(left, right, **kwargs):
"""
Wrapper for tm.assert_*_equal to dispatch to the appropriate test function.
Parameters
----------
left : Index, Series, DataFrame, ExtensionArray, or np.ndarray
right : Index, Series, DataFrame, ExtensionArray, or np.ndarray
**kwargs
"""
__tracebackhide__ = True
if isinstance(left, pd.Index):
assert_index_equal(left, right, **kwargs)
elif isinstance(left, pd.Series):
assert_series_equal(left, right, **kwargs)
elif isinstance(left, pd.DataFrame):
assert_frame_equal(left, right, **kwargs)
elif isinstance(left, IntervalArray):
assert_interval_array_equal(left, right, **kwargs)
elif isinstance(left, PeriodArray):
assert_period_array_equal(left, right, **kwargs)
elif isinstance(left, DatetimeArray):
assert_datetime_array_equal(left, right, **kwargs)
elif isinstance(left, TimedeltaArray):
assert_timedelta_array_equal(left, right, **kwargs)
elif isinstance(left, ExtensionArray):
assert_extension_array_equal(left, right, **kwargs)
elif isinstance(left, np.ndarray):
assert_numpy_array_equal(left, right, **kwargs)
else:
raise NotImplementedError(type(left))
def box_expected(expected, box_cls, transpose=True):
"""
    Helper function to wrap the expected output of a test in a given box_cls.
Parameters
----------
expected : np.ndarray, Index, Series
box_cls : {Index, Series, DataFrame}
Returns
-------
subclass of box_cls
"""
if box_cls is pd.Index:
expected = pd.Index(expected)
elif box_cls is pd.Series:
expected = pd.Series(expected)
elif box_cls is pd.DataFrame:
expected = pd.Series(expected).to_frame()
if transpose:
            # for vector operations, we need a DataFrame to be a single-row,
# not a single-column, in order to operate against non-DataFrame
# vectors of the same length.
expected = expected.T
elif box_cls is PeriodArray:
# the PeriodArray constructor is not as flexible as period_array
expected = period_array(expected)
elif box_cls is DatetimeArray:
        expected = DatetimeArray(expected)
    # assumed completion of the truncated helper: hand back the boxed result
    return expected
from typing import Iterable
import requests
from requests.adapters import HTTPAdapter
from requests.api import request
from urllib3.util.retry import Retry
from concurrent.futures import ThreadPoolExecutor
import pandas as pd
from six import Iterator
import yaml
import csv
import re
import os
ALPHA_BASE_URL = 'https://www.alphavantage.co/query?'
CONNECT_RETRIES = 3
BACKOFF_FACTOR = 0.5
###############################################################################################
def get_alpha_key(credentials_file) -> str:
"""Grabs credentials for Alpha Vantage API from a yaml file
Parameters
-----------
credentials_file: str
path to .yml file containing credentials
requires file to contain entries for 'alpha_key:'
Returns
-----------
    str
        the Alpha Vantage API key read from the credentials file
"""
with open(credentials_file, "r") as stream:
try:
credentials = yaml.safe_load(stream)
except yaml.YAMLError as exc:
print(exc)
return credentials["alpha_key"]
def get_alpha_listings(
api_key: str, base_url: str = ALPHA_BASE_URL,
date: str = None, state: str = None) -> Iterator:
"""Gets all stock listings from Alpha Vantage
Parameters
-----------
api_key: str
The Alpha Vantage API key
base_url: str
the alpha vantage URL for the API
date: str
the listings as of a certain date. if None, then uses todays date
state: str
the state of the listings to be returned. See Alpha Vantage docs for more info
Returns
-----------
pandas DataFrame
a DataFrame containing the Alpha Vantage listings
"""
sequence = (base_url, 'function=LISTING_STATUS')
if date is not None:
sequence += ('&date=', date)
if state is not None:
sequence += ('&state=', state)
sequence = sequence + ('&apikey=', api_key)
url = ''.join(map(str, sequence))
response = requests.get(url)
df_list = alpha_csv_to_dataframe(response)
return df_list[0]
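# Illustrative usage sketch (not part of the original script; 'credentials.yml' is a
# hypothetical file name, and state='active' assumes Alpha Vantage's
# 'active'/'delisted' convention): read the API key and download current listings.
def _demo_get_alpha_listings():
    key = get_alpha_key('credentials.yml')
    return get_alpha_listings(key, state='active')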
def alpha_csv_to_dataframe(responses):
output = []
    if not isinstance(responses, list):
responses = [responses]
for response in responses:
decoded_content = response.content.decode('utf-8')
cr = csv.reader(decoded_content.splitlines(), delimiter=',')
        df = pd.DataFrame(cr)
        # assumed completion: collect each parsed frame and return the list
        output.append(df)
    return output
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import datetime
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_stocktrading import StockTradingEnv
from finrl.model.models import DRLAgent
from finrl.trade.backtest import backtest_stats, backtest_plot, get_daily_return, get_baseline
from pprint import pprint
import sys
sys.path.append("../FinRL-Library")
import itertools
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
print("============== CONFIG DATES ===========")
START_TRAINING_DATE = '2010-01-01'
END_TRAINING_DATE = '2019-01-01'
START_TRADING_DATE = '2019-01-01'
END_TRADING_DATE = '2021-01-01'
print('START_TRAINING_DATE :'+START_TRAINING_DATE)
print('END_TRAINING_DATE :'+END_TRAINING_DATE)
print('START_TRADING_DATE :'+START_TRADING_DATE)
print('END_TRADING_DATE :'+END_TRADING_DATE)
SEED = 41
np.random.seed(SEED)
print("==============Prepare Training Data===========")
# df = pd.read_csv('JII_data_google_20210613.csv')
# fe = FeatureEngineer(
# use_technical_indicator=True,
# tech_indicator_list = config.TECHNICAL_INDICATORS_LIST,
# use_turbulence=True,
# user_defined_feature = False)
# processed = fe.preprocess_data(df)
# list_ticker = processed["tic"].unique().tolist()
# list_date = list(pd.date_range(processed['date'].min(),processed['date'].max()).astype(str))
# combination = list(itertools.product(list_date,list_ticker))
# processed_full = pd.DataFrame(combination,columns=["date","tic"]).merge(processed,on=["date","tic"],how="left")
# processed_full = processed_full[processed_full['date'].isin(processed['date'])]
# processed_full = processed_full.sort_values(['tic','date']).fillna(method='ffill')
# processed_full = processed_full.sort_values(['date','tic'])
# processed_full.to_csv('JII_data_processed_20210613.csv', index=False)
processed_full = pd.read_csv('JII_data_processed_20210613.csv')
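# Illustrative next step (a sketch, not part of the original script): FinRL's
# data_split helper, imported above, could carve the processed data into the
# training and trading windows defined by the config dates.
# train = data_split(processed_full, START_TRAINING_DATE, END_TRAINING_DATE)
# trade = data_split(processed_full, START_TRADING_DATE, END_TRADING_DATE)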
# -*- coding: utf-8 -*-
# pylint: disable-msg=W0612,E1101
import itertools
import warnings
from warnings import catch_warnings
from datetime import datetime
from pandas.types.common import (is_integer_dtype,
is_float_dtype,
is_scalar)
from pandas.compat import range, lrange, lzip, StringIO, lmap
from pandas.tslib import NaT
from numpy import nan
from numpy.random import randn
import numpy as np
import pandas as pd
from pandas import option_context
from pandas.core.indexing import _non_reducing_slice, _maybe_numeric_slice
from pandas.core.api import (DataFrame, Index, Series, Panel, isnull,
MultiIndex, Timestamp, Timedelta, UInt64Index)
from pandas.formats.printing import pprint_thing
from pandas import concat
from pandas.core.common import PerformanceWarning
from pandas.tests.indexing.common import _mklbl
import pandas.util.testing as tm
from pandas import date_range
_verbose = False
# ------------------------------------------------------------------------
# Indexing test cases
def _generate_indices(f, values=False):
""" generate the indicies
if values is True , use the axis values
is False, use the range
"""
axes = f.axes
if values:
axes = [lrange(len(a)) for a in axes]
return itertools.product(*axes)
def _get_value(f, i, values=False):
""" return the value for the location i """
    # check against values
if values:
return f.values[i]
# this is equiv of f[col][row].....
# v = f
# for a in reversed(i):
# v = v.__getitem__(a)
# return v
with catch_warnings(record=True):
return f.ix[i]
def _get_result(obj, method, key, axis):
""" return the result for this obj with this key and this axis """
if isinstance(key, dict):
key = key[axis]
    # use an artificial conversion to map the integer keys to the labels
    # so ix can work for comparisons
if method == 'indexer':
method = 'ix'
key = obj._get_axis(axis)[key]
# in case we actually want 0 index slicing
try:
xp = getattr(obj, method).__getitem__(_axify(obj, key, axis))
except:
xp = getattr(obj, method).__getitem__(key)
return xp
def _axify(obj, key, axis):
# create a tuple accessor
axes = [slice(None)] * obj.ndim
axes[axis] = key
return tuple(axes)
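# Small illustrative check (not part of the original tests): for a 2-D object,
# _axify(df, [0, 1], 1) expands the key to (slice(None), [0, 1]),
# i.e. "all rows, columns 0 and 1".
def _demo_axify():
    df = DataFrame(np.random.randn(3, 3))
    return _axify(df, [0, 1], 1) == (slice(None), [0, 1])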
class TestIndexing(tm.TestCase):
_objs = set(['series', 'frame', 'panel'])
_typs = set(['ints', 'uints', 'labels', 'mixed',
'ts', 'floats', 'empty', 'ts_rev'])
def setUp(self):
self.series_ints = Series(np.random.rand(4), index=lrange(0, 8, 2))
self.frame_ints = DataFrame(np.random.randn(4, 4),
index=lrange(0, 8, 2),
columns=lrange(0, 12, 3))
self.panel_ints = Panel(np.random.rand(4, 4, 4),
items=lrange(0, 8, 2),
major_axis=lrange(0, 12, 3),
minor_axis=lrange(0, 16, 4))
self.series_uints = Series(np.random.rand(4),
index=UInt64Index(lrange(0, 8, 2)))
self.frame_uints = DataFrame(np.random.randn(4, 4),
index=UInt64Index(lrange(0, 8, 2)),
columns=UInt64Index(lrange(0, 12, 3)))
self.panel_uints = Panel(np.random.rand(4, 4, 4),
items=UInt64Index(lrange(0, 8, 2)),
major_axis=UInt64Index(lrange(0, 12, 3)),
minor_axis=UInt64Index(lrange(0, 16, 4)))
self.series_labels = Series(np.random.randn(4), index=list('abcd'))
self.frame_labels = DataFrame(np.random.randn(4, 4),
index=list('abcd'), columns=list('ABCD'))
self.panel_labels = Panel(np.random.randn(4, 4, 4),
items=list('abcd'),
major_axis=list('ABCD'),
minor_axis=list('ZYXW'))
self.series_mixed = Series(np.random.randn(4), index=[2, 4, 'null', 8])
self.frame_mixed = DataFrame(np.random.randn(4, 4),
index=[2, 4, 'null', 8])
self.panel_mixed = Panel(np.random.randn(4, 4, 4),
items=[2, 4, 'null', 8])
self.series_ts = Series(np.random.randn(4),
index=date_range('20130101', periods=4))
self.frame_ts = DataFrame(np.random.randn(4, 4),
index=date_range('20130101', periods=4))
self.panel_ts = Panel(np.random.randn(4, 4, 4),
items=date_range('20130101', periods=4))
dates_rev = (date_range('20130101', periods=4)
.sort_values(ascending=False))
self.series_ts_rev = Series(np.random.randn(4),
index=dates_rev)
self.frame_ts_rev = DataFrame(np.random.randn(4, 4),
index=dates_rev)
self.panel_ts_rev = Panel(np.random.randn(4, 4, 4),
items=dates_rev)
self.frame_empty = DataFrame({})
self.series_empty = Series({})
self.panel_empty = Panel({})
# form agglomerates
for o in self._objs:
d = dict()
for t in self._typs:
d[t] = getattr(self, '%s_%s' % (o, t), None)
setattr(self, o, d)
def check_values(self, f, func, values=False):
if f is None:
return
axes = f.axes
indicies = itertools.product(*axes)
for i in indicies:
result = getattr(f, func)[i]
            # check against values
if values:
expected = f.values[i]
else:
expected = f
for a in reversed(i):
expected = expected.__getitem__(a)
tm.assert_almost_equal(result, expected)
def check_result(self, name, method1, key1, method2, key2, typs=None,
objs=None, axes=None, fails=None):
def _eq(t, o, a, obj, k1, k2):
""" compare equal for these 2 keys """
if a is not None and a > obj.ndim - 1:
return
def _print(result, error=None):
if error is not None:
error = str(error)
v = ("%-16.16s [%-16.16s]: [typ->%-8.8s,obj->%-8.8s,"
"key1->(%-4.4s),key2->(%-4.4s),axis->%s] %s" %
(name, result, t, o, method1, method2, a, error or ''))
if _verbose:
pprint_thing(v)
try:
rs = getattr(obj, method1).__getitem__(_axify(obj, k1, a))
try:
xp = _get_result(obj, method2, k2, a)
except:
result = 'no comp'
_print(result)
return
detail = None
try:
if is_scalar(rs) and is_scalar(xp):
self.assertEqual(rs, xp)
elif xp.ndim == 1:
tm.assert_series_equal(rs, xp)
elif xp.ndim == 2:
tm.assert_frame_equal(rs, xp)
elif xp.ndim == 3:
tm.assert_panel_equal(rs, xp)
result = 'ok'
except AssertionError as e:
detail = str(e)
result = 'fail'
# reverse the checks
if fails is True:
if result == 'fail':
result = 'ok (fail)'
_print(result)
if not result.startswith('ok'):
raise AssertionError(detail)
except AssertionError:
raise
except Exception as detail:
# if we are in fails, the ok, otherwise raise it
if fails is not None:
if isinstance(detail, fails):
result = 'ok (%s)' % type(detail).__name__
_print(result)
return
result = type(detail).__name__
raise AssertionError(_print(result, error=detail))
if typs is None:
typs = self._typs
if objs is None:
objs = self._objs
if axes is not None:
if not isinstance(axes, (tuple, list)):
axes = [axes]
else:
axes = list(axes)
else:
axes = [0, 1, 2]
# check
for o in objs:
if o not in self._objs:
continue
d = getattr(self, o)
for a in axes:
for t in typs:
if t not in self._typs:
continue
obj = d[t]
if obj is not None:
obj = obj.copy()
k2 = key2
_eq(t, o, a, obj, key1, k2)
def test_ix_deprecation(self):
# GH 15114
df = DataFrame({'A': [1, 2, 3]})
with tm.assert_produces_warning(DeprecationWarning,
check_stacklevel=False):
df.ix[1, 'A']
def test_indexer_caching(self):
# GH5727
# make sure that indexers are in the _internal_names_set
n = 1000001
arrays = [lrange(n), lrange(n)]
index = MultiIndex.from_tuples(lzip(*arrays))
s = Series(np.zeros(n), index=index)
str(s)
# setitem
expected = Series(np.ones(n), index=index)
s = Series(np.zeros(n), index=index)
s[s == 0] = 1
tm.assert_series_equal(s, expected)
def test_at_and_iat_get(self):
def _check(f, func, values=False):
if f is not None:
indicies = _generate_indices(f, values)
for i in indicies:
result = getattr(f, func)[i]
expected = _get_value(f, i, values)
tm.assert_almost_equal(result, expected)
for o in self._objs:
d = getattr(self, o)
# iat
for f in [d['ints'], d['uints']]:
_check(f, 'iat', values=True)
for f in [d['labels'], d['ts'], d['floats']]:
if f is not None:
self.assertRaises(ValueError, self.check_values, f, 'iat')
# at
for f in [d['ints'], d['uints'], d['labels'],
d['ts'], d['floats']]:
_check(f, 'at')
def test_at_and_iat_set(self):
def _check(f, func, values=False):
if f is not None:
indicies = _generate_indices(f, values)
for i in indicies:
getattr(f, func)[i] = 1
expected = _get_value(f, i, values)
tm.assert_almost_equal(expected, 1)
for t in self._objs:
d = getattr(self, t)
# iat
for f in [d['ints'], d['uints']]:
_check(f, 'iat', values=True)
for f in [d['labels'], d['ts'], d['floats']]:
if f is not None:
self.assertRaises(ValueError, _check, f, 'iat')
# at
for f in [d['ints'], d['uints'], d['labels'],
d['ts'], d['floats']]:
_check(f, 'at')
def test_at_iat_coercion(self):
# as timestamp is not a tuple!
dates = date_range('1/1/2000', periods=8)
df = DataFrame(randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])
s = df['A']
result = s.at[dates[5]]
xp = s.values[5]
self.assertEqual(result, xp)
# GH 7729
# make sure we are boxing the returns
s = Series(['2014-01-01', '2014-02-02'], dtype='datetime64[ns]')
expected = Timestamp('2014-02-02')
for r in [lambda: s.iat[1], lambda: s.iloc[1]]:
result = r()
self.assertEqual(result, expected)
s = Series(['1 days', '2 days'], dtype='timedelta64[ns]')
expected = Timedelta('2 days')
for r in [lambda: s.iat[1], lambda: s.iloc[1]]:
result = r()
self.assertEqual(result, expected)
def test_iat_invalid_args(self):
pass
def test_imethods_with_dups(self):
# GH6493
# iat/iloc with dups
s = Series(range(5), index=[1, 1, 2, 2, 3], dtype='int64')
result = s.iloc[2]
self.assertEqual(result, 2)
result = s.iat[2]
self.assertEqual(result, 2)
self.assertRaises(IndexError, lambda: s.iat[10])
self.assertRaises(IndexError, lambda: s.iat[-10])
result = s.iloc[[2, 3]]
expected = Series([2, 3], [2, 2], dtype='int64')
tm.assert_series_equal(result, expected)
df = s.to_frame()
result = df.iloc[2]
expected = Series(2, index=[0], name=2)
tm.assert_series_equal(result, expected)
result = df.iat[2, 0]
expected = 2
self.assertEqual(result, 2)
def test_repeated_getitem_dups(self):
# GH 5678
        # repeated getitems on a duplicate index returning an ndarray
df = DataFrame(
np.random.random_sample((20, 5)),
index=['ABCDE' [x % 5] for x in range(20)])
expected = df.loc['A', 0]
result = df.loc[:, 0].loc['A']
tm.assert_series_equal(result, expected)
def test_iloc_exceeds_bounds(self):
# GH6296
# iloc should allow indexers that exceed the bounds
df = DataFrame(np.random.random_sample((20, 5)), columns=list('ABCDE'))
expected = df
        # lists of positions should raise IndexError!
with tm.assertRaisesRegexp(IndexError,
'positional indexers are out-of-bounds'):
df.iloc[:, [0, 1, 2, 3, 4, 5]]
self.assertRaises(IndexError, lambda: df.iloc[[1, 30]])
self.assertRaises(IndexError, lambda: df.iloc[[1, -30]])
self.assertRaises(IndexError, lambda: df.iloc[[100]])
s = df['A']
self.assertRaises(IndexError, lambda: s.iloc[[100]])
self.assertRaises(IndexError, lambda: s.iloc[[-100]])
# still raise on a single indexer
msg = 'single positional indexer is out-of-bounds'
with tm.assertRaisesRegexp(IndexError, msg):
df.iloc[30]
self.assertRaises(IndexError, lambda: df.iloc[-30])
# GH10779
# single positive/negative indexer exceeding Series bounds should raise
# an IndexError
with tm.assertRaisesRegexp(IndexError, msg):
s.iloc[30]
self.assertRaises(IndexError, lambda: s.iloc[-30])
# slices are ok
result = df.iloc[:, 4:10] # 0 < start < len < stop
expected = df.iloc[:, 4:]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, -4:-10] # stop < 0 < start < len
expected = df.iloc[:, :0]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, 10:4:-1] # 0 < stop < len < start (down)
expected = df.iloc[:, :4:-1]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, 4:-10:-1] # stop < 0 < start < len (down)
expected = df.iloc[:, 4::-1]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, -10:4] # start < 0 < stop < len
expected = df.iloc[:, :4]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, 10:4] # 0 < stop < len < start
expected = df.iloc[:, :0]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, -10:-11:-1] # stop < start < 0 < len (down)
expected = df.iloc[:, :0]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, 10:11] # 0 < len < start < stop
expected = df.iloc[:, :0]
tm.assert_frame_equal(result, expected)
# slice bounds exceeding is ok
result = s.iloc[18:30]
expected = s.iloc[18:]
tm.assert_series_equal(result, expected)
result = s.iloc[30:]
expected = s.iloc[:0]
tm.assert_series_equal(result, expected)
result = s.iloc[30::-1]
expected = s.iloc[::-1]
tm.assert_series_equal(result, expected)
# doc example
def check(result, expected):
str(result)
result.dtypes
tm.assert_frame_equal(result, expected)
dfl = DataFrame(np.random.randn(5, 2), columns=list('AB'))
check(dfl.iloc[:, 2:3], DataFrame(index=dfl.index))
check(dfl.iloc[:, 1:3], dfl.iloc[:, [1]])
check(dfl.iloc[4:6], dfl.iloc[[4]])
self.assertRaises(IndexError, lambda: dfl.iloc[[4, 5, 6]])
self.assertRaises(IndexError, lambda: dfl.iloc[:, 4])
def test_iloc_getitem_int(self):
# integer
self.check_result('integer', 'iloc', 2, 'ix',
{0: 4, 1: 6, 2: 8}, typs=['ints', 'uints'])
self.check_result('integer', 'iloc', 2, 'indexer', 2,
typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
fails=IndexError)
def test_iloc_getitem_neg_int(self):
# neg integer
self.check_result('neg int', 'iloc', -1, 'ix',
{0: 6, 1: 9, 2: 12}, typs=['ints', 'uints'])
self.check_result('neg int', 'iloc', -1, 'indexer', -1,
typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
fails=IndexError)
def test_iloc_getitem_list_int(self):
# list of ints
self.check_result('list int', 'iloc', [0, 1, 2], 'ix',
{0: [0, 2, 4], 1: [0, 3, 6], 2: [0, 4, 8]},
typs=['ints', 'uints'])
self.check_result('list int', 'iloc', [2], 'ix',
{0: [4], 1: [6], 2: [8]}, typs=['ints', 'uints'])
self.check_result('list int', 'iloc', [0, 1, 2], 'indexer', [0, 1, 2],
typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
fails=IndexError)
# array of ints (GH5006), make sure that a single indexer is returning
# the correct type
self.check_result('array int', 'iloc', np.array([0, 1, 2]), 'ix',
{0: [0, 2, 4],
1: [0, 3, 6],
2: [0, 4, 8]}, typs=['ints', 'uints'])
self.check_result('array int', 'iloc', np.array([2]), 'ix',
{0: [4], 1: [6], 2: [8]}, typs=['ints', 'uints'])
self.check_result('array int', 'iloc', np.array([0, 1, 2]), 'indexer',
[0, 1, 2],
typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
fails=IndexError)
def test_iloc_getitem_neg_int_can_reach_first_index(self):
# GH10547 and GH10779
# negative integers should be able to reach index 0
df = DataFrame({'A': [2, 3, 5], 'B': [7, 11, 13]})
s = df['A']
expected = df.iloc[0]
result = df.iloc[-3]
tm.assert_series_equal(result, expected)
expected = df.iloc[[0]]
result = df.iloc[[-3]]
tm.assert_frame_equal(result, expected)
expected = s.iloc[0]
result = s.iloc[-3]
self.assertEqual(result, expected)
expected = s.iloc[[0]]
result = s.iloc[[-3]]
tm.assert_series_equal(result, expected)
# check the length 1 Series case highlighted in GH10547
expected = pd.Series(['a'], index=['A'])
result = expected.iloc[[-1]]
tm.assert_series_equal(result, expected)
def test_iloc_getitem_dups(self):
# no dups in panel (bug?)
self.check_result('list int (dups)', 'iloc', [0, 1, 1, 3], 'ix',
{0: [0, 2, 2, 6], 1: [0, 3, 3, 9]},
objs=['series', 'frame'], typs=['ints', 'uints'])
# GH 6766
df1 = DataFrame([{'A': None, 'B': 1}, {'A': 2, 'B': 2}])
df2 = DataFrame([{'A': 3, 'B': 3}, {'A': 4, 'B': 4}])
df = concat([df1, df2], axis=1)
# cross-sectional indexing
result = df.iloc[0, 0]
self.assertTrue(isnull(result))
result = df.iloc[0, :]
expected = Series([np.nan, 1, 3, 3], index=['A', 'B', 'A', 'B'],
name=0)
tm.assert_series_equal(result, expected)
def test_iloc_getitem_array(self):
# array like
s = Series(index=lrange(1, 4))
self.check_result('array like', 'iloc', s.index, 'ix',
{0: [2, 4, 6], 1: [3, 6, 9], 2: [4, 8, 12]},
typs=['ints', 'uints'])
def test_iloc_getitem_bool(self):
# boolean indexers
b = [True, False, True, False, ]
self.check_result('bool', 'iloc', b, 'ix', b, typs=['ints', 'uints'])
self.check_result('bool', 'iloc', b, 'ix', b,
typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
fails=IndexError)
def test_iloc_getitem_slice(self):
# slices
self.check_result('slice', 'iloc', slice(1, 3), 'ix',
{0: [2, 4], 1: [3, 6], 2: [4, 8]},
typs=['ints', 'uints'])
self.check_result('slice', 'iloc', slice(1, 3), 'indexer',
slice(1, 3),
typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
fails=IndexError)
def test_iloc_getitem_slice_dups(self):
df1 = DataFrame(np.random.randn(10, 4), columns=['A', 'A', 'B', 'B'])
df2 = DataFrame(np.random.randint(0, 10, size=20).reshape(10, 2),
columns=['A', 'C'])
# axis=1
df = concat([df1, df2], axis=1)
tm.assert_frame_equal(df.iloc[:, :4], df1)
tm.assert_frame_equal(df.iloc[:, 4:], df2)
df = concat([df2, df1], axis=1)
tm.assert_frame_equal(df.iloc[:, :2], df2)
tm.assert_frame_equal(df.iloc[:, 2:], df1)
exp = concat([df2, df1.iloc[:, [0]]], axis=1)
tm.assert_frame_equal(df.iloc[:, 0:3], exp)
# axis=0
df = concat([df, df], axis=0)
tm.assert_frame_equal(df.iloc[0:10, :2], df2)
tm.assert_frame_equal(df.iloc[0:10, 2:], df1)
tm.assert_frame_equal(df.iloc[10:, :2], df2)
tm.assert_frame_equal(df.iloc[10:, 2:], df1)
def test_iloc_setitem(self):
df = self.frame_ints
df.iloc[1, 1] = 1
result = df.iloc[1, 1]
self.assertEqual(result, 1)
df.iloc[:, 2:3] = 0
expected = df.iloc[:, 2:3]
result = df.iloc[:, 2:3]
tm.assert_frame_equal(result, expected)
# GH5771
s = Series(0, index=[4, 5, 6])
s.iloc[1:2] += 1
expected = Series([0, 1, 0], index=[4, 5, 6])
tm.assert_series_equal(s, expected)
def test_loc_setitem_slice(self):
# GH10503
# assigning the same type should not change the type
df1 = DataFrame({'a': [0, 1, 1],
'b': Series([100, 200, 300], dtype='uint32')})
ix = df1['a'] == 1
newb1 = df1.loc[ix, 'b'] + 1
df1.loc[ix, 'b'] = newb1
expected = DataFrame({'a': [0, 1, 1],
'b': Series([100, 201, 301], dtype='uint32')})
tm.assert_frame_equal(df1, expected)
# assigning a new type should get the inferred type
df2 = DataFrame({'a': [0, 1, 1], 'b': [100, 200, 300]},
dtype='uint64')
ix = df1['a'] == 1
newb2 = df2.loc[ix, 'b']
df1.loc[ix, 'b'] = newb2
expected = DataFrame({'a': [0, 1, 1], 'b': [100, 200, 300]},
dtype='uint64')
tm.assert_frame_equal(df2, expected)
def test_ix_loc_setitem_consistency(self):
# GH 5771
# loc with slice and series
s = Series(0, index=[4, 5, 6])
s.loc[4:5] += 1
expected = Series([1, 1, 0], index=[4, 5, 6])
tm.assert_series_equal(s, expected)
# GH 5928
# chained indexing assignment
df = DataFrame({'a': [0, 1, 2]})
expected = df.copy()
with catch_warnings(record=True):
expected.ix[[0, 1, 2], 'a'] = -expected.ix[[0, 1, 2], 'a']
with catch_warnings(record=True):
df['a'].ix[[0, 1, 2]] = -df['a'].ix[[0, 1, 2]]
tm.assert_frame_equal(df, expected)
df = DataFrame({'a': [0, 1, 2], 'b': [0, 1, 2]})
with catch_warnings(record=True):
df['a'].ix[[0, 1, 2]] = -df['a'].ix[[0, 1, 2]].astype(
'float64') + 0.5
expected = DataFrame({'a': [0.5, -0.5, -1.5], 'b': [0, 1, 2]})
tm.assert_frame_equal(df, expected)
# GH 8607
# ix setitem consistency
df = DataFrame({'timestamp': [1413840976, 1413842580, 1413760580],
'delta': [1174, 904, 161],
'elapsed': [7673, 9277, 1470]})
expected = DataFrame({'timestamp': pd.to_datetime(
[1413840976, 1413842580, 1413760580], unit='s'),
'delta': [1174, 904, 161],
'elapsed': [7673, 9277, 1470]})
df2 = df.copy()
df2['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
tm.assert_frame_equal(df2, expected)
df2 = df.copy()
df2.loc[:, 'timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
tm.assert_frame_equal(df2, expected)
df2 = df.copy()
with catch_warnings(record=True):
df2.ix[:, 2] = pd.to_datetime(df['timestamp'], unit='s')
tm.assert_frame_equal(df2, expected)
def test_ix_loc_consistency(self):
# GH 8613
# some edge cases where ix/loc should return the same
        # this is not an exhaustive set of cases
def compare(result, expected):
if is_scalar(expected):
self.assertEqual(result, expected)
else:
self.assertTrue(expected.equals(result))
# failure cases for .loc, but these work for .ix
df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'))
for key in [slice(1, 3), tuple([slice(0, 2), slice(0, 2)]),
tuple([slice(0, 2), df.columns[0:2]])]:
for index in [tm.makeStringIndex, tm.makeUnicodeIndex,
tm.makeDateIndex, tm.makePeriodIndex,
tm.makeTimedeltaIndex]:
df.index = index(len(df.index))
with catch_warnings(record=True):
df.ix[key]
self.assertRaises(TypeError, lambda: df.loc[key])
df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'),
index=pd.date_range('2012-01-01', periods=5))
for key in ['2012-01-03',
'2012-01-31',
slice('2012-01-03', '2012-01-03'),
slice('2012-01-03', '2012-01-04'),
slice('2012-01-03', '2012-01-06', 2),
slice('2012-01-03', '2012-01-31'),
tuple([[True, True, True, False, True]]), ]:
# getitem
# if the expected raises, then compare the exceptions
try:
with catch_warnings(record=True):
expected = df.ix[key]
except KeyError:
self.assertRaises(KeyError, lambda: df.loc[key])
continue
result = df.loc[key]
compare(result, expected)
# setitem
df1 = df.copy()
df2 = df.copy()
with catch_warnings(record=True):
df1.ix[key] = 10
df2.loc[key] = 10
compare(df2, df1)
# edge cases
s = Series([1, 2, 3, 4], index=list('abde'))
result1 = s['a':'c']
with catch_warnings(record=True):
result2 = s.ix['a':'c']
result3 = s.loc['a':'c']
tm.assert_series_equal(result1, result2)
tm.assert_series_equal(result1, result3)
        # these now work rather than raising KeyError
s = Series(range(5), [-2, -1, 1, 2, 3])
with catch_warnings(record=True):
result1 = s.ix[-10:3]
result2 = s.loc[-10:3]
tm.assert_series_equal(result1, result2)
with catch_warnings(record=True):
result1 = s.ix[0:3]
result2 = s.loc[0:3]
tm.assert_series_equal(result1, result2)
def test_loc_setitem_dups(self):
# GH 6541
df_orig = DataFrame(
{'me': list('rttti'),
'foo': list('aaade'),
'bar': np.arange(5, dtype='float64') * 1.34 + 2,
'bar2': np.arange(5, dtype='float64') * -.34 + 2}).set_index('me')
indexer = tuple(['r', ['bar', 'bar2']])
df = df_orig.copy()
df.loc[indexer] *= 2.0
tm.assert_series_equal(df.loc[indexer], 2.0 * df_orig.loc[indexer])
indexer = tuple(['r', 'bar'])
df = df_orig.copy()
df.loc[indexer] *= 2.0
self.assertEqual(df.loc[indexer], 2.0 * df_orig.loc[indexer])
indexer = tuple(['t', ['bar', 'bar2']])
df = df_orig.copy()
df.loc[indexer] *= 2.0
tm.assert_frame_equal(df.loc[indexer], 2.0 * df_orig.loc[indexer])
def test_iloc_setitem_dups(self):
# GH 6766
# iloc with a mask aligning from another iloc
df1 = DataFrame([{'A': None, 'B': 1}, {'A': 2, 'B': 2}])
df2 = DataFrame([{'A': 3, 'B': 3}, {'A': 4, 'B': 4}])
df = concat([df1, df2], axis=1)
expected = df.fillna(3)
expected['A'] = expected['A'].astype('float64')
inds = np.isnan(df.iloc[:, 0])
mask = inds[inds].index
df.iloc[mask, 0] = df.iloc[mask, 2]
tm.assert_frame_equal(df, expected)
# del a dup column across blocks
expected = DataFrame({0: [1, 2], 1: [3, 4]})
expected.columns = ['B', 'B']
del df['A']
tm.assert_frame_equal(df, expected)
# assign back to self
df.iloc[[0, 1], [0, 1]] = df.iloc[[0, 1], [0, 1]]
tm.assert_frame_equal(df, expected)
# reversed x 2
df.iloc[[1, 0], [0, 1]] = df.iloc[[1, 0], [0, 1]].reset_index(
drop=True)
df.iloc[[1, 0], [0, 1]] = df.iloc[[1, 0], [0, 1]].reset_index(
drop=True)
tm.assert_frame_equal(df, expected)
def test_chained_getitem_with_lists(self):
# GH6394
# Regression in chained getitem indexing with embedded list-like from
# 0.12
def check(result, expected):
tm.assert_numpy_array_equal(result, expected)
tm.assertIsInstance(result, np.ndarray)
df = DataFrame({'A': 5 * [np.zeros(3)], 'B': 5 * [np.ones(3)]})
expected = df['A'].iloc[2]
result = df.loc[2, 'A']
check(result, expected)
result2 = df.iloc[2]['A']
check(result2, expected)
result3 = df['A'].loc[2]
check(result3, expected)
result4 = df['A'].iloc[2]
check(result4, expected)
def test_loc_getitem_int(self):
# int label
self.check_result('int label', 'loc', 2, 'ix', 2,
typs=['ints', 'uints'], axes=0)
self.check_result('int label', 'loc', 3, 'ix', 3,
typs=['ints', 'uints'], axes=1)
self.check_result('int label', 'loc', 4, 'ix', 4,
typs=['ints', 'uints'], axes=2)
self.check_result('int label', 'loc', 2, 'ix', 2,
typs=['label'], fails=KeyError)
def test_loc_getitem_label(self):
# label
self.check_result('label', 'loc', 'c', 'ix', 'c', typs=['labels'],
axes=0)
self.check_result('label', 'loc', 'null', 'ix', 'null', typs=['mixed'],
axes=0)
self.check_result('label', 'loc', 8, 'ix', 8, typs=['mixed'], axes=0)
self.check_result('label', 'loc', Timestamp('20130102'), 'ix', 1,
typs=['ts'], axes=0)
self.check_result('label', 'loc', 'c', 'ix', 'c', typs=['empty'],
fails=KeyError)
def test_loc_getitem_label_out_of_range(self):
# out of range label
self.check_result('label range', 'loc', 'f', 'ix', 'f',
typs=['ints', 'uints', 'labels', 'mixed', 'ts'],
fails=KeyError)
self.check_result('label range', 'loc', 'f', 'ix', 'f',
typs=['floats'], fails=TypeError)
self.check_result('label range', 'loc', 20, 'ix', 20,
typs=['ints', 'uints', 'mixed'], fails=KeyError)
self.check_result('label range', 'loc', 20, 'ix', 20,
typs=['labels'], fails=TypeError)
self.check_result('label range', 'loc', 20, 'ix', 20, typs=['ts'],
axes=0, fails=TypeError)
self.check_result('label range', 'loc', 20, 'ix', 20, typs=['floats'],
axes=0, fails=TypeError)
def test_loc_getitem_label_list(self):
# list of labels
self.check_result('list lbl', 'loc', [0, 2, 4], 'ix', [0, 2, 4],
typs=['ints', 'uints'], axes=0)
self.check_result('list lbl', 'loc', [3, 6, 9], 'ix', [3, 6, 9],
typs=['ints', 'uints'], axes=1)
self.check_result('list lbl', 'loc', [4, 8, 12], 'ix', [4, 8, 12],
typs=['ints', 'uints'], axes=2)
self.check_result('list lbl', 'loc', ['a', 'b', 'd'], 'ix',
['a', 'b', 'd'], typs=['labels'], axes=0)
self.check_result('list lbl', 'loc', ['A', 'B', 'C'], 'ix',
['A', 'B', 'C'], typs=['labels'], axes=1)
self.check_result('list lbl', 'loc', ['Z', 'Y', 'W'], 'ix',
['Z', 'Y', 'W'], typs=['labels'], axes=2)
self.check_result('list lbl', 'loc', [2, 8, 'null'], 'ix',
[2, 8, 'null'], typs=['mixed'], axes=0)
self.check_result('list lbl', 'loc',
[Timestamp('20130102'), Timestamp('20130103')], 'ix',
[Timestamp('20130102'), Timestamp('20130103')],
typs=['ts'], axes=0)
self.check_result('list lbl', 'loc', [0, 1, 2], 'indexer', [0, 1, 2],
typs=['empty'], fails=KeyError)
self.check_result('list lbl', 'loc', [0, 2, 3], 'ix', [0, 2, 3],
typs=['ints', 'uints'], axes=0, fails=KeyError)
self.check_result('list lbl', 'loc', [3, 6, 7], 'ix', [3, 6, 7],
typs=['ints', 'uints'], axes=1, fails=KeyError)
self.check_result('list lbl', 'loc', [4, 8, 10], 'ix', [4, 8, 10],
typs=['ints', 'uints'], axes=2, fails=KeyError)
def test_loc_getitem_label_list_fails(self):
# fails
self.check_result('list lbl', 'loc', [20, 30, 40], 'ix', [20, 30, 40],
typs=['ints', 'uints'], axes=1, fails=KeyError)
self.check_result('list lbl', 'loc', [20, 30, 40], 'ix', [20, 30, 40],
typs=['ints', 'uints'], axes=2, fails=KeyError)
def test_loc_getitem_label_array_like(self):
# array like
self.check_result('array like', 'loc', Series(index=[0, 2, 4]).index,
'ix', [0, 2, 4], typs=['ints', 'uints'], axes=0)
self.check_result('array like', 'loc', Series(index=[3, 6, 9]).index,
'ix', [3, 6, 9], typs=['ints', 'uints'], axes=1)
self.check_result('array like', 'loc', Series(index=[4, 8, 12]).index,
'ix', [4, 8, 12], typs=['ints', 'uints'], axes=2)
def test_loc_getitem_bool(self):
# boolean indexers
b = [True, False, True, False]
self.check_result('bool', 'loc', b, 'ix', b,
typs=['ints', 'uints', 'labels',
'mixed', 'ts', 'floats'])
self.check_result('bool', 'loc', b, 'ix', b, typs=['empty'],
fails=KeyError)
def test_loc_getitem_int_slice(self):
# ok
self.check_result('int slice2', 'loc', slice(2, 4), 'ix', [2, 4],
typs=['ints', 'uints'], axes=0)
self.check_result('int slice2', 'loc', slice(3, 6), 'ix', [3, 6],
typs=['ints', 'uints'], axes=1)
self.check_result('int slice2', 'loc', slice(4, 8), 'ix', [4, 8],
typs=['ints', 'uints'], axes=2)
# GH 3053
# loc should treat integer slices like label slices
from itertools import product
index = MultiIndex.from_tuples([t for t in product(
[6, 7, 8], ['a', 'b'])])
df = DataFrame(np.random.randn(6, 6), index, index)
result = df.loc[6:8, :]
with catch_warnings(record=True):
expected = df.ix[6:8, :]
tm.assert_frame_equal(result, expected)
index = MultiIndex.from_tuples([t
for t in product(
[10, 20, 30], ['a', 'b'])])
df = DataFrame(np.random.randn(6, 6), index, index)
result = df.loc[20:30, :]
with catch_warnings(record=True):
expected = df.ix[20:30, :]
tm.assert_frame_equal(result, expected)
# doc examples
result = df.loc[10, :]
with catch_warnings(record=True):
expected = df.ix[10, :]
tm.assert_frame_equal(result, expected)
result = df.loc[:, 10]
# expected = df.ix[:,10] (this fails)
expected = df[10]
tm.assert_frame_equal(result, expected)
def test_loc_to_fail(self):
# GH3449
df = DataFrame(np.random.random((3, 3)),
index=['a', 'b', 'c'],
columns=['e', 'f', 'g'])
# raise a KeyError?
self.assertRaises(KeyError, df.loc.__getitem__,
tuple([[1, 2], [1, 2]]))
# GH 7496
# loc should not fallback
s = Series()
s.loc[1] = 1
s.loc['a'] = 2
self.assertRaises(KeyError, lambda: s.loc[-1])
self.assertRaises(KeyError, lambda: s.loc[[-1, -2]])
self.assertRaises(KeyError, lambda: s.loc[['4']])
s.loc[-1] = 3
result = s.loc[[-1, -2]]
expected = Series([3, np.nan], index=[-1, -2])
tm.assert_series_equal(result, expected)
s['a'] = 2
self.assertRaises(KeyError, lambda: s.loc[[-2]])
del s['a']
def f():
s.loc[[-2]] = 0
self.assertRaises(KeyError, f)
# inconsistency between .loc[values] and .loc[values,:]
# GH 7999
df = DataFrame([['a'], ['b']], index=[1, 2], columns=['value'])
def f():
df.loc[[3], :]
self.assertRaises(KeyError, f)
def f():
df.loc[[3]]
self.assertRaises(KeyError, f)
def test_at_to_fail(self):
# at should not fallback
# GH 7814
s = Series([1, 2, 3], index=list('abc'))
result = s.at['a']
self.assertEqual(result, 1)
self.assertRaises(ValueError, lambda: s.at[0])
df = DataFrame({'A': [1, 2, 3]}, index=list('abc'))
result = df.at['a', 'A']
self.assertEqual(result, 1)
self.assertRaises(ValueError, lambda: df.at['a', 0])
s = Series([1, 2, 3], index=[3, 2, 1])
result = s.at[1]
self.assertEqual(result, 3)
self.assertRaises(ValueError, lambda: s.at['a'])
df = DataFrame({0: [1, 2, 3]}, index=[3, 2, 1])
result = df.at[1, 0]
self.assertEqual(result, 3)
self.assertRaises(ValueError, lambda: df.at['a', 0])
# GH 13822, incorrect error string with non-unique columns when missing
# column is accessed
df = DataFrame({'x': [1.], 'y': [2.], 'z': [3.]})
df.columns = ['x', 'x', 'z']
# Check that we get the correct value in the KeyError
self.assertRaisesRegexp(KeyError, r"\['y'\] not in index",
lambda: df[['x', 'y', 'z']])
def test_loc_getitem_label_slice(self):
# label slices (with ints)
self.check_result('lab slice', 'loc', slice(1, 3),
'ix', slice(1, 3),
typs=['labels', 'mixed', 'empty', 'ts', 'floats'],
fails=TypeError)
# real label slices
self.check_result('lab slice', 'loc', slice('a', 'c'),
'ix', slice('a', 'c'), typs=['labels'], axes=0)
self.check_result('lab slice', 'loc', slice('A', 'C'),
'ix', slice('A', 'C'), typs=['labels'], axes=1)
self.check_result('lab slice', 'loc', slice('W', 'Z'),
'ix', slice('W', 'Z'), typs=['labels'], axes=2)
self.check_result('ts slice', 'loc', slice('20130102', '20130104'),
'ix', slice('20130102', '20130104'),
typs=['ts'], axes=0)
self.check_result('ts slice', 'loc', slice('20130102', '20130104'),
'ix', slice('20130102', '20130104'),
typs=['ts'], axes=1, fails=TypeError)
self.check_result('ts slice', 'loc', slice('20130102', '20130104'),
'ix', slice('20130102', '20130104'),
typs=['ts'], axes=2, fails=TypeError)
# GH 14316
self.check_result('ts slice rev', 'loc', slice('20130104', '20130102'),
'indexer', [0, 1, 2], typs=['ts_rev'], axes=0)
self.check_result('mixed slice', 'loc', slice(2, 8), 'ix', slice(2, 8),
typs=['mixed'], axes=0, fails=TypeError)
self.check_result('mixed slice', 'loc', slice(2, 8), 'ix', slice(2, 8),
typs=['mixed'], axes=1, fails=KeyError)
self.check_result('mixed slice', 'loc', slice(2, 8), 'ix', slice(2, 8),
typs=['mixed'], axes=2, fails=KeyError)
self.check_result('mixed slice', 'loc', slice(2, 4, 2), 'ix', slice(
2, 4, 2), typs=['mixed'], axes=0, fails=TypeError)
def test_loc_general(self):
df = DataFrame(
np.random.rand(4, 4), columns=['A', 'B', 'C', 'D'],
index=['A', 'B', 'C', 'D'])
# want this to work
result = df.loc[:, "A":"B"].iloc[0:2, :]
self.assertTrue((result.columns == ['A', 'B']).all())
self.assertTrue((result.index == ['A', 'B']).all())
# mixed type
result = DataFrame({'a': [Timestamp('20130101')], 'b': [1]}).iloc[0]
expected = Series([Timestamp('20130101'), 1], index=['a', 'b'], name=0)
tm.assert_series_equal(result, expected)
self.assertEqual(result.dtype, object)
def test_loc_setitem_consistency(self):
# GH 6149
        # coerce similarly for setitem and loc when rows have a null-slice
expected = DataFrame({'date': Series(0, index=range(5),
dtype=np.int64),
'val': Series(range(5), dtype=np.int64)})
df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'),
'val': Series(
range(5), dtype=np.int64)})
df.loc[:, 'date'] = 0
tm.assert_frame_equal(df, expected)
df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'),
'val': Series(range(5), dtype=np.int64)})
df.loc[:, 'date'] = np.array(0, dtype=np.int64)
tm.assert_frame_equal(df, expected)
df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'),
'val': Series(range(5), dtype=np.int64)})
df.loc[:, 'date'] = np.array([0, 0, 0, 0, 0], dtype=np.int64)
tm.assert_frame_equal(df, expected)
expected = DataFrame({'date': Series('foo', index=range(5)),
'val': Series(range(5), dtype=np.int64)})
df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'),
'val': Series(range(5), dtype=np.int64)})
df.loc[:, 'date'] = 'foo'
tm.assert_frame_equal(df, expected)
expected = DataFrame({'date': Series(1.0, index=range(5)),
'val': Series(range(5), dtype=np.int64)})
df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'),
'val': Series(range(5), dtype=np.int64)})
df.loc[:, 'date'] = 1.0
tm.assert_frame_equal(df, expected)
def test_loc_setitem_consistency_empty(self):
# empty (essentially noops)
expected = DataFrame(columns=['x', 'y'])
expected['x'] = expected['x'].astype(np.int64)
df = DataFrame(columns=['x', 'y'])
df.loc[:, 'x'] = 1
tm.assert_frame_equal(df, expected)
df = DataFrame(columns=['x', 'y'])
df['x'] = 1
tm.assert_frame_equal(df, expected)
def test_loc_setitem_consistency_slice_column_len(self):
# .loc[:,column] setting with slice == len of the column
# GH10408
data = """Level_0,,,Respondent,Respondent,Respondent,OtherCat,OtherCat
Level_1,,,Something,StartDate,EndDate,Yes/No,SomethingElse
Region,Site,RespondentID,,,,,
Region_1,Site_1,3987227376,A,5/25/2015 10:59,5/25/2015 11:22,Yes,
Region_1,Site_1,3980680971,A,5/21/2015 9:40,5/21/2015 9:52,Yes,Yes
Region_1,Site_2,3977723249,A,5/20/2015 8:27,5/20/2015 8:41,Yes,
Region_1,Site_2,3977723089,A,5/20/2015 8:33,5/20/2015 9:09,Yes,No"""
df = pd.read_csv(StringIO(data), header=[0, 1], index_col=[0, 1, 2])
df.loc[:, ('Respondent', 'StartDate')] = pd.to_datetime(df.loc[:, (
'Respondent', 'StartDate')])
df.loc[:, ('Respondent', 'EndDate')] = pd.to_datetime(df.loc[:, (
'Respondent', 'EndDate')])
df.loc[:, ('Respondent', 'Duration')] = df.loc[:, (
'Respondent', 'EndDate')] - df.loc[:, ('Respondent', 'StartDate')]
df.loc[:, ('Respondent', 'Duration')] = df.loc[:, (
'Respondent', 'Duration')].astype('timedelta64[s]')
expected = Series([1380, 720, 840, 2160.], index=df.index,
name=('Respondent', 'Duration'))
tm.assert_series_equal(df[('Respondent', 'Duration')], expected)
def test_loc_setitem_frame(self):
df = self.frame_labels
result = df.iloc[0, 0]
df.loc['a', 'A'] = 1
result = df.loc['a', 'A']
self.assertEqual(result, 1)
result = df.iloc[0, 0]
self.assertEqual(result, 1)
df.loc[:, 'B':'D'] = 0
expected = df.loc[:, 'B':'D']
with catch_warnings(record=True):
result = df.ix[:, 1:]
tm.assert_frame_equal(result, expected)
# GH 6254
# setting issue
df = DataFrame(index=[3, 5, 4], columns=['A'])
df.loc[[4, 3, 5], 'A'] = np.array([1, 2, 3], dtype='int64')
expected = DataFrame(dict(A=Series(
[1, 2, 3], index=[4, 3, 5]))).reindex(index=[3, 5, 4])
tm.assert_frame_equal(df, expected)
# GH 6252
# setting with an empty frame
keys1 = ['@' + str(i) for i in range(5)]
val1 = np.arange(5, dtype='int64')
keys2 = ['@' + str(i) for i in range(4)]
val2 = np.arange(4, dtype='int64')
index = list(set(keys1).union(keys2))
df = DataFrame(index=index)
df['A'] = nan
df.loc[keys1, 'A'] = val1
df['B'] = nan
df.loc[keys2, 'B'] = val2
expected = DataFrame(dict(A=Series(val1, index=keys1), B=Series(
val2, index=keys2))).reindex(index=index)
tm.assert_frame_equal(df, expected)
# GH 8669
# invalid coercion of nan -> int
df = DataFrame({'A': [1, 2, 3], 'B': np.nan})
df.loc[df.B > df.A, 'B'] = df.A
expected = DataFrame({'A': [1, 2, 3], 'B': np.nan})
tm.assert_frame_equal(df, expected)
# GH 6546
# setting with mixed labels
df = DataFrame({1: [1, 2], 2: [3, 4], 'a': ['a', 'b']})
result = df.loc[0, [1, 2]]
expected = Series([1, 3], index=[1, 2], dtype=object, name=0)
tm.assert_series_equal(result, expected)
expected = DataFrame({1: [5, 2], 2: [6, 4], 'a': ['a', 'b']})
df.loc[0, [1, 2]] = [5, 6]
tm.assert_frame_equal(df, expected)
def test_loc_setitem_frame_multiples(self):
# multiple setting
df = DataFrame({'A': ['foo', 'bar', 'baz'],
'B': Series(
range(3), dtype=np.int64)})
rhs = df.loc[1:2]
rhs.index = df.index[0:2]
df.loc[0:1] = rhs
expected = DataFrame({'A': ['bar', 'baz', 'baz'],
'B': Series(
[1, 2, 2], dtype=np.int64)})
tm.assert_frame_equal(df, expected)
# multiple setting with frame on rhs (with M8)
df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'),
'val': Series(
range(5), dtype=np.int64)})
expected = DataFrame({'date': [Timestamp('20000101'), Timestamp(
'20000102'), Timestamp('20000101'), Timestamp('20000102'),
Timestamp('20000103')],
'val': Series(
[0, 1, 0, 1, 2], dtype=np.int64)})
rhs = df.loc[0:2]
rhs.index = df.index[2:5]
df.loc[2:4] = rhs
tm.assert_frame_equal(df, expected)
def test_iloc_getitem_frame(self):
df = DataFrame(np.random.randn(10, 4), index=lrange(0, 20, 2),
columns=lrange(0, 8, 2))
result = df.iloc[2]
with catch_warnings(record=True):
exp = df.ix[4]
tm.assert_series_equal(result, exp)
result = df.iloc[2, 2]
with catch_warnings(record=True):
exp = df.ix[4, 4]
self.assertEqual(result, exp)
# slice
result = df.iloc[4:8]
with catch_warnings(record=True):
expected = df.ix[8:14]
tm.assert_frame_equal(result, expected)
result = df.iloc[:, 2:3]
with catch_warnings(record=True):
expected = df.ix[:, 4:5]
tm.assert_frame_equal(result, expected)
# list of integers
result = df.iloc[[0, 1, 3]]
with catch_warnings(record=True):
expected = df.ix[[0, 2, 6]]
tm.assert_frame_equal(result, expected)
result = df.iloc[[0, 1, 3], [0, 1]]
with catch_warnings(record=True):
expected = df.ix[[0, 2, 6], [0, 2]]
tm.assert_frame_equal(result, expected)
        # neg indices
result = df.iloc[[-1, 1, 3], [-1, 1]]
with catch_warnings(record=True):
expected = df.ix[[18, 2, 6], [6, 2]]
tm.assert_frame_equal(result, expected)
        # dups indices
result = df.iloc[[-1, -1, 1, 3], [-1, 1]]
with catch_warnings(record=True):
expected = df.ix[[18, 18, 2, 6], [6, 2]]
        tm.assert_frame_equal(result, expected)
"""compare_word_frequencies.py.
Compare two datasets to one another using a Wilcoxon rank sum test.
For use with compare_word_frequencies.ipynb v 2.0.
Last update: 2021-02-16
"""
from scipy.stats import mannwhitneyu
from scipy.stats import ranksums
import os
import csv
import json
import collections
from collections import Counter
from collections import defaultdict
import pandas as pd
import statistics
import random
from IPython.display import display, HTML
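# Note on the expected input (inferred from the parsing below, not stated in the
# original): a doc-terms file is plain text with one document per line, space-
# separated, filename in field 0 and the bag of words starting at field 2, e.g.
#
#   doc_001 1000 economy market growth economy policy ...
#
# Field 1 is skipped by findFreq (it slices row[2:]), so it is presumably a count
# or label written by whatever tool produced the file.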
def get_bags(filenames, docterms_original, docterms_selected):
"""Check filenames against the doc-terms file and grab doc bags where filenames match.
    Accepts a path to a file of filenames and writes a new doc-terms file including only the selected documents.
"""
# define variables
filenames_list = []
# open provided list of filenames and add to python list
with open(filenames) as f1:
for row in f1:
row = row.strip()
filenames_list.append(row)
# create new file including only selected documents
with open(docterms_selected, 'w') as f2:
# open up file with all bags of words for the collection ("docterms_original" file)
# if a filename in this file matches one in the list of filenames,
# grab the bag of words for that filename to print to new doc-terms file
with open(docterms_original) as f3:
for row in f3:
row2 = row.strip()
row2 = row2.split(' ')
filename = row2[0]
for x in filenames_list:
if x == filename:
f2.write(row)
display(HTML('<p style="color: green;">New doc-terms file created at <code>' + docterms_selected + '</code>.</p>'))
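# Hedged usage sketch (hypothetical paths, not from the original notebook):
# get_bags('selected_filenames.txt', 'docterms_all.txt', 'docterms_selected.txt')
# copies only the listed documents' bags of words into the new doc-terms file.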
def get_random_sample(selection, docterms_original, docterms_selected):
"""Randomly select a selected number of docs from a doc-terms file.
@selection: the number of documents to return.
@docterms_original: the full doc-terms file.
Returns a `docterms_sample` file containing the random sample of documents from `docterms_original`.
"""
# define variables
filenames_list = []
# open up docterms_original and grab the filename
# add each filename to a list
with open(docterms_original) as f1:
for row in f1:
row2 = row.strip()
row2 = row2.split(' ')
filename = row2[0]
filenames_list.append(filename)
# take a random sample of the filenames list
sample = random.sample(filenames_list, selection)
# create new doc-terms file that will include only randomly selected documents
with open(docterms_selected, 'w') as f2, open(docterms_original) as f3:
for row in f3:
row2 = row.strip()
row2 = row2.split(' ')
filename = row2[0]
for x in sample:
if x == filename:
f2.write(row)
display(HTML('<p style="color: green;">New doc-terms file created at <code>' + docterms_selected + '</code>.</p>'))
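# Hedged usage sketch (hypothetical paths and sample size):
# get_random_sample(500, 'docterms_all.txt', 'docterms_sample.txt')
# draws 500 documents at random; keeping samples small matters because findFreq
# below builds word-by-document frames that grow quickly with vocabulary size.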
def findFreq(bags):
"""Converts a doc-terms file into dataframes containing the document's raw word counts and relative word frequencies.
Code adapted from https://github.com/rbudac/Text-Analysis-Notebooks/blob/master/Mann-Whitney.ipynb for we1s data.
"""
# define variables
texts = []
docs_relative = {}
docs_freqs = {}
num_words = 0
counts = defaultdict(int)
with open(bags) as f:
# grab filename and bag of words for every document in txt file.
for row in f:
row = row.strip()
row = row.split(' ')
filename = row[0]
x = len(row)
words = row[2:x]
# add each word in each doc to a dict of dicts and count raw frequencies
for word in words:
counts[word] += 1
num_words += len(words)
# create dicts for relative frequencies and for raw frequencies of each word in each doc
relativefreqs = {}
freqs = {}
# add words and frequencies to dictionaries
for word, rawCount in counts.items():
relativefreqs[word] = rawCount / float(num_words)
freqs[word] = rawCount
# reset counts to use for the next doc
counts[word] = 0
# add relative and raw freqs for each doc to overall dictionary for all docs, filenames are keys
docs_relative[filename] = relativefreqs
docs_freqs[filename] = freqs
# convert dicts to pandas dataframes and return dataframes.
# the dataframes we are creating here are sparse matrices of EVERY word in EVERY doc in the input file.
# as a result, they can get huge very quickly.
# loading anything over ~4000 documents and handling via pandas can cause memory problems bc of the large
# vocabulary size. this code is therefore not extensible to large datasets.
# this is why this notebook encourages users to work with small samples of their data.
# for large datasets, we recommend rewriting our code to take advantage of the parquet file format
# (for more on pandas integration with parquet, see https://pandas.pydata.org/pandas-docs/version/0.21/io.html#io-parquet)
# or using numpy arrays. Sorry we didn't do this ourselves.
    df_relative = pd.DataFrame(docs_relative)
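    # assumed completion (not verified against the original source): return both
    # frames, as the docstring above states
    df_freqs = pd.DataFrame(docs_freqs)
    return df_relative, df_freqs


# Hedged sketch (not part of the original module): one way to combine findFreq
# with the Wilcoxon rank-sum test imported above. The word and the doc-terms
# file paths passed in are hypothetical placeholders.
def compare_word_between_samples(word, docterms_a, docterms_b):
    """Return the rank-sum statistic and p-value for one word across two samples."""
    rel_a, _ = findFreq(docterms_a)  # rows are words, columns are documents
    rel_b, _ = findFreq(docterms_b)
    # words missing from a document show up as NaN in these frames, so drop them
    freqs_a = rel_a.loc[word].dropna()
    freqs_b = rel_b.loc[word].dropna()
    return ranksums(freqs_a, freqs_b)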
# MIT License
# Copyright (c) 2019, INRIA
# Copyright (c) 2019, University of Lille
# All rights reserved.
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from typing import Iterable
try:
import pandas
except ImportError:
import logging
logging.getLogger().info("Pandas is not installed.")
from . import EnergyHandler, UnconsistantSamplesError
from ..energy_trace import EnergyTrace, EnergySample
def _gen_column_names(samples):
sample = samples[0]
names = ['timestamp', 'tag', 'duration']
for domain_name in sample.energy:
names.append(domain_name)
return names
def _gen_data(samples):
data = []
for sample in samples:
data.append(_gen_row(sample))
return data
def _gen_row(sample):
row = [sample.timestamp, sample.tag, sample.duration]
for domain_name in sample.energy:
row.append(sample.energy[domain_name])
return row
def trace_to_dataframe(trace: Iterable[EnergySample]) -> pandas.DataFrame:
"""
convert an energy trace into a pandas DataFrame
"""
if len(trace) == 0:
        return pandas.DataFrame()
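    # a minimal sketch of the rest of the conversion, assuming the module-level
    # helpers above (_gen_column_names / _gen_data) are what build the frame:
    # one row per sample, one column per timestamp/tag/duration/energy domain
    return pandas.DataFrame(columns=_gen_column_names(trace), data=_gen_data(trace))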
import pandas as pd
import numpy as np
import scipy as sp
import argparse
import os
import gc
import time
from base import *
from features import *
from datetime import datetime
from sklearn.externals import joblib
from sklearn.model_selection import cross_val_score, StratifiedKFold
basepath = os.path.expanduser('../')
SEED = 1231
np.random.seed(SEED)
#############################################################################################################
# EXPERIMENT PARAMETERS #
#############################################################################################################
COLS_TO_REMOVE = ['TARGET',
"due_to_paid_3",
"instalment_dpd_num_147",
"instalment_amount_diff_num_143",
"total_cash_credit_dpd",
"due_to_paid_2",
"instalment_amount_diff_num_169",
"NONLIVINGAPARTMENTS_AVG",
"instalment_amount_diff_num_48",
"instalment_amount_diff_num_31",
"instalment_dpd_num_100",
"instalment_amount_diff_num_16",
"instalment_dpd_num_144",
"instalment_amount_diff_num_18",
"instalment_amount_diff_num_190",
"instalment_dpd_num_38",
"instalment_dpd_num_22",
"HOUR_APPR_PROCESS_START_7",
"instalment_dpd_num_191",
"instalment_amount_diff_num_170",
"instalment_amount_diff_num_69",
"instalment_dpd_num_171",
"instalment_amount_diff_num_212",
"instalment_dpd_num_175",
"instalment_dpd_num_72",
"instalment_dpd_num_97",
"instalment_amount_diff_num_192",
"instalment_amount_diff_num_26",
"instalment_amount_diff_num_160",
"instalment_dpd_num_57",
"bureau_credit_type_7.0",
"instalment_dpd_num_184",
"instalment_amount_diff_num_239",
"instalment_amount_diff_num_38",
"change_in_credit_limit_ot",
"instalment_amount_diff_num_131",
"instalment_amount_diff_num_130",
"mean_NAME_INCOME_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_146",
"instalment_amount_diff_num_198",
"instalment_amount_diff_num_39",
"instalment_amount_diff_num_6",
"instalment_dpd_num_194",
"instalment_amount_diff_num_204",
"instalment_dpd_num_51",
"due_to_paid_15",
"bureau_credit_type_14.0",
"instalment_dpd_num_168",
"instalment_dpd_num_160",
"instalment_amount_diff_num_90",
"instalment_dpd_num_78",
"HOUR_APPR_PROCESS_START_18",
"NONLIVINGAPARTMENTS_MEDI",
"instalment_amount_diff_num_33",
"instalment_amount_diff_num_178",
"instalment_dpd_num_136",
"instalment_dpd_num_17",
"instalment_amount_diff_num_89",
"prev_credit_year_4",
"instalment_amount_diff_num_105",
"instalment_dpd_num_64",
"instalment_dpd_num_21",
"NAME_GOODS_CATEGORY_19",
"instalment_amount_diff_num_194",
"instalment_dpd_num_114",
"instalment_dpd_num_134",
"instalment_dpd_num_98",
"due_to_paid_9",
"instalment_dpd_num_84",
"STATUS1.0",
"instalment_amount_diff_num_127",
"instalment_amount_diff_num_40",
"bureau_credit_type_5.0",
"prev_credit_year_5",
"instalment_dpd_num_127",
"instalment_amount_diff_num_56",
"PRODUCT_COMBINATION_9",
"instalment_amount_diff_num_155",
"instalment_amount_diff_num_219",
"due_to_paid_1",
"instalment_dpd_num_116",
"instalment_dpd_num_35",
"instalment_amount_diff_num_1",
"instalment_dpd_num_154",
"instalment_amount_diff_num_50",
"instalment_amount_diff_num_211",
"prev_credit_year_10",
"instalment_dpd_num_67",
"instalment_dpd_num_174",
"mean_OCCUPATION_TYPE_AMT_CREDIT",
"bbal_2",
"instalment_dpd_num_36",
"instalment_dpd_num_81",
"instalment_dpd_num_213",
"instalment_dpd_num_71",
"instalment_dpd_num_55",
"instalment_amount_diff_num_156",
"CNT_FAM_MEMBERS",
"bureau_credit_type_13.0",
"instalment_dpd_num_125",
"instalment_dpd_num_41",
"range_min_max_credit_limit",
"instalment_amount_diff_num_3",
"instalment_amount_diff_num_96",
"instalment_dpd_num_59",
"due_to_paid_19",
"instalment_dpd_num_69",
"instalment_dpd_num_130",
"instalment_dpd_num_204",
"instalment_amount_diff_num_177",
"instalment_dpd_num_135",
"NAME_GOODS_CATEGORY_2",
"instalment_amount_diff_num_150",
"instalment_dpd_num_143",
"instalment_amount_diff_num_122",
"instalment_dpd_num_122",
"instalment_dpd_num_117",
"instalment_dpd_num_146",
"instalment_amount_diff_num_55",
"due_to_paid_17",
"instalment_amount_diff_num_30",
"instalment_amount_diff_num_136",
"instalment_amount_diff_num_180",
"instalment_amount_diff_num_162",
"instalment_dpd_num_170",
"instalment_amount_diff_num_71",
"instalment_amount_diff_num_42",
"due_to_paid_4",
"mean_NAME_INCOME_TYPE_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_23",
"PRODUCT_COMBINATION_8",
"instalment_dpd_num_159",
"instalment_amount_diff_num_118",
"instalment_amount_diff_num_78",
"instalment_dpd_num_227",
"instalment_amount_diff_num_187",
"instalment_dpd_num_214",
"instalment_amount_diff_num_145",
"instalment_dpd_num_158",
"instalment_dpd_num_203",
"instalment_amount_diff_num_161",
"instalment_amount_diff_num_21",
"NUM_NULLS_EXT_SCORES",
"instalment_dpd_num_65",
"NAME_GOODS_CATEGORY_5",
"prev_credit_year_3",
"instalment_amount_diff_num_191",
"mean_cb_credit_annuity",
"instalment_amount_diff_num_17",
"instalment_dpd_num_63",
"instalment_amount_diff_num_129",
"instalment_amount_diff_num_148",
"instalment_amount_diff_num_27",
"instalment_dpd_num_121",
"HOUSETYPE_MODE",
"instalment_dpd_num_195",
"instalment_amount_diff_num_68",
"instalment_dpd_num_186",
"instalment_amount_diff_num_245",
"instalment_dpd_num_148",
"instalment_amount_diff_num_41",
"instalment_dpd_num_66",
"num_high_int_no_info_loans",
"mean_NAME_EDUCATION_TYPE_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_128",
"bbal_4",
"instalment_dpd_num_95",
"instalment_dpd_num_155",
"instalment_dpd_num_89",
"instalment_dpd_num_132",
"instalment_amount_diff_num_28",
"instalment_dpd_num_52",
"instalment_dpd_num_40",
"instalment_dpd_num_190",
"instalment_amount_diff_num_99",
"instalment_dpd_num_92",
"instalment_dpd_num_109",
"instalment_dpd_num_115",
"instalment_dpd_num_149",
"instalment_amount_diff_num_104",
"instalment_amount_diff_num_158",
"instalment_dpd_num_180",
"instalment_dpd_num_230",
"instalment_dpd_num_208",
"instalment_amount_diff_num_222",
"instalment_amount_diff_num_199",
"bureau_credit_year_10",
"instalment_dpd_num_177",
"instalment_amount_diff_num_63",
"due_to_paid_20",
"instalment_amount_diff_num_19",
"instalment_dpd_num_61",
"instalment_amount_diff_num_32",
"instalment_dpd_num_210",
"instalment_amount_diff_num_116",
"instalment_dpd_num_140",
"mean_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_117",
"due_to_paid_13",
"NAME_INCOME_TYPE__7",
"instalment_amount_diff_num_188",
"instalment_dpd_num_198",
"instalment_amount_diff_num_34",
"instalment_amount_diff_num_262",
"instalment_dpd_num_202",
"instalment_amount_diff_num_53",
"instalment_amount_diff_num_108",
"instalment_dpd_num_56",
"instalment_amount_diff_num_214",
"FONDKAPREMONT_MODE",
"instalment_dpd_num_192",
"instalment_amount_diff_num_189",
"instalment_amount_diff_num_86",
"instalment_dpd_num_169",
"instalment_amount_diff_num_172",
"instalment_dpd_num_46",
"instalment_dpd_num_211",
"instalment_amount_diff_num_109",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_amount_diff_num_175",
"instalment_amount_diff_num_168",
"MONTHS_BALANCE_median",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_58",
"instalment_amount_diff_num_51",
"instalment_dpd_num_74",
"instalment_dpd_num_113",
"instalment_amount_diff_num_137",
"instalment_dpd_num_39",
"instalment_amount_diff_num_25",
"NAME_YIELD_GROUP_3",
"instalment_dpd_num_165",
"instalment_amount_diff_num_107",
"HOUR_APPR_PROCESS_START_16",
"prev_credit_year_11",
"CHANNEL_TYPE_6",
"instalment_amount_diff_num_88",
"instalment_amount_diff_num_64",
"instalment_amount_diff_num_201",
"ELEVATORS_AVG",
"prev_credit_year_2",
"instalment_amount_diff_num_37",
"instalment_dpd_num_54",
"instalment_amount_diff_num_153",
"instalment_amount_diff_num_203",
"instalment_dpd_num_166",
"ENTRANCES_MEDI",
"instalment_amount_diff_num_166",
"mean_NAME_INCOME_TYPE_DAYS_BIRTH",
"due_to_paid_10",
"instalment_amount_diff_num_141",
"instalment_dpd_num_96",
"instalment_dpd_num_167",
"instalment_amount_diff_num_140",
"instalment_amount_diff_num_77",
"NAME_FAMILY_STATUS",
"instalment_dpd_num_133",
"NAME_TYPE_SUITE",
"instalment_amount_diff_num_134",
"instalment_amount_diff_num_72",
"instalment_amount_diff_num_80",
"instalment_dpd_num_193",
"instalment_dpd_num_86",
"instalment_amount_diff_num_207",
"instalment_amount_diff_num_234",
"instalment_dpd_num_29",
"instalment_amount_diff_num_196",
"instalment_amount_diff_num_195",
"instalment_dpd_num_75",
"bureau_bal_pl_5",
"instalment_amount_diff_num_73",
"instalment_amount_diff_num_81",
"instalment_amount_diff_num_215",
"due_to_paid_23",
"instalment_amount_diff_num_114",
"instalment_amount_diff_num_157",
"bureau_credit_status_1.0",
"instalment_amount_diff_num_2",
"instalment_dpd_num_94",
"instalment_amount_diff_num_45",
"instalment_amount_diff_num_4",
"instalment_amount_diff_num_22",
"instalment_amount_diff_num_74",
"instalment_amount_diff_num_70",
"bureau_credit_year_11",
"instalment_dpd_num_85",
"instalment_amount_diff_num_184",
"instalment_amount_diff_num_126",
"instalment_dpd_num_14",
"instalment_amount_diff_num_62",
"instalment_amount_diff_num_121",
"instalment_amount_diff_num_15",
"instalment_dpd_num_172",
"instalment_dpd_num_142",
"mean_OCCUPATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_44",
"instalment_amount_diff_num_100",
"instalment_dpd_num_58",
"instalment_amount_diff_num_49",
"instalment_dpd_num_26",
"instalment_dpd_num_79",
"instalment_dpd_num_119",
"instalment_amount_diff_num_149",
"bbal_3",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_BIRTH",
"due_to_paid_22",
"instalment_amount_diff_num_202",
"instalment_amount_diff_num_208",
"instalment_dpd_num_47",
"young_age",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"due_to_paid_24",
"instalment_dpd_num_212",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_CREDIT",
"mean_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_44",
"instalment_amount_diff_num_182",
"due_to_paid_7",
"instalment_amount_diff_num_154",
"instalment_amount_diff_num_95",
"instalment_dpd_num_93",
"instalment_dpd_num_179",
"due_to_paid_11",
"bureau_credit_type_9.0",
"instalment_amount_diff_num_111",
"prev_credit_year_-1",
"mean_NAME_EDUCATION_TYPE_AMT_INCOME_TOTAL",
"instalment_dpd_num_189",
"instalment_amount_diff_num_256",
"instalment_dpd_num_90",
"instalment_amount_diff_num_254",
"diff_education_ext_income_mean",
"AMT_INCOME_TOTAL",
"instalment_amount_diff_num_29",
"instalment_amount_diff_num_60",
"prev_credit_year_9",
"instalment_amount_diff_num_210",
"mean_NAME_INCOME_TYPE_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_176",
"instalment_amount_diff_num_98",
"instalment_amount_diff_num_47",
"instalment_amount_diff_num_173",
"HOUR_APPR_PROCESS_START_12",
"DPD_9",
"instalment_dpd_num_42",
"instalment_amount_diff_num_43",
"bureau_credit_type_11.0",
"instalment_amount_diff_num_221",
"instalment_dpd_num_138",
"instalment_amount_diff_num_128",
"instalment_dpd_num_108",
"mean_OCCUPATION_TYPE_EXT_SOURCE_2",
"instalment_dpd_num_123",
"instalment_amount_diff_num_76",
"instalment_dpd_num_24",
"instalment_dpd_num_139",
"prev_credit_year_7",
"credit_total_instalment_regular",
"due_to_paid_18",
"instalment_amount_diff_num_164",
"instalment_amount_diff_num_268",
"instalment_dpd_num_183",
"instalment_dpd_num_145",
"instalment_dpd_num_201",
"instalment_amount_diff_num_57",
"mean_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_99",
"due_to_paid_25",
"instalment_dpd_num_137",
"instalment_dpd_num_73",
"instalment_dpd_num_68",
"instalment_amount_diff_num_183",
"instalment_dpd_num_30",
"instalment_dpd_num_70",
"instalment_dpd_num_37",
"NAME_EDUCATION_TYPE__1",
"instalment_dpd_num_151",
"bureau_credit_year_9",
"instalment_dpd_num_152",
"due_to_paid_5",
"instalment_dpd_num_207",
"child_to_non_child_ratio",
"instalment_dpd_num_87",
"bureau_credit_type_8.0",
"due_to_paid_6",
"due_to_paid_16",
"instalment_amount_diff_num_110",
"NONLIVINGAPARTMENTS_MODE",
"instalment_amount_diff_num_181",
"bureau_credit_year_0",
"instalment_amount_diff_num_91",
"instalment_amount_diff_num_152",
"bureau_bal_pl_3",
"instalment_dpd_num_45",
"instalment_amount_diff_num_54",
"instalment_dpd_num_173",
"instalment_dpd_num_120",
"instalment_dpd_num_31",
"due_to_paid_0",
"instalment_amount_diff_num_179",
"instalment_dpd_num_124",
"instalment_amount_diff_num_159",
"instalment_amount_diff_num_65",
"instalment_dpd_num_176",
"instalment_dpd_num_33",
"instalment_amount_diff_num_167",
"bureau_credit_year_8",
"instalment_dpd_num_53",
"instalment_dpd_num_164",
"EMERGENCYSTATE_MODE",
"instalment_dpd_num_188",
"instalment_amount_diff_num_79",
"instalment_dpd_num_141",
"bureau_credit_type_1.0",
"instalment_amount_diff_num_82",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_CNT_CHILDREN",
"cash_dpd_sum",
"instalment_amount_diff_num_125",
"FLAG_OWN_CAR",
"instalment_amount_diff_num_132",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_ID_PUBLISH",
"instalment_amount_diff_num_8",
"instalment_amount_diff_num_138",
"instalment_dpd_num_80",
"instalment_amount_diff_num_106",
"instalment_amount_diff_num_135",
"bbal_5",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_CREDIT",
"instalment_dpd_num_62",
"instalment_dpd_num_126",
"due_to_paid_14",
"HOUR_APPR_PROCESS_START_11",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_139",
"instalment_amount_diff_num_87",
"instalment_amount_diff_num_61",
"most_recent_min_pos_cash_dpd",
"instalment_dpd_num_77",
"instalment_amount_diff_num_119",
"instalment_dpd_num_150",
"instalment_amount_diff_num_103",
"instalment_amount_diff_num_59",
"HOUR_APPR_PROCESS_START_17",
"instalment_dpd_num_82",
"mean_NAME_EDUCATION_TYPE_AMT_CREDIT",
"bureau_credit_type_2.0",
"bureau_credit_type_12.0",
"mean_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_97",
"instalment_amount_diff_num_36",
"instalment_amount_diff_num_66",
"CODE_GENDER",
"instalment_dpd_num_112",
"instalment_dpd_num_34",
"HOUR_APPR_PROCESS_START_9",
"YEARS_BUILD_AVG",
"max_credit_term",
"instalment_amount_diff_num_147",
"due_to_paid_21",
"instalment_amount_diff_num_151",
"instalment_dpd_num_129",
"instalment_amount_diff_num_123",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_dpd_num_215",
"instalment_dpd_num_218",
"instalment_amount_diff_num_94",
"instalment_dpd_num_178",
"instalment_dpd_num_118",
"instalment_dpd_num_162",
"STATUS7.0",
"prev_credit_year_8",
"HOUR_APPR_PROCESS_START_6",
"instalment_dpd_num_60",
"instalment_amount_diff_num_142",
"instalment_amount_diff_num_186",
"instalment_dpd_num_76",
"instalment_amount_diff_num_75",
"instalment_dpd_num_88",
"instalment_amount_diff_num_35",
"instalment_amount_diff_num_102",
"instalment_amount_diff_num_67",
"instalment_amount_diff_num_237",
"instalment_dpd_num_187",
"instalment_dpd_num_50",
"credit_dpd_sum",
"instalment_dpd_num_196",
"instalment_amount_diff_num_84",
"instalment_dpd_num_181",
"instalment_dpd_num_49",
"instalment_dpd_num_161",
"CNT_CHILDREN",
"instalment_dpd_num_157",
"total_credit_debt_active_to_closed",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"bureau_credit_type_6.0",
"instalment_amount_diff_num_174",
"mean_OCCUPATION_TYPE_OWN_CAR_AGE",
"instalment_amount_diff_num_133",
"instalment_amount_diff_num_144",
"instalment_dpd_num_91",
"instalment_amount_diff_num_124",
"instalment_amount_diff_num_120",
"instalment_amount_diff_num_85",
"due_to_paid_12",
"instalment_dpd_num_156",
"instalment_amount_diff_num_185",
"bureau_credit_year_-1",
"instalment_dpd_num_83",
"instalment_amount_diff_num_52",
"instalment_dpd_num_163",
"instalment_amount_diff_num_12",
"due_to_paid_8",
"instalment_dpd_num_131",
"instalment_dpd_num_32",
"FLOORSMAX_MEDI",
"NAME_EDUCATION_TYPE__4",
"instalment_amount_diff_num_93",
"instalment_dpd_num_110",
"instalment_amount_diff_num_113",
"instalment_dpd_num_185",
"instalment_amount_diff_num_163",
"instalment_amount_diff_num_92",
"instalment_amount_diff_num_264",
"instalment_amount_diff_num_112",
"children_ratio",
"instalment_amount_diff_num_165",
"ELEVATORS_MEDI",
"instalment_amount_diff_num_197",
"instalment_amount_diff_num_115",
"instalment_amount_diff_num_171",
"num_diff_credits",
"instalment_dpd_num_200",
"instalment_dpd_num_182",
"instalment_amount_diff_num_83",
"bureau_credit_type_0.0",
"instalment_amount_diff_num_13",
"FLOORSMAX_MODE",
"instalment_amount_diff_num_193",
"instalment_dpd_num_153",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_BIRTH",
"STATUS2.0",
"mean_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_111""due_to_paid_3",
"instalment_dpd_num_147",
"instalment_amount_diff_num_143",
"total_cash_credit_dpd",
"due_to_paid_2",
"instalment_amount_diff_num_169",
"NONLIVINGAPARTMENTS_AVG",
"instalment_amount_diff_num_48",
"instalment_amount_diff_num_31",
"instalment_dpd_num_100",
"instalment_amount_diff_num_16",
"instalment_dpd_num_144",
"instalment_amount_diff_num_18",
"instalment_amount_diff_num_190",
"instalment_dpd_num_38",
"instalment_dpd_num_22",
"HOUR_APPR_PROCESS_START_7",
"instalment_dpd_num_191",
"instalment_amount_diff_num_170",
"instalment_amount_diff_num_69",
"instalment_dpd_num_171",
"instalment_amount_diff_num_212",
"instalment_dpd_num_175",
"instalment_dpd_num_72",
"instalment_dpd_num_97",
"instalment_amount_diff_num_192",
"instalment_amount_diff_num_26",
"instalment_amount_diff_num_160",
"instalment_dpd_num_57",
"bureau_credit_type_7.0",
"instalment_dpd_num_184",
"instalment_amount_diff_num_239",
"instalment_amount_diff_num_38",
"change_in_credit_limit_ot",
"instalment_amount_diff_num_131",
"instalment_amount_diff_num_130",
"mean_NAME_INCOME_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_146",
"instalment_amount_diff_num_198",
"instalment_amount_diff_num_39",
"instalment_amount_diff_num_6",
"instalment_dpd_num_194",
"instalment_amount_diff_num_204",
"instalment_dpd_num_51",
"due_to_paid_15",
"bureau_credit_type_14.0",
"instalment_dpd_num_168",
"instalment_dpd_num_160",
"instalment_amount_diff_num_90",
"instalment_dpd_num_78",
"HOUR_APPR_PROCESS_START_18",
"NONLIVINGAPARTMENTS_MEDI",
"instalment_amount_diff_num_33",
"instalment_amount_diff_num_178",
"instalment_dpd_num_136",
"instalment_dpd_num_17",
"instalment_amount_diff_num_89",
"prev_credit_year_4",
"instalment_amount_diff_num_105",
"instalment_dpd_num_64",
"instalment_dpd_num_21",
"NAME_GOODS_CATEGORY_19",
"instalment_amount_diff_num_194",
"instalment_dpd_num_114",
"instalment_dpd_num_134",
"instalment_dpd_num_98",
"due_to_paid_9",
"instalment_dpd_num_84",
"STATUS1.0",
"instalment_amount_diff_num_127",
"instalment_amount_diff_num_40",
"bureau_credit_type_5.0",
"prev_credit_year_5",
"instalment_dpd_num_127",
"instalment_amount_diff_num_56",
"PRODUCT_COMBINATION_9",
"instalment_amount_diff_num_155",
"instalment_amount_diff_num_219",
"due_to_paid_1",
"instalment_dpd_num_116",
"instalment_dpd_num_35",
"instalment_amount_diff_num_1",
"instalment_dpd_num_154",
"instalment_amount_diff_num_50",
"instalment_amount_diff_num_211",
"prev_credit_year_10",
"instalment_dpd_num_67",
"instalment_dpd_num_174",
"mean_OCCUPATION_TYPE_AMT_CREDIT",
"bbal_2",
"instalment_dpd_num_36",
"instalment_dpd_num_81",
"instalment_dpd_num_213",
"instalment_dpd_num_71",
"instalment_dpd_num_55",
"instalment_amount_diff_num_156",
"CNT_FAM_MEMBERS",
"bureau_credit_type_13.0",
"instalment_dpd_num_125",
"instalment_dpd_num_41",
"range_min_max_credit_limit",
"instalment_amount_diff_num_3",
"instalment_amount_diff_num_96",
"instalment_dpd_num_59",
"due_to_paid_19",
"instalment_dpd_num_69",
"instalment_dpd_num_130",
"instalment_dpd_num_204",
"instalment_amount_diff_num_177",
"instalment_dpd_num_135",
"NAME_GOODS_CATEGORY_2",
"instalment_amount_diff_num_150",
"instalment_dpd_num_143",
"instalment_amount_diff_num_122",
"instalment_dpd_num_122",
"instalment_dpd_num_117",
"instalment_dpd_num_146",
"instalment_amount_diff_num_55",
"due_to_paid_17",
"instalment_amount_diff_num_30",
"instalment_amount_diff_num_136",
"instalment_amount_diff_num_180",
"instalment_amount_diff_num_162",
"instalment_dpd_num_170",
"instalment_amount_diff_num_71",
"instalment_amount_diff_num_42",
"due_to_paid_4",
"mean_NAME_INCOME_TYPE_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_23",
"PRODUCT_COMBINATION_8",
"instalment_dpd_num_159",
"instalment_amount_diff_num_118",
"instalment_amount_diff_num_78",
"instalment_dpd_num_227",
"instalment_amount_diff_num_187",
"instalment_dpd_num_214",
"instalment_amount_diff_num_145",
"instalment_dpd_num_158",
"instalment_dpd_num_203",
"instalment_amount_diff_num_161",
"instalment_amount_diff_num_21",
"NUM_NULLS_EXT_SCORES",
"instalment_dpd_num_65",
"NAME_GOODS_CATEGORY_5",
"prev_credit_year_3",
"instalment_amount_diff_num_191",
"mean_cb_credit_annuity",
"instalment_amount_diff_num_17",
"instalment_dpd_num_63",
"instalment_amount_diff_num_129",
"instalment_amount_diff_num_148",
"instalment_amount_diff_num_27",
"instalment_dpd_num_121",
"HOUSETYPE_MODE",
"instalment_dpd_num_195",
"instalment_amount_diff_num_68",
"instalment_dpd_num_186",
"instalment_amount_diff_num_245",
"instalment_dpd_num_148",
"instalment_amount_diff_num_41",
"instalment_dpd_num_66",
"num_high_int_no_info_loans",
"mean_NAME_EDUCATION_TYPE_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_128",
"bbal_4",
"instalment_dpd_num_95",
"instalment_dpd_num_155",
"instalment_dpd_num_89",
"instalment_dpd_num_132",
"instalment_amount_diff_num_28",
"instalment_dpd_num_52",
"instalment_dpd_num_40",
"instalment_dpd_num_190",
"instalment_amount_diff_num_99",
"instalment_dpd_num_92",
"instalment_dpd_num_109",
"instalment_dpd_num_115",
"instalment_dpd_num_149",
"instalment_amount_diff_num_104",
"instalment_amount_diff_num_158",
"instalment_dpd_num_180",
"instalment_dpd_num_230",
"instalment_dpd_num_208",
"instalment_amount_diff_num_222",
"instalment_amount_diff_num_199",
"bureau_credit_year_10",
"instalment_dpd_num_177",
"instalment_amount_diff_num_63",
"due_to_paid_20",
"instalment_amount_diff_num_19",
"instalment_dpd_num_61",
"instalment_amount_diff_num_32",
"instalment_dpd_num_210",
"instalment_amount_diff_num_116",
"instalment_dpd_num_140",
"mean_OCCUPATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_117",
"due_to_paid_13",
"NAME_INCOME_TYPE__7",
"instalment_amount_diff_num_188",
"instalment_dpd_num_198",
"instalment_amount_diff_num_34",
"instalment_amount_diff_num_262",
"instalment_dpd_num_202",
"instalment_amount_diff_num_53",
"instalment_amount_diff_num_108",
"instalment_dpd_num_56",
"instalment_amount_diff_num_214",
"FONDKAPREMONT_MODE",
"instalment_dpd_num_192",
"instalment_amount_diff_num_189",
"instalment_amount_diff_num_86",
"instalment_dpd_num_169",
"instalment_amount_diff_num_172",
"instalment_dpd_num_46",
"instalment_dpd_num_211",
"instalment_amount_diff_num_109",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_amount_diff_num_175",
"instalment_amount_diff_num_168",
"MONTHS_BALANCE_median",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_58",
"instalment_amount_diff_num_51",
"instalment_dpd_num_74",
"instalment_dpd_num_113",
"instalment_amount_diff_num_137",
"instalment_dpd_num_39",
"instalment_amount_diff_num_25",
"NAME_YIELD_GROUP_3",
"instalment_dpd_num_165",
"instalment_amount_diff_num_107",
"HOUR_APPR_PROCESS_START_16",
"prev_credit_year_11",
"CHANNEL_TYPE_6",
"instalment_amount_diff_num_88",
"instalment_amount_diff_num_64",
"instalment_amount_diff_num_201",
"ELEVATORS_AVG",
"prev_credit_year_2",
"instalment_amount_diff_num_37",
"instalment_dpd_num_54",
"instalment_amount_diff_num_153",
"instalment_amount_diff_num_203",
"instalment_dpd_num_166",
"ENTRANCES_MEDI",
"instalment_amount_diff_num_166",
"mean_NAME_INCOME_TYPE_DAYS_BIRTH",
"due_to_paid_10",
"instalment_amount_diff_num_141",
"instalment_dpd_num_96",
"instalment_dpd_num_167",
"instalment_amount_diff_num_140",
"instalment_amount_diff_num_77",
"NAME_FAMILY_STATUS",
"instalment_dpd_num_133",
"NAME_TYPE_SUITE",
"instalment_amount_diff_num_134",
"instalment_amount_diff_num_72",
"instalment_amount_diff_num_80",
"instalment_dpd_num_193",
"instalment_dpd_num_86",
"instalment_amount_diff_num_207",
"instalment_amount_diff_num_234",
"instalment_dpd_num_29",
"instalment_amount_diff_num_196",
"instalment_amount_diff_num_195",
"instalment_dpd_num_75",
"bureau_bal_pl_5",
"instalment_amount_diff_num_73",
"instalment_amount_diff_num_81",
"instalment_amount_diff_num_215",
"due_to_paid_23",
"instalment_amount_diff_num_114",
"instalment_amount_diff_num_157",
"bureau_credit_status_1.0",
"instalment_amount_diff_num_2",
"instalment_dpd_num_94",
"instalment_amount_diff_num_45",
"instalment_amount_diff_num_4",
"instalment_amount_diff_num_22",
"instalment_amount_diff_num_74",
"instalment_amount_diff_num_70",
"bureau_credit_year_11",
"instalment_dpd_num_85",
"instalment_amount_diff_num_184",
"instalment_amount_diff_num_126",
"instalment_dpd_num_14",
"instalment_amount_diff_num_62",
"instalment_amount_diff_num_121",
"instalment_amount_diff_num_15",
"instalment_dpd_num_172",
"instalment_dpd_num_142",
"mean_OCCUPATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_44",
"instalment_amount_diff_num_100",
"instalment_dpd_num_58",
"instalment_amount_diff_num_49",
"instalment_dpd_num_26",
"instalment_dpd_num_79",
"instalment_dpd_num_119",
"instalment_amount_diff_num_149",
"bbal_3",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_BIRTH",
"due_to_paid_22",
"instalment_amount_diff_num_202",
"instalment_amount_diff_num_208",
"instalment_dpd_num_47",
"young_age",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"due_to_paid_24",
"instalment_dpd_num_212",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_AMT_CREDIT",
"mean_OCCUPATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_44",
"instalment_amount_diff_num_182",
"due_to_paid_7",
"instalment_amount_diff_num_154",
"instalment_amount_diff_num_95",
"instalment_dpd_num_93",
"instalment_dpd_num_179",
"due_to_paid_11",
"bureau_credit_type_9.0",
"instalment_amount_diff_num_111",
"prev_credit_year_-1",
"mean_NAME_EDUCATION_TYPE_AMT_INCOME_TOTAL",
"instalment_dpd_num_189",
"instalment_amount_diff_num_256",
"instalment_dpd_num_90",
"instalment_amount_diff_num_254",
"diff_education_ext_income_mean",
"AMT_INCOME_TOTAL",
"instalment_amount_diff_num_29",
"instalment_amount_diff_num_60",
"prev_credit_year_9",
"instalment_amount_diff_num_210",
"mean_NAME_INCOME_TYPE_AMT_INCOME_TOTAL",
"instalment_amount_diff_num_176",
"instalment_amount_diff_num_98",
"instalment_amount_diff_num_47",
"instalment_amount_diff_num_173",
"HOUR_APPR_PROCESS_START_12",
"DPD_9",
"instalment_dpd_num_42",
"instalment_amount_diff_num_43",
"bureau_credit_type_11.0",
"instalment_amount_diff_num_221",
"instalment_dpd_num_138",
"instalment_amount_diff_num_128",
"instalment_dpd_num_108",
"mean_OCCUPATION_TYPE_EXT_SOURCE_2",
"instalment_dpd_num_123",
"instalment_amount_diff_num_76",
"instalment_dpd_num_24",
"instalment_dpd_num_139",
"prev_credit_year_7",
"credit_total_instalment_regular",
"due_to_paid_18",
"instalment_amount_diff_num_164",
"instalment_amount_diff_num_268",
"instalment_dpd_num_183",
"instalment_dpd_num_145",
"instalment_dpd_num_201",
"instalment_amount_diff_num_57",
"mean_NAME_INCOME_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_99",
"due_to_paid_25",
"instalment_dpd_num_137",
"instalment_dpd_num_73",
"instalment_dpd_num_68",
"instalment_amount_diff_num_183",
"instalment_dpd_num_30",
"instalment_dpd_num_70",
"instalment_dpd_num_37",
"NAME_EDUCATION_TYPE__1",
"instalment_dpd_num_151",
"bureau_credit_year_9",
"instalment_dpd_num_152",
"due_to_paid_5",
"instalment_dpd_num_207",
"child_to_non_child_ratio",
"instalment_dpd_num_87",
"bureau_credit_type_8.0",
"due_to_paid_6",
"due_to_paid_16",
"instalment_amount_diff_num_110",
"NONLIVINGAPARTMENTS_MODE",
"instalment_amount_diff_num_181",
"bureau_credit_year_0",
"instalment_amount_diff_num_91",
"instalment_amount_diff_num_152",
"bureau_bal_pl_3",
"instalment_dpd_num_45",
"instalment_amount_diff_num_54",
"instalment_dpd_num_173",
"instalment_dpd_num_120",
"instalment_dpd_num_31",
"due_to_paid_0",
"instalment_amount_diff_num_179",
"instalment_dpd_num_124",
"instalment_amount_diff_num_159",
"instalment_amount_diff_num_65",
"instalment_dpd_num_176",
"instalment_dpd_num_33",
"instalment_amount_diff_num_167",
"bureau_credit_year_8",
"instalment_dpd_num_53",
"instalment_dpd_num_164",
"EMERGENCYSTATE_MODE",
"instalment_dpd_num_188",
"instalment_amount_diff_num_79",
"instalment_dpd_num_141",
"bureau_credit_type_1.0",
"instalment_amount_diff_num_82",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_CNT_CHILDREN",
"cash_dpd_sum",
"instalment_amount_diff_num_125",
"FLAG_OWN_CAR",
"instalment_amount_diff_num_132",
"mean_CODE_GENDER_REG_CITY_NOT_WORK_CITY_DAYS_ID_PUBLISH",
"instalment_amount_diff_num_8",
"instalment_amount_diff_num_138",
"instalment_dpd_num_80",
"instalment_amount_diff_num_106",
"instalment_amount_diff_num_135",
"bbal_5",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_CREDIT",
"instalment_dpd_num_62",
"instalment_dpd_num_126",
"due_to_paid_14",
"HOUR_APPR_PROCESS_START_11",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_BIRTH",
"instalment_amount_diff_num_139",
"instalment_amount_diff_num_87",
"instalment_amount_diff_num_61",
"most_recent_min_pos_cash_dpd",
"instalment_dpd_num_77",
"instalment_amount_diff_num_119",
"instalment_dpd_num_150",
"instalment_amount_diff_num_103",
"instalment_amount_diff_num_59",
"HOUR_APPR_PROCESS_START_17",
"instalment_dpd_num_82",
"mean_NAME_EDUCATION_TYPE_AMT_CREDIT",
"bureau_credit_type_2.0",
"bureau_credit_type_12.0",
"mean_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_amount_diff_num_97",
"instalment_amount_diff_num_36",
"instalment_amount_diff_num_66",
"CODE_GENDER",
"instalment_dpd_num_112",
"instalment_dpd_num_34",
"HOUR_APPR_PROCESS_START_9",
"YEARS_BUILD_AVG",
"max_credit_term",
"instalment_amount_diff_num_147",
"due_to_paid_21",
"instalment_amount_diff_num_151",
"instalment_dpd_num_129",
"instalment_amount_diff_num_123",
"mean_CODE_GENDER_NAME_EDUCATION_TYPE_AMT_ANNUITY",
"instalment_dpd_num_215",
"instalment_dpd_num_218",
"instalment_amount_diff_num_94",
"instalment_dpd_num_178",
"instalment_dpd_num_118",
"instalment_dpd_num_162",
"STATUS7.0",
"prev_credit_year_8",
"HOUR_APPR_PROCESS_START_6",
"instalment_dpd_num_60",
"instalment_amount_diff_num_142",
"instalment_amount_diff_num_186",
"instalment_dpd_num_76",
"instalment_amount_diff_num_75",
"instalment_dpd_num_88",
"instalment_amount_diff_num_35",
"instalment_amount_diff_num_102",
"instalment_amount_diff_num_67",
"instalment_amount_diff_num_237",
"instalment_dpd_num_187",
"instalment_dpd_num_50",
"credit_dpd_sum",
"instalment_dpd_num_196",
"instalment_amount_diff_num_84",
"instalment_dpd_num_181",
"instalment_dpd_num_49",
"instalment_dpd_num_161",
"CNT_CHILDREN",
"instalment_dpd_num_157",
"total_credit_debt_active_to_closed",
"mean_NAME_INCOME_TYPE_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"bureau_credit_type_6.0",
"instalment_amount_diff_num_174",
"mean_OCCUPATION_TYPE_OWN_CAR_AGE",
"instalment_amount_diff_num_133",
"instalment_amount_diff_num_144",
"instalment_dpd_num_91",
"instalment_amount_diff_num_124",
"instalment_amount_diff_num_120",
"instalment_amount_diff_num_85",
"due_to_paid_12",
"instalment_dpd_num_156",
"instalment_amount_diff_num_185",
"bureau_credit_year_-1",
"instalment_dpd_num_83",
"instalment_amount_diff_num_52",
"instalment_dpd_num_163",
"instalment_amount_diff_num_12",
"due_to_paid_8",
"instalment_dpd_num_131",
"instalment_dpd_num_32",
"FLOORSMAX_MEDI",
"NAME_EDUCATION_TYPE__4",
"instalment_amount_diff_num_93",
"instalment_dpd_num_110",
"instalment_amount_diff_num_113",
"instalment_dpd_num_185",
"instalment_amount_diff_num_163",
"instalment_amount_diff_num_92",
"instalment_amount_diff_num_264",
"instalment_amount_diff_num_112",
"children_ratio",
"instalment_amount_diff_num_165",
"ELEVATORS_MEDI",
"instalment_amount_diff_num_197",
"instalment_amount_diff_num_115",
"instalment_amount_diff_num_171",
"num_diff_credits",
"instalment_dpd_num_200",
"instalment_dpd_num_182",
"instalment_amount_diff_num_83",
"bureau_credit_type_0.0",
"instalment_amount_diff_num_13",
"FLOORSMAX_MODE",
"instalment_amount_diff_num_193",
"instalment_dpd_num_153",
"mean_NAME_FAMILY_STATUS_NAME_INCOME_TYPE_DAYS_BIRTH",
"STATUS2.0",
"mean_NAME_EDUCATION_TYPE_DAYS_EMPLOYED",
"instalment_dpd_num_111"
]
PARAMS = {
'num_boost_round': 20000,
'early_stopping_rounds': 200,
'objective': 'binary',
'boosting_type': 'gbdt',
'learning_rate': .01,
'metric': 'auc',
'num_leaves': 20,
'sub_feature': 0.05,
'bagging_fraction': 0.9,
'reg_lambda': 75,
'reg_alpha': 5,
'min_split_gain': .5,
'min_data_in_leaf': 15,
'min_sum_hessian_in_leaf': 1,
'nthread': 16,
'verbose': -1,
'seed': SEED
}
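# The dict above mixes LightGBM booster parameters with lgb.train() arguments
# ('num_boost_round', 'early_stopping_rounds'). Below is a hedged, illustrative
# sketch of how such a dict is typically consumed; it is NOT part of the original
# pipeline, the X/y arguments are assumed to come from elsewhere in this file,
# and the early_stopping_rounds / verbose_eval keywords follow the older
# (pre-4.0) LightGBM API that this code appears to target.
def _train_lgb_sketch(X_train, y_train, X_valid, y_valid, params=PARAMS):
    import lightgbm as lgb
    booster_params = dict(params)
    num_boost_round = booster_params.pop('num_boost_round')
    early_stopping_rounds = booster_params.pop('early_stopping_rounds')
    dtrain = lgb.Dataset(X_train, label=y_train)
    dvalid = lgb.Dataset(X_valid, label=y_valid, reference=dtrain)
    # train with early stopping on the validation AUC (metric set in PARAMS)
    return lgb.train(booster_params,
                     dtrain,
                     num_boost_round=num_boost_round,
                     valid_sets=[dtrain, dvalid],
                     early_stopping_rounds=early_stopping_rounds,
                     verbose_eval=200)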
PCA_PARAMS = {
'n_components': 10,
'whiten': True,
'random_state': SEED
}
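# A hedged sketch (illustrative only, never called by the pipeline) of how
# PCA_PARAMS is typically plugged into scikit-learn's PCA; the input matrix X
# is assumed to be a fully numeric, imputed feature matrix such as the one
# produced by add_missing_values_flag() further below.
def _pca_sketch(X):
    from sklearn.decomposition import PCA
    pca = PCA(**PCA_PARAMS)       # n_components=10, whiten=True, random_state=SEED
    return pca.fit_transform(X)   # returns the 10 principal components per row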
MODEL_FILENAME = 'v145'
SAMPLE_SIZE = .3
# NOTE: a column listed in FREQ_ENCODING_COLS must not also appear in OHE_COLS.
FREQ_ENCODING_COLS = ['ORGANIZATION_OCCUPATION',
'age_emp_categorical',
'age_occupation'
]
OHE_COLS = [
'ORGANIZATION_TYPE',
'OCCUPATION_TYPE',
'NAME_EDUCATION_TYPE',
'NAME_HOUSING_TYPE',
'NAME_INCOME_TYPE'
]
TARGET_ENCODING_COLS = []
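# For context, a hedged sketch of the two encodings referenced above: frequency
# encoding maps each category to its occurrence count, while one-hot encoding
# expands a column into indicator columns -- which is why a column must not
# appear in both lists. The helper below is illustrative only and is not used
# by the pipeline; `df` is assumed to be the merged application dataframe.
def _encode_sketch(df):
    import pandas as pd
    for col in FREQ_ENCODING_COLS:
        if col in df.columns:
            # frequency encoding: category -> how often it occurs
            df[f'{col}_freq'] = df[col].map(df[col].value_counts())
    # one-hot encoding of the categorical application columns
    ohe = pd.get_dummies(df[[c for c in OHE_COLS if c in df.columns]].astype('category'))
    return pd.concat([df, ohe], axis=1)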
class Modelv145(BaseModel):
def __init__(self, **params):
self.params = params
self.n_train = 307511 # TODO: find a way to remove this constant
def load_data(self, filenames):
dfs = []
for filename in filenames:
dfs.append(pd.read_csv(filename, parse_dates=True, keep_date_col=True))
df = pd.concat(dfs)
df.index = np.arange(len(df))
df = super(Modelv145, self).reduce_mem_usage(df)
return df
def reduce_mem_usage(self, df):
return super(Modelv145, self).reduce_mem_usage(df)
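# reduce_mem_usage simply defers to the BaseModel helper (not shown in this
# file). As a hedged sketch, such helpers usually downcast numeric columns to
# the smallest dtype that holds their range, e.g.:
#     for col in df.select_dtypes(include=['int64', 'float64']).columns:
#         df[col] = pd.to_numeric(df[col], downcast='integer'
#                                 if df[col].dtype.kind == 'i' else 'float')
# The actual implementation lives in BaseModel and may differ.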
def preprocess(self):
tr = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_train.pkl'))
te = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_test.pkl'))
ntrain = len(tr)
data = pd.concat((tr, te))
del tr, te
gc.collect()
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl')):
print('Generating features based on current application ....')
t0 = time.time()
data, FEATURE_NAMES = current_application_features(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on current application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_features(bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on bureau application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
bureau_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in bureau_bal.select_dtypes(include=['category']).columns:
bureau_bal.loc[:, col] = bureau_bal.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau and bureau balance ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_and_balance(bureau, bureau_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on bureau and balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
print('Generating features based on previous application ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_features(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl')):
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
print('Generating features based on pos cash ....')
t0 = time.time()
data, FEATURE_NAMES = pos_cash_features(pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del pos_cash
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on pos cash')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl')):
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Credit Card ....')
t0 = time.time()
data, FEATURE_NAMES = credit_card_features(credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Credit Card')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl')):
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Installments ....')
t0 = time.time()
data, FEATURE_NAMES = get_installment_features(installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Installments')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Bureau Applications....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_bureau(prev_app, bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del bureau, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Bureau Applications')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Credit card balance ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_credit_card(prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Credit card balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Installment Payments ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_installments(prev_app, installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Installment Payments.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on loan stacking ....')
t0 = time.time()
data, FEATURE_NAMES = loan_stacking(bureau, prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on loan stacking.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl')):
print('Generating features based on feature groups ....')
t0 = time.time()
data, FEATURE_NAMES = feature_groups(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on feature groups.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl')):
print('Generating features based on previous application and pos cash ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos(prev_app, pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application and pos cash.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl')):
print('Generating features based on previous application, pos cash and credit card balance ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos_credit(prev_app, pos_cash, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application, pos cash and credit card balance.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl')):
print('Generating features based on previous application one hot encoded features ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_ohe(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application one hot encoded features.')
def prepare_features(self):
tr = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_train.pkl'))
te = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'application_test.pkl'))
ntrain = len(tr)
data = pd.concat((tr, te))
del tr, te
gc.collect()
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl')):
print('Generating features based on current application ....')
t0 = time.time()
data, FEATURE_NAMES = current_application_features(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'current_application_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on current application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_features(bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on bureau application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
bureau_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in bureau_bal.select_dtypes(include=['category']).columns:
bureau_bal.loc[:, col] = bureau_bal.loc[:, col].cat.codes
print('Generating features based on credits reported to bureau and bureau balance ....')
t0 = time.time()
data, FEATURE_NAMES = bureau_and_balance(bureau, bureau_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'bureau_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on bureau and balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
print('Generating features based on previous application ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_features(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl')):
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
print('Generating features based on pos cash ....')
t0 = time.time()
data, FEATURE_NAMES = pos_cash_features(pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del pos_cash
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on pos cash')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl')):
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Credit Card ....')
t0 = time.time()
data, FEATURE_NAMES = credit_card_features(credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Credit Card')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl')):
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Installments ....')
t0 = time.time()
data, FEATURE_NAMES = get_installment_features(installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Installments')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Bureau Applications....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_bureau(prev_app, bureau, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del bureau, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_bureau_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Bureau Applications')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Credit card balance ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_credit_card(prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del credit_bal, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_credit_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Credit card balance')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl')):
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
installments = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'installments_payments.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in installments.select_dtypes(include=['category']).columns:
installments.loc[:, col] = installments.loc[:, col].cat.codes
print('Generating features based on Previous Applications and Installment Payments ....')
t0 = time.time()
data, FEATURE_NAMES = prev_app_installments(prev_app, installments, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
del installments, prev_app
gc.collect()
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_installments_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on Previous application and Installment Payments.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl')):
bureau = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'bureau.pkl'))
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in bureau.select_dtypes(include=['category']).columns:
bureau.loc[:, col] = bureau.loc[:, col].cat.codes
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
print('Generating features based on loan stacking ....')
t0 = time.time()
data, FEATURE_NAMES = loan_stacking(bureau, prev_app, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'loan_stacking_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
del bureau
gc.collect()
else:
print('Already generated features based on loan stacking.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl')):
print('Generating features based on feature groups ....')
t0 = time.time()
data, FEATURE_NAMES = feature_groups(data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'feature_groups_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on feature groups.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl')):
print('Generating features based on previous application and pos cash ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos(prev_app, pos_cash, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application and pos cash.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl')):
print('Generating features based on previous application, pos cash and credit card balance ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
pos_cash = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'POS_CASH_balance.pkl'))
credit_bal = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'credit_card_balance.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
for col in pos_cash.select_dtypes(include=['category']).columns:
pos_cash.loc[:, col] = pos_cash.loc[:, col].cat.codes
for col in credit_bal.select_dtypes(include=['category']).columns:
credit_bal.loc[:, col] = credit_bal.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_pos_credit(prev_app, pos_cash, credit_bal, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_pos_cash_credit_bal_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application, pos cash and credit card balance.')
if not os.path.exists(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl')):
print('Generating features based on previous application one hot encoded features ....')
prev_app = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + 'previous_application.pkl'))
for col in prev_app.select_dtypes(include=['category']).columns:
prev_app.loc[:, col] = prev_app.loc[:, col].cat.codes
t0 = time.time()
data, FEATURE_NAMES = prev_app_ohe(prev_app, data)
data.index = np.arange(len(data))
# fill infrequent values
data = super(Modelv145, self).fill_infrequent_values(data)
data.iloc[:ntrain].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_train.pkl'))
data.iloc[ntrain:].loc[:, FEATURE_NAMES].to_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'prev_app_ohe_test.pkl'))
print('\nTook: {} seconds'.format(time.time() - t0))
else:
print('Already generated features based on previous application one hot encoded features.')
# Loads the engineered feature groups from disk and concatenates them
# column-wise, returning a (train, test) pair of dataframes that downstream
# layers can consume.
def merge_datasets(self):
def get_filenames():
filenames = [f'application_',
f'current_application_',
f'bureau_',
f'bureau_bal_',
f'prev_app_',
f'pos_cash_',
f'credit_',
f'installments_',
f'prev_app_bureau_',
f'prev_app_credit_',
f'prev_app_installments_',
f'loan_stacking_',
f'feature_groups_',
f'prev_app_pos_cash_',
f'prev_app_pos_cash_credit_bal_',
f'prev_app_ohe_'
]
return filenames
train = []
test = []
filenames = get_filenames()
for filename_ in filenames:
tmp = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'{filename_}train.pkl'))
tmp.index = np.arange(len(tmp))
train.append(tmp)
for filename_ in filenames:
tmp = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + 'feature_groups/' + f'{filename_}test.pkl'))
tmp.index = np.arange(len(tmp))
test.append(tmp)
return pd.concat(train, axis=1), pd.concat(test, axis=1)
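# Hedged usage sketch (comments only, not executed): once preprocess() has
# written every feature-group pickle, a caller would typically do
#     train_df, test_df = model.merge_datasets()
# where each returned frame is the column-wise concatenation of all feature
# groups listed in get_filenames(), aligned purely by position (each group was
# re-indexed with np.arange before being saved).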
def feature_interaction(self, data, key, agg_feature, agg_func, agg_func_name):
key_name = '_'.join(key)
tmp = data.groupby(key)[agg_feature].apply(agg_func)\
.reset_index()\
.rename(columns={agg_feature: f'{agg_func_name}_{key_name}_{agg_feature}'})
feat_name = f'{agg_func_name}_{key_name}_{agg_feature}'
data.loc[:, feat_name] = data.loc[:, key].merge(tmp, on=key, how='left')[feat_name]
return data, feat_name
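# Worked example of feature_interaction (comments only): the call further below
#     self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'],
#                              'DAYS_BIRTH', np.mean, 'mean')
# groups rows by gender and education, averages DAYS_BIRTH within each group,
# merges that group mean back onto every row, and returns the new column name
# 'mean_CODE_GENDER_NAME_EDUCATION_TYPE_DAYS_BIRTH' (one of the entries in the
# feature list above). The caller then stores the row-level difference between
# the group mean and the raw DAYS_BIRTH value.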
def feature_preprocessing(self, data):
# current application preprocessing
data['DAYS_LAST_PHONE_CHANGE'].replace(0, np.nan, inplace=True)
data['CODE_GENDER'].replace(2, np.nan, inplace=True)
data['DAYS_EMPLOYED'].replace(365243, np.nan, inplace=True)
# previous application
data['DAYS_FIRST_DRAWING'].replace(365243, np.nan, inplace=True)
data['DAYS_FIRST_DUE'].replace(365243, np.nan, inplace=True)
data['DAYS_LAST_DUE_1ST_VERSION'].replace(365243, np.nan, inplace=True)
data['DAYS_LAST_DUE'].replace(365243, np.nan, inplace=True)
data['DAYS_TERMINATION'].replace(365243, np.nan, inplace=True)
return data
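# Note on the sentinel values above (comments only): in the Home Credit data the
# value 365243 in the DAYS_* columns is a placeholder for "not applicable /
# unknown" (roughly 1000 years), and CODE_GENDER code 2 appears to correspond to
# the rare 'XNA' category once categorical codes are applied, so both are mapped
# to np.nan rather than being treated as real measurements.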
def add_missing_values_flag(self, data):
# preprocess for pca
SKIP_COLS = ['SK_ID_CURR', 'TARGET']
for col in data.columns.drop(SKIP_COLS):
# replace inf with np.nan
data[col] = data[col].replace([np.inf, -np.inf], np.nan)
# fill missing values with median
if data[col].isnull().sum():
data[f'{col}_flag'] = data[col].isnull().astype(np.uint8)
if pd.isnull(data[col].median()):
data[col] = data[col].fillna(-1)
else:
data[col] = data[col].fillna(data[col].median())
return data
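# Effect of add_missing_values_flag, illustrated in comments: for a column such
# as OWN_CAR_AGE with missing entries, the method adds a binary companion column
# 'OWN_CAR_AGE_flag' (1 where the value was missing) and then fills the gaps
# with the column median, falling back to -1 when the entire column is NaN;
# +/- inf values are treated as missing as well.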
def get_features(self, train, test, compute_categorical):
data = pd.concat((train, test))
data.index = np.arange(len(data))
for col in data.select_dtypes(include=['category']).columns:
data[col] = data[col].cat.codes
# TODO: not very happy with the way interactions are computed here:
# if any of these base features is dropped from the pipeline, the code
# still runs, but the interaction ends up as a column of all-null values.
# concatenate OCCUPATION TYPE AND ORGANIZATION TYPE
data.loc[:, 'ORGANIZATION_OCCUPATION'] = pd.factorize(data.ORGANIZATION_TYPE.astype(str) +
                                                      data.OCCUPATION_TYPE.astype(str))[0]
# interaction between total debt to income and (annuity / credit)
data.loc[:, 'debt_income_to_annuity_credit'] = data.total_debt_to_income / data.ratio_annuity_credit
# interaction between days birth and ratio of annuity to credit
data.loc[:, 'add_days_birth_annuity_credit'] = data.DAYS_BIRTH + data.ratio_annuity_credit
# interaction between ratio of annuity to credit with external source 2 score
data.loc[:, 'mult_annuity_credit_ext_source_2'] = data.ratio_annuity_credit * data.EXT_SOURCE_2
data.loc[:, 'ratio_annuity_credit_ext_source_2'] = data.ratio_annuity_credit / data.EXT_SOURCE_2.map(np.log1p)
data.loc[:, 'mult_annuity_credit_ext_source_1'] = data.ratio_annuity_credit * data.EXT_SOURCE_1
data.loc[:, 'ratio_annuity_credit_ext_source_1'] = data.ratio_annuity_credit / data.EXT_SOURCE_1.map(np.log1p)
data.loc[:, 'mult_annuity_credit_ext_source_3'] = data.ratio_annuity_credit * data.EXT_SOURCE_3
data.loc[:, 'ratio_annuity_credit_ext_source_3'] = data.ratio_annuity_credit / data.EXT_SOURCE_3.map(np.log1p)
# interaction between ratio of annuity to credit with total amount paid in installments
data.loc[:, 'mult_annuity_credit_amt_payment_sum'] = data.ratio_annuity_credit * data.AMT_PAYMENT_sum
# interaction between total amount paid in installments and delay in installments
data.loc[:, 'mult_amt_payment_sum_delay_installment'] = data.AMT_PAYMENT_sum * data.delay_in_installment_payments
# interaction between credit / annuity and age
data.loc[:, 'diff_credit_annuity_age'] = (data.AMT_CREDIT / data.AMT_ANNUITY) - (-data.DAYS_BIRTH / 365)
# interaction between ext_3 and age
data.loc[:, 'ext_3_age'] = data.EXT_SOURCE_3 * (-data.DAYS_BIRTH / 365)
# interaction between ext_2 and age
data.loc[:, 'ext_2_age'] = data.EXT_SOURCE_2 * (-data.DAYS_BIRTH / 365)
# interaction between rate and external source 2
data.loc[:, 'add_rate_ext_2'] = (data.AMT_CREDIT / data.AMT_ANNUITY) + data.EXT_SOURCE_2
# interaction between rate and age
data.loc[:, 'add_rate_age'] = (data.AMT_CREDIT / data.AMT_ANNUITY) + (-data.DAYS_BIRTH / 365)
# interaction between age and employed and external score 2
data.loc[:, 'add_mult_age_employed_ext_2'] = ((-data.DAYS_BIRTH / 365) +\
(-data.DAYS_EMPLOYED.replace({365243: np.nan}))) *\
(data.EXT_SOURCE_2)
# combine ratio annuity credit, region population relative and ext source 2
data.loc[:, 'rate_annuity_region_ext_source_2'] = data.ratio_annuity_credit * data.REGION_POPULATION_RELATIVE * data.EXT_SOURCE_2
data.loc[:, 'region_ext_source_3'] = data.REGION_POPULATION_RELATIVE * data.EXT_SOURCE_3
# Relationship between AMT_REQ_CREDIT_BUREAU_HOUR and AMT_REQ_CREDIT_BUREAU_YEAR
data.loc[:, 'ratio_check_hour_to_year'] = data.AMT_REQ_CREDIT_BUREAU_HOUR.div(data.AMT_REQ_CREDIT_BUREAU_YEAR)
# Relationship between Income and ratio annuity credit
data.loc[:, 'mult_ratio_income'] = (data.ratio_annuity_credit * data.AMT_INCOME_TOTAL).map(np.log1p)
data.loc[:, 'div_ratio_income'] = (data.AMT_INCOME_TOTAL / data.ratio_annuity_credit).map(np.log1p)
# Gender, Education and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.var, 'var')
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_amt_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_name_education_type_amt_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'OWN_CAR_AGE', np.max, 'max')
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'OWN_CAR_AGE', np.sum, 'sum')
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_education_type_age'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_education_type_empl'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_education_type_income'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Gender, Occupation and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_days_birth_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
# Gender, Organization and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_amt_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'DAYS_REGISTRATION', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_days_reg_mean'] = data[feat_name] - data['DAYS_REGISTRATION']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_organization_type_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
# Gender, Reg city not work city and other features
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_amount_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'CNT_CHILDREN', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_cnt_children_mean'] = data[feat_name] - data['CNT_CHILDREN']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_ID_PUBLISH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_days_id_mean'] = data[feat_name] - data['DAYS_ID_PUBLISH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['CODE_GENDER', 'REG_CITY_NOT_WORK_CITY'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_code_gender_reg_city_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Income, Occupation and Ext Score
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_name_income_type_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Occupation and Organization and Ext Score
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE', 'ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_occupation_organization_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Income, Education and Ext score
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_income_type_education_type_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Education and Occupation and other features
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_amt_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'OWN_CAR_AGE', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_car_age_mean'] = data[feat_name] - data['OWN_CAR_AGE']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Education, Occupation, Reg city not work city and other features
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'REG_CITY_NOT_WORK_CITY'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_education_occupation_reg_city_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Occupation and other features
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_occupation_reg_city_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'CNT_CHILDREN', np.mean, 'mean')
data.loc[:, 'diff_occupation_cnt_children_mean'] = data[feat_name] - data['CNT_CHILDREN']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'CNT_FAM_MEMBERS', np.mean, 'mean')
data.loc[:, 'diff_occupation_cnt_fam_mebers_mean'] = data[feat_name] - data['CNT_FAM_MEMBERS']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_occupation_days_birth_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_occupation_days_employed_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_occupation_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_occupation_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_occupation_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'OWN_CAR_AGE', np.mean, 'mean')
data.loc[:, 'diff_occupation_own_car_age_mean'] = data[feat_name] - data['OWN_CAR_AGE']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'YEARS_BUILD_AVG', np.mean, 'mean')
data.loc[:, 'diff_occupation_year_build_mean'] = data[feat_name] - data['YEARS_BUILD_AVG']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'ratio_annuity_credit', np.mean, 'mean')
data.loc[:, 'diff_occupation_annuity_credit_mean'] = data[feat_name] - data['ratio_annuity_credit']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_occupation_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_occupation_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_occupation_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Organization type and other features
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_organization_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_organization_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_organization_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_organization_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_organization_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_organization_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_organization_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_organization_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# INCOME Type and other features
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_income_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data.loc[:, 'ratio_income_ext_source_1_mean'] = data[feat_name] / data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_income_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_income_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_income_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_income_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_income_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_income_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_INCOME_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_income_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# EDUCATION Type and other features
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_education_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_education_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_education_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_education_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_education_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_education_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_education_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_education_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type and Income Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_INCOME_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_income_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type and Education Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_education_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_education_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_education_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'NAME_EDUCATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_education_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type, Organization Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'ORGANIZATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_organization_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
# Family Type, Occupation Type
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'EXT_SOURCE_1', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_source_1_mean'] = data[feat_name] - data['EXT_SOURCE_1']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'EXT_SOURCE_2', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_source_2_mean'] = data[feat_name] - data['EXT_SOURCE_2']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'EXT_SOURCE_3', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_source_3_mean'] = data[feat_name] - data['EXT_SOURCE_3']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'DAYS_BIRTH', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_age_mean'] = data[feat_name] - data['DAYS_BIRTH']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'DAYS_EMPLOYED', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_empl_mean'] = data[feat_name] - data['DAYS_EMPLOYED']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'AMT_CREDIT', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_credit_mean'] = data[feat_name] - data['AMT_CREDIT']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'AMT_ANNUITY', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_annuity_mean'] = data[feat_name] - data['AMT_ANNUITY']
data, feat_name = self.feature_interaction(data, ['NAME_FAMILY_STATUS', 'OCCUPATION_TYPE'], 'AMT_INCOME_TOTAL', np.mean, 'mean')
data.loc[:, 'diff_family_occupation_ext_income_mean'] = data[feat_name] - data['AMT_INCOME_TOTAL']
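# Note: `self.feature_interaction(...)` is defined elsewhere in this class. A hedged
# sketch of what such a helper typically does (group-by aggregation broadcast back to
# the rows; the exact signature and naming scheme below are assumptions, not the
# author's confirmed implementation):
#
#     def feature_interaction(self, data, group_cols, target_col, agg_func, agg_name):
#         new_col = ('_'.join(group_cols + [target_col, agg_name])).lower()
#         # aggregate the target within each group and broadcast back to every row
#         data.loc[:, new_col] = data.groupby(group_cols)[target_col].transform(agg_func)
#         return data, new_col
#
# Each `diff_*` column above then measures how far an applicant deviates from the
# mean of their demographic group for that variable.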
# frequency encoding of some of the categorical variables.
data = frequency_encoding(data, FREQ_ENCODING_COLS)
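# `frequency_encoding` and `FREQ_ENCODING_COLS` are also defined elsewhere. As a hedged
# sketch (the usual pattern, not necessarily the exact implementation used here),
# frequency encoding replaces each category with how often it occurs in the data:
#
#     def frequency_encoding(df, cols):
#         for col in cols:
#             freq = df[col].value_counts(normalize=True)  # category -> relative frequency
#             df.loc[:, col + '_freq'] = df[col].map(freq)
#         return df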
# add pca components
if os.path.exists(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}pca.pkl')):
pca_components = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}pca.pkl'))
else:
pca_components = super(Modelv145, self).add_pca_components(data.copy(), PCA_PARAMS)
pca_components.to_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}pca.pkl'))
# add tsne components
if os.path.exists(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}tsne.pkl')):
tsne_components = pd.read_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}tsne.pkl'))
else:
tsne_components = super(Modelv145, self).add_tsne_components(data.copy())
tsne_components.to_pickle(os.path.join(basepath, self.params['output_path'] + f'{self.params["data_folder"]}tsne.pkl'))
pca_components.index = data.index
data = pd.concat((data, pca_components), axis=1)
import pandas as pd
ips = pd.read_csv('Result1.csv', names=['IP', 'Domain', 'Country', 'Region', 'City', 'ISP', 'ASN'],
encoding='ISO-8859-1')
org_file = pd.read_csv('WA_2017_append.csv', names=['<NAME>', '<NAME>', 'DOB', 'IP'], encoding='ISO-8859-1')
import numpy as np
import pandas as pa
import requests, sys
import json
from Bio.Seq import Seq
import os
class TF3DScan:
def __init__(self,genes,PWM_directory,seqs=None):
self.gene_names=genes
self.PWM_dir=PWM_directory
self.seq=None
self.PWM=None
self.weights=None
self.proteins=None
self.initialize()
def initialize(self):
self.seq=self.get_seq_by_name(self.gene_names)
self.PWM=self.convolutional_filter_for_each_TF(self.PWM_dir)
self.weights, self.proteins= self.get_Weights(self.PWM)
return
def softmax(self,x):
e_x = np.exp(x - np.max(x))
return (e_x / e_x.sum(axis=0))
def convolutional_filter_for_each_TF(self,PWM_directory):
path = PWM_directory
#print(path)
filelist = os.listdir(path)
TF_kernel_PWM={}
for file in filelist:
TF_kernel_PWM[file.split("_")[0]] = pa.read_csv(path+file, sep="\t", skiprows=[0], header=None)
return TF_kernel_PWM
def get_reverse_scaning_weights(self, weight):
return np.flipud(weight[:,[3,2,1,0]])
def get_Weights(self, filter_PWM_human):
#forward and reverse scanning matrix with reverse complement
#forward_and_reverse_direction_filter_list=[{k:np.dstack((filter_PWM_human[k],self.get_reverse_scaning_weights(np.array(filter_PWM_human[k]))))} for k in filter_PWM_human.keys()]
#forward and reverse scanning with same matrix
forward_and_reverse_direction_filter_list=[{k:np.dstack((filter_PWM_human[k],filter_PWM_human[k]))} for k in filter_PWM_human.keys()]
forward_and_reverse_direction_filter_dict=dict(j for i in forward_and_reverse_direction_filter_list for j in i.items())
unequefilter_shape=pa.get_dummies([filter_PWM_human[k].shape for k in filter_PWM_human])
TF_with_common_dimmention=[{i:list(unequefilter_shape.loc[list(unequefilter_shape[i]==1),:].index)} for i in unequefilter_shape.columns]
filterr={}
for i in TF_with_common_dimmention:
#print(list(i.keys()))
aa=[list(forward_and_reverse_direction_filter_list[i].keys()) for i in list(i.values())[0]]
aa=sum(aa,[])
#print(aa)
xx=[forward_and_reverse_direction_filter_dict[j] for j in aa]
#print(xx)
xxx=np.stack(xx,axis=-1)
#xxx=xxx.reshape(xxx.shape[1],xxx.shape[2],xxx.shape[3],xxx.shape[0])
filterr["-".join(aa)]=xxx
weights=[v for k,v in filterr.items()]
protein_names=[k for k,v in filterr.items()]
protein_names=[n.split("-") for n in protein_names]
return (weights,protein_names)
def get_sequenceBy_Id(self, EnsemblID,content="application/json",expand_5prime=2000, formatt="fasta",
species="homo_sapiens",typee="genomic"):
server = "http://rest.ensembl.org"
ext="/sequence/id/"+EnsemblID+"?expand_5prime="+str(expand_5prime)+";format="+formatt+";species="+species+";type="+typee
r = requests.get(server+ext, headers={"Content-Type" : content})
_=r
if not r.ok:
r.raise_for_status()
sys.exit()
return(r.json()['seq'][0:int(expand_5prime)+2000])
def seq_to3Darray(self, sequence):
seq3Darray=pa.get_dummies(list(sequence))
myseq=Seq(sequence)
myseq=str(myseq.reverse_complement())
reverseseq3Darray=pa.get_dummies(list(myseq))
array3D=np.dstack((seq3Darray,reverseseq3Darray))
return array3D
def get_seq_by_name(self, target_genes):
promoter_inhancer_sequence=list(map(self.get_sequenceBy_Id, target_genes))
threeD_sequence=list(map(self.seq_to3Darray, promoter_inhancer_sequence))
input_for_convolutional_scan=np.stack((threeD_sequence)).astype('float32')
return input_for_convolutional_scan
def from_2DtoSeq(self, twoD_seq):
indToSeq={0:"A",1:"C",2:"G",3:"T"}
seq=str(''.join([indToSeq[i] for i in np.argmax(twoD_seq, axis=1)]))
return seq
def conv_single_step(self, seq_slice, W):
s = seq_slice*W
# Sum over all entries of the volume s.
Z = np.sum(s)
return Z
def conv_single_filter(self, seq,W,stridev,strideh):
(fv, fh, n_C_prev, n_C) = W.shape
m=seq.shape[0]
pad=0
n_H = int((((seq.shape[1]-fv)+(2*pad))/stridev)+1)
n_W = int((((seq.shape[2]-fh)+(2*pad))/strideh)+1)
Z = np.zeros((m, n_H, n_W ,n_C_prev, n_C))
for i in range(m):
for h in range(int(n_H)):
vert_start = h*stridev
vert_end = stridev*h+fv
for w in range(int(n_W)):
horiz_start = w*strideh
horiz_end = strideh*w+fh
for c in range(int(n_C)):
a_slice_prev = seq[i,vert_start:vert_end,horiz_start:horiz_end,:]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
for d in range(n_C_prev):
Z[i, h, w,d, c] = self.conv_single_step(a_slice_prev[:,:,d], W[:,:,d,c])
Z=self.softmax(Z)
return Z
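# Worked shape example (hedged, assuming stride 1 and no padding): for a one-hot
# sequence batch of shape (m, 4000, 4, 2) -- 4000 bases, 4 nucleotides, forward and
# reverse strands -- and a filter bank W of shape (fv=12, fh=4, n_C_prev=2, n_C=10),
# the output Z has shape (m, n_H=4000-12+1=3989, n_W=1, 2, 10): one score per window
# position, per strand, per transcription factor, before the softmax above is applied.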
def conv_total_filter(self, Weights, seqs,stridev,strideh):
return [self.conv_single_filter(seqs,i,stridev,strideh) for i in Weights]
def single_sigmoid_pool(self, motif_score):
n=sum(motif_score>.5)
score=[motif_score[i] for i in np.argsort(motif_score)[::-1][:n]]
index=[j for j in np.argsort(motif_score)[::-1][:n]]
sigmoid_pooled=dict(zip(index, score))
sigmoid_pooled=sorted(sigmoid_pooled.items(), key=lambda x: x[1])[::-1]
return sigmoid_pooled
def total_sigmoid_pool(self, z):
sigmoid_pooled_motifs=[]
for i in range(z.shape[0]):
proteins=[]
for k in range(z.shape[4]):
strands=[]
for j in range(z.shape[3]):
strands.append(self.single_sigmoid_pool(z[i,:,0,j,k]))
proteins.append(strands)
sigmoid_pooled_motifs.append(proteins)
#return np.stack(sigmoid_pooled_motifs)
return np.array(sigmoid_pooled_motifs)
def extract_binding_sites_per_protein(self, seq, motif_start, motif_leng):
return seq[motif_start:motif_start+motif_leng]
def getScore(self, seq, weights):
NtoInd={"A":0,"C":1,"G":2,"T":3}
cost=0
for i in range(len(seq)):
cost+=weights[i,NtoInd[seq[i]]]
return cost
def motifs(self, seqs, mot, weights, protein_names):
Motifs=[]
for m in range(len(mot)):
motifs_by_seq=[]
for z in range(mot[m].shape[0]):
motifs_by_protein=[]
for i in range(mot[m].shape[1]):
motifs_by_strand=[]
for j in range(mot[m].shape[2]):
seqq=[self.extract_binding_sites_per_protein(self.from_2DtoSeq(seqs[z,:,:,j]),l,weights[m].shape[0]) for l in list(pa.DataFrame(mot[m][z,i,j])[0])]
score=[self.getScore(k,weights[m][:,:,j,i]) for k in seqq]
#coordinate=[{p:p+weights[m]} for p in list(pa.DataFrame(mot[m][z,i,j])[0])]
scor_mat={"motif":seqq, "PWM_score":score,"sigmoid_score":list(pa.DataFrame(mot[m][z,i,j])[1]), "protein":protein_names[m][i], "strand":j, "input_Sequence":z, "best_threshold":sum(np.max(weights[m][:,:,j,i], axis=1))*.80}
motifs_by_strand.append(scor_mat)
motifs_by_protein.append(motifs_by_strand)
motifs_by_seq.append(motifs_by_protein)
print(m)
Motifs.append(np.stack(motifs_by_seq))
return Motifs
def flatten_motif(self, xc):
mymotifs=[]
for i in range(len(xc)):
for j in range(xc[i].shape[0]):
for k in range(xc[i].shape[1]):
for z in range(xc[i].shape[2]):
if(not pa.DataFrame(xc[i][j,k,z]).sort_values(["PWM_score"], ascending=[0]).loc[list(pa.DataFrame(xc[i][j,k,z]).sort_values(["PWM_score"], ascending=[0])["PWM_score"]>pa.DataFrame(xc[i][j,k,z]).sort_values(["PWM_score"], ascending=[0])["best_threshold"]),:].empty):
mymotifs.append(pa.DataFrame(xc[i][j,k,z]).sort_values(["PWM_score"], ascending=[0]).loc[list(pa.DataFrame(xc[i][j,k,z]).sort_values(["PWM_score"], ascending=[0])["PWM_score"]>pa.DataFrame(xc[i][j,k,z]).sort_values(["PWM_score"], ascending=[0])["best_threshold"]),:])
return pa.concat(mymotifs)
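# Hedged usage sketch for the class above (the gene ID and PWM directory are
# placeholders, not values from the original project):
#
#     scanner = TF3DScan(genes=['ENSG00000141510'], PWM_directory='PWMs/')
#     z = scanner.conv_total_filter(scanner.weights, scanner.seq, stridev=1, strideh=1)
#     pooled = [scanner.total_sigmoid_pool(zi) for zi in z]
#     hits = scanner.motifs(scanner.seq, pooled, scanner.weights, scanner.proteins)
#     binding_sites = scanner.flatten_motif(hits)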
import json
import requests
import logging
import pandas as pd
import streamlit as st
from .exceptions.data_error import DataError
logger = logging.getLogger(__name__)
@st.cache(ttl=60*60*24)
def covid_india_statewise_df():
"""Load statewise covid data for India from covid19india.org/v4 api.
URL: https://api.covid19india.org/v4/min/timeseries.min.json
Returns pandas dataframe of statewise data with state codes
All India data is under the state code `TT`
"""
source = 'https://api.covid19india.org/v4/min/timeseries.min.json'
dest_file = './data/india.min.json'
with open(dest_file, 'wb') as f:
r = requests.get(source, stream=True)
if r.status_code != 200:
raise DataError()
f.writelines(r.iter_content(1024))
rows = []
with open(dest_file, 'r') as f:
j = json.load(f)
for state in j.keys():
for dt in j[state]['dates'].keys():
if 'total' not in j[state]['dates'][dt]:
logger.warning(f'total not found in {state} {dt}')
continue
data = j[state]['dates'][dt]['total'].copy()
data['record_date'] = dt
data['state_code'] = state
rows.append(data)
df = pd.DataFrame(rows)
df.loc[:, 'active'] = df.confirmed - df.recovered - df.deceased
return df
@st.cache(ttl=60*60*24)
def covid_india_df():
"""Covid data for India."""
df = covid_india_statewise_df()
df = df[df.state_code == 'TT'].copy()
df = df.drop(columns=['state_code'])
df.loc[:, 'record_date'] = pd.to_datetime(df.record_date)
return df
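# Hedged usage sketch inside the Streamlit app (the chart call is illustrative and not
# part of the original file):
#
#     df = covid_india_df()
#     st.line_chart(df.set_index('record_date')[['confirmed', 'recovered', 'deceased']])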
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm import tqdm
from IPython.core.display import HTML
from fbprophet import Prophet
from fbprophet.plot import plot_plotly
import plotly.offline as py
import plotly.graph_objs as go
import plotly.express as px
class SalesForecaster:
"""This class creates 'easy to handle' forecaster objects
It will gather all the required variables to make the code more readable
- sales_clusters_df (pandas dataframe): The original sales dataframe
The columns are :
- product_code : string values such as CLA0 (CLA is the client and 0 is the product number)
- date : datetime64 (ns) the date of the sale such as pd.to_datetime("2018-01-02") : YYYY-MM-DD
- quantity : int64 an integer value: the number of products for this sale
- cluster : int64 an integer value The cluster the product is part of
- test_date (string : "2019-03-01" : YYYY-MM-DD): the training data is automatically all sales prior to this date
- max_waiting_time (string such as '7 days') : The maximum time a client is willing to wait :
required for grouping orders into batches)
- calendar_length (string such as '7 days'): The calendar length you want to zoom in
"""
def __init__(self,
sales_clusters_df,
test_date,
max_waiting_time,
detailed_view=False,
calendar_length='7 days'
):
self.sales_clusters_df = sales_clusters_df
self.test_date = test_date
self.max_waiting_time = max_waiting_time
self.detailed_view = detailed_view
self.calendar_length = calendar_length
self.optimal_batches = []
self.predicted_batches = []
self.predictions = []
def get_predicted_batches(self):
"""This function takes the original sales df,
computes the dates and quantities models at a product level using the test_date to split the dataset
into a training dataset and a testing dataset,
generates the predicted sales,
computes the associated "predicted" batches using the max waiting time value,
computes the optimal batches using the actual data using the max waiting time value,
outputs the optimal batches df and the predicted batches df,
and 2 graphs to visualize it:
- Input:
All the inputs are encapsulated in the SalesForecaster instance:
- sales_clusters_df
- test_date
- max_waiting_time
- calendar_length
- Output:
- Main graph with optimal batches vs predicted batches for the test data
- The same graph zoomed in the week following the test date
- 1 optimal batches df
- 1 predicted batches df
"""
clusters_list = self.sales_clusters_df['Cluster'].unique()
optimal_batches = []
predicted_batches = []
predictions = []
for cluster in clusters_list:
local_optimal_batches, local_predicted_batches, local_predictions = self.\
get_cluster_level_predicted_batches(cluster)
local_optimal_batches['Cluster'] = cluster
local_predicted_batches['Cluster'] = cluster
optimal_batches.append(local_optimal_batches)
predicted_batches.append(local_predicted_batches)
predictions.append(local_predictions)
optimal_batches = pd.concat(optimal_batches)
optimal_batches.reset_index(drop=True,
inplace=True)
optimal_batches['batch_date'] = optimal_batches.batch_date.str.split(' ').apply(lambda x: x[0])
predicted_batches = pd.concat(predicted_batches)
predicted_batches.reset_index(drop=True,
inplace=True)
predicted_batches['batch_date'] = predicted_batches.batch_date.str.split(' ').apply(lambda x: x[0])
predictions = pd.concat(predictions)
predictions.reset_index(drop=True,
inplace=True)
dark_map = px.colors.qualitative.Dark2
pastel_map = px.colors.qualitative.Pastel2
fig = go.Figure()
for (cluster, dark_color, pastel_color) in zip(clusters_list, dark_map, pastel_map):
local_optimal = optimal_batches[optimal_batches['Cluster'] == cluster]
local_predicted = predicted_batches[predicted_batches['Cluster'] == cluster]
fig.add_trace(go.Bar(x=pd.to_datetime(local_optimal[local_optimal['batch_date'] > self.test_date] \
['batch_date']) - pd.Timedelta('12 hours'),
y=local_optimal[local_optimal['batch_date'] > self.test_date] \
['quantities'],
name='Cluster #{}\nOptimized batches - actual values'.format(cluster),
width=1e3 * pd.Timedelta('6 hours').total_seconds(),
marker_color=dark_color))
fig.add_trace(go.Bar(x=pd.to_datetime(local_predicted[local_predicted['batch_date'] > self.test_date] \
['batch_date']) - pd.Timedelta('12 hours'),
y=local_predicted[local_predicted['batch_date'] > self.test_date] \
['predicted_quantities'],
name='Cluster #{}\nPredicted batches'.format(cluster),
width=1e3 * pd.Timedelta('6 hours').total_seconds(),
marker_color=pastel_color))
# Edit the layout
fig.update_layout(title='Optimal batches vs predicted batches for the test period',
xaxis_title='Date',
yaxis_title='Quantities')
fig.show()
fig = go.Figure()
for (cluster, dark_color, pastel_color) in zip(clusters_list, dark_map, pastel_map):
local_optimal = optimal_batches[optimal_batches['Cluster'] == cluster]
local_predicted = predicted_batches[predicted_batches['Cluster'] == cluster]
fig.add_trace(go.Bar(x=pd.to_datetime(local_optimal[(local_optimal['batch_date'] > self.test_date) & \
(local_optimal['batch_date'] < str((pd.Timestamp(
self.test_date) + pd.Timedelta(self.calendar_length))))] \
['batch_date']) - pd.Timedelta('0 hours'),
y=local_optimal[(local_optimal['batch_date'] > self.test_date) & \
(local_optimal['batch_date'] < str(
(pd.Timestamp(self.test_date) + pd.Timedelta(self.calendar_length))))] \
['quantities'],
name='Cluster #{}\nOptimized batches - actual values'.format(cluster),
width=1e3 * pd.Timedelta('6 hours').total_seconds(),
marker_color=pastel_color))
from flask import Flask, render_template, request, redirect, url_for, session
import jinja2
from jinja2 import Template
from map import map_init
from forms import EntryForm, TextForm, CancelForm
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import pickle
import numpy as np
import pandas as pd
from datetime import date
import os
app = Flask(__name__)
app.config['SECRET_KEY'] = 'SECRET_PROJECT'
main_tabs={"model_specification": "Model 1: Prediction of Review Score", "room_review": "Model 2: Sentiment Classification", "summary": "Summary"}
index_score=pd.Index(['host_since_days', 'host_response_rate', 'host_listings_count',
'latitude', 'longitude', 'accommodates', 'bedrooms', 'price',
'minimum_nights', 'maximum_nights', 'availability_30',
'number_of_reviews', 'host_response_time_within a day',
'host_response_time_within a few hours',
'host_response_time_within an hour', 'host_is_superhost_t',
'host_identity_verified_t', 'room_type_Hotel room',
'room_type_Private room', 'room_type_Shared room',
'instant_bookable_t'],
dtype='object')
with open("ridge_model.pkl", "rb") as file:
model=pickle.load(file)
tf_model=tf.keras.models.load_model("tf_model.h5")
with open("tokenizer.pickle", "rb") as handle:
tokenizer=pickle.load(handle)
@app.route("/")
def home():
return render_template("about.html", main_tabs=main_tabs )
@app.route('/map')
def map():
map=map_init()
return map._repr_html_()
@app.route('/model_specification', methods=["GET", "POST"])
def tabs():
form=EntryForm(csrf_enabled=False)
if request.method== 'POST':
if form.validate_on_submit():
session['host_since']=form.host_since.data
session['host_since_days']=int((pd.to_datetime(date.today())-pd.to_datetime(session['host_since'])).days)
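# Hedged sketch of how the remaining form fields would typically be assembled into a
# prediction with the pickled ridge model (feature names come from `index_score`; the
# original route's exact handling of the other fields is not shown here):
#
#     row = pd.Series(0.0, index=index_score)
#     row['host_since_days'] = session['host_since_days']
#     # ... fill the remaining numeric and one-hot columns from the form ...
#     session['predicted_score'] = float(model.predict(row.values.reshape(1, -1))[0])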
"""
data_collection_13_TwetNLP_without_Pronouns.py
1 - create the neccessary lists
2 - compute proprtions, excluding pronouns
3 - save results for one side
4 - join together LEFT and RIGHT results dfs to import into R for analysis
5 - add centrality metrics to this new df
@author: lizakarmannaya
"""
import pandas as pd
import os
import glob
import csv
import collections
import matplotlib.pyplot as plt
import re
####################################################
#### 1 - create the necessary lists ####
####################################################
Noun_tags = ['N', 'S', 'L']
PN_tags = ['^', 'Z', 'M'] #proper nouns
Pronoun_tags = ['O']
special_chars = ['#', '@', '~', 'U', 'E', ','] #keepng 'G' (abbreviations) and '$' (numerals)
open_class_words = ['N', 'O', 'S', '^', 'Z', 'L', 'M', 'V', 'A', 'R', '!']
first_pers_pronouns = ['i', 'we', 'me','us','mine','ours','my','our','myself','ourselves']
second_pers_pronouns = ['you','yours','your','yourself','yourselves']
third_pers_pronouns = ['he', 'she', 'it', 'they', 'her','him','them','hers','his','its','theirs','his','their','herself','himself','itself','themselves']
other_pronouns = ['all','another','any','anybody','anyone','anything','both','each','either','everybody','everyone','everything','few','many','most','neither','nobody','none','noone','nothing','one','other','others','several','some','somebody','someone','something','such','that','these','this','those','what','whatever','which','whichever','who','whoever','whom','whomever','whose','as','that','what','whatever','thou','thee','thy','thine','ye','eachother','everybody','naught','nought','somewhat','thyself','whatsoever','whence','where','whereby','wherever']
pronouns = first_pers_pronouns + second_pers_pronouns + third_pers_pronouns + other_pronouns
####################################################
#### 2 - compute proportions, excluding pronouns ####
##re-run for LEFT/RIGHT from here ##################
####################################################
os.chdir(os.path.expanduser("~"))
os.chdir('ark-tweet-nlp-0.3.2/outputs_conll/LEFT') #CHANGE to LEFT/RIGHT
#os.listdir()
results =[] #list of dicts to save each user's reslts into
errors_triple = [] #list of dicts to save errors into
errors_tweet = []
counter = 0
for txt_file in glob.glob("*.txt"):
index = -1
counter+=1
#extract user_id from file name
user_id = txt_file.split("tweets_")[1]
user_id = user_id.split(".txt")[0]
with open(txt_file, 'r') as f:
number_of_tweets = len(f.read().split('\n\n')[:-1]) #minus the last blank line
#create dataframe of desired length
df = pd.DataFrame(index=range(number_of_tweets), columns=['N_nopronouns_proportion', 'N_nopronouns_proportion_filtered', 'N_nopronouns_open_proportion', 'PN_proportion', 'PN_proportion_filtered', 'PN_open_proportion', 'N_PN_proportion', 'N_PN_proportion_filtered', 'N_PN_open_proportion', 'Pronoun_proportion', 'Pronoun_proportion_filtered', 'Pronoun_open_proportion', 'Objects_proportion', 'Objects_proportion_filtered', 'Objects_open_proportion'])
with open(txt_file, 'r') as f:
for tweet in f.read().split('\n\n')[:-1]: #for every tweet from this user, minus the last blank line
index += 1
nouns_in_tweet = [] #create new list of noun tags in tweet
propernouns_in_tweet = []
pronouns_in_tweet = []
tags_in_tweet = []
lines = tweet.split('\n') #create iterable with every triple of tab-separasted tags
lines_split = [x.split('\t') for x in lines] #this is now a list of 3 items
for triple in lines_split: #triple = [word, tag, confidence]
try:
tags_in_tweet.append(triple[1]) #save each tag into a list of all tags_in_tweet
if triple[1] in Noun_tags: #if tagged as noun
if triple[0].lower() not in pronouns: #if word itself not in pronouns list - multiversing
nouns_in_tweet.append(triple[1])
elif triple[0].lower() in pronouns:
pronouns_in_tweet.append(triple[1]) #to add pronouns misclassified as nouns to pronouns_in_tweet
elif triple[1] in PN_tags: #if tagged as proper noun
propernouns_in_tweet.append(triple[1])
elif triple[1] in Pronoun_tags: #if tagged as pronoun
pronouns_in_tweet.append(triple[1])
except IndexError as e: #to catch empty line/triple at the end of the file
errors_triple.append({user_id: {tweet: e}})
#N_nopronouns_proportion
N_nopronouns_proportion = round((len(nouns_in_tweet)/len(tags_in_tweet)), 4)
#N_nopronouns_proportion_filtered
tags_filtered = [x for x in tags_in_tweet if x not in special_chars]
if len(tags_filtered) > 0:
N_nopronouns_proportion_filtered = round((len(nouns_in_tweet)/len(tags_filtered)), 4)
else:
N_nopronouns_proportion_filtered = 0 #to avoid dividing by 0 if tags_filtered is empty
#N_nopronouns_open_proportion
tags_open_class = [x for x in tags_in_tweet if x in open_class_words]
if len(tags_open_class) > 0:
N_nopronouns_open_proportion = round((len(nouns_in_tweet)/len(tags_open_class)), 4)
else:
N_nopronouns_open_proportion = 0 #to avoid dividing by 0 if tags_filtered is empty
#PN_proportion
PN_proportion = round((len(propernouns_in_tweet)/len(tags_in_tweet)), 4)
#PN_proportion_filtered
if len(tags_filtered) > 0:
PN_proportion_filtered = round((len(propernouns_in_tweet)/len(tags_filtered)), 4)
else:
PN_proportion_filtered = 0 #to avoid dividing by 0 if tags_filtered is empty
#PN_open_proportion
if len(tags_open_class) > 0:
PN_open_proportion = round((len(propernouns_in_tweet)/len(tags_open_class)), 4)
else:
PN_open_proportion = 0 #to avoid dividing by 0 if tags_filtered is empty
#N_PN_proportion - Nouns + Propernouns
N_PN_proportion = round((len(nouns_in_tweet + propernouns_in_tweet)/len(tags_in_tweet)), 4)
#N_PN_proportion_filtered
if len(tags_filtered) > 0:
N_PN_proportion_filtered = round((len(nouns_in_tweet + propernouns_in_tweet)/len(tags_filtered)), 4)
else:
N_PN_proportion_filtered = 0 #to avoid dividing by 0 if tags_filtered is empty
#N_PN_open_proportion
if len(tags_open_class) > 0:
N_PN_open_proportion = round((len(nouns_in_tweet + propernouns_in_tweet)/len(tags_open_class)), 4)
else:
N_PN_open_proportion = 0 #to avoid dividing by 0 if tags_filtered is empty
#Pronoun_proportion
Pronoun_proportion = round((len(pronouns_in_tweet)/len(tags_in_tweet)), 4)
#Pronoun_proportion_filtered
if len(tags_filtered) > 0:
Pronoun_proportion_filtered = round((len(pronouns_in_tweet)/len(tags_filtered)), 4)
else:
Pronoun_proportion_filtered = 0 #to avoid dividing by 0 if tags_filtered is empty
#Pronoun_open_proportion
if len(tags_open_class) > 0:
Pronoun_open_proportion = round((len(pronouns_in_tweet)/len(tags_open_class)), 4)
else:
Pronoun_open_proportion = 0 #to avoid dividing by 0 if tags_filtered is empty
#Objects_proportion = all together: Nouns + Propenouns + Pronouns
Objects_proportion = round((len(nouns_in_tweet + propernouns_in_tweet + pronouns_in_tweet)/len(tags_in_tweet)), 4)
#Objects_proportion_filtered
if len(tags_filtered) > 0:
Objects_proportion_filtered = round((len(nouns_in_tweet + propernouns_in_tweet + pronouns_in_tweet)/len(tags_filtered)), 4)
else:
Objects_proportion_filtered = 0 #to avoid dividing by 0 if tags_filtered is empty
#Objects_open_proportion
if len(tags_open_class) > 0:
Objects_open_proportion = round((len(nouns_in_tweet + propernouns_in_tweet + pronouns_in_tweet)/len(tags_open_class)), 4)
else:
Objects_open_proportion = 0 #to avoid dividing by 0 if tags_filtered is empty
#add these values to a df - once for every tweeet
try:
#df['N_count'].values[index]=len(nouns_in_tweet)
df['N_nopronouns_proportion'].values[index]=N_nopronouns_proportion
df['N_nopronouns_proportion_filtered'].values[index]=N_nopronouns_proportion_filtered
df['N_nopronouns_open_proportion'].values[index]=N_nopronouns_open_proportion
df['PN_proportion'].values[index]=PN_proportion
df['PN_proportion_filtered'].values[index]=PN_proportion_filtered
df['PN_open_proportion'].values[index]=PN_open_proportion
df['N_PN_proportion'].values[index]=N_PN_proportion
df['N_PN_proportion_filtered'].values[index]=N_PN_proportion_filtered
df['N_PN_open_proportion'].values[index]=N_PN_open_proportion
df['Pronoun_proportion'].values[index]=Pronoun_proportion
df['Pronoun_proportion_filtered'].values[index]=Pronoun_proportion_filtered
df['Pronoun_open_proportion'].values[index]=Pronoun_open_proportion
df['Objects_proportion'].values[index]=Objects_proportion
df['Objects_proportion_filtered'].values[index]=Objects_proportion_filtered
df['Objects_open_proportion'].values[index]=Objects_open_proportion
except IndexError as e: #to catch empty line/triple at the end of the file
errors_tweet.append({user_id: {tweet: e}})
#create dictionary of mean proportions for every user
d={'user_id_str':user_id, 'side':'LEFT', 'mean_N_nopronouns_proportion':df['N_nopronouns_proportion'].mean(), 'mean_N_nopronouns_proportion_filtered':df['N_nopronouns_proportion_filtered'].mean(), 'mean_N_nopronouns_open_proportion':df['N_nopronouns_open_proportion'].mean(),
'mean_PN_proportion':df['PN_proportion'].mean(), 'mean_PN_proportion_filtered':df['PN_proportion_filtered'].mean(), 'mean_PN_open_proportion':df['PN_open_proportion'].mean(),
'mean_N_PN_proportion': df['N_PN_proportion'].mean(), 'mean_N_PN_proportion_filtered':df['N_PN_proportion_filtered'].mean(), 'mean_N_PN_open_proportion':df['N_PN_open_proportion'].mean(),
'mean_Pronoun_proportion':df['Pronoun_proportion'].mean(), 'mean_Pronoun_proportion_filtered':df['Pronoun_proportion_filtered'].mean(), 'mean_Pronoun_open_proportion':df['Pronoun_open_proportion'].mean(),
'mean_Objects_proportion':df['Objects_proportion'].mean(), 'mean_Objects_proportion_filtered':df['Objects_proportion_filtered'].mean(), 'mean_Objects_open_proportion':df['Objects_open_proportion'].mean()
}
#append this dict to list of results for all users
results.append(d)
print(f'finished file {counter} out of 17789 LEFT/16496 RIGHT')
len(results) #17789 for LEFT, 16496 for RIGHT
len(errors_triple) #6 for LEFT, 5 for RIGHT
len(errors_tweet) #0 for LEFT, 0 for RIGHT
########################################
#### 3 - save results for one side #####
########################################
results = pd.DataFrame(results)
results.tail()
results.to_csv('RESULTS_LEFT_multiverse_4.csv')
#saved in 'ark-tweet-nlp-0.3.2/outputs_conll/LEFT/RESULTS_LEFT_multiverse_4.csv'
#saved in 'ark-tweet-nlp-0.3.2/outputs_conll/RIGHT/RESULTS_RIGHT_multiverse_4.csv'
errors_triple = | pd.DataFrame(errors_triple) | pandas.DataFrame |
import pytz
import pytest
import dateutil
import warnings
import numpy as np
from datetime import timedelta
from itertools import product
import pandas as pd
import pandas._libs.tslib as tslib
import pandas.util.testing as tm
from pandas.errors import PerformanceWarning
from pandas.core.indexes.datetimes import cdate_range
from pandas import (DatetimeIndex, PeriodIndex, Series, Timestamp, Timedelta,
date_range, TimedeltaIndex, _np_version_under1p10, Index,
datetime, Float64Index, offsets, bdate_range)
from pandas.tseries.offsets import BMonthEnd, CDay, BDay
from pandas.tests.test_base import Ops
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
class TestDatetimeIndexOps(Ops):
tz = [None, 'UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/Asia/Singapore',
'dateutil/US/Pacific']
def setup_method(self, method):
super(TestDatetimeIndexOps, self).setup_method(method)
mask = lambda x: (isinstance(x, DatetimeIndex) or
isinstance(x, PeriodIndex))
self.is_valid_objs = [o for o in self.objs if mask(o)]
self.not_valid_objs = [o for o in self.objs if not mask(o)]
def test_ops_properties(self):
f = lambda x: isinstance(x, DatetimeIndex)
self.check_ops_properties(DatetimeIndex._field_ops, f)
self.check_ops_properties(DatetimeIndex._object_ops, f)
self.check_ops_properties(DatetimeIndex._bool_ops, f)
def test_ops_properties_basic(self):
# sanity check that the behavior didn't change
# GH7206
for op in ['year', 'day', 'second', 'weekday']:
pytest.raises(TypeError, lambda x: getattr(self.dt_series, op))
# attribute access should still work!
s = Series(dict(year=2000, month=1, day=10))
assert s.year == 2000
assert s.month == 1
assert s.day == 10
pytest.raises(AttributeError, lambda: s.weekday)
def test_asobject_tolist(self):
idx = pd.date_range(start='2013-01-01', periods=4, freq='M',
name='idx')
expected_list = [Timestamp('2013-01-31'),
Timestamp('2013-02-28'),
Timestamp('2013-03-31'),
Timestamp('2013-04-30')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
assert result.dtype == object
tm.assert_index_equal(result, expected)
assert result.name == expected.name
assert idx.tolist() == expected_list
idx = pd.date_range(start='2013-01-01', periods=4, freq='M',
name='idx', tz='Asia/Tokyo')
expected_list = [Timestamp('2013-01-31', tz='Asia/Tokyo'),
Timestamp('2013-02-28', tz='Asia/Tokyo'),
Timestamp('2013-03-31', tz='Asia/Tokyo'),
Timestamp('2013-04-30', tz='Asia/Tokyo')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
assert result.dtype == object
tm.assert_index_equal(result, expected)
assert result.name == expected.name
assert idx.tolist() == expected_list
idx = DatetimeIndex([datetime(2013, 1, 1), datetime(2013, 1, 2),
pd.NaT, datetime(2013, 1, 4)], name='idx')
expected_list = [Timestamp('2013-01-01'),
Timestamp('2013-01-02'), pd.NaT,
Timestamp('2013-01-04')]
expected = pd.Index(expected_list, dtype=object, name='idx')
result = idx.asobject
assert isinstance(result, Index)
assert result.dtype == object
tm.assert_index_equal(result, expected)
assert result.name == expected.name
assert idx.tolist() == expected_list
def test_minmax(self):
for tz in self.tz:
# monotonic
idx1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02',
'2011-01-03'], tz=tz)
assert idx1.is_monotonic
# non-monotonic
idx2 = pd.DatetimeIndex(['2011-01-01', pd.NaT, '2011-01-03',
'2011-01-02', pd.NaT], tz=tz)
assert not idx2.is_monotonic
for idx in [idx1, idx2]:
assert idx.min() == Timestamp('2011-01-01', tz=tz)
assert idx.max() == Timestamp('2011-01-03', tz=tz)
assert idx.argmin() == 0
assert idx.argmax() == 2
for op in ['min', 'max']:
# Return NaT
obj = DatetimeIndex([])
assert pd.isna(getattr(obj, op)())
obj = DatetimeIndex([pd.NaT])
assert pd.isna(getattr(obj, op)())
obj = DatetimeIndex([pd.NaT, pd.NaT, pd.NaT])
assert pd.isna(getattr(obj, op)())
def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
# -*- coding: utf-8 -*-
"""Supports F10.7 index values. Downloads data from LASP and the SWPC.
Parameters
----------
platform : string
'sw'
name : string
'f107'
tag : string
'' Standard F10.7 data (single day at a time)
'all' All F10.7
'forecast' Grab forecast data from SWPC (next 3 days)
'45day' 45-Day Forecast data from the Air Force
Note
----
The forecast data is stored by generation date, where each file contains the
forecast for the next three days. Forecast data downloads are only supported
for the current day. When loading forecast data, the date specified with the
load command is the date the forecast was generated. The data loaded will span
three days. To always ensure you are loading the most recent data, load
the data with tomorrow's date.
f107 = pysat.Instrument('sw', 'f107', tag='forecast')
f107.download()
f107.load(date=f107.tomorrow())
The forecast data should not be used with the data padding option available
from pysat.Instrument objects. The 'all' tag shouldn't be used with data
padding either, since there are no other data available to pad with.
Warnings
--------
The 'forecast' F10.7 data loads three days at a time. The data padding and
multi_file_day features available from the pysat.Instrument object
are not appropriate for 'forecast' data.
"""
import os
import functools
import pandas as pds
import numpy as np
import pysat
platform = 'sw'
name = 'f107'
tags = {'':'Daily value of F10.7',
'all':'All F10.7 values',
'forecast':'SWPC Forecast F107 data (next 3 days)',
'45day':'Air Force 45-day Forecast'}
# dict keyed by sat_id that lists supported tags for each sat_id
sat_ids = {'':['', 'all', 'forecast', '45day']}
# dict keyed by sat_id that lists supported tags and a good day of test data
# generate todays date to support loading forecast data
now = pysat.datetime.now()
today = pysat.datetime(now.year, now.month, now.day)
tomorrow = today + pds.DateOffset(days=1)
# set test dates
test_dates = {'':{'':pysat.datetime(2009,1,1),
'all':pysat.datetime(2009,1,1),
'forecast':tomorrow,
'45day':tomorrow}}
def load(fnames, tag=None, sat_id=None):
"""Load F10.7 index files
Parameters
------------
fnames : (pandas.Series)
Series of filenames
tag : (str or NoneType)
tag or None (default=None)
sat_id : (str or NoneType)
satellite id or None (default=None)
Returns
---------
data : (pandas.DataFrame)
Object containing satellite data
meta : (pysat.Meta)
Object containing metadata such as column names and units
Notes
-----
Called by pysat. Not intended for direct use by user.
"""
if tag == '':
# f107 data stored monthly, need to return data daily
# the daily date is attached to filename
# parse off the last date, load month of data, downselect to desired day
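# Illustrative example (hypothetical filename): if fnames[0] were
# 'f107_monthly_2009-01.txt_2009-01-15', then fnames[0][-10:] is the requested
# day '2009-01-15' and fnames[0][0:-11] is the monthly file to read.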
date = pysat.datetime.strptime(fnames[0][-10:], '%Y-%m-%d')
data = pds.read_csv(fnames[0][0:-11], index_col=0, parse_dates=True)
idx, = np.where((data.index >= date) &
(data.index < date + pds.DateOffset(days=1)))
# -*- coding: utf-8 -*-
from numpy import NaN as npNaN
from pandas import DataFrame, Series
from .ema import ema
from .hma import hma
from .sma import sma
from pandas_ta.utils import get_offset, verify_series
def hilo(high, low, close, high_length=None, low_length=None, mamode=None, offset=None, **kwargs):
"""Indicator: Gann HiLo (HiLo)"""
# Validate Arguments
high = verify_series(high)
low = verify_series(low)
close = verify_series(close)
high_length = int(high_length) if high_length and high_length > 0 else 13
low_length = int(low_length) if low_length and low_length > 0 else 21
mamode = mamode.lower() if mamode else "sma"
offset = get_offset(offset)
# Calculate Result
m = close.size
hilo = Series(npNaN, index=close.index)
import pandas as pd
from gurobipy import *
import gurobipy as grb
import matplotlib.pyplot as plt
import numpy as np
from Curve import *
from CurrModelPara import *
# class Folder_Info():
# def __init__(self, Input_folder_parent, Output_folder, curr_model):
# self.curr_model = curr_model
# self.Input_folder_parent = Input_folder_parent
# self.Output_folder = self.Output_folder
# self.date = self.curr_model.date
# self.Input_folder = self.Input_folder_parent + '/' +self.date
# self.filename = None
class System():
def __init__(self, curr_model):
self.curr_model = curr_model
self.Input_folder_parent = None
self.filename = None
self.Input_folder = None
self.Output_folder = None
self.parameter = {}
# shortcut: pick the input folder according to the current model stage
if curr_model.current_stage == 'training_50':
self.Input_all_total = './Input_Curve'
if curr_model.current_stage == 'training_500':
self.Input_all_total = './Input_bootstrap'
if curr_model.current_stage == 'test':
self.Input_all_total = './Input_test'
def input_parameter(self, paranameter_name, in_model_name):
Data = pd.read_csv(self.filename)
df = pd.DataFrame(Data)
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 25 23:17:27 2020
@author: adwait
"""
import logging
import os
import time
import numpy as np
import pandas as pd
from PyQt5.QtWidgets import QDialog, QGridLayout, QPushButton, QLabel, \
QTextEdit, QLineEdit, QComboBox, QSizePolicy, QFileDialog
class FileListDialog(QDialog):
def __init__(self):
super().__init__()
self.resize(350, 100)
self.setWindowTitle("Generate file list")
self.layout = QGridLayout()
self.setLayout(self.layout)
self.home()
def home(self):
folderLabel = QLabel('Folder path', self)
formatLabel = QLabel('Extension', self)
self.sortLabel = QLabel('Sort by', self)
self.sortComboBox = QComboBox(self)
self.sortComboBox.addItems(['Date modified', 'Name'])
self.createBtn = QPushButton("Create list")
self.createBtn.clicked.connect(self.create_list)
self.closeBtn = QPushButton("Close")
self.closeBtn.clicked.connect(self.close_dialog)
self.layout.addWidget(folderLabel, 0, 1, 1, 1)
self.layout.addWidget(formatLabel, 0, 2, 1, 1)
self.layout.addWidget(self.sortLabel, 1, 1, 1, 1)
self.layout.addWidget(self.sortComboBox, 1, 0, 1, 1)
self.layout.addWidget(self.createBtn, 1, 0, 1, 1)
self.layout.addWidget(self.closeBtn, 1, 0, 1, 1)
#filename dictionary. keys: file_type::file number, vals: [folder_path, format_string]
self.filename_dict = {'Video': {}, 'Data': {}}
self.num_dict = {'Video': 0, 'Data': 0} #number of defined files (or rows)
self.add_widgets('Video')
self.add_widgets('Data')
#add row of new video/data input widgets
def add_widgets(self, file_type):
self.num_dict[file_type] += 1
self.filename_dict[file_type][self.num_dict[file_type]] = ['', '']
format_example = '.avi' if file_type == 'Video' else '.txt'
label = f'{file_type} {self.num_dict[file_type]}'
fileLabel = QLabel(label)
folderPathText = QTextEdit()
# folderPathText.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Ignored)
formatLine = QLineEdit()
formatLine.setSizePolicy(QSizePolicy.Ignored, QSizePolicy.Preferred)
formatLine.setToolTip(f'Leave blank to consider all files.\nEx: {format_example}')
browseBtn = QPushButton("Browse..")
browseBtn.clicked.connect(lambda: self.open_directory(folderPathText, label))
folderPathText.textChanged.connect(lambda: self.update_filename_dict(self.num_dict[file_type],
file_type,
folderPathText.toPlainText(),
formatLine.text()))
formatLine.textChanged.connect(lambda: self.update_filename_dict(self.num_dict[file_type],
file_type,
folderPathText.toPlainText(),
formatLine.text()))
plusBtn = QPushButton("+")
minusBtn = QPushButton("-")
if self.num_dict[file_type] == 1:
minusBtn.setEnabled(False)
widget_list = [fileLabel, folderPathText, formatLine, browseBtn,
plusBtn, minusBtn]
plusBtn.clicked.connect(lambda: self.add_or_remove('+', widget_list,
self.num_dict[file_type],
file_type))
minusBtn.clicked.connect(lambda: self.add_or_remove('-', widget_list,
self.num_dict[file_type],
file_type))
row_num = sum(self.num_dict.values())
self.layout.addWidget(fileLabel, row_num, 0, 1, 1)
self.layout.addWidget(folderPathText, row_num, 1, 1, 1)
self.layout.addWidget(formatLine, row_num, 2, 1, 1)
self.layout.addWidget(browseBtn, row_num, 3, 1, 1)
self.layout.addWidget(plusBtn, row_num, 4, 1, 1)
self.layout.addWidget(minusBtn, row_num, 5, 1, 1)
self.layout.addWidget(self.sortLabel, row_num + 1, 0, 1, 1)
self.layout.addWidget(self.sortComboBox, row_num + 1, 1, 1, 1)
self.layout.addWidget(self.createBtn, row_num + 1, 3, 1, 1)
self.layout.addWidget(self.closeBtn, row_num + 1, 4, 1, 2)
def add_or_remove(self, action, wid_list, rownum, file_type):
if action == '+':
self.add_widgets(file_type)
elif action == '-':
for wid in wid_list:
self.layout.removeWidget(wid)
wid.deleteLater()
self.num_dict[file_type] -= 1
del self.filename_dict[file_type][rownum]
# if self.num_dict[file_type] == 0:
# self.layout.removeWidget(self.varListBtn)
# self.layout.addWidget(self.makeVarAdd, 1, 1, 1, 1)
# self.layout.addWidget(self.makeVarOk, 1, 0, 1, 1)
self.resize(350, 100)
def open_directory(self, wid, label):
folderpath = QFileDialog.getExistingDirectory(caption = f'Select {label} folder')
wid.setText(folderpath + '/')
def update_filename_dict(self, rownum, file_type, folder_path, format_string):
# if var_name != '' and formula != '':
self.filename_dict[file_type][rownum] = [folder_path, format_string]
logging.debug(f'{self.filename_dict}')
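# Example of the resulting structure (hypothetical paths):
#   {'Video': {1: ['C:/videos/', '.avi']}, 'Data': {1: ['C:/data/', '.txt']}}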
#create file list
def create_list(self):
df = pd.DataFrame()
# coding: utf-8
# Import libraries
import pandas as pd
from pandas import ExcelWriter
from openpyxl import load_workbook
import pickle
import numpy as np
def summarize_reg(gene_set, n_data_matrix):
"""
The SUMMARIZE_REG operation summarizes all the data analysis results, collecting them in convenient tables that are exported locally as Excel files.
:param gene_set: the set of genes of interest to summarize
:param n_data_matrix: number identifying the data matrix to summarize (only 2,3 and 5 values are permitted)
Example::
import genereg as gr
gr.SummaryResults.summarize_reg(gene_set='DNA_REPAIR', n_data_matrix=2)
gr.SummaryResults.summarize_reg(gene_set='DNA_REPAIR', n_data_matrix=3)
gr.SummaryResults.summarize_reg(gene_set='DNA_REPAIR', n_data_matrix=5)
"""
# Check input parameters
if n_data_matrix not in [2, 3, 5]:
raise ValueError('Data Matrix ERROR! Possible values: {2,3,5}')
# Define the model to summarize
model = str(n_data_matrix)
# Define the previous model to check
if model == '3':
previous_model = str(int(model)-1)
elif model == '5':
previous_model = str(int(model)-2)
# Import the dictionary of genes of interest with their candidate regulatory genes
dict_RegulGenes = pickle.load(open('./2_Regulatory_Genes/dict_RegulGenes.p', 'rb'))
# Import the list of genes of interest and extract in a list the Gene Symbols of all the genes belonging to the current gene set
EntrezConversion_df = pd.read_excel('./Genes_of_Interest.xlsx',sheetname='Sheet1',header=0,converters={'GENE_SYMBOL':str,'ENTREZ_GENE_ID':str,'GENE_SET':str})
SYMs_current_pathway = []
for index, row in EntrezConversion_df.iterrows():
sym = row['GENE_SYMBOL']
path = row['GENE_SET']
if path == gene_set:
SYMs_current_pathway.append(sym)
if (model == '3') or (model == '5'):
# Create a list containing the Gene Symbols of the regulatory genes of the genes in the current gene set
current_regulatory_genes = []
for key, value in dict_RegulGenes.items():
if key in SYMs_current_pathway:
for gene in value:
if gene not in current_regulatory_genes:
current_regulatory_genes.append(gene)
if (model == '5'):
# Create a list containing the Gene Symbols of genes in the other gene sets
SYMs_other_pathways = []
for index, row in EntrezConversion_df.iterrows():
sym = row['GENE_SYMBOL']
path = row['GENE_SET']
if not (path == gene_set):
SYMs_other_pathways.append(sym)
# Create a list containing the Gene Symbols of the regulatory genes of the genes in the other gene sets
regulatory_genes_other = []
for key, value in dict_RegulGenes.items():
if key not in SYMs_current_pathway:
for gene in value:
if gene not in regulatory_genes_other:
regulatory_genes_other.append(gene)
# Create a dataframe to store final summary results of feature selection and linear regression for each gene of interest
if model == '2':
lr_summary_df = pd.DataFrame(index=SYMs_current_pathway, columns=['Inital N° Features','Discarded Features','N° Features Selected','R2','Adj.R2'])
else:
lr_summary_df = pd.DataFrame(index=SYMs_current_pathway, columns=['Inital N° Features','N° New Features w.r.t. Previous Model','Discarded Features','Features Available for Selection','N° Features Selected','R2','Adj.R2'])
for current_gene in SYMs_current_pathway:
# Import the current and, if present, the previous model of the current gene
gene_ID = EntrezConversion_df.loc[EntrezConversion_df['GENE_SYMBOL'] == current_gene, 'ENTREZ_GENE_ID'].iloc[0]
model_gene_df = pd.read_excel('./4_Data_Matrix_Construction/Model'+model+'/Gene_'+gene_ID+'_['+current_gene+']'+'_('+gene_set+')-Model_v'+model+'.xlsx',sheetname='Sheet1',header=0)
if not (model == '2'):
previous_model_df = pd.read_excel('./4_Data_Matrix_Construction/Model'+previous_model+'/Gene_'+gene_ID+'_['+current_gene+']'+'_('+gene_set+')-Model_v'+previous_model+'.xlsx',sheetname='Sheet1',header=0)
# Extract the list of new features, added to the current model, w.r.t. the previous one
if not (model == '2'):
current_model_col_names = set(list(model_gene_df.columns.values))
previous_model_col_names = set(list(previous_model_df.columns.values))
new_features = list(current_model_col_names - previous_model_col_names)
lr_summary_df.set_value(current_gene,'N° New Features w.r.t. Previous Model',len(new_features))
# Import the feature selection and linear regression summary tables
feature_sel_df = pd.read_excel('./5_Data_Analysis/'+gene_set+'/FeatureSelection/M'+model+'/Feature_Selection_SUMMARY.xlsx',sheetname='Sheet1',header=0)
lin_reg_df = pd.read_excel('./5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/Linear_Regression_R2_SCORES.xlsx',sheetname='Sheet1',header=0)
# Extract and store the results in the summary dataframe
n_features = feature_sel_df.get_value(current_gene,'TOT Inital N° Features')
n_feat_discarded = feature_sel_df.get_value(current_gene,'Discarded Features')
if not (model == '2'):
n_features_available = feature_sel_df.get_value(current_gene,'Features Available for Selection')
n_feat_selected = feature_sel_df.get_value(current_gene,'N° Features Selected')
lin_reg_r2_adj = lin_reg_df.get_value(current_gene,'Adj.R2')
lin_reg_r2 = lin_reg_df.get_value(current_gene,'R2')
lr_summary_df.set_value(current_gene,'Inital N° Features',n_features)
lr_summary_df.set_value(current_gene,'Discarded Features',n_feat_discarded)
if not (model == '2'):
lr_summary_df.set_value(current_gene,'Features Available for Selection',n_features_available)
lr_summary_df.set_value(current_gene,'N° Features Selected',n_feat_selected)
lr_summary_df.set_value(current_gene,'Adj.R2',lin_reg_r2_adj)
lr_summary_df.set_value(current_gene,'R2',lin_reg_r2)
# Export the summary dataframe in an Excel file
lr_summary_df = lr_summary_df.sort_values(by=['Adj.R2'], ascending=[False])
filename = './5_Data_Analysis/'+gene_set+'/Feature_Selection_and_Linear_Regression.xlsx'
writer = ExcelWriter(filename,engine='openpyxl')
try:
writer.book = load_workbook(filename)
writer.sheets = dict((ws.title, ws) for ws in writer.book.worksheets)
except IOError:
# if the file does not exist yet, I will create it
pass
lr_summary_df.to_excel(writer,'M'+model)
writer.save()
# Extract the relevant features for each gene of the current gene set, store them in a summary table, and define a dataframe summarizing the features selected for each model gene
features_summary_df = pd.DataFrame(index=SYMs_current_pathway)
for current_gene in SYMs_current_pathway:
gene_ID = EntrezConversion_df.loc[EntrezConversion_df['GENE_SYMBOL'] == current_gene, 'ENTREZ_GENE_ID'].iloc[0]
# Import the regression coefficients
coeff_df = pd.read_excel('./5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/Coefficients/Coefficients_(M'+model+')-Gene_'+gene_ID+'_['+current_gene+'].xlsx',sheetname='Sheet1',header=0)
# Import the confidence intervals
ci_df = pd.read_excel('./5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/ConfidenceIntervals/Confidence_Intervals_(M'+model+')-Gene_'+gene_ID+'_['+current_gene+'].xlsx',sheetname='Sheet1',header=0)
# Import the correlation matrix
corr_df = pd.read_excel('./5_Data_Analysis/'+gene_set+'/LinearRegression/M'+model+'/CorrelationMatrix/Correlation_Matrix_(M'+model+')-Gene_'+gene_ID+'_['+current_gene+'].xlsx',sheetname='Sheet1',header=0)
# Select the relevant features on the basis of the confidence intervals (i.e. if the confidence interval does not contain 0, then the feature is significant for the model)
relevant_features = []
for index, row in ci_df.iterrows():
s = row['Significant Feature?']
if s == 'YES':
relevant_features.append(index)
# Create a dataframe to store the results and fill it with requested information
relevant_features_df = pd.DataFrame(index=relevant_features, columns=['Regression Coefficient','Feature Description','Correlation with EXPRESSION ('+current_gene+')'])
for index, row in coeff_df.iterrows():
gene = row['feature']
if gene in relevant_features:
coeff = row['coefficient']
relevant_features_df.set_value(gene,'Regression Coefficient',coeff)
for index, row in corr_df.iterrows():
if index in relevant_features:
corr_with_target = row['EXPRESSION ('+current_gene+')']
relevant_features_df.set_value(index,'Correlation with EXPRESSION ('+current_gene+')',corr_with_target)
# Add the features descriptions
if model == '2':
for f in relevant_features:
if f in SYMs_current_pathway:
descr = 'Gene of the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif 'METHYLATION' in f:
descr = 'Methylation of the model gene ['+current_gene+'] in the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif f in dict_RegulGenes[current_gene]:
descr = 'Candidate regulatory gene of the model gene ['+current_gene+'] of the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif model == '3':
for f in relevant_features:
if f in SYMs_current_pathway:
descr = 'Gene of the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif 'METHYLATION' in f:
descr = 'Methylation of the model gene ['+current_gene+'] in the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif f in dict_RegulGenes[current_gene]:
descr = 'Candidate regulatory gene of the model gene ['+current_gene+'] of the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif not(f in dict_RegulGenes[current_gene]) and (f in current_regulatory_genes):
descr = 'Candidate regulatory gene of the genes in the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif model == '5':
for f in relevant_features:
if f in SYMs_current_pathway:
descr = 'Gene of the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif 'METHYLATION' in f:
descr = 'Methylation of the model gene ['+current_gene+'] in the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif f in dict_RegulGenes[current_gene]:
descr = 'Candidate regulatory gene of the model gene ['+current_gene+'] of the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif not(f in dict_RegulGenes[current_gene]) and (f in current_regulatory_genes):
descr = 'Candidate regulatory gene of the genes in the '+gene_set+' set'
relevant_features_df.set_value(f,'Feature Description',descr)
elif f in SYMs_other_pathways:
df_temp = EntrezConversion_df.loc[EntrezConversion_df['GENE_SYMBOL'] == f].copy()
f_pathways = (df_temp.GENE_SET.unique()).tolist()
descr = 'Gene of the gene sets: '+(', '.join(f_pathways))
relevant_features_df.set_value(f,'Feature Description',descr)
elif f in regulatory_genes_other:
regulated_genes_other = []
for key, value in dict_RegulGenes.items():
if key in SYMs_other_pathways:
if f in value:
regulated_genes_other.append(key)
df_temp = EntrezConversion_df.loc[EntrezConversion_df['GENE_SYMBOL'].isin(regulated_genes_other)].copy()
f_pathways = (df_temp.GENE_SET.unique()).tolist()
descr = 'Candidate regulatory gene of the gene sets: '+(', '.join(f_pathways))
relevant_features_df.set_value(f,'Feature Description',descr)
# Export the dataframe in an Excel file
relevant_features_df = relevant_features_df.sort_values(by=['Regression Coefficient'], ascending=[False])
filename = './5_Data_Analysis/'+gene_set+'/Relevant_Features-Gene_'+gene_ID+'_['+current_gene+'].xlsx'
writer = ExcelWriter(filename,engine='openpyxl')
try:
writer.book = load_workbook(filename)
writer.sheets = dict((ws.title, ws) for ws in writer.book.worksheets)
except IOError:
# if the file does not exist yet, I will create it
pass
relevant_features_df.to_excel(writer,'M'+model)
writer.save()
relevance_order = 0
for index, row in relevant_features_df.iterrows():
relevance_order = relevance_order + 1
str_order = str(relevance_order)
features_summary_df.set_value(current_gene, index, str_order)
# Export the summary dataframe in an Excel file
filename = './5_Data_Analysis/'+gene_set+'/Order_of_Features_Selected.xlsx'
writer = ExcelWriter(filename,engine='openpyxl')
try:
writer.book = load_workbook(filename)
writer.sheets = dict((ws.title, ws) for ws in writer.book.worksheets)
except IOError:
# if the file does not exist yet, I will create it
pass
features_summary_df.to_excel(writer,'M'+model)
writer.save()
def summarize_r2(gene_set):
"""
The SUMMARIZE_R2 operation summarizes R2 and Adjusted R2 scores for each target gene in each regression model, storing them locally in a single Excel file.
:param gene_set: the set of genes of interest to summarize
Example::
import genereg as gr
gr.SummaryResults.summarize_r2(gene_set='DNA_REPAIR')
"""
# Define the models to summarize
models = ['2','3','5']
# Import the list of genes of interest and extract in a list the Gene Symbols of all the genes belonging to the current gene set
EntrezConversion_df = pd.read_excel('./Genes_of_Interest.xlsx',sheetname='Sheet1',header=0,converters={'GENE_SYMBOL':str,'ENTREZ_GENE_ID':str,'GENE_SET':str})
SYMs_current_pathway = []
for index, row in EntrezConversion_df.iterrows():
sym = row['GENE_SYMBOL']
path = row['GENE_SET']
if path == gene_set:
SYMs_current_pathway.append(sym)
# Create a dataframe to store the final summary about features selected and R2 scores, for each gene of interest
summary_df = pd.DataFrame(index=SYMs_current_pathway, columns=['Selected Features (M2)','R2 (M2)','Adj.R2 (M2)','Selected Features (M3)','R2 (M3)','Adj.R2 (M3)','Selected Features (M5)','R2 (M5)','Adj.R2 (M5)'])
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# *****************************************************************************/
# * Authors: <NAME>
# *****************************************************************************/
"""transformCSV.py
This module contains the basic functions for creating the content of a configuration file from CSV.
Args:
--inFile: Path to the input CSV file containing the time series data values
--outFile: Path to the output INI configuration file for the time series data values
--debug: Boolean flag to activate verbose printing for debug use
Example:
Default usage:
$ python transformCSV.py
Specific usage:
$ python transformCSV.py
--inFile C:\raad\src\software\time-series.csv
--outFile C:\raad\src\software\time-series.ini
--debug True
"""
import sys
import datetime
import optparse
import traceback
import pandas
import numpy
import os
import pprint
import csv
if sys.version_info.major > 2:
import configparser as cF
else:
import ConfigParser as cF
class TransformMetaData(object):
debug = False
fileName = None
fileLocation = None
columnsList = None
analysisFrameFormat = None
uniqueLists = None
analysisFrame = None
def __init__(self, inputFileName=None, debug=False, transform=False, sectionName=None, outFolder=None,
outFile='time-series-madness.ini'):
if isinstance(debug, bool):
self.debug = debug
if inputFileName is None:
return
elif os.path.exists(os.path.abspath(inputFileName)):
self.fileName = inputFileName
self.fileLocation = os.path.exists(os.path.abspath(inputFileName))
(analysisFrame, analysisFrameFormat, uniqueLists, columnNamesList) = self.CSVtoFrame(
inputFileName=self.fileName)
self.analysisFrame = analysisFrame
self.columnsList = columnNamesList
self.analysisFrameFormat = analysisFrameFormat
self.uniqueLists = uniqueLists
if transform:
passWrite = self.frameToINI(analysisFrame=analysisFrame, sectionName=sectionName, outFolder=outFolder,
outFile=outFile)
print(f"Pass Status is : {passWrite}")
return
def getColumnList(self):
return self.columnsList
def getAnalysisFrameFormat(self):
return self.analysisFrameFormat
def getuniqueLists(self):
return self.uniqueLists
def getAnalysisFrame(self):
return self.analysisFrame
@staticmethod
def getDateParser(formatString="%Y-%m-%d %H:%M:%S.%f"):
return (lambda x: pandas.datetime.strptime(x, formatString)) # 2020-06-09 19:14:00.000
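# Usage sketch (hypothetical file and column names): the returned callable can be
# handed to pandas.read_csv, e.g.
#   pandas.read_csv('data.csv', parse_dates=['timestamp'],
#                   date_parser=TransformMetaData.getDateParser())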
def getHeaderFromFile(self, headerFilePath=None, method=1):
if headerFilePath is None:
return (None, None)
if method == 1:
fieldnames = pandas.read_csv(headerFilePath, index_col=0, nrows=0).columns.tolist()
elif method == 2:
with open(headerFilePath, 'r') as infile:
reader = csv.DictReader(infile)
fieldnames = list(reader.fieldnames)
elif method == 3:
fieldnames = list(pandas.read_csv(headerFilePath, nrows=1).columns)
else:
fieldnames = None
fieldDict = {}
for indexName, valueName in enumerate(fieldnames):
fieldDict[valueName] = pandas.StringDtype()
return (fieldnames, fieldDict)
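# Note on the three methods above: method=1 reads only the header row via pandas,
# method=2 uses csv.DictReader, and method=3 reads the header plus the first data
# row and takes the column labels; each returns the field names together with a
# {name: pandas.StringDtype()} mapping.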
def CSVtoFrame(self, inputFileName=None):
if inputFileName is None:
return (None, None)
# Load File
print("Processing File: {0}...\n".format(inputFileName))
self.fileLocation = inputFileName
# Create data frame
analysisFrame = pandas.DataFrame()
analysisFrameFormat = self._getDataFormat()
inputDataFrame = pandas.read_csv(filepath_or_buffer=inputFileName,
sep='\t',
names=self._getDataFormat(),
# dtype=self._getDataFormat()
# header=None
# float_precision='round_trip'
# engine='c',
# parse_dates=['date_column'],
# date_parser=True,
# na_values=['NULL']
)
if self.debug: # Preview data.
print(inputDataFrame.head(5))
# analysisFrame.astype(dtype=analysisFrameFormat)
# Cleanup data
analysisFrame = inputDataFrame.copy(deep=True)
analysisFrame.apply(pandas.to_numeric, errors='coerce') # Fill in bad data with Not-a-Number (NaN)
# Create lists of unique strings
uniqueLists = []
columnNamesList = []
for columnName in analysisFrame.columns:
if self.debug:
print('Column Name : ', columnName)
print('Column Contents : ', analysisFrame[columnName].values)
if isinstance(analysisFrame[columnName].dtypes, str):
columnUniqueList = analysisFrame[columnName].unique().tolist()
else:
columnUniqueList = None
columnNamesList.append(columnName)
uniqueLists.append([columnName, columnUniqueList])
if self.debug: # Preview data.
print(analysisFrame.head(5))
return (analysisFrame, analysisFrameFormat, uniqueLists, columnNamesList)
def frameToINI(self, analysisFrame=None, sectionName='Unknown', outFolder=None, outFile='nil.ini'):
if analysisFrame is None:
return False
try:
if outFolder is None:
outFolder = os.getcwd()
configFilePath = os.path.join(outFolder, outFile)
configINI = cF.ConfigParser()
configINI.add_section(sectionName)
for (columnName, columnData) in analysisFrame.items():  # iterate (column name, column Series) pairs
if self.debug:
print('Column Name : ', columnName)
print('Column Contents : ', columnData.values)
print("Column Contents Length:", len(columnData.values))
print("Column Contents Type", type(columnData.values))
writeList = "["
for colIndex, colValue in enumerate(columnData):
writeList = f"{writeList}'{colValue}'"
if colIndex < len(columnData) - 1:
writeList = f"{writeList}, "
writeList = f"{writeList}]"
configINI.set(sectionName, columnName, writeList)
if not os.path.exists(configFilePath) or os.stat(configFilePath).st_size == 0:
with open(configFilePath, 'w') as configWritingFile:
configINI.write(configWritingFile)
noErrors = True
except ValueError as e:
errorString = ("ERROR in {__file__} @{framePrintNo} with {ErrorFound}".format(__file__=str(__file__),
framePrintNo=str(
sys._getframe().f_lineno),
ErrorFound=e))
print(errorString)
noErrors = False
return noErrors
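# The generated INI section holds one key per dataframe column, with the column's
# values serialised as a quoted list, e.g. (hypothetical column and values):
#   [time-series]
#   temperature = ['25', '26', '27']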
@staticmethod
def _validNumericalFloat(inValue):
"""
Determines if the value is a valid numerical object.
Args:
inValue: floating-point value
Returns: Value in floating-point or Not-A-Number.
"""
try:
return numpy.float128(inValue)
except ValueError:
return numpy.nan
@staticmethod
def _calculateMean(x):
"""
Calculates the mean using a weighted-average (multiplication) method, since a direct division can produce infinity or NaN
Args:
x: Input data set. We use a data frame.
Returns: Calculated mean for a vector data frame.
"""
try:
mean = numpy.float128(numpy.average(x, weights=numpy.ones_like(numpy.float128(x)) / numpy.float128(x.size)))
except ValueError:
mean = 0
pass
return mean
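# Worked example: for x = numpy.array([2.0, 4.0, 6.0]) each weight is 1/3, so the
# weighted average is 2/3 + 4/3 + 6/3 = 4.0, i.e. the ordinary mean, computed
# without an explicit division by x.size.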
def _calculateStd(self, data):
"""
Calculates the standard deviation using a multiplication method, since a direct division can produce infinity or NaN
Args:
data: Input data set. We use a data frame.
Returns: Calculated standard deviation for a vector data frame.
"""
sd = 0
try:
n = numpy.float128(data.size)
if n <= 1:
return numpy.float128(0.0)
# Use multiplication version of mean since numpy bug causes infinity.
mean = self._calculateMean(data)
sd = numpy.float128(0.0)  # accumulate the squared differences from zero (seeding with the mean would inflate the result)
# Calculate standard deviation
for el in data:
diff = numpy.float128(el) - numpy.float128(mean)
sd += (diff) ** 2
points = numpy.float128(n - 1)
sd = numpy.float128(numpy.sqrt(numpy.float128(sd) / numpy.float128(points)))
except ValueError:
pass
return sd
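# Worked example (accumulator starting at zero): for data = numpy.array([2.0, 4.0, 6.0])
# the mean is 4, the squared differences are 4, 0 and 4, and the sample standard
# deviation is sqrt(8 / 2) = 2.0.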
def _determineQuickStats(self, dataAnalysisFrame, columnName=None, multiplierSigma=3.0):
"""
Determines stats based on a vector to get the data shape.
Args:
dataAnalysisFrame: Dataframe to do analysis on.
columnName: Column name of the data frame.
multiplierSigma: Sigma range for the stats.
Returns: Set of stats.
"""
meanValue = 0
sigmaValue = 0
sigmaRangeValue = 0
topValue = 0
try:
# Clean out anomalies due to random invalid inputs.
if (columnName is not None):
meanValue = self._calculateMean(dataAnalysisFrame[columnName])
if numpy.isnan(meanValue):  # '== numpy.nan' is always False, so use isnan()
meanValue = numpy.float128(1)
sigmaValue = self._calculateStd(dataAnalysisFrame[columnName])
if numpy.isnan(sigmaValue):  # identity/equality checks against numpy.nan never match
sigmaValue = numpy.float128(1)
multiplier = numpy.float128(multiplierSigma) # Stats: 1 sigma = 68%, 2 sigma = 95%, 3 sigma = 99.7
sigmaRangeValue = (sigmaValue * multiplier)
if numpy.isnan(sigmaRangeValue):
sigmaRangeValue = numpy.float128(1)
topValue = numpy.float128(meanValue + sigmaRangeValue)
print("Name:{} Mean= {}, Sigma= {}, {}*Sigma= {}".format(columnName,
meanValue,
sigmaValue,
multiplier,
sigmaRangeValue))
except ValueError:
pass
return (meanValue, sigmaValue, sigmaRangeValue, topValue)
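# Worked example: with a mean of 10, a sigma of 2 and multiplierSigma=3.0 the returned
# tuple is (10, 2, 6, 16); rows at or above topValue (16) are dropped as outliers by
# _cleanZerosForColumnInFrame below.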
def _cleanZerosForColumnInFrame(self, dataAnalysisFrame, columnName='cycles'):
"""
Cleans the data frame of invalid data values (i.e. inf, NaN).
Args:
dataAnalysisFrame: Dataframe to do analysis on.
columnName: Column name of the data frame.
Returns: Cleaned dataframe.
"""
dataAnalysisCleaned = None
try:
# Clean out anomalies due to random invalid inputs.
(meanValue, sigmaValue, sigmaRangeValue, topValue) = self._determineQuickStats(
dataAnalysisFrame=dataAnalysisFrame, columnName=columnName)
# dataAnalysisCleaned = dataAnalysisFrame[dataAnalysisFrame[columnName] != 0]
# When the cycles are negative or zero we missed cleaning up a row.
# logicVector = (dataAnalysisFrame[columnName] != 0)
# dataAnalysisCleaned = dataAnalysisFrame[logicVector]
logicVector = (dataAnalysisFrame[columnName] >= 1)  # dataAnalysisCleaned is still None here, so filter the input frame
dataAnalysisCleaned = dataAnalysisFrame[logicVector]
# These timed out mean + 2 * sd
logicVector = (dataAnalysisCleaned[columnName] < topValue) # Data range
dataAnalysisCleaned = dataAnalysisCleaned[logicVector]
except ValueError:
pass
return dataAnalysisCleaned
def _cleanFrame(self, dataAnalysisTemp, cleanColumn=False, columnName='cycles'):
"""
Args:
dataAnalysisTemp: Dataframe to do analysis on.
cleanColumn: Flag to clean the data frame.
columnName: Column name of the data frame.
Returns: cleaned dataframe
"""
try:
replacementList = [pandas.NaT, numpy.Infinity, numpy.NINF, 'NaN', 'inf', '-inf', 'NULL']
if cleanColumn is True:
dataAnalysisTemp = self._cleanZerosForColumnInFrame(dataAnalysisTemp, columnName=columnName)
dataAnalysisTemp = dataAnalysisTemp.replace(to_replace=replacementList,
value=numpy.nan)
dataAnalysisTemp = dataAnalysisTemp.dropna()
except ValueError:
pass
return dataAnalysisTemp
@staticmethod
def _getDataFormat():
"""
Return the dataframe setup for the CSV file generated from server.
Returns: dictionary data format for pandas.
"""
dataFormat = {
"Serial_Number": pandas.StringDtype(),
"LogTime0": pandas.StringDtype(), # @todo force rename
"Id0": pandas.StringDtype(), # @todo force rename
"DriveId": pandas.StringDtype(),
"JobRunId": pandas.StringDtype(),
"LogTime1": pandas.StringDtype(), # @todo force rename
"Comment0": pandas.StringDtype(), # @todo force rename
"CriticalWarning": pandas.StringDtype(),
"Temperature": pandas.StringDtype(),
"AvailableSpare": pandas.StringDtype(),
"AvailableSpareThreshold": pandas.StringDtype(),
"PercentageUsed": pandas.StringDtype(),
"DataUnitsReadL": pandas.StringDtype(),
"DataUnitsReadU": pandas.StringDtype(),
"DataUnitsWrittenL": pandas.StringDtype(),
"DataUnitsWrittenU": pandas.StringDtype(),
"HostReadCommandsL": pandas.StringDtype(),
"HostReadCommandsU": pandas.StringDtype(),
"HostWriteCommandsL": pandas.StringDtype(),
"HostWriteCommandsU": pandas.StringDtype(),
"ControllerBusyTimeL": pandas.StringDtype(),
"ControllerBusyTimeU": pandas.StringDtype(),
"PowerCyclesL": pandas.StringDtype(),
"PowerCyclesU": pandas.StringDtype(),
"PowerOnHoursL": pandas.StringDtype(),
"PowerOnHoursU": pandas.StringDtype(),
"UnsafeShutdownsL": pandas.StringDtype(),
"UnsafeShutdownsU": pandas.StringDtype(),
"MediaErrorsL": pandas.StringDtype(),
"MediaErrorsU": pandas.StringDtype(),
"NumErrorInfoLogsL": pandas.StringDtype(),
"NumErrorInfoLogsU": pandas.StringDtype(),
"ProgramFailCountN": pandas.StringDtype(),
"ProgramFailCountR": pandas.StringDtype(),
"EraseFailCountN": pandas.StringDtype(),
"EraseFailCountR": pandas.StringDtype(),
"WearLevelingCountN": pandas.StringDtype(),
"WearLevelingCountR": pandas.StringDtype(),
"E2EErrorDetectCountN": pandas.StringDtype(),
"E2EErrorDetectCountR": pandas.StringDtype(),
"CRCErrorCountN": pandas.StringDtype(),
"CRCErrorCountR": pandas.StringDtype(),
"MediaWearPercentageN": pandas.StringDtype(),
"MediaWearPercentageR": pandas.StringDtype(),
"HostReadsN": pandas.StringDtype(),
"HostReadsR": pandas.StringDtype(),
"TimedWorkloadN": pandas.StringDtype(),
"TimedWorkloadR": pandas.StringDtype(),
"ThermalThrottleStatusN": pandas.StringDtype(),
"ThermalThrottleStatusR": pandas.StringDtype(),
"RetryBuffOverflowCountN": pandas.StringDtype(),
"RetryBuffOverflowCountR": pandas.StringDtype(),
"PLLLockLossCounterN": pandas.StringDtype(),
"PLLLockLossCounterR": pandas.StringDtype(),
"NandBytesWrittenN": pandas.StringDtype(),
"NandBytesWrittenR": pandas.StringDtype(),
"HostBytesWrittenN": pandas.StringDtype(),
"HostBytesWrittenR": pandas.StringDtype(),
"SystemAreaLifeRemainingN": pandas.StringDtype(),
"SystemAreaLifeRemainingR": pandas.StringDtype(),
"RelocatableSectorCountN": pandas.StringDtype(),
"RelocatableSectorCountR": pandas.StringDtype(),
"SoftECCErrorRateN": pandas.StringDtype(),
"SoftECCErrorRateR": pandas.StringDtype(),
"UnexpectedPowerLossN": pandas.StringDtype(),
"UnexpectedPowerLossR": pandas.StringDtype(),
"MediaErrorCountN": pandas.StringDtype(),
"MediaErrorCountR": pandas.StringDtype(),
"NandBytesReadN": pandas.StringDtype(),
"NandBytesReadR": pandas.StringDtype(),
"WarningCompTempTime": pandas.StringDtype(),
"CriticalCompTempTime": pandas.StringDtype(),
"TempSensor1": pandas.StringDtype(),
"TempSensor2": pandas.StringDtype(),
"TempSensor3": pandas.StringDtype(),
"TempSensor4": pandas.StringDtype(),
"TempSensor5": pandas.StringDtype(),
"TempSensor6": pandas.StringDtype(),
"TempSensor7": pandas.StringDtype(),
"TempSensor8": pandas.StringDtype(),
"ThermalManagementTemp1TransitionCount": pandas.StringDtype(),
"ThermalManagementTemp2TransitionCount": pandas.StringDtype(),
"TotalTimeForThermalManagementTemp1": pandas.StringDtype(),
"TotalTimeForThermalManagementTemp2": pandas.StringDtype(),
"Core_Num": pandas.StringDtype(),
"Id1": pandas.StringDtype(), # @todo force rename
"Job_Run_Id": pandas.StringDtype(),
"Stats_Time": pandas.StringDtype(),
"HostReads": pandas.StringDtype(),
"HostWrites": pandas.StringDtype(),
"NandReads": pandas.StringDtype(),
"NandWrites": pandas.StringDtype(),
"ProgramErrors": pandas.StringDtype(),
"EraseErrors": pandas.StringDtype(),
"ErrorCount": pandas.StringDtype(),
"BitErrorsHost1": pandas.StringDtype(),
"BitErrorsHost2": pandas.StringDtype(),
"BitErrorsHost3": pandas.StringDtype(),
"BitErrorsHost4": pandas.StringDtype(),
"BitErrorsHost5": pandas.StringDtype(),
"BitErrorsHost6": pandas.StringDtype(),
"BitErrorsHost7": pandas.StringDtype(),
"BitErrorsHost8": pandas.StringDtype(),
"BitErrorsHost9": pandas.StringDtype(),
"BitErrorsHost10": pandas.StringDtype(),
"BitErrorsHost11": pandas.StringDtype(),
"BitErrorsHost12": pandas.StringDtype(),
"BitErrorsHost13": pandas.StringDtype(),
"BitErrorsHost14": pandas.StringDtype(),
"BitErrorsHost15": pandas.StringDtype(),
"ECCFail": pandas.StringDtype(),
"GrownDefects": pandas.StringDtype(),
"FreeMemory": pandas.StringDtype(),
"WriteAllowance": pandas.StringDtype(),
"ModelString": pandas.StringDtype(),
"ValidBlocks": pandas.StringDtype(),
"TokenBlocks": pandas.StringDtype(),
"SpuriousPFCount": pandas.StringDtype(),
"SpuriousPFLocations1": pandas.StringDtype(),
"SpuriousPFLocations2": pandas.StringDtype(),
"SpuriousPFLocations3": pandas.StringDtype(),
"SpuriousPFLocations4": pandas.StringDtype(),
"SpuriousPFLocations5": pandas.StringDtype(),
"SpuriousPFLocations6": pandas.StringDtype(),
"SpuriousPFLocations7": pandas.StringDtype(),
"SpuriousPFLocations8": pandas.StringDtype(),
"BitErrorsNonHost1": pandas.StringDtype(),
"BitErrorsNonHost2": pandas.StringDtype(),
"BitErrorsNonHost3": pandas.StringDtype(),
"BitErrorsNonHost4": pandas.StringDtype(),
"BitErrorsNonHost5": pandas.StringDtype(),
"BitErrorsNonHost6": pandas.StringDtype(),
"BitErrorsNonHost7": pandas.StringDtype(),
"BitErrorsNonHost8": pandas.StringDtype(),
"BitErrorsNonHost9": pandas.StringDtype(),
"BitErrorsNonHost10": pandas.StringDtype(),
"BitErrorsNonHost11": pandas.StringDtype(),
"BitErrorsNonHost12": pandas.StringDtype(),
"BitErrorsNonHost13": pandas.StringDtype(),
"BitErrorsNonHost14": pandas.StringDtype(),
"BitErrorsNonHost15": pandas.StringDtype(),
"ECCFailNonHost": pandas.StringDtype(),
"NSversion": pandas.StringDtype(),
"numBands": pandas.StringDtype(),
"minErase": pandas.StringDtype(),
"maxErase": pandas.StringDtype(),
"avgErase": pandas.StringDtype(),
"minMVolt": pandas.StringDtype(),
"maxMVolt": pandas.StringDtype(),
"avgMVolt": pandas.StringDtype(),
"minMAmp": pandas.StringDtype(),
"maxMAmp": pandas.StringDtype(),
"avgMAmp": pandas.StringDtype(),
"comment1": pandas.StringDtype(), # @todo force rename
"minMVolt12v": pandas.StringDtype(),
"maxMVolt12v": pandas.StringDtype(),
"avgMVolt12v": pandas.StringDtype(),
"minMAmp12v": pandas.StringDtype(),
"maxMAmp12v": pandas.StringDtype(),
"avgMAmp12v": pandas.StringDtype(),
"nearMissSector": pandas.StringDtype(),
"nearMissDefect": pandas.StringDtype(),
"nearMissOverflow": pandas.StringDtype(),
"replayUNC": pandas.StringDtype(),
"Drive_Id": pandas.StringDtype(),
"indirectionMisses": pandas.StringDtype(),
"BitErrorsHost16": pandas.StringDtype(),
"BitErrorsHost17": pandas.StringDtype(),
"BitErrorsHost18": pandas.StringDtype(),
"BitErrorsHost19": pandas.StringDtype(),
"BitErrorsHost20": pandas.StringDtype(),
"BitErrorsHost21": pandas.StringDtype(),
"BitErrorsHost22": pandas.StringDtype(),
"BitErrorsHost23": pandas.StringDtype(),
"BitErrorsHost24": pandas.StringDtype(),
"BitErrorsHost25": pandas.StringDtype(),
"BitErrorsHost26": pandas.StringDtype(),
"BitErrorsHost27": pandas.StringDtype(),
"BitErrorsHost28": pandas.StringDtype(),
"BitErrorsHost29": pandas.StringDtype(),
"BitErrorsHost30": pandas.StringDtype(),
"BitErrorsHost31": pandas.StringDtype(),
"BitErrorsHost32": pandas.StringDtype(),
"BitErrorsHost33": pandas.StringDtype(),
"BitErrorsHost34": pandas.StringDtype(),
"BitErrorsHost35": pandas.StringDtype(),
"BitErrorsHost36": pandas.StringDtype(),
"BitErrorsHost37": pandas.StringDtype(),
"BitErrorsHost38": pandas.StringDtype(),
"BitErrorsHost39": pandas.StringDtype(),
"BitErrorsHost40": pandas.StringDtype(),
"XORRebuildSuccess": pandas.StringDtype(),
"XORRebuildFail": pandas.StringDtype(),
"BandReloForError": pandas.StringDtype(),
"mrrSuccess": pandas.StringDtype(),
"mrrFail": pandas.StringDtype(),
"mrrNudgeSuccess": pandas.StringDtype(),
"mrrNudgeHarmless": pandas.StringDtype(),
"mrrNudgeFail": pandas.StringDtype(),
"totalErases": pandas.StringDtype(),
"dieOfflineCount": pandas.StringDtype(),
"curtemp": pandas.StringDtype(),
"mintemp": pandas.StringDtype(),
"maxtemp": pandas.StringDtype(),
"oventemp": pandas.StringDtype(),
"allZeroSectors": pandas.StringDtype(),
"ctxRecoveryEvents": pandas.StringDtype(),
"ctxRecoveryErases": pandas.StringDtype(),
"NSversionMinor": pandas.StringDtype(),
"lifeMinTemp": pandas.StringDtype(),
"lifeMaxTemp": pandas.StringDtype(),
"powerCycles": pandas.StringDtype(),
"systemReads": pandas.StringDtype(),
"systemWrites": pandas.StringDtype(),
"readRetryOverflow": pandas.StringDtype(),
"unplannedPowerCycles": pandas.StringDtype(),
"unsafeShutdowns": pandas.StringDtype(),
"defragForcedReloCount": pandas.StringDtype(),
"bandReloForBDR": pandas.StringDtype(),
"bandReloForDieOffline": pandas.StringDtype(),
"bandReloForPFail": pandas.StringDtype(),
"bandReloForWL": pandas.StringDtype(),
"provisionalDefects": pandas.StringDtype(),
"uncorrectableProgErrors": pandas.StringDtype(),
"powerOnSeconds": pandas.StringDtype(),
"bandReloForChannelTimeout": pandas.StringDtype(),
"fwDowngradeCount": pandas.StringDtype(),
"dramCorrectablesTotal": pandas.StringDtype(),
"hb_id": pandas.StringDtype(),
"dramCorrectables1to1": pandas.StringDtype(),
"dramCorrectables4to1": pandas.StringDtype(),
"dramCorrectablesSram": pandas.StringDtype(),
"dramCorrectablesUnknown": pandas.StringDtype(),
"pliCapTestInterval": pandas.StringDtype(),
"pliCapTestCount": pandas.StringDtype(),
"pliCapTestResult": pandas.StringDtype(),
"pliCapTestTimeStamp": pandas.StringDtype(),
"channelHangSuccess": pandas.StringDtype(),
"channelHangFail": pandas.StringDtype(),
"BitErrorsHost41": pandas.StringDtype(),
"BitErrorsHost42": pandas.StringDtype(),
"BitErrorsHost43": pandas.StringDtype(),
"BitErrorsHost44": pandas.StringDtype(),
"BitErrorsHost45": pandas.StringDtype(),
"BitErrorsHost46": pandas.StringDtype(),
"BitErrorsHost47": pandas.StringDtype(),
"BitErrorsHost48": pandas.StringDtype(),
"BitErrorsHost49": pandas.StringDtype(),
"BitErrorsHost50": pandas.StringDtype(),
"BitErrorsHost51": pandas.StringDtype(),
"BitErrorsHost52": pandas.StringDtype(),
"BitErrorsHost53": pandas.StringDtype(),
"BitErrorsHost54": pandas.StringDtype(),
"BitErrorsHost55": pandas.StringDtype(),
"BitErrorsHost56": pandas.StringDtype(),
"mrrNearMiss": pandas.StringDtype(),
"mrrRereadAvg": pandas.StringDtype(),
"readDisturbEvictions": pandas.StringDtype(),
"L1L2ParityError": pandas.StringDtype(),
"pageDefects": pandas.StringDtype(),
"pageProvisionalTotal": pandas.StringDtype(),
"ASICTemp": pandas.StringDtype(),
"PMICTemp": pandas.StringDtype(),
"size": pandas.StringDtype(),
"lastWrite": pandas.StringDtype(),
"timesWritten": pandas.StringDtype(),
"maxNumContextBands": pandas.StringDtype(),
"blankCount": pandas.StringDtype(),
"cleanBands": pandas.StringDtype(),
"avgTprog": pandas.StringDtype(),
"avgEraseCount": pandas.StringDtype(),
"edtcHandledBandCnt": pandas.StringDtype(),
"bandReloForNLBA": pandas.StringDtype(),
"bandCrossingDuringPliCount": pandas.StringDtype(),
"bitErrBucketNum": pandas.StringDtype(),
"sramCorrectablesTotal": pandas.StringDtype(),
"l1SramCorrErrCnt": pandas.StringDtype(),
"l2SramCorrErrCnt": pandas.StringDtype(),
"parityErrorValue": pandas.StringDtype(),
"parityErrorType": pandas.StringDtype(),
"mrr_LutValidDataSize": pandas.StringDtype(),
"pageProvisionalDefects": pandas.StringDtype(),
"plisWithErasesInProgress": pandas.StringDtype(),
"lastReplayDebug": pandas.StringDtype(),
"externalPreReadFatals": pandas.StringDtype(),
"hostReadCmd": pandas.StringDtype(),
"hostWriteCmd": pandas.StringDtype(),
"trimmedSectors": pandas.StringDtype(),
"trimTokens": pandas.StringDtype(),
"mrrEventsInCodewords": pandas.StringDtype(),
"mrrEventsInSectors": pandas.StringDtype(),
"powerOnMicroseconds": pandas.StringDtype(),
"mrrInXorRecEvents": pandas.StringDtype(),
"mrrFailInXorRecEvents": pandas.StringDtype(),
"mrrUpperpageEvents": pandas.StringDtype(),
"mrrLowerpageEvents": pandas.StringDtype(),
"mrrSlcpageEvents": pandas.StringDtype(),
"mrrReReadTotal": pandas.StringDtype(),
"powerOnResets": pandas.StringDtype(),
"powerOnMinutes": pandas.StringDtype(),
"throttleOnMilliseconds": pandas.StringDtype(),
"ctxTailMagic": pandas.StringDtype(),
"contextDropCount": pandas.StringDtype(),
"lastCtxSequenceId": pandas.StringDtype(),
"currCtxSequenceId": pandas.StringDtype(),
"mbliEraseCount": pandas.StringDtype(),
"pageAverageProgramCount": pandas.StringDtype(),
"bandAverageEraseCount": pandas.StringDtype(),
"bandTotalEraseCount": pandas.StringDtype(),
"bandReloForXorRebuildFail": pandas.StringDtype(),
"defragSpeculativeMiss": pandas.StringDtype(),
"uncorrectableBackgroundScan": pandas.StringDtype(),
"BitErrorsHost57": pandas.StringDtype(),
"BitErrorsHost58": pandas.StringDtype(),
"BitErrorsHost59": pandas.StringDtype(),
"BitErrorsHost60": pandas.StringDtype(),
"BitErrorsHost61": pandas.StringDtype(),
"BitErrorsHost62": pandas.StringDtype(),
"BitErrorsHost63": pandas.StringDtype(),
"BitErrorsHost64": pandas.StringDtype(),
"BitErrorsHost65": pandas.StringDtype(),
"BitErrorsHost66": pandas.StringDtype(),
"BitErrorsHost67": pandas.StringDtype(),
"BitErrorsHost68": pandas.StringDtype(),
"BitErrorsHost69": pandas.StringDtype(),
"BitErrorsHost70": pandas.StringDtype(),
"BitErrorsHost71": pandas.StringDtype(),
"BitErrorsHost72": pandas.StringDtype(),
"BitErrorsHost73": pandas.StringDtype(),
"BitErrorsHost74": pandas.StringDtype(),
"BitErrorsHost75": pandas.StringDtype(),
"BitErrorsHost76": pandas.StringDtype(),
"BitErrorsHost77": pandas.StringDtype(),
"BitErrorsHost78": pandas.StringDtype(),
"BitErrorsHost79": pandas.StringDtype(),
"BitErrorsHost80": pandas.StringDtype(),
"bitErrBucketArray1": pandas.StringDtype(),
"bitErrBucketArray2": pandas.StringDtype(),
"bitErrBucketArray3": pandas.StringDtype(),
"bitErrBucketArray4": pandas.StringDtype(),
"bitErrBucketArray5": pandas.StringDtype(),
"bitErrBucketArray6": pandas.StringDtype(),
"bitErrBucketArray7": pandas.StringDtype(),
"bitErrBucketArray8": pandas.StringDtype(),
"bitErrBucketArray9": pandas.StringDtype(),
"bitErrBucketArray10": pandas.StringDtype(),
"bitErrBucketArray11": pandas.StringDtype(),
"bitErrBucketArray12": pandas.StringDtype(),
"bitErrBucketArray13": pandas.StringDtype(),
"bitErrBucketArray14": pandas.StringDtype(),
"bitErrBucketArray15": pandas.StringDtype(),
"bitErrBucketArray16": pandas.StringDtype(),
"bitErrBucketArray17": pandas.StringDtype(),
"bitErrBucketArray18": pandas.StringDtype(),
"bitErrBucketArray19": pandas.StringDtype(),
"bitErrBucketArray20": pandas.StringDtype(),
"bitErrBucketArray21": pandas.StringDtype(),
"bitErrBucketArray22": pandas.StringDtype(),
"bitErrBucketArray23": pandas.StringDtype(),
"bitErrBucketArray24": pandas.StringDtype(),
"bitErrBucketArray25": pandas.StringDtype(),
"bitErrBucketArray26": pandas.StringDtype(),
"bitErrBucketArray27": pandas.StringDtype(),
"bitErrBucketArray28": pandas.StringDtype(),
"bitErrBucketArray29": pandas.StringDtype(),
"bitErrBucketArray30": pandas.StringDtype(),
"bitErrBucketArray31": pandas.StringDtype(),
"bitErrBucketArray32": pandas.StringDtype(),
"bitErrBucketArray33": pandas.StringDtype(),
"bitErrBucketArray34": pandas.StringDtype(),
"bitErrBucketArray35": pandas.StringDtype(),
"bitErrBucketArray36": pandas.StringDtype(),
"bitErrBucketArray37": pandas.StringDtype(),
"bitErrBucketArray38": pandas.StringDtype(),
"bitErrBucketArray39": pandas.StringDtype(),
"bitErrBucketArray40": pandas.StringDtype(),
"bitErrBucketArray41": pandas.StringDtype(),
"bitErrBucketArray42": pandas.StringDtype(),
"bitErrBucketArray43": pandas.StringDtype(),
"bitErrBucketArray44": pandas.StringDtype(),
"bitErrBucketArray45": pandas.StringDtype(),
"bitErrBucketArray46": pandas.StringDtype(),
"bitErrBucketArray47": pandas.StringDtype(),
"bitErrBucketArray48": pandas.StringDtype(),
"bitErrBucketArray49": pandas.StringDtype(),
"bitErrBucketArray50": pandas.StringDtype(),
"bitErrBucketArray51": pandas.StringDtype(),
"bitErrBucketArray52": pandas.StringDtype(),
"bitErrBucketArray53": pandas.StringDtype(),
"bitErrBucketArray54": pandas.StringDtype(),
"bitErrBucketArray55": pandas.StringDtype(),
"bitErrBucketArray56": pandas.StringDtype(),
"bitErrBucketArray57": pandas.StringDtype(),
"bitErrBucketArray58": pandas.StringDtype(),
"bitErrBucketArray59": pandas.StringDtype(),
"bitErrBucketArray60": pandas.StringDtype(),
"bitErrBucketArray61": pandas.StringDtype(),
"bitErrBucketArray62": pandas.StringDtype(),
"bitErrBucketArray63": pandas.StringDtype(),
"bitErrBucketArray64": pandas.StringDtype(),
"bitErrBucketArray65": pandas.StringDtype(),
"bitErrBucketArray66": pandas.StringDtype(),
"bitErrBucketArray67": pandas.StringDtype(),
"bitErrBucketArray68": pandas.StringDtype(),
"bitErrBucketArray69": pandas.StringDtype(),
"bitErrBucketArray70": pandas.StringDtype(),
"bitErrBucketArray71": pandas.StringDtype(),
"bitErrBucketArray72": pandas.StringDtype(),
"bitErrBucketArray73": pandas.StringDtype(),
"bitErrBucketArray74": pandas.StringDtype(),
"bitErrBucketArray75": pandas.StringDtype(),
"bitErrBucketArray76": pandas.StringDtype(),
"bitErrBucketArray77": pandas.StringDtype(),
"bitErrBucketArray78": pandas.StringDtype(),
"bitErrBucketArray79": pandas.StringDtype(),
"bitErrBucketArray80": pandas.StringDtype(),
"mrr_successDistribution1": pandas.StringDtype(),
"mrr_successDistribution2": pandas.StringDtype(),
"mrr_successDistribution3": pandas.StringDtype(),
"mrr_successDistribution4": pandas.StringDtype(),
"mrr_successDistribution5": pandas.StringDtype(),
"mrr_successDistribution6": pandas.StringDtype(),
"mrr_successDistribution7": pandas.StringDtype(),
"mrr_successDistribution8": pandas.StringDtype(),
"mrr_successDistribution9": pandas.StringDtype(),
"mrr_successDistribution10": pandas.StringDtype(),
"mrr_successDistribution11": pandas.StringDtype(),
"mrr_successDistribution12": pandas.StringDtype(),
"mrr_successDistribution13": pandas.StringDtype(),
"mrr_successDistribution14": pandas.StringDtype(),
"mrr_successDistribution15": pandas.StringDtype(),
"mrr_successDistribution16": pandas.StringDtype(),
"mrr_successDistribution17": pandas.StringDtype(),
"mrr_successDistribution18": pandas.StringDtype(),
"mrr_successDistribution19": pandas.StringDtype(),
"mrr_successDistribution20": pandas.StringDtype(),
"mrr_successDistribution21": pandas.StringDtype(),
"mrr_successDistribution22": pandas.StringDtype(),
"mrr_successDistribution23": pandas.StringDtype(),
"mrr_successDistribution24": pandas.StringDtype(),
"mrr_successDistribution25": pandas.StringDtype(),
"mrr_successDistribution26": pandas.StringDtype(),
"mrr_successDistribution27": pandas.StringDtype(),
"mrr_successDistribution28": pandas.StringDtype(),
"mrr_successDistribution29": pandas.StringDtype(),
"mrr_successDistribution30": pandas.StringDtype(),
"mrr_successDistribution31": pandas.StringDtype(),
"mrr_successDistribution32": pandas.StringDtype(),
"mrr_successDistribution33": pandas.StringDtype(),
"mrr_successDistribution34": pandas.StringDtype(),
"mrr_successDistribution35": pandas.StringDtype(),
"mrr_successDistribution36": pandas.StringDtype(),
"mrr_successDistribution37": pandas.StringDtype(),
"mrr_successDistribution38": pandas.StringDtype(),
"mrr_successDistribution39": pandas.StringDtype(),
"mrr_successDistribution40": pandas.StringDtype(),
"mrr_successDistribution41": pandas.StringDtype(),
"mrr_successDistribution42": pandas.StringDtype(),
"mrr_successDistribution43": pandas.StringDtype(),
"mrr_successDistribution44": pandas.StringDtype(),
"mrr_successDistribution45": pandas.StringDtype(),
"mrr_successDistribution46": pandas.StringDtype(),
"mrr_successDistribution47": pandas.StringDtype(),
"mrr_successDistribution48": pandas.StringDtype(),
"mrr_successDistribution49": pandas.StringDtype(),
"mrr_successDistribution50": pandas.StringDtype(),
"mrr_successDistribution51": pandas.StringDtype(),
"mrr_successDistribution52": pandas.StringDtype(),
"mrr_successDistribution53": pandas.StringDtype(),
"mrr_successDistribution54": pandas.StringDtype(),
"mrr_successDistribution55": pandas.StringDtype(),
"mrr_successDistribution56": pandas.StringDtype(),
"mrr_successDistribution57": pandas.StringDtype(),
"mrr_successDistribution58": pandas.StringDtype(),
"mrr_successDistribution59": pandas.StringDtype(),
"mrr_successDistribution60": pandas.StringDtype(),
"mrr_successDistribution61": pandas.StringDtype(),
"mrr_successDistribution62": pandas.StringDtype(),
"mrr_successDistribution63": pandas.StringDtype(),
"mrr_successDistribution64": pandas.StringDtype(),
"blDowngradeCount": pandas.StringDtype(),
"snapReads": pandas.StringDtype(),
"pliCapTestTime": pandas.StringDtype(),
"currentTimeToFreeSpaceRecovery": pandas.StringDtype(),
"worstTimeToFreeSpaceRecovery": pandas.StringDtype(),
"rspnandReads": pandas.StringDtype(),
"cachednandReads": pandas.StringDtype(),
"spnandReads": pandas.StringDtype(),
"dpnandReads": pandas.StringDtype(),
"qpnandReads": pandas.StringDtype(),
"verifynandReads": pandas.StringDtype(),
"softnandReads": pandas.StringDtype(),
"spnandWrites": pandas.StringDtype(),
"dpnandWrites": pandas.StringDtype(),
"qpnandWrites": pandas.StringDtype(),
"opnandWrites": pandas.StringDtype(),
"xpnandWrites": pandas.StringDtype(),
"unalignedHostWriteCmd": pandas.StringDtype(),
"randomReadCmd": pandas.StringDtype(),
"randomWriteCmd": pandas.StringDtype(),
"secVenCmdCount": pandas.StringDtype(),
"secVenCmdCountFails": pandas.StringDtype(),
"mrrFailOnSlcOtfPages": pandas.StringDtype(),
"mrrFailOnSlcOtfPageMarkedAsMBPD": pandas.StringDtype(),
"lcorParitySeedErrors": pandas.StringDtype(),
"fwDownloadFails": pandas.StringDtype(),
"fwAuthenticationFails": pandas.StringDtype(),
"fwSecurityRev": pandas.StringDtype(),
"isCapacitorHealthly": pandas.StringDtype(),
"fwWRCounter": pandas.StringDtype(),
"sysAreaEraseFailCount": pandas.StringDtype(),
"iusDefragRelocated4DataRetention": pandas.StringDtype(),
"I2CTemp": pandas.StringDtype(),
"lbaMismatchOnNandReads": pandas.StringDtype(),
"currentWriteStreamsCount": pandas.StringDtype(),
"nandWritesPerStream1": pandas.StringDtype(),
"nandWritesPerStream2": pandas.StringDtype(),
"nandWritesPerStream3": pandas.StringDtype(),
"nandWritesPerStream4": pandas.StringDtype(),
"nandWritesPerStream5": pandas.StringDtype(),
"nandWritesPerStream6": pandas.StringDtype(),
"nandWritesPerStream7": pandas.StringDtype(),
"nandWritesPerStream8": pandas.StringDtype(),
"nandWritesPerStream9": pandas.StringDtype(),
"nandWritesPerStream10": pandas.StringDtype(),
"nandWritesPerStream11": pandas.StringDtype(),
"nandWritesPerStream12": pandas.StringDtype(),
"nandWritesPerStream13": pandas.StringDtype(),
"nandWritesPerStream14": pandas.StringDtype(),
"nandWritesPerStream15": pandas.StringDtype(),
"nandWritesPerStream16": pandas.StringDtype(),
"nandWritesPerStream17": pandas.StringDtype(),
"nandWritesPerStream18": pandas.StringDtype(),
"nandWritesPerStream19": pandas.StringDtype(),
"nandWritesPerStream20": pandas.StringDtype(),
"nandWritesPerStream21": pandas.StringDtype(),
"nandWritesPerStream22": pandas.StringDtype(),
"nandWritesPerStream23": pandas.StringDtype(),
"nandWritesPerStream24": pandas.StringDtype(),
"nandWritesPerStream25": pandas.StringDtype(),
"nandWritesPerStream26": pandas.StringDtype(),
"nandWritesPerStream27": pandas.StringDtype(),
"nandWritesPerStream28": pandas.StringDtype(),
"nandWritesPerStream29": pandas.StringDtype(),
"nandWritesPerStream30": pandas.StringDtype(),
"nandWritesPerStream31": pandas.StringDtype(),
"nandWritesPerStream32": pandas.StringDtype(),
"hostSoftReadSuccess": pandas.StringDtype(),
"xorInvokedCount": pandas.StringDtype(),
"comresets": pandas.StringDtype(),
"syncEscapes": pandas.StringDtype(),
"rErrHost": pandas.StringDtype(),
"rErrDevice": pandas.StringDtype(),
"iCrcs": pandas.StringDtype(),
"linkSpeedDrops": pandas.StringDtype(),
"mrrXtrapageEvents": pandas.StringDtype(),
"mrrToppageEvents": pandas.StringDtype(),
"hostXorSuccessCount": pandas.StringDtype(),
"hostXorFailCount": pandas.StringDtype(),
"nandWritesWithPreReadPerStream1": pandas.StringDtype(),
"nandWritesWithPreReadPerStream2": pandas.StringDtype(),
"nandWritesWithPreReadPerStream3": pandas.StringDtype(),
"nandWritesWithPreReadPerStream4": pandas.StringDtype(),
"nandWritesWithPreReadPerStream5": pandas.StringDtype(),
"nandWritesWithPreReadPerStream6": pandas.StringDtype(),
"nandWritesWithPreReadPerStream7": pandas.StringDtype(),
"nandWritesWithPreReadPerStream8": pandas.StringDtype(),
"nandWritesWithPreReadPerStream9": pandas.StringDtype(),
"nandWritesWithPreReadPerStream10": pandas.StringDtype(),
"nandWritesWithPreReadPerStream11": pandas.StringDtype(),
"nandWritesWithPreReadPerStream12": pandas.StringDtype(),
"nandWritesWithPreReadPerStream13": pandas.StringDtype(),
"nandWritesWithPreReadPerStream14": pandas.StringDtype(),
"nandWritesWithPreReadPerStream15": pandas.StringDtype(),
"nandWritesWithPreReadPerStream16": pandas.StringDtype(),
"nandWritesWithPreReadPerStream17": pandas.StringDtype(),
"nandWritesWithPreReadPerStream18": pandas.StringDtype(),
"nandWritesWithPreReadPerStream19": pandas.StringDtype(),
"nandWritesWithPreReadPerStream20": pandas.StringDtype(),
"nandWritesWithPreReadPerStream21": pandas.StringDtype(),
"nandWritesWithPreReadPerStream22": pandas.StringDtype(),
"nandWritesWithPreReadPerStream23": pandas.StringDtype(),
"nandWritesWithPreReadPerStream24": pandas.StringDtype(),
"nandWritesWithPreReadPerStream25": pandas.StringDtype(),
"nandWritesWithPreReadPerStream26": pandas.StringDtype(),
"nandWritesWithPreReadPerStream27": pandas.StringDtype(),
"nandWritesWithPreReadPerStream28": pandas.StringDtype(),
"nandWritesWithPreReadPerStream29": pandas.StringDtype(),
"nandWritesWithPreReadPerStream30": pandas.StringDtype(),
"nandWritesWithPreReadPerStream31": pandas.StringDtype(),
"nandWritesWithPreReadPerStream32": pandas.StringDtype(),
"dramCorrectables8to1": pandas.StringDtype(),
"driveRecoveryCount": pandas.StringDtype(),
"mprLiteReads": pandas.StringDtype(),
"eccErrOnMprLiteReads": pandas.StringDtype(),
"readForwardingXpPreReadCount": pandas.StringDtype(),
"readForwardingUpPreReadCount": pandas.StringDtype(),
"readForwardingLpPreReadCount": pandas.StringDtype(),
"pweDefectCompensationCredit": pandas.StringDtype(),
"planarXorRebuildFailure": pandas.StringDtype(),
"itgXorRebuildFailure": pandas.StringDtype(),
"planarXorRebuildSuccess": pandas.StringDtype(),
"itgXorRebuildSuccess": pandas.StringDtype(),
"xorLoggingSkippedSIcBand": pandas.StringDtype(),
"xorLoggingSkippedDieOffline": pandas.StringDtype(),
"xorLoggingSkippedDieAbsent": pandas.StringDtype(),
"xorLoggingSkippedBandErased": pandas.StringDtype(),
"xorLoggingSkippedNoEntry": pandas.StringDtype(),
"xorAuditSuccess": pandas.StringDtype(),
"maxSuspendCount": pandas.StringDtype(),
"suspendLimitPerPrgm": pandas.StringDtype(),
"psrCountStats": pandas.StringDtype(),
"readNandBuffCount": pandas.StringDtype(),
"readNandBufferRspErrorCount": pandas.StringDtype(),
"ddpNandWrites": pandas.StringDtype(),
"totalDeallocatedSectorsInCore": pandas.StringDtype(),
"prefetchHostReads": pandas.StringDtype(),
"hostReadtoDSMDCount": pandas.StringDtype(),
"hostWritetoDSMDCount": pandas.StringDtype(),
"snapReads4k": pandas.StringDtype(),
"snapReads8k": pandas.StringDtype(),
"snapReads16k": pandas.StringDtype(),
"xorLoggingTriggered": pandas.StringDtype(),
"xorLoggingAborted": pandas.StringDtype(),
"xorLoggingSkippedHistory": pandas.StringDtype(),
"deckDisturbRelocationUD": pandas.StringDtype(),
"deckDisturbRelocationMD": pandas.StringDtype(),
"deckDisturbRelocationLD": pandas.StringDtype(),
"bbdProactiveReadRetry": pandas.StringDtype(),
"statsRestoreRequired": pandas.StringDtype(),
"statsAESCount": pandas.StringDtype(),
"statsHESCount": pandas.StringDtype(),
"psrCountStats1": pandas.StringDtype(),
"psrCountStats2": pandas.StringDtype(),
"psrCountStats3": pandas.StringDtype(),
"psrCountStats4": pandas.StringDtype(),
"psrCountStats5": pandas.StringDtype(),
"psrCountStats6": pandas.StringDtype(),
"psrCountStats7": pandas.StringDtype(),
"psrCountStats8": pandas.StringDtype(),
"psrCountStats9": pandas.StringDtype(),
"psrCountStats10": pandas.StringDtype(),
"psrCountStats11": pandas.StringDtype(),
"psrCountStats12": pandas.StringDtype(),
"psrCountStats13": pandas.StringDtype(),
"psrCountStats14": pandas.StringDtype(),
"psrCountStats15": pandas.StringDtype(),
"psrCountStats16": pandas.StringDtype(),
"psrCountStats17": pandas.StringDtype(),
"psrCountStats18": | pandas.StringDtype() | pandas.StringDtype |
from datetime import datetime
from typing import Any, List, Union
import pandas as pd
from binance.client import Client
from binance.exceptions import BinanceAPIException
from yacht.data.markets.base import H5Market
from yacht.logger import Logger
class Binance(H5Market):
def __init__(
self,
get_features: List[str],
logger: Logger,
api_key,
api_secret,
storage_dir: str,
include_weekends: bool,
read_only: bool
):
super().__init__(get_features, logger, api_key, api_secret, storage_dir, 'binance.h5', include_weekends, read_only)
self.client = Client(api_key, api_secret)
def request(
self,
ticker: str,
interval: str,
start: datetime,
end: datetime = None
) -> Union[List[List[Any]], pd.DataFrame]:
if '-' not in ticker:
ticker = f'{ticker}USDT'
else:
ticker = ''.join(ticker.split('-'))
start = start.strftime('%d %b, %Y')
kwargs = {
'symbol': ticker,
'interval': interval,
'start_str': start
}
if end:
end = end.strftime('%d %b, %Y')
kwargs['end_str'] = end
try:
return self.client.get_historical_klines(**kwargs)
except BinanceAPIException as e:
self.logger.info(f'Binance does not support ticker: {ticker}')
raise e
def process_request(self, data: Union[List[List[Any]], pd.DataFrame], **kwargs) -> pd.DataFrame:
df = pd.DataFrame(
data,
columns=[
'Open time',
'Open',
'High',
'Low',
'Close',
'Volume',
'Close time',
'Quote asset volume',
'Number of trades',
'Taker buy base asset volume',
'Taker buy quote asset volume',
'Ignore'
])
df['Open time'] = pd.to_datetime(df['Open time'], unit='ms')
df['Open time'] = df['Open time']
df['Open'] = pd.to_numeric(df['Open'])
df['High'] = pd.to_numeric(df['High'])
df['Low'] = pd.to_numeric(df['Low'])
df['Close'] = pd.to_numeric(df['Close'])
df['Volume'] = pd.to_numeric(df['Volume'])
# -*- coding: utf-8 -*-
# Copyright (c) 2016-2021 by University of Kassel and Fraunhofer Institute for Energy Economics
# and Energy System Technology (IEE), Kassel. All rights reserved.
import numpy as np
import pandas as pd
from pandapower.results_branch import _get_branch_results, _get_branch_results_3ph
from pandapower.results_bus import _get_bus_results, _set_buses_out_of_service, \
_get_shunt_results, _get_p_q_results, _get_bus_v_results, _get_bus_v_results_3ph, _get_p_q_results_3ph, \
_get_bus_results_3ph
from pandapower.results_gen import _get_gen_results, _get_gen_results_3ph
suffix_mode = {"sc": "sc", "se": "est", "pf_3ph": "3ph"}
def _extract_results(net, ppc):
_set_buses_out_of_service(ppc)
bus_lookup_aranged = _get_aranged_lookup(net)
_get_bus_v_results(net, ppc)
bus_pq = _get_p_q_results(net, ppc, bus_lookup_aranged)
_get_shunt_results(net, ppc, bus_lookup_aranged, bus_pq)
_get_branch_results(net, ppc, bus_lookup_aranged, bus_pq)
_get_gen_results(net, ppc, bus_lookup_aranged, bus_pq)
_get_bus_results(net, ppc, bus_pq)
if net._options["mode"] == "opf":
_get_costs(net, ppc)
def _extract_results_3ph(net, ppc0, ppc1, ppc2):
# reset_results(net, False)
_set_buses_out_of_service(ppc0)
_set_buses_out_of_service(ppc1)
_set_buses_out_of_service(ppc2)
bus_lookup_aranged = _get_aranged_lookup(net)
_get_bus_v_results_3ph(net, ppc0, ppc1, ppc2)
bus_pq = _get_p_q_results_3ph(net, bus_lookup_aranged)
# _get_shunt_results(net, ppc, bus_lookup_aranged, bus_pq)
_get_branch_results_3ph(net, ppc0, ppc1, ppc2, bus_lookup_aranged, bus_pq)
_get_gen_results_3ph(net, ppc0, ppc1, ppc2, bus_lookup_aranged, bus_pq)
_get_bus_results_3ph(net, bus_pq)
def _extract_results_se(net, ppc):
_set_buses_out_of_service(ppc)
bus_lookup_aranged = _get_aranged_lookup(net)
_get_bus_v_results(net, ppc, suffix="_est")
bus_pq = np.zeros(shape=(len(net["bus"].index), 2), dtype=np.float64)
_get_branch_results(net, ppc, bus_lookup_aranged, bus_pq, suffix="_est")
def _get_costs(net, ppc):
net.res_cost = ppc['obj']
def _get_aranged_lookup(net):
# generate bus_lookup net -> consecutive ordering
maxBus = max(net["bus"].index.values)
bus_lookup_aranged = -np.ones(maxBus + 1, dtype=int)
bus_lookup_aranged[net["bus"].index.values] = np.arange(len(net["bus"].index.values))
return bus_lookup_aranged
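# Illustrative sketch (added for clarity, not part of pandapower): the arranged
# lookup maps possibly sparse bus indices onto consecutive positions; slots for
# missing bus indices stay at -1. The toy "net" dict below is an assumption.
def _example_aranged_lookup():
    net = {"bus": pd.DataFrame(index=[0, 3, 7])}
    lookup = _get_aranged_lookup(net)
    assert list(lookup) == [0, -1, -1, 1, -1, -1, -1, 2]
    return lookup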
def verify_results(net, mode="pf"):
elements = get_relevant_elements(mode)
suffix = suffix_mode.get(mode, None)
for element in elements:
res_element, res_empty_element = get_result_tables(element, suffix)
index_equal = False if res_element not in net else net[element].index.equals(net[res_element].index)
if not index_equal:
if net["_options"]["init_results"] and element == "bus":
# if the indices of bus and res_bus are not equal, but init_results is set, the voltage vector
# is wrong. A UserWarning is raised in this case. For all other elements the result table is emptied.
raise UserWarning("index of result table '{}' is not equal to the element table '{}'. The init result"
" option may lead to a non-converged power flow.".format(res_element, element))
# init result table for this element
init_element(net, element)
if element == "bus":
net._options["init_vm_pu"] = "auto"
net._options["init_va_degree"] = "auto"
def get_result_tables(element, suffix=None):
res_element = "res_" + element
res_element_with_suffix = res_element if suffix is None else res_element + "_%s" % suffix
if suffix == suffix_mode.get("se", None):
# State estimation used default result table
return res_element_with_suffix, "_empty_%s" % res_element
else:
return res_element_with_suffix, "_empty_%s" % res_element_with_suffix
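# Illustrative sketch (added, not part of pandapower): how result table names
# are derived. For the state-estimation suffix only the result table carries
# the suffix; for other suffixes the empty template does too.
def _example_result_table_names():
    assert get_result_tables("line") == ("res_line", "_empty_res_line")
    assert get_result_tables("bus", "3ph") == ("res_bus_3ph", "_empty_res_bus_3ph")
    assert get_result_tables("bus", "est") == ("res_bus_est", "_empty_res_bus")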
def empty_res_element(net, element, suffix=None):
res_element, res_empty_element = get_result_tables(element, suffix)
if res_empty_element in net:
net[res_element] = net[res_empty_element].copy()
else:
net[res_element] = pd.DataFrame()
def init_element(net, element, suffix=None):
res_element, res_empty_element = get_result_tables(element, suffix)
index = net[element].index
if len(index):
# init empty dataframe
if res_empty_element in net:
columns = net[res_empty_element].columns
net[res_element] = pd.DataFrame(np.nan, index=index,
columns=columns, dtype='float')
else:
net[res_element] = pd.DataFrame(index=index)
"""
inspiration from R Package - PerformanceAnalytics
"""
from collections import OrderedDict
import pandas as pd
import numpy as np
from tia.analysis.util import per_series
PER_YEAR_MAP = {
'BA': 1.,
'BAS': 1.,
'A': 1.,
'AS': 1.,
'BQ': 4.,
'BQS': 4.,
'Q': 4.,
'QS': 4.,
'D': 365.,
'B': 252.,
'BMS': 12.,
'BM': 12.,
'MS': 12.,
'M': 12.,
'W': 52.,
}
def guess_freq(index):
# admittedly weak way of doing this...This needs to be abolished
if isinstance(index, (pd.Series, pd.DataFrame)):
index = index.index
if hasattr(index, 'freqstr') and index.freqstr:
return index.freqstr[0]
elif len(index) < 3:
raise Exception('cannot guess frequency with less than 3 items')
else:
lb = min(7, len(index))
idx_zip = lambda: list(zip(index[-lb:-1], index[-(lb-1):]))
diff = min([t2 - t1 for t1, t2, in idx_zip()])
if diff.days <= 1:
if 5 in index.dayofweek or 6 in index.dayofweek:
return 'D'
else:
return 'B'
elif diff.days == 7:
return 'W'
else:
diff = min([t2.month - t1.month for t1, t2, in idx_zip()])
if diff == 1:
return 'M'
diff = min([t2.year - t1.year for t1, t2, in idx_zip()])
if diff == 1:
return 'A'
strs = ','.join([i.strftime('%Y-%m-%d') for i in index[-lb:]])
raise Exception('unable to determine frequency, last %s dates %s' % (lb, strs))
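# Illustrative sketch (added, not in the original module): guess_freq returns the
# index freqstr when it is set, otherwise it inspects the gaps between dates.
def _example_guess_freq():
    biz = pd.date_range('2020-01-01', periods=10, freq='B')
    monthly = pd.date_range('2020-01-31', periods=6, freq='M')
    return guess_freq(biz), guess_freq(monthly)  # ('B', 'M')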
def periodicity(freq_or_frame):
"""
resolve the number of periods per year
"""
if hasattr(freq_or_frame, 'rule_code'):
rc = freq_or_frame.rule_code
rc = rc.split('-')[0]
factor = PER_YEAR_MAP.get(rc, None)
if factor is not None:
return factor / abs(freq_or_frame.n)
else:
raise Exception('Failed to determine periodicity. No factor mapping for %s' % freq_or_frame)
elif isinstance(freq_or_frame, str):
factor = PER_YEAR_MAP.get(freq_or_frame, None)
if factor is not None:
return factor
else:
raise Exception('Failed to determine periodicity. No factor mapping for %s' % freq_or_frame)
elif isinstance(freq_or_frame, (pd.Series, pd.DataFrame)):  # pd.TimeSeries was removed from pandas; Series covers it
freq = freq_or_frame.index.freq
if not freq:
freq = pd.infer_freq(freq_or_frame.index)
if freq:
return periodicity(freq)
else:
# Attempt to resolve it
import warnings
freq = guess_freq(freq_or_frame.index)
warnings.warn('frequency not set. guessed it to be %s' % freq)
return periodicity(freq)
else:
return periodicity(freq)
else:
raise ValueError("periodicity expects DataFrame, Series, or rule_code property")
periods_in_year = periodicity
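# Illustrative sketch (added): the annualization factor resolved from a rule
# string or from a series whose index carries a frequency; the sample data is made up.
def _example_periodicity():
    assert periodicity('B') == 252.
    assert periodicity('M') == 12.
    monthly = pd.Series(range(12), index=pd.date_range('2020-01-31', periods=12, freq='M'))
    return periodicity(monthly)  # 12.0, taken from the index frequency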
def _resolve_periods_in_year(scale, frame):
""" Convert the scale to an annualzation factor. If scale is None then attempt to resolve from frame. If scale is a scalar then
use it. If scale is a string then use it to lookup the annual factor
"""
if scale is None:
return periodicity(frame)
elif isinstance(scale, str):
return periodicity(scale)
elif np.isscalar(scale):
return scale
else:
raise ValueError("scale must be None, scalar, or string, not %s" % type(scale))
def excess_returns(returns, bm=0):
"""
Return the excess amount of returns above the given benchmark bm
"""
return returns - bm
def returns(prices, method='simple', periods=1, fill_method='pad', limit=None, freq=None):
"""
compute the returns for the specified prices.
method: one of 'simple', 'compound', or 'log'; 'compound' is an alias for 'log'
"""
if method not in ('simple', 'compound', 'log'):
raise ValueError("Invalid method type. Valid values are ('simple', 'compound')")
if method == 'simple':
return prices.pct_change(periods=periods, fill_method=fill_method, limit=limit, freq=freq)
else:
if freq is not None:
raise NotImplementedError("TODO: implement this logic if needed")
if isinstance(prices, pd.Series):
if fill_method is None:
data = prices
else:
data = prices.fillna(method=fill_method, limit=limit)
data = np.log(data / data.shift(periods=periods))
mask = pd.isnull(prices.values)
np.putmask(data.values, mask, np.nan)
return data
else:
return pd.DataFrame(
{name: returns(col, method, periods, fill_method, limit, freq) for name, col in prices.items()},
columns=prices.columns,
index=prices.index)
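# Illustrative sketch (added): simple vs. log returns for a made-up price series.
def _example_returns():
    px = pd.Series([100., 101., 99., 102.],
                   index=pd.date_range('2020-01-01', periods=4, freq='B'))
    simple = returns(px)                # px.pct_change()
    logged = returns(px, method='log')  # ln(p_t / p_{t-1})
    return simple, logged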
def returns_cumulative(returns, geometric=True, expanding=False):
""" return the cumulative return
Parameters
----------
returns : DataFrame or Series
geometric : bool, default is True
If True, geometrically link returns
expanding : bool default is False
If True, return expanding series/frame of returns
If False, return the final value(s)
"""
if expanding:
if geometric:
return (1. + returns).cumprod() - 1.
else:
return returns.cumsum()
else:
if geometric:
return (1. + returns).prod() - 1.
else:
return returns.sum()
def rolling_returns_cumulative(returns, window, min_periods=1, geometric=True):
""" return the rolling cumulative returns
Parameters
----------
returns : DataFrame or Series
window : number of observations
min_periods : minimum number of observations in a window
geometric : link the returns geometrically
"""
if geometric:
rc = lambda x: (1. + x[np.isfinite(x)]).prod() - 1.
else:
rc = lambda x: (x[np.isfinite(x)]).sum()
return pd.rolling_apply(returns, window, rc, min_periods=min_periods)
def returns_annualized(returns, geometric=True, scale=None, expanding=False):
""" return the annualized cumulative returns
Parameters
----------
returns : DataFrame or Series
geometric : link the returns geometrically
scale: None or scalar or string (ie 12 for months in year),
If None, attempt to resolve from returns
If scalar, then use this as the annualization factor
If string, then pass this to periodicity function to resolve annualization factor
expanding: bool, default is False
If True, return expanding series/frames.
If False, return final result.
"""
scale = _resolve_periods_in_year(scale, returns)
if expanding:
if geometric:
n = pd.expanding_count(returns)
return ((1. + returns).cumprod() ** (scale / n)) - 1.
else:
return pd.expanding_mean(returns) * scale
else:
if geometric:
n = returns.count()
return ((1. + returns).prod() ** (scale / n)) - 1.
else:
return returns.mean() * scale
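# Illustrative sketch (added): cumulative and annualized returns for four monthly
# observations, passing scale=12 explicitly so no frequency inference is needed.
def _example_annualized_returns():
    rets = pd.Series([0.01, -0.02, 0.015, 0.03])
    total = returns_cumulative(rets)          # (1 + r).prod() - 1
    ann = returns_annualized(rets, scale=12)  # ((1 + r).prod()) ** (12 / 4) - 1
    return total, ann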
def drawdowns(returns, geometric=True):
"""
compute the drawdown series for the period return series
return: periodic return Series or DataFrame
"""
wealth = 1. + returns_cumulative(returns, geometric=geometric, expanding=True)
values = wealth.values
if values.ndim == 2:
ncols = values.shape[-1]
values = np.vstack(([1.] * ncols, values))
maxwealth = pd.expanding_max(values)[1:]
dds = wealth / maxwealth - 1.
dds[dds > 0] = 0 # Can happen if first returns are positive
return dds
elif values.ndim == 1:
values = np.hstack(([1.], values))
maxwealth = pd.expanding_max(values)[1:]
dds = wealth / maxwealth - 1.
dds[dds > 0] = 0 # Can happen if first returns are positive
return dds
else:
raise ValueError('unable to process array with %s dimensions' % values.ndim)
def max_drawdown(returns=None, geometric=True, dd=None, inc_date=False):
"""
compute the max draw down.
returns: period return Series or DataFrame
dd: drawdown Series or DataFrame (mutually exclusive with returns)
"""
if (returns is None and dd is None) or (returns is not None and dd is not None):
raise ValueError('returns and drawdowns are mutually exclusive')
if returns is not None:
dd = drawdowns(returns, geometric=geometric)
if isinstance(dd, pd.DataFrame):
vals = [max_drawdown(dd=dd[c], inc_date=inc_date) for c in dd.columns]
cols = ['maxdd'] + (inc_date and ['maxdd_dt'] or [])
res = pd.DataFrame(vals, columns=cols, index=dd.columns)
return res if inc_date else res.maxdd
else:
mddidx = dd.idxmin()
# if mddidx == dd.index[0]:
# # no maxff
# return 0 if not inc_date else (0, None)
#else:
sub = dd[:mddidx]
start = sub[::-1].idxmax()
mdd = dd[mddidx]
# return start, mddidx, mdd
return mdd if not inc_date else (mdd, mddidx)
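# Illustrative sketch (added): max drawdown of a toy return series. Note that
# drawdowns() relies on the legacy pd.expanding_max API this module is written
# against, so this only runs on the older pandas versions the module targets.
def _example_max_drawdown():
    rets = pd.Series([0.05, -0.10, 0.02, -0.03, 0.04])
    return max_drawdown(returns=rets, inc_date=True)  # (max drawdown, trough label)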
@per_series(result_is_frame=1)
def drawdown_info(returns, geometric=True):
"""Return a DataFrame containing information about ALL the drawdowns for the rets. The frame
contains the columns:
'dd start': drawdown start date
'dd end': drawdown end date
'maxdd': maximum drawdown
'maxdd dt': date of the maximum drawdown
'days': duration of drawdown
"""
dd = drawdowns(returns, geometric=geometric).to_frame()
last = dd.index[-1]
dd.columns = ['vals']
dd['nonzero'] = (dd.vals != 0).astype(int)
dd['gid'] = (dd.nonzero.shift(1) != dd.nonzero).astype(int).cumsum()
idxname = dd.index.name or 'index'
ixs = dd.reset_index().groupby(['nonzero', 'gid'])[idxname].apply(lambda x: np.array(x))
rows = []
if 1 in ixs:
for ix in ixs[1]:
sub = dd.ix[ix]
# need to get t+1 since the drawdown actually ends on the 0 value
end = dd.index[dd.index.get_loc(sub.index[-1]) + (last != sub.index[-1] and 1 or 0)]
rows.append([sub.index[0], end, sub.vals.min(), sub.vals.idxmin()])
f = pd.DataFrame.from_records(rows, columns=['dd start', 'dd end', 'maxdd', 'maxdd dt'])
f['days'] = (f['dd end'] - f['dd start']).astype('timedelta64[D]')
return f
def std_annualized(returns, scale=None, expanding=0):
scale = _resolve_periods_in_year(scale, returns)
if expanding:
return np.sqrt(scale) * pd.expanding_std(returns)
else:
return np.sqrt(scale) * returns.std()
def sharpe(returns, rfr=0, expanding=0):
"""
returns: periodic return string
rfr: risk free rate
expanding: bool
"""
if expanding:
excess = excess_returns(returns, rfr)
return pd.expanding_mean(excess) / pd.expanding_std(returns)
else:
return excess_returns(returns, rfr).mean() / returns.std()
def sharpe_annualized(returns, rfr_ann=0, scale=None, expanding=False, geometric=False):
scale = _resolve_periods_in_year(scale, returns)
stdann = std_annualized(returns, scale=scale, expanding=expanding)
retsann = returns_annualized(returns, scale=scale, expanding=expanding, geometric=geometric)
return (retsann - rfr_ann) / stdann
def downside_deviation(rets, mar=0, expanding=0, full=0, ann=0):
"""Compute the downside deviation for the specifed return series
:param rets: periodic return series
:param mar: minimum acceptable rate of return (MAR)
:param full: If True, use the length of the full series. If False, use only values below MAR
:param expanding:
:param ann: True if result should be annualized
"""
below = rets[rets < mar]
if expanding:
n = pd.expanding_count(rets)[below.index] if full else pd.expanding_count(below)
dd = np.sqrt(((below - mar) ** 2).cumsum() / n)
if ann:
dd *= np.sqrt(periods_in_year(rets))
return dd.reindex(rets.index).ffill()
else:
n = rets.count() if full else below.count()
dd = np.sqrt(((below - mar) ** 2).sum() / n)
if ann:
dd *= np.sqrt(periods_in_year(rets))
return dd
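# Illustrative sketch (added): downside deviation below a 0% MAR, once using only
# the losing periods in the denominator and once using the full sample length.
def _example_downside_deviation():
    rets = pd.Series([0.02, -0.01, 0.03, -0.04, 0.01])
    partial = downside_deviation(rets, mar=0)           # n = number of losses
    full_len = downside_deviation(rets, mar=0, full=1)  # n = full sample length
    return partial, full_len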
def sortino_ratio(rets, rfr_ann=0, mar=0, full=0, expanding=0):
"""Compute the sortino ratio as (Ann Rets - Risk Free Rate) / Downside Deviation Ann
:param rets: period return series
:param rfr_ann: annualized risk free rate
:param mar: minimum acceptable rate of return (MAR)
:param full: If True, use the length of the full series. If False, use only values below MAR
:param expanding:
:return:
"""
annrets = returns_annualized(rets, expanding=expanding) - rfr_ann
return annrets / downside_deviation(rets, mar=mar, expanding=expanding, full=full, ann=1)
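# Illustrative sketch (added): Sortino ratio for six made-up monthly returns with
# a 0% MAR; the index frequency drives the annualization factor.
def _example_sortino():
    idx = pd.date_range('2020-01-31', periods=6, freq='M')
    rets = pd.Series([0.01, -0.02, 0.03, -0.01, 0.02, 0.01], index=idx)
    return sortino_ratio(rets, rfr_ann=0.0, mar=0.0)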
def information_ratio(rets, bm_rets, scale=None, expanding=False):
"""Information ratio, a common measure of manager efficiency, evaluates excess returns over a benchmark
versus tracking error.
:param rets: period returns
:param bm_rets: periodic benchmark returns (not annualized)
:param scale: None or the scale to be used for annualization
:param expanding:
:return:
"""
scale = _resolve_periods_in_year(scale, rets)
rets_ann = returns_annualized(rets, scale=scale, expanding=expanding)
bm_rets_ann = returns_annualized(bm_rets, scale=scale, expanding=expanding)
tracking_error_ann = std_annualized((rets - bm_rets), scale=scale, expanding=expanding)
return (rets_ann - bm_rets_ann) / tracking_error_ann
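# Illustrative sketch (added): information ratio of a strategy versus a benchmark,
# both monthly; annualization comes from the shared index frequency. Data is made up.
def _example_information_ratio():
    idx = pd.date_range('2020-01-31', periods=6, freq='M')
    strat = pd.Series([0.02, 0.01, -0.01, 0.03, 0.00, 0.02], index=idx)
    bench = pd.Series([0.01, 0.01, 0.00, 0.02, 0.01, 0.01], index=idx)
    return information_ratio(strat, bench)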
def upside_potential_ratio(rets, mar=0, full=0, expanding=0):
if isinstance(rets, pd.Series):
above = rets[rets > mar]
excess = -mar + above
if expanding:
n = pd.expanding_count(rets) if full else pd.expanding_count(above)
upside = excess.cumsum() / n
downside = downside_deviation(rets, mar=mar, full=full, expanding=1)
return (upside / downside).reindex(rets.index).fillna(method='ffill')
else:
n = rets.count() if full else above.count()
upside = excess.sum() / n
downside = downside_deviation(rets, mar=mar, full=full)
return upside / downside
else:
vals = {c: upside_potential_ratio(rets[c], mar=mar, full=full, expanding=expanding) for c in rets.columns}
if expanding:
return pd.DataFrame(vals, columns=rets.columns)
from statistics import mean
import geopandas
from shapely.geometry import LineString
import numpy as np
import pandas as pd
_MAP_KWARGS = [
"location",
"prefer_canvas",
"no_touch",
"disable_3d",
"png_enabled",
"zoom_control",
"crs",
"zoom_start",
"left",
"top",
"position",
"min_zoom",
"max_zoom",
"min_lat",
"max_lat",
"min_lon",
"max_lon",
"max_bounds",
]
def _explore(
df,
column=None,
cmap=None,
color=None,
m=None,
tiles="OpenStreetMap",
attr=None,
tooltip=True,
popup=False,
highlight=True,
categorical=False,
legend=True,
scheme=None,
k=5,
vmin=None,
vmax=None,
width="100%",
height="100%",
categories=None,
classification_kwds=None,
control_scale=True,
marker_type=None,
marker_kwds={},
style_kwds={},
highlight_kwds={},
missing_kwds={},
tooltip_kwds={},
popup_kwds={},
legend_kwds={},
**kwargs,
):
"""Interactive map based on GeoPandas and folium/leaflet.js
Generate an interactive leaflet map based on :class:`~geopandas.GeoDataFrame`
Parameters
----------
column : str, np.array, pd.Series (default None)
The name of the dataframe column, :class:`numpy.array`,
or :class:`pandas.Series` to be plotted. If :class:`numpy.array` or
:class:`pandas.Series` are used then it must have same length as dataframe.
cmap : str, matplotlib.Colormap, branca.colormap or function (default None)
The name of a colormap recognized by ``matplotlib``, a list-like of colors,
:class:`matplotlib.colors.Colormap`, a :class:`branca.colormap.ColorMap` or
function that returns a named color or hex based on the column
value, e.g.::
def my_colormap(value): # scalar value defined in 'column'
if value > 1:
return "green"
return "red"
color : str, array-like (default None)
Named color or a list-like of colors (named or hex).
m : folium.Map (default None)
Existing map instance on which to draw the plot.
tiles : str, xyzservices.TileProvider (default 'OpenStreetMap Mapnik')
Map tileset to use. Can choose from the list supported by folium, query a
:class:`xyzservices.TileProvider` by a name from ``xyzservices.providers``,
pass :class:`xyzservices.TileProvider` object or pass custom XYZ URL.
The current list of built-in providers (when ``xyzservices`` is not available):
``["OpenStreetMap", "Stamen Terrain", “Stamen Toner", “Stamen Watercolor"
"CartoDB positron", “CartoDB dark_matter"]``
You can pass a custom tileset to Folium by passing a Leaflet-style URL
to the tiles parameter: ``http://{s}.yourtiles.com/{z}/{x}/{y}.png``.
Be sure to check their terms and conditions and to provide attribution with
the ``attr`` keyword.
attr : str (default None)
Map tile attribution; only required if passing custom tile URL.
tooltip : bool, str, int, list (default True)
Display GeoDataFrame attributes when hovering over the object.
``True`` includes all columns. ``False`` removes tooltip. Pass string or list of
strings to specify a column(s). Integer specifies first n columns to be
included. Defaults to ``True``.
popup : bool, str, int, list (default False)
Input GeoDataFrame attributes for object displayed when clicking.
``True`` includes all columns. ``False`` removes popup. Pass string or list of
strings to specify a column(s). Integer specifies first n columns to be
included. Defaults to ``False``.
highlight : bool (default True)
Enable highlight functionality when hovering over a geometry.
categorical : bool (default False)
If ``False``, ``cmap`` will reflect numerical values of the
column being plotted. For non-numerical columns, this
will be set to True.
legend : bool (default True)
Plot a legend in choropleth plots.
Ignored if no ``column`` is given.
scheme : str (default None)
Name of a choropleth classification scheme (requires ``mapclassify`` >= 2.4.0).
A :func:`mapclassify.classify` will be used
under the hood. Supported are all schemes provided by ``mapclassify`` (e.g.
``'BoxPlot'``, ``'EqualInterval'``, ``'FisherJenks'``, ``'FisherJenksSampled'``,
``'HeadTailBreaks'``, ``'JenksCaspall'``, ``'JenksCaspallForced'``,
``'JenksCaspallSampled'``, ``'MaxP'``, ``'MaximumBreaks'``,
``'NaturalBreaks'``, ``'Quantiles'``, ``'Percentiles'``, ``'StdMean'``,
``'UserDefined'``). Arguments can be passed in ``classification_kwds``.
k : int (default 5)
Number of classes
vmin : None or float (default None)
Minimum value of ``cmap``. If ``None``, the minimum data value
in the column to be plotted is used.
vmax : None or float (default None)
Maximum value of ``cmap``. If ``None``, the maximum data value
in the column to be plotted is used.
width : pixel int or percentage string (default: '100%')
Width of the folium :class:`~folium.folium.Map`. If the argument
m is given explicitly, width is ignored.
height : pixel int or percentage string (default: '100%')
Height of the folium :class:`~folium.folium.Map`. If the argument
m is given explicitly, height is ignored.
categories : list-like
Ordered list-like object of categories to be used for categorical plot.
classification_kwds : dict (default None)
Keyword arguments to pass to mapclassify
control_scale : bool, (default True)
Whether to add a control scale on the map.
marker_type : str, folium.Circle, folium.CircleMarker, folium.Marker (default None)
Allowed string options are ('marker', 'circle', 'circle_marker'). Defaults to
folium.CircleMarker.
marker_kwds: dict (default {})
Additional keywords to be passed to the selected ``marker_type``, e.g.:
radius : float (default 2 for ``circle_marker`` and 50 for ``circle``)
Radius of the circle, in meters (for ``circle``) or pixels
(for ``circle_marker``).
fill : bool (default True)
Whether to fill the ``circle`` or ``circle_marker`` with color.
icon : folium.map.Icon
the :class:`folium.map.Icon` object to use to render the marker.
draggable : bool (default False)
Set to True to be able to drag the marker around the map.
style_kwds : dict (default {})
Additional style to be passed to folium ``style_function``:
stroke : bool (default True)
Whether to draw stroke along the path. Set it to ``False`` to
disable borders on polygons or circles.
color : str
Stroke color
weight : int
Stroke width in pixels
opacity : float (default 1.0)
Stroke opacity
fill : boolean (default True)
Whether to fill the path with color. Set it to ``False`` to
disable filling on polygons or circles.
fillColor : str
Fill color. Defaults to the value of the color option
fillOpacity : float (default 0.5)
Fill opacity.
Plus all supported by :func:`folium.vector_layers.path_options`. See the
documentation of :class:`folium.features.GeoJson` for details.
highlight_kwds : dict (default {})
Style to be passed to folium highlight_function. Uses the same keywords
as ``style_kwds``. When empty, defaults to ``{"fillOpacity": 0.75}``.
tooltip_kwds : dict (default {})
Additional keywords to be passed to :class:`folium.features.GeoJsonTooltip`,
e.g. ``aliases``, ``labels``, or ``sticky``.
popup_kwds : dict (default {})
Additional keywords to be passed to :class:`folium.features.GeoJsonPopup`,
e.g. ``aliases`` or ``labels``.
legend_kwds : dict (default {})
Additional keywords to be passed to the legend.
Currently supported customisation:
caption : string
Custom caption of the legend. Defaults to the column name.
Additional accepted keywords when ``scheme`` is specified:
colorbar : bool (default True)
An option to control the style of the legend. If True, continuous
colorbar will be used. If False, categorical legend will be used for bins.
scale : bool (default True)
Scale bins along the colorbar axis according to the bin edges (True)
or use the equal length for each bin (False)
fmt : string (default "{:.2f}")
A formatting specification for the bin edges of the classes in the
legend. For example, to have no decimals: ``{"fmt": "{:.0f}"}``. Applies
if ``colorbar=False``.
labels : list-like
A list of legend labels to override the auto-generated labels.
Needs to have the same number of elements as the number of
classes (`k`). Applies if ``colorbar=False``.
interval : boolean (default False)
An option to control brackets from mapclassify legend.
If True, open/closed interval brackets are shown in the legend.
Applies if ``colorbar=False``.
max_labels : int, default 10
Maximum number of colorbar tick labels (requires branca>=0.5.0)
**kwargs : dict
Additional options to be passed on to the folium object.
Returns
-------
m : folium.folium.Map
folium :class:`~folium.folium.Map` instance
Examples
--------
>>> df = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
>>> df.head(2) # doctest: +SKIP
pop_est continent name iso_a3 \
gdp_md_est geometry
0 920938 Oceania Fiji FJI 8374.0 MULTIPOLY\
GON (((180.00000 -16.06713, 180.00000...
1 53950935 Africa Tanzania TZA 150600.0 POLYGON (\
(33.90371 -0.95000, 34.07262 -1.05982...
>>> df.explore("pop_est", cmap="Blues") # doctest: +SKIP
"""
try:
import branca as bc
import folium
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib.pyplot as plt
from mapclassify import classify
except (ImportError, ModuleNotFoundError):
raise ImportError(
"The 'folium', 'matplotlib' and 'mapclassify' packages are required for "
"'explore()'. You can install them using "
"'conda install -c conda-forge folium matplotlib mapclassify' "
"or 'pip install folium matplotlib mapclassify'."
)
# xyservices is an optional dependency
try:
import xyzservices
HAS_XYZSERVICES = True
except (ImportError, ModuleNotFoundError):
HAS_XYZSERVICES = False
gdf = df.copy()
# convert LinearRing to LineString
rings_mask = df.geom_type == "LinearRing"
if rings_mask.any():
gdf.geometry[rings_mask] = gdf.geometry[rings_mask].apply(
lambda g: LineString(g)
)
if gdf.crs is None:
kwargs["crs"] = "Simple"
tiles = None
elif not gdf.crs.equals(4326):
gdf = gdf.to_crs(4326)
# create folium.Map object
if m is None:
# Get bounds to specify location and map extent
bounds = gdf.total_bounds
location = kwargs.pop("location", None)
if location is None:
x = mean([bounds[0], bounds[2]])
y = mean([bounds[1], bounds[3]])
location = (y, x)
if "zoom_start" in kwargs.keys():
fit = False
else:
fit = True
else:
fit = False
# get a subset of kwargs to be passed to folium.Map
map_kwds = {i: kwargs[i] for i in kwargs.keys() if i in _MAP_KWARGS}
if HAS_XYZSERVICES:
# match provider name string to xyzservices.TileProvider
if isinstance(tiles, str):
try:
tiles = xyzservices.providers.query_name(tiles)
except ValueError:
pass
if isinstance(tiles, xyzservices.TileProvider):
attr = attr if attr else tiles.html_attribution
map_kwds["min_zoom"] = tiles.get("min_zoom", 0)
map_kwds["max_zoom"] = tiles.get("max_zoom", 18)
tiles = tiles.build_url(scale_factor="{r}")
m = folium.Map(
location=location,
control_scale=control_scale,
tiles=tiles,
attr=attr,
width=width,
height=height,
**map_kwds,
)
# fit bounds to get a proper zoom level
if fit:
m.fit_bounds([[bounds[1], bounds[0]], [bounds[3], bounds[2]]])
for map_kwd in _MAP_KWARGS:
kwargs.pop(map_kwd, None)
nan_idx = None
if column is not None:
if pd.api.types.is_list_like(column):
from collections import OrderedDict
from datetime import datetime, timedelta
import numpy as np
import numpy.ma as ma
import pytest
from pandas._libs import iNaT, lib
from pandas.core.dtypes.common import is_categorical_dtype, is_datetime64tz_dtype
from pandas.core.dtypes.dtypes import (
CategoricalDtype,
DatetimeTZDtype,
IntervalDtype,
PeriodDtype,
)
import pandas as pd
from pandas import (
Categorical,
DataFrame,
Index,
Interval,
IntervalIndex,
MultiIndex,
NaT,
Period,
Series,
Timestamp,
date_range,
isna,
period_range,
timedelta_range,
)
import pandas._testing as tm
from pandas.core.arrays import IntervalArray, period_array
class TestSeriesConstructors:
@pytest.mark.parametrize(
"constructor,check_index_type",
[
# NOTE: some overlap with test_constructor_empty but that test does not
# test for None or an empty generator.
# test_constructor_pass_none tests None but only with the index also
# passed.
(lambda: Series(), True),
(lambda: Series(None), True),
(lambda: Series({}), True),
(lambda: Series(()), False), # creates a RangeIndex
(lambda: Series([]), False), # creates a RangeIndex
(lambda: Series((_ for _ in [])), False), # creates a RangeIndex
(lambda: Series(data=None), True),
(lambda: Series(data={}), True),
(lambda: Series(data=()), False), # creates a RangeIndex
(lambda: Series(data=[]), False), # creates a RangeIndex
(lambda: Series(data=(_ for _ in [])), False), # creates a RangeIndex
],
)
def test_empty_constructor(self, constructor, check_index_type):
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
expected = Series()
result = constructor()
assert len(result.index) == 0
tm.assert_series_equal(result, expected, check_index_type=check_index_type)
def test_invalid_dtype(self):
# GH15520
msg = "not understood"
invalid_list = [pd.Timestamp, "pd.Timestamp", list]
for dtype in invalid_list:
with pytest.raises(TypeError, match=msg):
Series([], name="time", dtype=dtype)
def test_invalid_compound_dtype(self):
# GH#13296
c_dtype = np.dtype([("a", "i8"), ("b", "f4")])
cdt_arr = np.array([(1, 0.4), (256, -13)], dtype=c_dtype)
with pytest.raises(ValueError, match="Use DataFrame instead"):
Series(cdt_arr, index=["A", "B"])
def test_scalar_conversion(self):
# Pass in scalar is disabled
scalar = Series(0.5)
assert not isinstance(scalar, float)
# Coercion
assert float(Series([1.0])) == 1.0
assert int(Series([1.0])) == 1
def test_constructor(self, datetime_series):
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
empty_series = Series()
assert datetime_series.index.is_all_dates
# Pass in Series
derived = Series(datetime_series)
assert derived.index.is_all_dates
assert tm.equalContents(derived.index, datetime_series.index)
# Ensure new index is not created
assert id(datetime_series.index) == id(derived.index)
# Mixed type Series
mixed = Series(["hello", np.NaN], index=[0, 1])
assert mixed.dtype == np.object_
assert mixed[1] is np.NaN
assert not empty_series.index.is_all_dates
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
assert not Series().index.is_all_dates
# exception raised is of type Exception
with pytest.raises(Exception, match="Data must be 1-dimensional"):
Series(np.random.randn(3, 3), index=np.arange(3))
mixed.name = "Series"
rs = Series(mixed).name
xp = "Series"
assert rs == xp
# raise on MultiIndex GH4187
m = MultiIndex.from_arrays([[1, 2], [3, 4]])
msg = "initializing a Series from a MultiIndex is not supported"
with pytest.raises(NotImplementedError, match=msg):
Series(m)
@pytest.mark.parametrize("input_class", [list, dict, OrderedDict])
def test_constructor_empty(self, input_class):
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
empty = Series()
empty2 = Series(input_class())
# these are Index() and RangeIndex() which don't compare type equal
# but are just .equals
tm.assert_series_equal(empty, empty2, check_index_type=False)
# With explicit dtype:
empty = Series(dtype="float64")
empty2 = Series(input_class(), dtype="float64")
tm.assert_series_equal(empty, empty2, check_index_type=False)
# GH 18515 : with dtype=category:
empty = Series(dtype="category")
empty2 = Series(input_class(), dtype="category")
tm.assert_series_equal(empty, empty2, check_index_type=False)
if input_class is not list:
# With index:
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
empty = Series(index=range(10))
empty2 = Series(input_class(), index=range(10))
tm.assert_series_equal(empty, empty2)
# With index and dtype float64:
empty = Series(np.nan, index=range(10))
empty2 = Series(input_class(), index=range(10), dtype="float64")
tm.assert_series_equal(empty, empty2)
# GH 19853 : with empty string, index and dtype str
empty = Series("", dtype=str, index=range(3))
empty2 = Series("", index=range(3))
tm.assert_series_equal(empty, empty2)
@pytest.mark.parametrize("input_arg", [np.nan, float("nan")])
def test_constructor_nan(self, input_arg):
empty = Series(dtype="float64", index=range(10))
empty2 = Series(input_arg, index=range(10))
tm.assert_series_equal(empty, empty2, check_index_type=False)
@pytest.mark.parametrize(
"dtype",
["f8", "i8", "M8[ns]", "m8[ns]", "category", "object", "datetime64[ns, UTC]"],
)
@pytest.mark.parametrize("index", [None, pd.Index([])])
def test_constructor_dtype_only(self, dtype, index):
# GH-20865
result = pd.Series(dtype=dtype, index=index)
assert result.dtype == dtype
assert len(result) == 0
def test_constructor_no_data_index_order(self):
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
result = pd.Series(index=["b", "a", "c"])
assert result.index.tolist() == ["b", "a", "c"]
def test_constructor_no_data_string_type(self):
# GH 22477
result = pd.Series(index=[1], dtype=str)
assert np.isnan(result.iloc[0])
@pytest.mark.parametrize("item", ["entry", "ѐ", 13])
def test_constructor_string_element_string_type(self, item):
# GH 22477
result = pd.Series(item, index=[1], dtype=str)
assert result.iloc[0] == str(item)
def test_constructor_dtype_str_na_values(self, string_dtype):
# https://github.com/pandas-dev/pandas/issues/21083
ser = Series(["x", None], dtype=string_dtype)
result = ser.isna()
expected = Series([False, True])
tm.assert_series_equal(result, expected)
assert ser.iloc[1] is None
ser = Series(["x", np.nan], dtype=string_dtype)
assert np.isnan(ser.iloc[1])
def test_constructor_series(self):
index1 = ["d", "b", "a", "c"]
index2 = sorted(index1)
s1 = Series([4, 7, -5, 3], index=index1)
s2 = Series(s1, index=index2)
tm.assert_series_equal(s2, s1.sort_index())
def test_constructor_iterable(self):
# GH 21987
class Iter:
def __iter__(self):
for i in range(10):
yield i
expected = Series(list(range(10)), dtype="int64")
result = Series(Iter(), dtype="int64")
tm.assert_series_equal(result, expected)
def test_constructor_sequence(self):
# GH 21987
expected = Series(list(range(10)), dtype="int64")
result = Series(range(10), dtype="int64")
tm.assert_series_equal(result, expected)
def test_constructor_single_str(self):
# GH 21987
expected = Series(["abc"])
result = Series("abc")
tm.assert_series_equal(result, expected)
def test_constructor_list_like(self):
# make sure that we are coercing different
# list-likes to standard dtypes and not
# platform specific
expected = Series([1, 2, 3], dtype="int64")
for obj in [[1, 2, 3], (1, 2, 3), np.array([1, 2, 3], dtype="int64")]:
result = Series(obj, index=[0, 1, 2])
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("dtype", ["bool", "int32", "int64", "float64"])
def test_constructor_index_dtype(self, dtype):
# GH 17088
s = Series(Index([0, 2, 4]), dtype=dtype)
assert s.dtype == dtype
@pytest.mark.parametrize(
"input_vals",
[
([1, 2]),
(["1", "2"]),
(list(pd.date_range("1/1/2011", periods=2, freq="H"))),
(list(pd.date_range("1/1/2011", periods=2, freq="H", tz="US/Eastern"))),
([pd.Interval(left=0, right=5)]),
],
)
def test_constructor_list_str(self, input_vals, string_dtype):
# GH 16605
# Ensure that data elements from a list are converted to strings
# when dtype is str, 'str', or 'U'
result = Series(input_vals, dtype=string_dtype)
expected = Series(input_vals).astype(string_dtype)
tm.assert_series_equal(result, expected)
def test_constructor_list_str_na(self, string_dtype):
result = Series([1.0, 2.0, np.nan], dtype=string_dtype)
expected = Series(["1.0", "2.0", np.nan], dtype=object)
tm.assert_series_equal(result, expected)
assert np.isnan(result[2])
def test_constructor_generator(self):
gen = (i for i in range(10))
result = Series(gen)
exp = Series(range(10))
tm.assert_series_equal(result, exp)
gen = (i for i in range(10))
result = Series(gen, index=range(10, 20))
exp.index = range(10, 20)
tm.assert_series_equal(result, exp)
def test_constructor_map(self):
# GH8909
m = map(lambda x: x, range(10))
result = Series(m)
exp = Series(range(10))
tm.assert_series_equal(result, exp)
m = map(lambda x: x, range(10))
result = Series(m, index=range(10, 20))
exp.index = range(10, 20)
tm.assert_series_equal(result, exp)
def test_constructor_categorical(self):
cat = pd.Categorical([0, 1, 2, 0, 1, 2], ["a", "b", "c"], fastpath=True)
res = Series(cat)
tm.assert_categorical_equal(res.values, cat)
# can cast to a new dtype
result = Series(pd.Categorical([1, 2, 3]), dtype="int64")
expected = pd.Series([1, 2, 3], dtype="int64")
tm.assert_series_equal(result, expected)
# GH12574
cat = Series(pd.Categorical([1, 2, 3]), dtype="category")
assert is_categorical_dtype(cat)
assert is_categorical_dtype(cat.dtype)
s = Series([1, 2, 3], dtype="category")
assert is_categorical_dtype(s)
assert is_categorical_dtype(s.dtype)
def test_constructor_categorical_with_coercion(self):
factor = Categorical(["a", "b", "b", "a", "a", "c", "c", "c"])
# test basic creation / coercion of categoricals
s = Series(factor, name="A")
assert s.dtype == "category"
assert len(s) == len(factor)
str(s.values)
str(s)
# in a frame
df = DataFrame({"A": factor})
result = df["A"]
tm.assert_series_equal(result, s)
result = df.iloc[:, 0]
tm.assert_series_equal(result, s)
assert len(df) == len(factor)
str(df.values)
str(df)
df = DataFrame({"A": s})
result = df["A"]
tm.assert_series_equal(result, s)
assert len(df) == len(factor)
str(df.values)
str(df)
# multiples
df = DataFrame({"A": s, "B": s, "C": 1})
result1 = df["A"]
result2 = df["B"]
tm.assert_series_equal(result1, s)
tm.assert_series_equal(result2, s, check_names=False)
assert result2.name == "B"
assert len(df) == len(factor)
str(df.values)
str(df)
# GH8623
x = DataFrame(
[[1, "<NAME>"], [2, "<NAME>"], [1, "<NAME>"]],
columns=["person_id", "person_name"],
)
x["person_name"] = Categorical(x.person_name) # doing this breaks transform
expected = x.iloc[0].person_name
result = x.person_name.iloc[0]
assert result == expected
result = x.person_name[0]
assert result == expected
result = x.person_name.loc[0]
assert result == expected
def test_constructor_categorical_dtype(self):
result = pd.Series(
["a", "b"], dtype=CategoricalDtype(["a", "b", "c"], ordered=True)
)
assert is_categorical_dtype(result.dtype) is True
tm.assert_index_equal(result.cat.categories, pd.Index(["a", "b", "c"]))
assert result.cat.ordered
result = pd.Series(["a", "b"], dtype=CategoricalDtype(["b", "a"]))
assert is_categorical_dtype(result.dtype)
tm.assert_index_equal(result.cat.categories, pd.Index(["b", "a"]))
assert result.cat.ordered is False
# GH 19565 - Check broadcasting of scalar with Categorical dtype
result = Series(
"a", index=[0, 1], dtype=CategoricalDtype(["a", "b"], ordered=True)
)
expected = Series(
["a", "a"], index=[0, 1], dtype=CategoricalDtype(["a", "b"], ordered=True)
)
tm.assert_series_equal(result, expected)
def test_constructor_categorical_string(self):
# GH 26336: the string 'category' maintains existing CategoricalDtype
cdt = CategoricalDtype(categories=list("dabc"), ordered=True)
expected = Series(list("abcabc"), dtype=cdt)
# Series(Categorical, dtype='category') keeps existing dtype
cat = Categorical(list("abcabc"), dtype=cdt)
result = Series(cat, dtype="category")
tm.assert_series_equal(result, expected)
# Series(Series[Categorical], dtype='category') keeps existing dtype
result = Series(result, dtype="category")
tm.assert_series_equal(result, expected)
def test_categorical_sideeffects_free(self):
# Passing a categorical to a Series and then changing values in either
# the series or the categorical should not change the values in the
# other one, IF you specify copy!
cat = Categorical(["a", "b", "c", "a"])
s = Series(cat, copy=True)
assert s.cat is not cat
s.cat.categories = [1, 2, 3]
exp_s = np.array([1, 2, 3, 1], dtype=np.int64)
exp_cat = np.array(["a", "b", "c", "a"], dtype=np.object_)
tm.assert_numpy_array_equal(s.__array__(), exp_s)
tm.assert_numpy_array_equal(cat.__array__(), exp_cat)
# setting
s[0] = 2
exp_s2 = np.array([2, 2, 3, 1], dtype=np.int64)
tm.assert_numpy_array_equal(s.__array__(), exp_s2)
tm.assert_numpy_array_equal(cat.__array__(), exp_cat)
# however, copy is False by default
# so this WILL change values
cat = Categorical(["a", "b", "c", "a"])
s = Series(cat)
assert s.values is cat
s.cat.categories = [1, 2, 3]
exp_s = np.array([1, 2, 3, 1], dtype=np.int64)
tm.assert_numpy_array_equal(s.__array__(), exp_s)
tm.assert_numpy_array_equal(cat.__array__(), exp_s)
s[0] = 2
exp_s2 = np.array([2, 2, 3, 1], dtype=np.int64)
tm.assert_numpy_array_equal(s.__array__(), exp_s2)
tm.assert_numpy_array_equal(cat.__array__(), exp_s2)
def test_unordered_compare_equal(self):
left = pd.Series(["a", "b", "c"], dtype=CategoricalDtype(["a", "b"]))
right = pd.Series(pd.Categorical(["a", "b", np.nan], categories=["a", "b"]))
tm.assert_series_equal(left, right)
def test_constructor_maskedarray(self):
data = ma.masked_all((3,), dtype=float)
result = Series(data)
expected = Series([np.nan, np.nan, np.nan])
tm.assert_series_equal(result, expected)
data[0] = 0.0
data[2] = 2.0
index = ["a", "b", "c"]
result = Series(data, index=index)
expected = Series([0.0, np.nan, 2.0], index=index)
tm.assert_series_equal(result, expected)
data[1] = 1.0
result = Series(data, index=index)
expected = Series([0.0, 1.0, 2.0], index=index)
tm.assert_series_equal(result, expected)
data = ma.masked_all((3,), dtype=int)
result = Series(data)
expected = Series([np.nan, np.nan, np.nan], dtype=float)
tm.assert_series_equal(result, expected)
data[0] = 0
data[2] = 2
index = ["a", "b", "c"]
result = Series(data, index=index)
expected = Series([0, np.nan, 2], index=index, dtype=float)
tm.assert_series_equal(result, expected)
data[1] = 1
result = Series(data, index=index)
expected = Series([0, 1, 2], index=index, dtype=int)
tm.assert_series_equal(result, expected)
data = ma.masked_all((3,), dtype=bool)
result = Series(data)
expected = Series([np.nan, np.nan, np.nan], dtype=object)
tm.assert_series_equal(result, expected)
data[0] = True
data[2] = False
index = ["a", "b", "c"]
result = Series(data, index=index)
expected = Series([True, np.nan, False], index=index, dtype=object)
tm.assert_series_equal(result, expected)
data[1] = True
result = Series(data, index=index)
expected = Series([True, True, False], index=index, dtype=bool)
tm.assert_series_equal(result, expected)
data = ma.masked_all((3,), dtype="M8[ns]")
result = Series(data)
expected = Series([iNaT, iNaT, iNaT], dtype="M8[ns]")
tm.assert_series_equal(result, expected)
data[0] = datetime(2001, 1, 1)
data[2] = datetime(2001, 1, 3)
index = ["a", "b", "c"]
result = Series(data, index=index)
expected = Series(
[datetime(2001, 1, 1), iNaT, datetime(2001, 1, 3)],
index=index,
dtype="M8[ns]",
)
tm.assert_series_equal(result, expected)
data[1] = datetime(2001, 1, 2)
result = Series(data, index=index)
expected = Series(
[datetime(2001, 1, 1), datetime(2001, 1, 2), datetime(2001, 1, 3)],
index=index,
dtype="M8[ns]",
)
tm.assert_series_equal(result, expected)
def test_constructor_maskedarray_hardened(self):
# Check numpy masked arrays with hard masks -- from GH24574
data = ma.masked_all((3,), dtype=float).harden_mask()
result = pd.Series(data)
expected = pd.Series([np.nan, np.nan, np.nan])
tm.assert_series_equal(result, expected)
def test_series_ctor_plus_datetimeindex(self):
rng = date_range("20090415", "20090519", freq="B")
data = {k: 1 for k in rng}
result = Series(data, index=rng)
assert result.index is rng
def test_constructor_default_index(self):
s = Series([0, 1, 2])
tm.assert_index_equal(s.index, pd.Index(np.arange(3)))
@pytest.mark.parametrize(
"input",
[
[1, 2, 3],
(1, 2, 3),
list(range(3)),
pd.Categorical(["a", "b", "a"]),
(i for i in range(3)),
map(lambda x: x, range(3)),
],
)
def test_constructor_index_mismatch(self, input):
# GH 19342
# test that construction of a Series with an index of different length
# raises an error
msg = "Length of passed values is 3, index implies 4"
with pytest.raises(ValueError, match=msg):
Series(input, index=np.arange(4))
def test_constructor_numpy_scalar(self):
# GH 19342
# construction with a numpy scalar
# should not raise
result = Series(np.array(100), index=np.arange(4), dtype="int64")
expected = Series(100, index=np.arange(4), dtype="int64")
tm.assert_series_equal(result, expected)
def test_constructor_broadcast_list(self):
# GH 19342
# construction with single-element container and index
# should raise
msg = "Length of passed values is 1, index implies 3"
with pytest.raises(ValueError, match=msg):
Series(["foo"], index=["a", "b", "c"])
def test_constructor_corner(self):
df = tm.makeTimeDataFrame()
objs = [df, df]
s = Series(objs, index=[0, 1])
assert isinstance(s, Series)
def test_constructor_sanitize(self):
s = Series(np.array([1.0, 1.0, 8.0]), dtype="i8")
assert s.dtype == np.dtype("i8")
s = Series(np.array([1.0, 1.0, np.nan]), copy=True, dtype="i8")
assert s.dtype == np.dtype("f8")
def test_constructor_copy(self):
# GH15125
# test dtype parameter has no side effects on copy=True
for data in [[1.0], np.array([1.0])]:
x = Series(data)
y = pd.Series(x, copy=True, dtype=float)
# copy=True maintains original data in Series
tm.assert_series_equal(x, y)
# changes to origin of copy does not affect the copy
x[0] = 2.0
assert not x.equals(y)
assert x[0] == 2.0
assert y[0] == 1.0
@pytest.mark.parametrize(
"index",
[
pd.date_range("20170101", periods=3, tz="US/Eastern"),
pd.date_range("20170101", periods=3),
pd.timedelta_range("1 day", periods=3),
pd.period_range("2012Q1", periods=3, freq="Q"),
pd.Index(list("abc")),
pd.Int64Index([1, 2, 3]),
pd.RangeIndex(0, 3),
],
ids=lambda x: type(x).__name__,
)
def test_constructor_limit_copies(self, index):
# GH 17449
# limit copies of input
s = pd.Series(index)
# we make 1 copy; this is just a smoke test here
assert s._mgr.blocks[0].values is not index
def test_constructor_pass_none(self):
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
s = Series(None, index=range(5))
assert s.dtype == np.float64
s = Series(None, index=range(5), dtype=object)
assert s.dtype == np.object_
# GH 7431
# inference on the index
with tm.assert_produces_warning(DeprecationWarning, check_stacklevel=False):
s = Series(index=np.array([None]))
expected = Series(index=Index([None]))
tm.assert_series_equal(s, expected)
def test_constructor_pass_nan_nat(self):
# GH 13467
exp = Series([np.nan, np.nan], dtype=np.float64)
assert exp.dtype == np.float64
tm.assert_series_equal(Series([np.nan, np.nan]), exp)
tm.assert_series_equal(Series(np.array([np.nan, np.nan])), exp)
exp = Series([pd.NaT, pd.NaT])
assert exp.dtype == "datetime64[ns]"
tm.assert_series_equal(Series([pd.NaT, pd.NaT]), exp)
tm.assert_series_equal(Series(np.array([pd.NaT, pd.NaT])), exp)
tm.assert_series_equal(Series([pd.NaT, np.nan]), exp)
tm.assert_series_equal(Series(np.array([pd.NaT, np.nan])), exp)
tm.assert_series_equal(Series([np.nan, pd.NaT]), exp)
tm.assert_series_equal(Series(np.array([np.nan, pd.NaT])), exp)
def test_constructor_cast(self):
msg = "could not convert string to float"
with pytest.raises(ValueError, match=msg):
Series(["a", "b", "c"], dtype=float)
def test_constructor_unsigned_dtype_overflow(self, uint_dtype):
# see gh-15832
msg = "Trying to coerce negative values to unsigned integers"
with pytest.raises(OverflowError, match=msg):
Series([-1], dtype=uint_dtype)
def test_constructor_coerce_float_fail(self, any_int_dtype):
# see gh-15832
msg = "Trying to coerce float values to integers"
with pytest.raises(ValueError, match=msg):
Series([1, 2, 3.5], dtype=any_int_dtype)
def test_constructor_coerce_float_valid(self, float_dtype):
s = Series([1, 2, 3.5], dtype=float_dtype)
expected = Series([1, 2, 3.5]).astype(float_dtype)
tm.assert_series_equal(s, expected)
def test_constructor_dtype_no_cast(self):
# see gh-1572
s = Series([1, 2, 3])
s2 = Series(s, dtype=np.int64)
s2[1] = 5
assert s[1] == 5
def test_constructor_datelike_coercion(self):
# GH 9477
# incorrectly inferring datetimelike-looking values when object dtype is
# specified
s = Series([Timestamp("20130101"), "NOV"], dtype=object)
assert s.iloc[0] == Timestamp("20130101")
assert s.iloc[1] == "NOV"
assert s.dtype == object
# the dtype was being reset on the slicing and re-inferred to datetime
# even though the blocks are mixed
belly = "216 3T19".split()
wing1 = "2T15 4H19".split()
wing2 = "416 4T20".split()
mat = pd.to_datetime("2016-01-22 2019-09-07".split())
df = pd.DataFrame({"wing1": wing1, "wing2": wing2, "mat": mat}, index=belly)
result = df.loc["3T19"]
assert result.dtype == object
result = df.loc["216"]
assert result.dtype == object
def test_constructor_datetimes_with_nulls(self):
# gh-15869
for arr in [
np.array([None, None, None, None, datetime.now(), None]),
np.array([None, None, datetime.now(), None]),
]:
result = Series(arr)
assert result.dtype == "M8[ns]"
def test_constructor_dtype_datetime64(self):
s = Series(iNaT, dtype="M8[ns]", index=range(5))
assert isna(s).all()
# in theory this should be all nulls, but since
# we are not specifying a dtype it is ambiguous
s = Series(iNaT, index=range(5))
assert not isna(s).all()
s = Series(np.nan, dtype="M8[ns]", index=range(5))
assert isna(s).all()
s = Series([datetime(2001, 1, 2, 0, 0), iNaT], dtype="M8[ns]")
assert isna(s[1])
assert s.dtype == "M8[ns]"
s = Series([datetime(2001, 1, 2, 0, 0), np.nan], dtype="M8[ns]")
assert isna(s[1])
assert s.dtype == "M8[ns]"
# GH3416
dates = [
np.datetime64(datetime(2013, 1, 1)),
np.datetime64(datetime(2013, 1, 2)),
np.datetime64(datetime(2013, 1, 3)),
]
s = Series(dates)
assert s.dtype == "M8[ns]"
s.iloc[0] = np.nan
assert s.dtype == "M8[ns]"
# GH3414 related
expected = Series(
[datetime(2013, 1, 1), datetime(2013, 1, 2), datetime(2013, 1, 3)],
dtype="datetime64[ns]",
)
result = Series(Series(dates).astype(np.int64) / 1000000, dtype="M8[ms]")
tm.assert_series_equal(result, expected)
result = Series(dates, dtype="datetime64[ns]")
tm.assert_series_equal(result, expected)
expected = Series(
[pd.NaT, datetime(2013, 1, 2), datetime(2013, 1, 3)], dtype="datetime64[ns]"
)
result = Series([np.nan] + dates[1:], dtype="datetime64[ns]")
tm.assert_series_equal(result, expected)
dts = Series(dates, dtype="datetime64[ns]")
# valid astype
dts.astype("int64")
# invalid casting
msg = r"cannot astype a datetimelike from \[datetime64\[ns\]\] to \[int32\]"
with pytest.raises(TypeError, match=msg):
dts.astype("int32")
# ints are ok
# we test with np.int64 to get similar results on
# windows / 32-bit platforms
result = Series(dts, dtype=np.int64)
expected = Series(dts.astype(np.int64))
tm.assert_series_equal(result, expected)
# invalid dates can be held as object
result = Series([datetime(2, 1, 1)])
assert result[0] == datetime(2, 1, 1, 0, 0)
result = Series([datetime(3000, 1, 1)])
assert result[0] == datetime(3000, 1, 1, 0, 0)
# don't mix types
result = Series([Timestamp("20130101"), 1], index=["a", "b"])
assert result["a"] == Timestamp("20130101")
assert result["b"] == 1
# GH6529
# coerce datetime64 non-ns properly
dates = date_range("01-Jan-2015", "01-Dec-2015", freq="M")
values2 = dates.view(np.ndarray).astype("datetime64[ns]")
expected = Series(values2, index=dates)
for dtype in ["s", "D", "ms", "us", "ns"]:
values1 = dates.view(np.ndarray).astype(f"M8[{dtype}]")
result = Series(values1, dates)
tm.assert_series_equal(result, expected)
# GH 13876
# coerce to non-ns to object properly
expected = Series(values2, index=dates, dtype=object)
for dtype in ["s", "D", "ms", "us", "ns"]:
values1 = dates.view(np.ndarray).astype(f"M8[{dtype}]")
result = Series(values1, index=dates, dtype=object)
tm.assert_series_equal(result, expected)
# leave datetime.date alone
dates2 = np.array([d.date() for d in dates.to_pydatetime()], dtype=object)
series1 = Series(dates2, dates)
tm.assert_numpy_array_equal(series1.values, dates2)
assert series1.dtype == object
# these will correctly infer a datetime
s = Series([None, pd.NaT, "2013-08-05 15:30:00.000001"])
assert s.dtype == "datetime64[ns]"
s = Series([np.nan, pd.NaT, "2013-08-05 15:30:00.000001"])
assert s.dtype == "datetime64[ns]"
s = Series([pd.NaT, None, "2013-08-05 15:30:00.000001"])
assert s.dtype == "datetime64[ns]"
s = Series([pd.NaT, np.nan, "2013-08-05 15:30:00.000001"])
assert s.dtype == "datetime64[ns]"
# tz-aware (UTC and other tz's)
# GH 8411
dr = date_range("20130101", periods=3)
assert Series(dr).iloc[0].tz is None
dr = date_range("20130101", periods=3, tz="UTC")
assert str(Series(dr).iloc[0].tz) == "UTC"
dr = date_range("20130101", periods=3, tz="US/Eastern")
assert str(Series(dr).iloc[0].tz) == "US/Eastern"
# non-convertible
s = Series([1479596223000, -1479590, pd.NaT])
assert s.dtype == "object"
assert s[2] is pd.NaT
assert "NaT" in str(s)
# if we passed a NaT it remains
s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), pd.NaT])
assert s.dtype == "object"
assert s[2] is pd.NaT
assert "NaT" in str(s)
# if we passed a nan it remains
s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), np.nan])
assert s.dtype == "object"
assert s[2] is np.nan
assert "NaN" in str(s)
def test_constructor_with_datetime_tz(self):
# 8260
# support datetime64 with tz
dr = date_range("20130101", periods=3, tz="US/Eastern")
s = Series(dr)
assert s.dtype.name == "datetime64[ns, US/Eastern]"
assert s.dtype == "datetime64[ns, US/Eastern]"
assert is_datetime64tz_dtype(s.dtype)
assert "datetime64[ns, US/Eastern]" in str(s)
# export
result = s.values
assert isinstance(result, np.ndarray)
assert result.dtype == "datetime64[ns]"
exp = pd.DatetimeIndex(result)
exp = exp.tz_localize("UTC").tz_convert(tz=s.dt.tz)
tm.assert_index_equal(dr, exp)
# indexing
result = s.iloc[0]
assert result == Timestamp(
"2013-01-01 00:00:00-0500", tz="US/Eastern", freq="D"
)
result = s[0]
assert result == Timestamp(
"2013-01-01 00:00:00-0500", tz="US/Eastern", freq="D"
)
result = s[Series([True, True, False], index=s.index)]
tm.assert_series_equal(result, s[0:2])
result = s.iloc[0:1]
tm.assert_series_equal(result, Series(dr[0:1]))
# concat
result = pd.concat([s.iloc[0:1], s.iloc[1:]])
tm.assert_series_equal(result, s)
# short str
assert "datetime64[ns, US/Eastern]" in str(s)
# formatting with NaT
result = s.shift()
assert "datetime64[ns, US/Eastern]" in str(result)
assert "NaT" in str(result)
# long str
t = Series(date_range("20130101", periods=1000, tz="US/Eastern"))
assert "datetime64[ns, US/Eastern]" in str(t)
result = pd.DatetimeIndex(s, freq="infer")
tm.assert_index_equal(result, dr)
# inference
s = Series(
[
pd.Timestamp("2013-01-01 13:00:00-0800", tz="US/Pacific"),
pd.Timestamp("2013-01-02 14:00:00-0800", tz="US/Pacific"),
]
)
assert s.dtype == "datetime64[ns, US/Pacific]"
assert lib.infer_dtype(s, skipna=True) == "datetime64"
s = Series(
[
pd.Timestamp("2013-01-01 13:00:00-0800", tz="US/Pacific"),
pd.Timestamp("2013-01-02 14:00:00-0800", tz="US/Eastern"),
]
)
assert s.dtype == "object"
assert lib.infer_dtype(s, skipna=True) == "datetime"
# with all NaT
s = Series(pd.NaT, index=[0, 1], dtype="datetime64[ns, US/Eastern]")
expected = Series(pd.DatetimeIndex(["NaT", "NaT"], tz="US/Eastern"))
tm.assert_series_equal(s, expected)
@pytest.mark.parametrize("arr_dtype", [np.int64, np.float64])
@pytest.mark.parametrize("dtype", ["M8", "m8"])
@pytest.mark.parametrize("unit", ["ns", "us", "ms", "s", "h", "m", "D"])
def test_construction_to_datetimelike_unit(self, arr_dtype, dtype, unit):
# tests all units
# gh-19223
dtype = f"{dtype}[{unit}]"
arr = np.array([1, 2, 3], dtype=arr_dtype)
s = Series(arr)
result = s.astype(dtype)
expected = Series(arr.astype(dtype))
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("arg", ["2013-01-01 00:00:00", pd.NaT, np.nan, None])
def test_constructor_with_naive_string_and_datetimetz_dtype(self, arg):
# GH 17415: With naive string
result = Series([arg], dtype="datetime64[ns, CET]")
expected = Series(pd.Timestamp(arg)).dt.tz_localize("CET")
tm.assert_series_equal(result, expected)
def test_constructor_datetime64_bigendian(self):
# GH#30976
ms = np.datetime64(1, "ms")
arr = np.array([np.datetime64(1, "ms")], dtype=">M8[ms]")
result = Series(arr)
expected = Series([Timestamp(ms)])
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("interval_constructor", [IntervalIndex, IntervalArray])
def test_construction_interval(self, interval_constructor):
# construction from interval & array of intervals
intervals = interval_constructor.from_breaks(np.arange(3), closed="right")
result = Series(intervals)
assert result.dtype == "interval[int64]"
tm.assert_index_equal(Index(result.values), Index(intervals))
@pytest.mark.parametrize(
"data_constructor", [list, np.array], ids=["list", "ndarray[object]"]
)
def test_constructor_infer_interval(self, data_constructor):
# GH 23563: consistent closed results in interval dtype
data = [pd.Interval(0, 1), pd.Interval(0, 2), None]
result = pd.Series(data_constructor(data))
expected = pd.Series(IntervalArray(data))
assert result.dtype == "interval[float64]"
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
"data_constructor", [list, np.array], ids=["list", "ndarray[object]"]
)
def test_constructor_interval_mixed_closed(self, data_constructor):
# GH 23563: mixed closed results in object dtype (not interval dtype)
data = [pd.Interval(0, 1, closed="both"), pd.Interval(0, 2, closed="neither")]
result = Series(data_constructor(data))
assert result.dtype == object
assert result.tolist() == data
def test_construction_consistency(self):
# make sure that we are not re-localizing upon construction
# GH 14928
s = Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
result = Series(s, dtype=s.dtype)
tm.assert_series_equal(result, s)
result = Series(s.dt.tz_convert("UTC"), dtype=s.dtype)
tm.assert_series_equal(result, s)
result = Series(s.values, dtype=s.dtype)
tm.assert_series_equal(result, s)
@pytest.mark.parametrize(
"data_constructor", [list, np.array], ids=["list", "ndarray[object]"]
)
def test_constructor_infer_period(self, data_constructor):
data = [pd.Period("2000", "D"), pd.Period("2001", "D"), None]
result = pd.Series(data_constructor(data))
expected = pd.Series(period_array(data))
tm.assert_series_equal(result, expected)
assert result.dtype == "Period[D]"
def test_constructor_period_incompatible_frequency(self):
data = [pd.Period("2000", "D"), pd.Period("2001", "A")]
result = pd.Series(data)
assert result.dtype == object
assert result.tolist() == data
def test_constructor_periodindex(self):
# GH7932
# converting a PeriodIndex when put in a Series
pi = period_range("20130101", periods=5, freq="D")
s = Series(pi)
assert s.dtype == "Period[D]"
expected = Series(pi.astype(object))
tm.assert_series_equal(s, expected)
def test_constructor_dict(self):
d = {"a": 0.0, "b": 1.0, "c": 2.0}
result = Series(d, index=["b", "c", "d", "a"])
expected = Series([1, 2, np.nan, 0], index=["b", "c", "d", "a"])
tm.assert_series_equal(result, expected)
pidx = tm.makePeriodIndex(100)
d = {pidx[0]: 0, pidx[1]: 1}
result = Series(d, index=pidx)
expected = Series(np.nan, pidx, dtype=np.float64)
expected.iloc[0] = 0
expected.iloc[1] = 1
tm.assert_series_equal(result, expected)
def test_constructor_dict_list_value_explicit_dtype(self):
# GH 18625
d = {"a": [[2], [3], [4]]}
result = Series(d, index=["a"], dtype="object")
expected = Series(d, index=["a"])
tm.assert_series_equal(result, expected)
def test_constructor_dict_order(self):
# GH19018
# initialization ordering: by insertion order if python>= 3.6, else
# order by value
d = {"b": 1, "a": 0, "c": 2}
result = Series(d)
expected = Series([1, 0, 2], index=list("bac"))
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
"data,dtype",
[
(Period("2020-01"), PeriodDtype("M")),
(Interval(left=0, right=5), IntervalDtype("int64")),
(
Timestamp("2011-01-01", tz="US/Eastern"),
DatetimeTZDtype(tz="US/Eastern"),
),
],
)
def test_constructor_dict_extension(self, data, dtype):
d = {"a": data}
result = Series(d, index=["a"])
expected = Series(data, index=["a"], dtype=dtype)
assert result.dtype == dtype
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("value", [2, np.nan, None, float("nan")])
def test_constructor_dict_nan_key(self, value):
# GH 18480
d = {1: "a", value: "b", float("nan"): "c", 4: "d"}
result = Series(d).sort_values()
expected = Series(["a", "b", "c", "d"], index=[1, value, np.nan, 4])
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import pandas.util.testing as pdt
import pytest
from recordlinkage.preprocessing import clean
from recordlinkage.preprocessing import phonenumbers
from recordlinkage.preprocessing import phonetic
from recordlinkage.preprocessing import phonetic_algorithms
from recordlinkage.preprocessing import value_occurence
class TestCleaningStandardise(object):
def test_clean(self):
values = pd.Series([
'Mary-ann', 'Bob :)', 'Angel', 'Bob (alias Billy)', 'Mary ann',
'John', np.nan
])
expected = pd.Series(
['mary ann', 'bob', 'angel', 'bob', 'mary ann', 'john', np.nan])
clean_series = clean(values)
# Check if series are identical.
pdt.assert_series_equal(clean_series, expected)
clean_series_nothing = clean(
values,
lowercase=False,
replace_by_none=False,
replace_by_whitespace=False,
strip_accents=False,
remove_brackets=False)
# Check if nothing happened.
pdt.assert_series_equal(clean_series_nothing, values)
def test_clean_empty(self):
""" Test the cleaning of an empty Series"""
# Check empty series
pdt.assert_series_equal(clean(pd.Series()), pd.Series())
def test_clean_unicode(self):
values = pd.Series([
u'Mary-ann', u'Bob :)', u'Angel', u'Bob (alias Billy)',
u'Mary ann', u'John', np.nan
])
expected = pd.Series([
u'mary ann', u'bob', u'angel', u'bob', u'mary ann', u'john', np.nan
])
clean_series = clean(values)
# Check if series are identical.
pdt.assert_series_equal(clean_series, expected)
def test_clean_parameters(self):
values = pd.Series([
u'Mary-ann', u'Bob :)', u'Angel', u'Bob (alias Billy)',
u'Mary ann', u'John', np.nan
])
expected = pd.Series([
u'<NAME>', u'bob', u'angel', u'bob', u'<NAME>', u'john', np.nan
])
clean_series = clean(
values,
lowercase=True,
replace_by_none=r'[^ \-\_A-Za-z0-9]+',
replace_by_whitespace=r'[\-\_]',
remove_brackets=True)
# Check if series are identical.
pdt.assert_series_equal(clean_series, expected)
def test_clean_lower(self):
values = pd.Series([np.nan, 'LowerHigher', 'HIGHERLOWER'])
expected = pd.Series([np.nan, 'lowerhigher', 'higherlower'])
clean_series = clean(values, lowercase=True)
# Check if series are identical.
pdt.assert_series_equal(clean_series, expected)
def test_clean_brackets(self):
values = pd.Series([np.nan, 'bra(cke)ts', 'brackets with (brackets)'])
expected = pd.Series([np.nan, 'brats', 'brackets with'])
clean_series = clean(values, remove_brackets=True)
# Check if series are identical.
pdt.assert_series_equal(clean_series, expected)
def test_clean_accent_stripping(self):
values = pd.Series(['ősdfésdfë', 'without'])
expected = pd.Series(['osdfesdfe', 'without'])
values_unicode = pd.Series([u'ősdfésdfë', u'without'])
expected_unicode = pd.Series([u'osdfesdfe', u'without'])
# values_callable = pd.Series([u'ősdfésdfë', u'without'])
# expected_callable = pd.Series([u'ősdfésdfë', u'without'])
# # Callable.
# pdt.assert_series_equal(
# clean(values_callable, strip_accents=lambda x: x),
# expected_callable)
# Check if series are identical.
pdt.assert_series_equal(
clean(values, strip_accents='unicode'), expected)
# Check if series are identical.
pdt.assert_series_equal(clean(values, strip_accents='ascii'), expected)
# Check if series are identical.
pdt.assert_series_equal(
clean(values_unicode, strip_accents='unicode'), expected_unicode)
# Check if series are identical.
pdt.assert_series_equal(
clean(values_unicode, strip_accents='ascii'), expected_unicode)
with pytest.raises(ValueError):
clean(values, strip_accents='unknown_algorithm')
def test_clean_phonenumbers(self):
values = pd.Series(
[np.nan, '0033612345678', '+1 201 123 4567', '+336-123 45678'])
expected = pd.Series(
[np.nan, '0033612345678', '+12011234567', '+33612345678'])
clean_series = phonenumbers(values)
# Check if series are identical.
pdt.assert_series_equal(clean_series, expected)
def test_value_occurence(self):
values = pd.Series([
np.nan, np.nan, 'str1', 'str1', 'str1', 'str1', 'str2', 'str3',
'str3', 'str1'
])
expected = pd.Series([2, 2, 5, 5, 5, 5, 1, 2, 2, 5])
# coding: utf-8
# ### Convert csv data to geojson.
#
# Thanks to https://gist.github.com/Spaxe/94e130c73a1b835d3c30ea672ec7e5fe
import pandas
import json
data_in = pandas.read_csv("../data/italy_v01.csv",index_col=0,header=0,sep='\s+')
data_in.T.plot()
latlon = pandas.read_csv("../data/italy_latlon.csv", index_col=0, header=0, sep='\s+')
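# A minimal sketch of the CSV-to-GeoJSON step this notebook's title promises.
# The 'lat'/'lon' column names in latlon and the output file name below are
# assumptions, not verified against ../data/italy_latlon.csv.
features = []
for region, row in latlon.iterrows():
    features.append({
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [float(row["lon"]), float(row["lat"])]},
        "properties": {"name": str(region)},
    })
geojson = {"type": "FeatureCollection", "features": features}
# with open("../data/italy_points.geojson", "w") as f:
#     json.dump(geojson, f)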
# -*- encoding: utf-8 -*-
import sys
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *
import os
from pandas import read_csv, isna
from numpy import median, average
from datetime import datetime
import json
import subprocess
import time
import re
import requests
import urllib
def get_service_route():
return '192.168.127.12', '8221', '8222'
def get_report_from_service(report_struct, project_path, IP, ipynb_port):
host = 'http://' + str(IP)
port = ipynb_port
service_url = str(host) + ':' + str(port) + '/'
data = report_struct
params = json.dumps({"reporter": {"data": str(json.dumps(data))}})
response = requests.get(service_url, data=params)
result = response.json()
target = result['target']
ipynb = result['report']
with open(str(project_path) + str(target) + '_structure.json', 'w') as file:
json.dump(data, file)
with open('reports/' + 'predict_' + str(target) + '.ipynb', 'w') as file:
file.write(ipynb)
def make_names(strng):
strng = re.sub('[^a-zA-Z0-9\n]', '_', strng)
return strng
class FBConfig(object):
def __init__(self, proj_path, target):
self.proj_path = proj_path
self.target = target
self.config = None
self.load_config()
def save_config(self):
with open(str(self.proj_path) + 'FastBenchmark.config', 'w') as file:
json.dump(self.config, file)
def load_config(self):
if 'FastBenchmark.config' in os.listdir(self.proj_path):
with open(str(self.proj_path) + 'FastBenchmark.config') as file:
self.config = json.load(file)
if self.target not in self.config:
self.config[self.target] = {
'to_drop': [],
'bot_config': {
'pred_prob': 'prediction [prediction] with probability [probability]'
},
'host': '',
'port': ''
}
else:
self.config = {
self.target: {
'to_drop': [],
'bot_config': {
'pred_prob': 'prediction [prediction] with probability [probability]'
},
'host': '',
'port': ''
}
}
def get_drop(self):
return self.config[self.target]['to_drop']
def set_drop(self, variables):
self.config[self.target]['to_drop'] = variables
def get_bot_config(self):
self.load_config()
if len(self.config[self.target]['bot_config']) > 1:
if str(self.target) + '_encode.log' in os.listdir(self.proj_path):
with open(str(self.proj_path) + str(self.target) + '_encode.log') as file:
enc_log = json.load(file)
if 'columns' in enc_log:
for col in enc_log['columns']:
for variable in self.config[self.target]['bot_config']:
if variable == col['name']:
if len(self.config[self.target]['bot_config'][variable]['answers']) == 0:
for i in range(len(col['values'])):
self.config[self.target]['bot_config'][variable]['answers'][col['values'][i]] = i
return self.config[self.target]['bot_config']
else:
if str(self.target) + '_structure.json' in os.listdir(self.proj_path):
with open(str(self.proj_path) + str(self.target) + '_structure.json') as file:
variables_data = json.load(file)
if 'columns' in variables_data:
for col in variables_data['columns']:
if col['name'] != self.target:
col_data = {
'question': col['name'] + '?',
'answers': {}
}
self.config[self.target]['bot_config'][col['name']] = col_data
if str(self.target) + '_encode.log' in os.listdir(self.proj_path):
with open(str(self.proj_path) + str(self.target) + '_encode.log') as file:
enc_log = json.load(file)
if 'columns' in enc_log:
for col in enc_log['columns']:
for variable in self.config[self.target]['bot_config']:
if variable == col['name']:
for i in range(len(col['values'])):
self.config[self.target]['bot_config'][variable]['answers'][col['values'][i]] = i
self.save_config()
return self.config[self.target]['bot_config']
def set_bot_config(self, conf_data):
self.load_config()
self.get_bot_config()
self.config[self.target]['bot_config'] = conf_data
self.save_config()
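# Hypothetical usage sketch of FBConfig (the project path and target name
# below are made up, not taken from the application itself):
#
#   cfg = FBConfig('/tmp/project/', 'default_flag')
#   cfg.set_drop(['customer_id'])
#   cfg.save_config()
#   bot_conf = cfg.get_bot_config()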
class FastBenchmark(QWidget):
def __init__(self):
super(FastBenchmark, self).__init__()
self.target = ''
self.csv_path = ''
self.project_path = ''
self.token = '<KEY>'
self.csvSeparators = [',', ';', '\t']
self.cat_coef = 1
self.m_a_coef = 0.8
self.dataReaded = False
self.data_length = 0
self.csvSeparator = None
self.data = None
self.target_column_name = None
self.dataFormat = None
self.metric = 'rmse'
self.target_type = None
self.target_template = None
self.columns = []
self.jupyter_subs = {}
self.document_subs = {}
self.service_subs = {}
self.bot_subs = {}
self.hosts_ports = {}
self.fb_config = None
self.initUI()
self.IP, self.ipynb_port, self.serv_port = get_service_route()
def check_for_date(self, col_name):
if self.csvSeparator is None:
self.separator()
df = self.data.sample(min(50, self.data.shape[0]))
val = df[col_name].values.tolist()
rating = 0
i = 0
for v in val:
try:
datetime.fromisoformat(v)
rating += 1
except: pass
i += 1
if rating > 0.4*min(50, self.data.shape[0]):
return True
else:
return False
def dtype_to_str(self, dt, length=1):
if dt == "int64":
if length <= 10:
if length == 2:
return "num-category_bool"
else:
return "num-category"
else:
return "numeric"
if dt == "float64":
if length <= 10:
if length == 2:
return "num-category_bool"
else:
return "num-category"
else:
return "numeric"
if dt == "datetime64":
return "dates"
if dt == "object":
if length > self.cat_coef:
return "string"
if length <= self.cat_coef:
if length == 2:
return "category_bool"
else:
return "category"
def define_data_types(self):
if self.csvSeparator is None:
self.read_data()
cols = self.data.columns
parts = []
part = None
for i in range(len(cols)):
if cols[i] not in self.fb_config.get_drop():
if cols[i] == self.target_column_name:
if self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'category':
self.metric = 'class'
part = {"dataType": 'category',
"dataTemplate": "None",
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
if self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'category_bool':
self.metric = 'class'
part = {"dataType": 'category_bool',
"dataTemplate": "None",
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
if self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'num-category':
self.metric = 'class'
part = {"dataType": 'category',
"dataTemplate": "numeric",
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
if self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'num-category_bool':
self.metric = 'class'
part = {"dataType": 'category_bool',
"dataTemplate": "numeric",
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
if self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'numeric':
mediana = median(self.data[cols[i]])
avg = average(self.data[cols[i]])
if mediana == 0:
mediana = 1
if avg == 0:
avg = 1
if avg >= mediana and mediana/avg <= self.m_a_coef:
self.metric = 'rmsle'
if avg < mediana and avg/mediana <= self.m_a_coef:
self.metric = 'rmsle'
part = {"dataType": 'numeric',
"dataTemplate": "None",
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
else:
if self.check_for_date(cols[i]):
part = {"dataType": 'date',
"dataTemplate": 'None',
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
elif self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'string':
part = {"dataType": 'string',
"dataTemplate": "None",
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
elif self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'num-category':
part = {"dataType": 'category',
"dataTemplate": "numeric",
"name": str(cols[i]),
"NaN": round(sum(isna(self.data[cols[i]]))*100/self.data.shape[0])}
elif self.dtype_to_str(self.data.dtypes[i].name, len(set(self.data[cols[i]]))) == 'num-category_bool':
part = {"dataType": 'category_bool',
"dataTemplate": "numeric",
"name": str(cols[i]),
"NaN": round(sum( | isna(self.data[cols[i]]) | pandas.isna |
"""Main module."""
import json
from collections import defaultdict
import numpy as np
import pandas as pd
from copy import deepcopy
from math import nan, isnan
from .constants import IMAGING_PARAMS
DIRECT_IMAGING_PARAMS = IMAGING_PARAMS - set(["NSliceTimes"])
def check_merging_operations(action_csv, raise_on_error=False):
"""Checks that the merges in an action csv are possible.
To be mergable the
"""
actions = pd.read_csv(action_csv)
#%%
import os
import sys
import pandas as pd
import numpy as np
from tqdm import tqdm
from sklearn.metrics import roc_auc_score
import torch
from utils import *
HOME = os.path.dirname(os.path.abspath(__file__))
# DATA_DIR = '/home/scao/Documents/kaggle-riiid-test/data/'
# MODEL_DIR = f'/home/scao/Documents/kaggle-riiid-test/model/'
MODEL_DIR = HOME+'/model/'
DATA_DIR = HOME+'/data/'
PRIVATE = False
DEBUG = False
MAX_SEQ = 150
VAL_BATCH_SIZE = 4096
TEST_BATCH_SIZE = 4096
SIMU_PUB_SIZE = 25_000
SIMU_PRI_SIZE = 250_000
#%%
class Iter_Valid(object):
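    """Emulate the Kaggle time-series inference API on a held-out frame.

    Iterating yields (test_batch, sample_prediction_df) pairs. Answers to the
    previous batch are exposed through the 'prior_group_answers_correct' and
    'prior_group_responses' columns of the next batch, and each batch holds
    at most `max_user` distinct users.
    """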
def __init__(self, df, max_user=1000):
df = df.reset_index(drop=True)
self.df = df
self.user_answer = df['user_answer'].astype(str).values
self.answered_correctly = df['answered_correctly'].astype(str).values
df['prior_group_responses'] = "[]"
df['prior_group_answers_correct'] = "[]"
self.sample_df = df[df['content_type_id'] == 0][['row_id']]
self.sample_df['answered_correctly'] = 0
self.len = len(df)
self.user_id = df.user_id.values
self.task_container_id = df.task_container_id.values
self.content_type_id = df.content_type_id.values
self.max_user = max_user
self.current = 0
self.pre_user_answer_list = []
self.pre_answered_correctly_list = []
def __iter__(self):
return self
def fix_df(self, user_answer_list, answered_correctly_list, pre_start):
df= self.df[pre_start:self.current].copy()
sample_df = self.sample_df[pre_start:self.current].copy()
df.loc[pre_start,'prior_group_responses'] = '[' + ",".join(self.pre_user_answer_list) + ']'
df.loc[pre_start,'prior_group_answers_correct'] = '[' + ",".join(self.pre_answered_correctly_list) + ']'
self.pre_user_answer_list = user_answer_list
self.pre_answered_correctly_list = answered_correctly_list
return df, sample_df
def __next__(self):
added_user = set()
pre_start = self.current
pre_added_user = -1
pre_task_container_id = -1
user_answer_list = []
answered_correctly_list = []
while self.current < self.len:
crr_user_id = self.user_id[self.current]
crr_task_container_id = self.task_container_id[self.current]
crr_content_type_id = self.content_type_id[self.current]
if crr_content_type_id == 1:
# no more than one task_container_id of "questions" from any single user
# so we only care for content_type_id == 0 to break loop
user_answer_list.append(self.user_answer[self.current])
answered_correctly_list.append(self.answered_correctly[self.current])
self.current += 1
continue
if crr_user_id in added_user and ((crr_user_id != pre_added_user) or (crr_task_container_id != pre_task_container_id)):
# known user (not prev user or different task container)
return self.fix_df(user_answer_list, answered_correctly_list, pre_start)
if len(added_user) == self.max_user:
if crr_user_id == pre_added_user and crr_task_container_id == pre_task_container_id:
user_answer_list.append(self.user_answer[self.current])
answered_correctly_list.append(self.answered_correctly[self.current])
self.current += 1
continue
else:
return self.fix_df(user_answer_list, answered_correctly_list, pre_start)
added_user.add(crr_user_id)
pre_added_user = crr_user_id
pre_task_container_id = crr_task_container_id
user_answer_list.append(self.user_answer[self.current])
answered_correctly_list.append(self.answered_correctly[self.current])
self.current += 1
if pre_start < self.current:
return self.fix_df(user_answer_list, answered_correctly_list, pre_start)
else:
raise StopIteration()
if DEBUG:
test_df = pd.read_pickle(DATA_DIR+'cv2_valid.pickle')
test_df[:SIMU_PUB_SIZE].to_pickle(DATA_DIR+'test_pub_simu.pickle')
#%%
if __name__ == "__main__":
get_system()
try:
from transformer.transformer import *
except ImportError:
    raise ModuleNotFoundError('transformer not found')
print("Loading test set....")
if PRIVATE:
test_df = pd.read_pickle(DATA_DIR+'cv2_valid.pickle')
else:
test_df = pd.read_pickle(DATA_DIR+'test_pub_simu.pickle')
test_df = test_df[:SIMU_PUB_SIZE]
df_questions = pd.read_csv(DATA_DIR+'questions.csv')
train_df = pd.read_parquet(DATA_DIR+'cv2_valid.parquet')
train_df = preprocess(train_df, df_questions)
d, user_id_to_idx_train = get_feats(train_df)
print("Loaded test.")
iter_test = Iter_Valid(test_df, max_user=1000)
predicted = []
def set_predict(df):
predicted.append(df)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'\n\n Using device: {device} \n\n')
model_files = find_files(name='transformer',path=MODEL_DIR)
model_file = model_files[0]
# model_file = '/home/scao/Documents/kaggle-riiid-test/model/transformer_head_8_embed_512_seq_150_auc_0.7515.pt'
conf = dict(ninp=512,
nhead=8,
nhid=128,
nlayers=2,
dropout=0.3)
model = load_model(model_file, conf=conf)
print(f'\nLoaded {model_file}.')
model.eval()
print(model)
prev_test_df = None
len_test = len(test_df)
with tqdm(total=len_test) as pbar:
for idx, (current_test, current_prediction_df) in enumerate(iter_test):
'''
condensed iter_env loop
'''
if prev_test_df is not None:
'''Making use of answers to previous questions'''
answers = eval(current_test["prior_group_answers_correct"].iloc[0])
responses = eval(current_test["prior_group_responses"].iloc[0])
prev_test_df['answered_correctly'] = answers
prev_test_df['user_answer'] = responses
prev_test_df = prev_test_df[prev_test_df['content_type_id'] == False]
prev_test_df = preprocess(prev_test_df, df_questions)
d_prev, user_id_to_idx_prev = get_feats(prev_test_df)
d = update_users(d_prev, d, user_id_to_idx_prev, user_id_to_idx_train)
# no labels
# d_test, user_id_to_idx = get_feats_test(current_test)
# dataset_test = RiiidTest(d=d_test)
# test_loader = DataLoader(dataset=dataset_test, batch_size=VAL_BATCH_SIZE,
# collate_fn=collate_fn_test, shuffle=False, drop_last=False)
prev_test_df = current_test.copy()
'''Labels for verification'''
current_test = preprocess(current_test, df_questions)
d_test, user_id_to_idx_test = get_feats_val(current_test, max_seq=MAX_SEQ)
d_test = update_users(d, d_test, user_id_to_idx_train, user_id_to_idx_test, test_flag=True)
dataset_test = RiiidVal(d=d_test)
test_loader = DataLoader(dataset=dataset_test,
batch_size=TEST_BATCH_SIZE,
collate_fn=collate_fn_val, shuffle=False, drop_last=False)
# the problem with current feature gen is that
# using groupby user_id sorts the user_id and makes it different from the
# test_df's order
output_all = []
labels_all = []
for _, batch in enumerate(test_loader):
content_id, _, part_id, prior_question_elapsed_time, mask, labels, pred_mask = batch
target_id = batch[1].to(device).long()
content_id = Variable(content_id.cuda())
part_id = Variable(part_id.cuda())
prior_question_elapsed_time = Variable(prior_question_elapsed_time.cuda())
mask = Variable(mask.cuda())
with torch.no_grad():
output = model(content_id, part_id, prior_question_elapsed_time, mask_padding= mask)
pred_probs = torch.softmax(output[pred_mask], dim=1)
output_all.extend(pred_probs[:,1].reshape(-1).data.cpu().numpy())
labels_all.extend(labels[~mask].reshape(-1).data.numpy())
'''prediction code ends'''
current_test['answered_correctly'] = output_all
set_predict(current_test.loc[:,['row_id', 'answered_correctly']])
pbar.update(len(current_test))
y_true = test_df[test_df.content_type_id == 0]['answered_correctly']
y_pred = pd.concat(predicted)
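    # Hedged completion (the original file appears truncated here): score the
    # simulated run with roc_auc_score imported above. This assumes the
    # concatenated predictions align one-to-one with the question rows of
    # test_df; the length check guards against an obvious mismatch.
    if len(y_true) == len(y_pred):
        print('Simulated AUC:', roc_auc_score(y_true.values, y_pred['answered_correctly'].values))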
import datetime
from datetime import timedelta
from distutils.version import LooseVersion
from io import BytesIO
import os
import re
from warnings import catch_warnings, simplefilter
import numpy as np
import pytest
from pandas.compat import is_platform_little_endian, is_platform_windows
import pandas.util._test_decorators as td
from pandas.core.dtypes.common import is_categorical_dtype
import pandas as pd
from pandas import (
Categorical,
CategoricalIndex,
DataFrame,
DatetimeIndex,
Index,
Int64Index,
MultiIndex,
RangeIndex,
Series,
Timestamp,
bdate_range,
concat,
date_range,
isna,
timedelta_range,
)
from pandas.tests.io.pytables.common import (
_maybe_remove,
create_tempfile,
ensure_clean_path,
ensure_clean_store,
safe_close,
safe_remove,
tables,
)
import pandas.util.testing as tm
from pandas.io.pytables import (
ClosedFileError,
HDFStore,
PossibleDataLossError,
Term,
read_hdf,
)
from pandas.io import pytables as pytables # noqa: E402 isort:skip
from pandas.io.pytables import TableIterator # noqa: E402 isort:skip
_default_compressor = "blosc"
ignore_natural_naming_warning = pytest.mark.filterwarnings(
"ignore:object name:tables.exceptions.NaturalNameWarning"
)
@pytest.mark.single
class TestHDFStore:
def test_format_kwarg_in_constructor(self, setup_path):
# GH 13291
with ensure_clean_path(setup_path) as path:
with pytest.raises(ValueError):
HDFStore(path, format="table")
def test_context(self, setup_path):
path = create_tempfile(setup_path)
try:
with HDFStore(path) as tbl:
raise ValueError("blah")
except ValueError:
pass
finally:
safe_remove(path)
try:
with HDFStore(path) as tbl:
tbl["a"] = tm.makeDataFrame()
with HDFStore(path) as tbl:
assert len(tbl) == 1
assert type(tbl["a"]) == DataFrame
finally:
safe_remove(path)
def test_conv_read_write(self, setup_path):
path = create_tempfile(setup_path)
try:
def roundtrip(key, obj, **kwargs):
obj.to_hdf(path, key, **kwargs)
return read_hdf(path, key)
o = tm.makeTimeSeries()
tm.assert_series_equal(o, roundtrip("series", o))
o = tm.makeStringSeries()
tm.assert_series_equal(o, roundtrip("string_series", o))
o = tm.makeDataFrame()
tm.assert_frame_equal(o, roundtrip("frame", o))
# table
df = DataFrame(dict(A=range(5), B=range(5)))
df.to_hdf(path, "table", append=True)
result = read_hdf(path, "table", where=["index>2"])
tm.assert_frame_equal(df[df.index > 2], result)
finally:
safe_remove(path)
def test_long_strings(self, setup_path):
# GH6166
df = DataFrame(
{"a": tm.rands_array(100, size=10)}, index=tm.rands_array(100, size=10)
)
with ensure_clean_store(setup_path) as store:
store.append("df", df, data_columns=["a"])
result = store.select("df")
tm.assert_frame_equal(df, result)
def test_api(self, setup_path):
# GH4584
# API issue when to_hdf doesn't accept append AND format args
with ensure_clean_path(setup_path) as path:
df = tm.makeDataFrame()
df.iloc[:10].to_hdf(path, "df", append=True, format="table")
df.iloc[10:].to_hdf(path, "df", append=True, format="table")
tm.assert_frame_equal(read_hdf(path, "df"), df)
# append to False
df.iloc[:10].to_hdf(path, "df", append=False, format="table")
df.iloc[10:].to_hdf(path, "df", append=True, format="table")
tm.assert_frame_equal(read_hdf(path, "df"), df)
with ensure_clean_path(setup_path) as path:
df = tm.makeDataFrame()
df.iloc[:10].to_hdf(path, "df", append=True)
df.iloc[10:].to_hdf(path, "df", append=True, format="table")
tm.assert_frame_equal(read_hdf(path, "df"), df)
# append to False
df.iloc[:10].to_hdf(path, "df", append=False, format="table")
df.iloc[10:].to_hdf(path, "df", append=True)
tm.assert_frame_equal(read_hdf(path, "df"), df)
with ensure_clean_path(setup_path) as path:
df = tm.makeDataFrame()
df.to_hdf(path, "df", append=False, format="fixed")
tm.assert_frame_equal(read_hdf(path, "df"), df)
df.to_hdf(path, "df", append=False, format="f")
tm.assert_frame_equal(read_hdf(path, "df"), df)
df.to_hdf(path, "df", append=False)
tm.assert_frame_equal(read_hdf(path, "df"), df)
df.to_hdf(path, "df")
tm.assert_frame_equal(read_hdf(path, "df"), df)
with ensure_clean_store(setup_path) as store:
path = store._path
df = tm.makeDataFrame()
_maybe_remove(store, "df")
store.append("df", df.iloc[:10], append=True, format="table")
store.append("df", df.iloc[10:], append=True, format="table")
tm.assert_frame_equal(store.select("df"), df)
# append to False
_maybe_remove(store, "df")
store.append("df", df.iloc[:10], append=False, format="table")
store.append("df", df.iloc[10:], append=True, format="table")
tm.assert_frame_equal(store.select("df"), df)
# formats
_maybe_remove(store, "df")
store.append("df", df.iloc[:10], append=False, format="table")
store.append("df", df.iloc[10:], append=True, format="table")
tm.assert_frame_equal(store.select("df"), df)
_maybe_remove(store, "df")
store.append("df", df.iloc[:10], append=False, format="table")
store.append("df", df.iloc[10:], append=True, format=None)
tm.assert_frame_equal(store.select("df"), df)
with ensure_clean_path(setup_path) as path:
# Invalid.
df = tm.makeDataFrame()
with pytest.raises(ValueError):
df.to_hdf(path, "df", append=True, format="f")
with pytest.raises(ValueError):
df.to_hdf(path, "df", append=True, format="fixed")
with pytest.raises(TypeError):
df.to_hdf(path, "df", append=True, format="foo")
with pytest.raises(TypeError):
df.to_hdf(path, "df", append=False, format="bar")
# File path doesn't exist
path = ""
with pytest.raises(FileNotFoundError):
read_hdf(path, "df")
def test_api_default_format(self, setup_path):
# default_format option
with ensure_clean_store(setup_path) as store:
df = tm.makeDataFrame()
pd.set_option("io.hdf.default_format", "fixed")
_maybe_remove(store, "df")
store.put("df", df)
assert not store.get_storer("df").is_table
with pytest.raises(ValueError):
store.append("df2", df)
pd.set_option("io.hdf.default_format", "table")
_maybe_remove(store, "df")
store.put("df", df)
assert store.get_storer("df").is_table
_maybe_remove(store, "df2")
store.append("df2", df)
assert store.get_storer("df").is_table
pd.set_option("io.hdf.default_format", None)
with ensure_clean_path(setup_path) as path:
df = tm.makeDataFrame()
pd.set_option("io.hdf.default_format", "fixed")
df.to_hdf(path, "df")
with HDFStore(path) as store:
assert not store.get_storer("df").is_table
with pytest.raises(ValueError):
df.to_hdf(path, "df2", append=True)
pd.set_option("io.hdf.default_format", "table")
df.to_hdf(path, "df3")
with HDFStore(path) as store:
assert store.get_storer("df3").is_table
df.to_hdf(path, "df4", append=True)
with HDFStore(path) as store:
assert store.get_storer("df4").is_table
pd.set_option("io.hdf.default_format", None)
def test_keys(self, setup_path):
with ensure_clean_store(setup_path) as store:
store["a"] = tm.makeTimeSeries()
store["b"] = tm.makeStringSeries()
store["c"] = tm.makeDataFrame()
assert len(store) == 3
expected = {"/a", "/b", "/c"}
assert set(store.keys()) == expected
assert set(store) == expected
def test_keys_ignore_hdf_softlink(self, setup_path):
# GH 20523
# Puts a softlink into HDF file and rereads
with ensure_clean_store(setup_path) as store:
df = DataFrame(dict(A=range(5), B=range(5)))
store.put("df", df)
assert store.keys() == ["/df"]
store._handle.create_soft_link(store._handle.root, "symlink", "df")
# Should ignore the softlink
assert store.keys() == ["/df"]
def test_iter_empty(self, setup_path):
with ensure_clean_store(setup_path) as store:
# GH 12221
assert list(store) == []
def test_repr(self, setup_path):
with ensure_clean_store(setup_path) as store:
repr(store)
store.info()
store["a"] = tm.makeTimeSeries()
store["b"] = tm.makeStringSeries()
store["c"] = tm.makeDataFrame()
df = tm.makeDataFrame()
df["obj1"] = "foo"
df["obj2"] = "bar"
df["bool1"] = df["A"] > 0
df["bool2"] = df["B"] > 0
df["bool3"] = True
df["int1"] = 1
df["int2"] = 2
df["timestamp1"] = Timestamp("20010102")
df["timestamp2"] = Timestamp("20010103")
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[3:6, ["obj1"]] = np.nan
df = df._consolidate()._convert(datetime=True)
with catch_warnings(record=True):
simplefilter("ignore", pd.errors.PerformanceWarning)
store["df"] = df
# make a random group in hdf space
store._handle.create_group(store._handle.root, "bah")
assert store.filename in repr(store)
assert store.filename in str(store)
store.info()
# storers
with ensure_clean_store(setup_path) as store:
df = tm.makeDataFrame()
store.append("df", df)
s = store.get_storer("df")
repr(s)
str(s)
@ignore_natural_naming_warning
def test_contains(self, setup_path):
with ensure_clean_store(setup_path) as store:
store["a"] = tm.makeTimeSeries()
store["b"] = tm.makeDataFrame()
store["foo/bar"] = tm.makeDataFrame()
assert "a" in store
assert "b" in store
assert "c" not in store
assert "foo/bar" in store
assert "/foo/bar" in store
assert "/foo/b" not in store
assert "bar" not in store
# gh-2694: tables.NaturalNameWarning
with catch_warnings(record=True):
store["node())"] = tm.makeDataFrame()
assert "node())" in store
def test_versioning(self, setup_path):
with ensure_clean_store(setup_path) as store:
store["a"] = tm.makeTimeSeries()
store["b"] = tm.makeDataFrame()
df = tm.makeTimeDataFrame()
_maybe_remove(store, "df1")
store.append("df1", df[:10])
store.append("df1", df[10:])
assert store.root.a._v_attrs.pandas_version == "0.15.2"
assert store.root.b._v_attrs.pandas_version == "0.15.2"
assert store.root.df1._v_attrs.pandas_version == "0.15.2"
# write a file and wipe its versioning
_maybe_remove(store, "df2")
store.append("df2", df)
# this is an error because its table_type is appendable, but no
# version info
store.get_node("df2")._v_attrs.pandas_version = None
with pytest.raises(Exception):
store.select("df2")
def test_mode(self, setup_path):
df = tm.makeTimeDataFrame()
def check(mode):
with ensure_clean_path(setup_path) as path:
# constructor
if mode in ["r", "r+"]:
with pytest.raises(IOError):
HDFStore(path, mode=mode)
else:
store = HDFStore(path, mode=mode)
assert store._handle.mode == mode
store.close()
with ensure_clean_path(setup_path) as path:
# context
if mode in ["r", "r+"]:
with pytest.raises(IOError):
with HDFStore(path, mode=mode) as store: # noqa
pass
else:
with HDFStore(path, mode=mode) as store:
assert store._handle.mode == mode
with ensure_clean_path(setup_path) as path:
# conv write
if mode in ["r", "r+"]:
with pytest.raises(IOError):
df.to_hdf(path, "df", mode=mode)
df.to_hdf(path, "df", mode="w")
else:
df.to_hdf(path, "df", mode=mode)
# conv read
if mode in ["w"]:
with pytest.raises(ValueError):
read_hdf(path, "df", mode=mode)
else:
result = read_hdf(path, "df", mode=mode)
tm.assert_frame_equal(result, df)
def check_default_mode():
# read_hdf uses default mode
with ensure_clean_path(setup_path) as path:
df.to_hdf(path, "df", mode="w")
result = read_hdf(path, "df")
tm.assert_frame_equal(result, df)
check("r")
check("r+")
check("a")
check("w")
check_default_mode()
def test_reopen_handle(self, setup_path):
with ensure_clean_path(setup_path) as path:
store = HDFStore(path, mode="a")
store["a"] = tm.makeTimeSeries()
# invalid mode change
with pytest.raises(PossibleDataLossError):
store.open("w")
store.close()
assert not store.is_open
# truncation ok here
store.open("w")
assert store.is_open
assert len(store) == 0
store.close()
assert not store.is_open
store = HDFStore(path, mode="a")
store["a"] = tm.makeTimeSeries()
# reopen as read
store.open("r")
assert store.is_open
assert len(store) == 1
assert store._mode == "r"
store.close()
assert not store.is_open
# reopen as append
store.open("a")
assert store.is_open
assert len(store) == 1
assert store._mode == "a"
store.close()
assert not store.is_open
# reopen as append (again)
store.open("a")
assert store.is_open
assert len(store) == 1
assert store._mode == "a"
store.close()
assert not store.is_open
def test_open_args(self, setup_path):
with ensure_clean_path(setup_path) as path:
df = tm.makeDataFrame()
# create an in memory store
store = HDFStore(
path, mode="a", driver="H5FD_CORE", driver_core_backing_store=0
)
store["df"] = df
store.append("df2", df)
tm.assert_frame_equal(store["df"], df)
tm.assert_frame_equal(store["df2"], df)
store.close()
# the file should not have actually been written
assert not os.path.exists(path)
def test_flush(self, setup_path):
with ensure_clean_store(setup_path) as store:
store["a"] = tm.makeTimeSeries()
store.flush()
store.flush(fsync=True)
def test_get(self, setup_path):
with ensure_clean_store(setup_path) as store:
store["a"] = tm.makeTimeSeries()
left = store.get("a")
right = store["a"]
tm.assert_series_equal(left, right)
left = store.get("/a")
right = store["/a"]
tm.assert_series_equal(left, right)
with pytest.raises(KeyError, match="'No object named b in the file'"):
store.get("b")
@pytest.mark.parametrize(
"where, expected",
[
(
"/",
{
"": ({"first_group", "second_group"}, set()),
"/first_group": (set(), {"df1", "df2"}),
"/second_group": ({"third_group"}, {"df3", "s1"}),
"/second_group/third_group": (set(), {"df4"}),
},
),
(
"/second_group",
{
"/second_group": ({"third_group"}, {"df3", "s1"}),
"/second_group/third_group": (set(), {"df4"}),
},
),
],
)
def test_walk(self, where, expected, setup_path):
# GH10143
objs = {
"df1": pd.DataFrame([1, 2, 3]),
"df2": pd.DataFrame([4, 5, 6]),
"df3": pd.DataFrame([6, 7, 8]),
"df4": pd.DataFrame([9, 10, 11]),
"s1": pd.Series([10, 9, 8]),
# Next 3 items aren't pandas objects and should be ignored
"a1": np.array([[1, 2, 3], [4, 5, 6]]),
"tb1": np.array([(1, 2, 3), (4, 5, 6)], dtype="i,i,i"),
"tb2": np.array([(7, 8, 9), (10, 11, 12)], dtype="i,i,i"),
}
with ensure_clean_store("walk_groups.hdf", mode="w") as store:
store.put("/first_group/df1", objs["df1"])
store.put("/first_group/df2", objs["df2"])
store.put("/second_group/df3", objs["df3"])
store.put("/second_group/s1", objs["s1"])
store.put("/second_group/third_group/df4", objs["df4"])
# Create non-pandas objects
store._handle.create_array("/first_group", "a1", objs["a1"])
store._handle.create_table("/first_group", "tb1", obj=objs["tb1"])
store._handle.create_table("/second_group", "tb2", obj=objs["tb2"])
assert len(list(store.walk(where=where))) == len(expected)
for path, groups, leaves in store.walk(where=where):
assert path in expected
expected_groups, expected_frames = expected[path]
assert expected_groups == set(groups)
assert expected_frames == set(leaves)
for leaf in leaves:
frame_path = "/".join([path, leaf])
obj = store.get(frame_path)
if "df" in leaf:
tm.assert_frame_equal(obj, objs[leaf])
else:
tm.assert_series_equal(obj, objs[leaf])
def test_getattr(self, setup_path):
with ensure_clean_store(setup_path) as store:
s = tm.makeTimeSeries()
store["a"] = s
# test attribute access
result = store.a
tm.assert_series_equal(result, s)
result = getattr(store, "a")
tm.assert_series_equal(result, s)
df = tm.makeTimeDataFrame()
store["df"] = df
result = store.df
tm.assert_frame_equal(result, df)
# errors
for x in ["d", "mode", "path", "handle", "complib"]:
with pytest.raises(AttributeError):
getattr(store, x)
# not stores
for x in ["mode", "path", "handle", "complib"]:
getattr(store, "_{x}".format(x=x))
def test_put(self, setup_path):
with ensure_clean_store(setup_path) as store:
ts = tm.makeTimeSeries()
df = tm.makeTimeDataFrame()
store["a"] = ts
store["b"] = df[:10]
store["foo/bar/bah"] = df[:10]
store["foo"] = df[:10]
store["/foo"] = df[:10]
store.put("c", df[:10], format="table")
# not OK, not a table
with pytest.raises(ValueError):
store.put("b", df[10:], append=True)
# node does not currently exist, test _is_table_type returns False
# in this case
_maybe_remove(store, "f")
with pytest.raises(ValueError):
store.put("f", df[10:], append=True)
# can't put to a table (use append instead)
with pytest.raises(ValueError):
store.put("c", df[10:], append=True)
# overwrite table
store.put("c", df[:10], format="table", append=False)
tm.assert_frame_equal(df[:10], store["c"])
def test_put_string_index(self, setup_path):
with ensure_clean_store(setup_path) as store:
index = Index(
["I am a very long string index: {i}".format(i=i) for i in range(20)]
)
s = Series(np.arange(20), index=index)
df = DataFrame({"A": s, "B": s})
store["a"] = s
tm.assert_series_equal(store["a"], s)
store["b"] = df
tm.assert_frame_equal(store["b"], df)
# mixed length
index = Index(
["abcdefghijklmnopqrstuvwxyz1234567890"]
+ ["I am a very long string index: {i}".format(i=i) for i in range(20)]
)
s = Series(np.arange(21), index=index)
df = DataFrame({"A": s, "B": s})
store["a"] = s
tm.assert_series_equal(store["a"], s)
store["b"] = df
tm.assert_frame_equal(store["b"], df)
def test_put_compression(self, setup_path):
with ensure_clean_store(setup_path) as store:
df = tm.makeTimeDataFrame()
store.put("c", df, format="table", complib="zlib")
tm.assert_frame_equal(store["c"], df)
# can't compress if format='fixed'
with pytest.raises(ValueError):
store.put("b", df, format="fixed", complib="zlib")
@td.skip_if_windows_python_3
def test_put_compression_blosc(self, setup_path):
df = tm.makeTimeDataFrame()
with ensure_clean_store(setup_path) as store:
# can't compress if format='fixed'
with pytest.raises(ValueError):
store.put("b", df, format="fixed", complib="blosc")
store.put("c", df, format="table", complib="blosc")
tm.assert_frame_equal(store["c"], df)
def test_complibs_default_settings(self, setup_path):
# GH15943
df = tm.makeDataFrame()
# Set complevel and check if complib is automatically set to
# default value
with ensure_clean_path(setup_path) as tmpfile:
df.to_hdf(tmpfile, "df", complevel=9)
result = pd.read_hdf(tmpfile, "df")
tm.assert_frame_equal(result, df)
with tables.open_file(tmpfile, mode="r") as h5file:
for node in h5file.walk_nodes(where="/df", classname="Leaf"):
assert node.filters.complevel == 9
assert node.filters.complib == "zlib"
# Set complib and check to see if compression is disabled
with ensure_clean_path(setup_path) as tmpfile:
df.to_hdf(tmpfile, "df", complib="zlib")
result = pd.read_hdf(tmpfile, "df")
tm.assert_frame_equal(result, df)
with tables.open_file(tmpfile, mode="r") as h5file:
for node in h5file.walk_nodes(where="/df", classname="Leaf"):
assert node.filters.complevel == 0
assert node.filters.complib is None
# Check if not setting complib or complevel results in no compression
with ensure_clean_path(setup_path) as tmpfile:
df.to_hdf(tmpfile, "df")
result = pd.read_hdf(tmpfile, "df")
tm.assert_frame_equal(result, df)
with tables.open_file(tmpfile, mode="r") as h5file:
for node in h5file.walk_nodes(where="/df", classname="Leaf"):
assert node.filters.complevel == 0
assert node.filters.complib is None
# Check if file-defaults can be overridden on a per table basis
with ensure_clean_path(setup_path) as tmpfile:
store = pd.HDFStore(tmpfile)
store.append("dfc", df, complevel=9, complib="blosc")
store.append("df", df)
store.close()
with tables.open_file(tmpfile, mode="r") as h5file:
for node in h5file.walk_nodes(where="/df", classname="Leaf"):
assert node.filters.complevel == 0
assert node.filters.complib is None
for node in h5file.walk_nodes(where="/dfc", classname="Leaf"):
assert node.filters.complevel == 9
assert node.filters.complib == "blosc"
def test_complibs(self, setup_path):
# GH14478
df = tm.makeDataFrame()
# Building list of all complibs and complevels tuples
all_complibs = tables.filters.all_complibs
# Remove lzo if it's not available on this platform
if not tables.which_lib_version("lzo"):
all_complibs.remove("lzo")
# Remove bzip2 if it's not available on this platform
if not tables.which_lib_version("bzip2"):
all_complibs.remove("bzip2")
all_levels = range(0, 10)
all_tests = [(lib, lvl) for lib in all_complibs for lvl in all_levels]
for (lib, lvl) in all_tests:
with ensure_clean_path(setup_path) as tmpfile:
gname = "foo"
# Write and read file to see if data is consistent
df.to_hdf(tmpfile, gname, complib=lib, complevel=lvl)
result = pd.read_hdf(tmpfile, gname)
tm.assert_frame_equal(result, df)
# Open file and check metadata
# for correct amount of compression
h5table = tables.open_file(tmpfile, mode="r")
for node in h5table.walk_nodes(where="/" + gname, classname="Leaf"):
assert node.filters.complevel == lvl
if lvl == 0:
assert node.filters.complib is None
else:
assert node.filters.complib == lib
h5table.close()
def test_put_integer(self, setup_path):
# non-date, non-string index
df = DataFrame(np.random.randn(50, 100))
self._check_roundtrip(df, tm.assert_frame_equal, setup_path)
@td.xfail_non_writeable
def test_put_mixed_type(self, setup_path):
df = tm.makeTimeDataFrame()
df["obj1"] = "foo"
df["obj2"] = "bar"
df["bool1"] = df["A"] > 0
df["bool2"] = df["B"] > 0
df["bool3"] = True
df["int1"] = 1
df["int2"] = 2
df["timestamp1"] = Timestamp("20010102")
df["timestamp2"] = Timestamp("20010103")
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[3:6, ["obj1"]] = np.nan
df = df._consolidate()._convert(datetime=True)
with ensure_clean_store(setup_path) as store:
_maybe_remove(store, "df")
# PerformanceWarning
with catch_warnings(record=True):
simplefilter("ignore", pd.errors.PerformanceWarning)
store.put("df", df)
expected = store.get("df")
tm.assert_frame_equal(expected, df)
@pytest.mark.filterwarnings(
"ignore:object name:tables.exceptions.NaturalNameWarning"
)
def test_append(self, setup_path):
with ensure_clean_store(setup_path) as store:
# this is allowed but you almost always don't want to do it
# tables.NaturalNameWarning):
with catch_warnings(record=True):
df = tm.makeTimeDataFrame()
_maybe_remove(store, "df1")
store.append("df1", df[:10])
store.append("df1", df[10:])
tm.assert_frame_equal(store["df1"], df)
_maybe_remove(store, "df2")
store.put("df2", df[:10], format="table")
store.append("df2", df[10:])
tm.assert_frame_equal(store["df2"], df)
_maybe_remove(store, "df3")
store.append("/df3", df[:10])
store.append("/df3", df[10:])
tm.assert_frame_equal(store["df3"], df)
# this is allowed but you almost always don't want to do it
# tables.NaturalNameWarning
_maybe_remove(store, "/df3 foo")
store.append("/df3 foo", df[:10])
store.append("/df3 foo", df[10:])
tm.assert_frame_equal(store["df3 foo"], df)
# dtype issues - mixed type in a single object column
df = DataFrame(data=[[1, 2], [0, 1], [1, 2], [0, 0]])
df["mixed_column"] = "testing"
df.loc[2, "mixed_column"] = np.nan
_maybe_remove(store, "df")
store.append("df", df)
tm.assert_frame_equal(store["df"], df)
# uints - test storage of uints
uint_data = DataFrame(
{
"u08": Series(
np.random.randint(0, high=255, size=5), dtype=np.uint8
),
"u16": Series(
np.random.randint(0, high=65535, size=5), dtype=np.uint16
),
"u32": Series(
np.random.randint(0, high=2 ** 30, size=5), dtype=np.uint32
),
"u64": Series(
[2 ** 58, 2 ** 59, 2 ** 60, 2 ** 61, 2 ** 62],
dtype=np.uint64,
),
},
index=np.arange(5),
)
_maybe_remove(store, "uints")
store.append("uints", uint_data)
tm.assert_frame_equal(store["uints"], uint_data)
# uints - test storage of uints in indexable columns
_maybe_remove(store, "uints")
# 64-bit indices not yet supported
store.append("uints", uint_data, data_columns=["u08", "u16", "u32"])
tm.assert_frame_equal(store["uints"], uint_data)
def test_append_series(self, setup_path):
with ensure_clean_store(setup_path) as store:
# basic
ss = tm.makeStringSeries()
ts = tm.makeTimeSeries()
ns = Series(np.arange(100))
store.append("ss", ss)
result = store["ss"]
tm.assert_series_equal(result, ss)
assert result.name is None
store.append("ts", ts)
result = store["ts"]
tm.assert_series_equal(result, ts)
assert result.name is None
ns.name = "foo"
store.append("ns", ns)
result = store["ns"]
tm.assert_series_equal(result, ns)
assert result.name == ns.name
# select on the values
expected = ns[ns > 60]
result = store.select("ns", "foo>60")
tm.assert_series_equal(result, expected)
# select on the index and values
expected = ns[(ns > 70) & (ns.index < 90)]
result = store.select("ns", "foo>70 and index<90")
tm.assert_series_equal(result, expected)
# multi-index
mi = DataFrame(np.random.randn(5, 1), columns=["A"])
mi["B"] = np.arange(len(mi))
mi["C"] = "foo"
mi.loc[3:5, "C"] = "bar"
mi.set_index(["C", "B"], inplace=True)
s = mi.stack()
s.index = s.index.droplevel(2)
store.append("mi", s)
tm.assert_series_equal(store["mi"], s)
def test_store_index_types(self, setup_path):
# GH5386
# test storing various index types
with ensure_clean_store(setup_path) as store:
def check(format, index):
df = DataFrame(np.random.randn(10, 2), columns=list("AB"))
df.index = index(len(df))
_maybe_remove(store, "df")
store.put("df", df, format=format)
tm.assert_frame_equal(df, store["df"])
for index in [
tm.makeFloatIndex,
tm.makeStringIndex,
tm.makeIntIndex,
tm.makeDateIndex,
]:
check("table", index)
check("fixed", index)
# period index currently broken for table
# see GH7796 FIXME
check("fixed", tm.makePeriodIndex)
# check('table',tm.makePeriodIndex)
# unicode
index = tm.makeUnicodeIndex
check("table", index)
check("fixed", index)
@pytest.mark.skipif(
not is_platform_little_endian(), reason="reason platform is not little endian"
)
def test_encoding(self, setup_path):
with ensure_clean_store(setup_path) as store:
df = DataFrame(dict(A="foo", B="bar"), index=range(5))
df.loc[2, "A"] = np.nan
df.loc[3, "B"] = np.nan
_maybe_remove(store, "df")
store.append("df", df, encoding="ascii")
tm.assert_frame_equal(store["df"], df)
expected = df.reindex(columns=["A"])
result = store.select("df", Term("columns=A", encoding="ascii"))
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
"val",
[
[b"E\xc9, 17", b"", b"a", b"b", b"c"],
[b"E\xc9, 17", b"a", b"b", b"c"],
[b"EE, 17", b"", b"a", b"b", b"c"],
[b"E\xc9, 17", b"\xf8\xfc", b"a", b"b", b"c"],
[b"", b"a", b"b", b"c"],
[b"\xf8\xfc", b"a", b"b", b"c"],
[b"A\xf8\xfc", b"", b"a", b"b", b"c"],
[np.nan, b"", b"b", b"c"],
[b"A\xf8\xfc", np.nan, b"", b"b", b"c"],
],
)
@pytest.mark.parametrize("dtype", ["category", object])
def test_latin_encoding(self, setup_path, dtype, val):
enc = "latin-1"
nan_rep = ""
key = "data"
val = [x.decode(enc) if isinstance(x, bytes) else x for x in val]
ser = pd.Series(val, dtype=dtype)
with ensure_clean_path(setup_path) as store:
ser.to_hdf(store, key, format="table", encoding=enc, nan_rep=nan_rep)
retr = read_hdf(store, key)
s_nan = ser.replace(nan_rep, np.nan)
if is_categorical_dtype(s_nan):
assert is_categorical_dtype(retr)
tm.assert_series_equal(
s_nan, retr, check_dtype=False, check_categorical=False
)
else:
tm.assert_series_equal(s_nan, retr)
# FIXME: don't leave commented-out
# fails:
# for x in examples:
# roundtrip(s, nan_rep=b'\xf8\xfc')
def test_append_some_nans(self, setup_path):
with ensure_clean_store(setup_path) as store:
df = DataFrame(
{
"A": Series(np.random.randn(20)).astype("int32"),
"A1": np.random.randn(20),
"A2": np.random.randn(20),
"B": "foo",
"C": "bar",
"D": Timestamp("20010101"),
"E": datetime.datetime(2001, 1, 2, 0, 0),
},
index=np.arange(20),
)
# some nans
_maybe_remove(store, "df1")
df.loc[0:15, ["A1", "B", "D", "E"]] = np.nan
store.append("df1", df[:10])
store.append("df1", df[10:])
tm.assert_frame_equal(store["df1"], df)
# first column
df1 = df.copy()
df1.loc[:, "A1"] = np.nan
_maybe_remove(store, "df1")
store.append("df1", df1[:10])
store.append("df1", df1[10:])
tm.assert_frame_equal(store["df1"], df1)
# 2nd column
df2 = df.copy()
df2.loc[:, "A2"] = np.nan
_maybe_remove(store, "df2")
store.append("df2", df2[:10])
store.append("df2", df2[10:])
tm.assert_frame_equal(store["df2"], df2)
# datetimes
df3 = df.copy()
df3.loc[:, "E"] = np.nan
_maybe_remove(store, "df3")
store.append("df3", df3[:10])
store.append("df3", df3[10:])
tm.assert_frame_equal(store["df3"], df3)
def test_append_all_nans(self, setup_path):
with ensure_clean_store(setup_path) as store:
df = DataFrame(
{"A1": np.random.randn(20), "A2": np.random.randn(20)},
index=np.arange(20),
)
df.loc[0:15, :] = np.nan
# nan some entire rows (dropna=True)
_maybe_remove(store, "df")
store.append("df", df[:10], dropna=True)
store.append("df", df[10:], dropna=True)
tm.assert_frame_equal(store["df"], df[-4:])
# nan some entire rows (dropna=False)
_maybe_remove(store, "df2")
store.append("df2", df[:10], dropna=False)
store.append("df2", df[10:], dropna=False)
tm.assert_frame_equal(store["df2"], df)
# tests the option io.hdf.dropna_table
pd.set_option("io.hdf.dropna_table", False)
_maybe_remove(store, "df3")
store.append("df3", df[:10])
store.append("df3", df[10:])
tm.assert_frame_equal(store["df3"], df)
pd.set_option("io.hdf.dropna_table", True)
_maybe_remove(store, "df4")
store.append("df4", df[:10])
store.append("df4", df[10:])
tm.assert_frame_equal(store["df4"], df[-4:])
# nan some entire rows (strings are still written!)
df = DataFrame(
{
"A1": np.random.randn(20),
"A2": np.random.randn(20),
"B": "foo",
"C": "bar",
},
index=np.arange(20),
)
df.loc[0:15, :] = np.nan
_maybe_remove(store, "df")
store.append("df", df[:10], dropna=True)
store.append("df", df[10:], dropna=True)
tm.assert_frame_equal(store["df"], df)
_maybe_remove(store, "df2")
store.append("df2", df[:10], dropna=False)
store.append("df2", df[10:], dropna=False)
tm.assert_frame_equal(store["df2"], df)
# nan some entire rows (but since we have dates they are still
# written!)
df = DataFrame(
{
"A1": np.random.randn(20),
"A2": np.random.randn(20),
"B": "foo",
"C": "bar",
"D": Timestamp("20010101"),
"E": datetime.datetime(2001, 1, 2, 0, 0),
},
index=np.arange(20),
)
df.loc[0:15, :] = np.nan
_maybe_remove(store, "df")
store.append("df", df[:10], dropna=True)
store.append("df", df[10:], dropna=True)
tm.assert_frame_equal(store["df"], df)
_maybe_remove(store, "df2")
store.append("df2", df[:10], dropna=False)
store.append("df2", df[10:], dropna=False)
tm.assert_frame_equal(store["df2"], df)
# Test to make sure defaults are to not drop.
# Corresponding to Issue 9382
df_with_missing = DataFrame(
{"col1": [0, np.nan, 2], "col2": [1, np.nan, np.nan]}
)
with ensure_clean_path(setup_path) as path:
df_with_missing.to_hdf(path, "df_with_missing", format="table")
reloaded = read_hdf(path, "df_with_missing")
tm.assert_frame_equal(df_with_missing, reloaded)
def test_read_missing_key_close_store(self, setup_path):
# GH 25766
with ensure_clean_path(setup_path) as path:
df = pd.DataFrame({"a": range(2), "b": range(2)})
df.to_hdf(path, "k1")
with pytest.raises(KeyError, match="'No object named k2 in the file'"):
pd.read_hdf(path, "k2")
# smoke test to test that file is properly closed after
# read with KeyError before another write
df.to_hdf(path, "k2")
def test_read_missing_key_opened_store(self, setup_path):
# GH 28699
with ensure_clean_path(setup_path) as path:
df = pd.DataFrame({"a": range(2), "b": range(2)})
df.to_hdf(path, "k1")
store = pd.HDFStore(path, "r")
with pytest.raises(KeyError, match="'No object named k2 in the file'"):
pd.read_hdf(store, "k2")
# Test that the file is still open after a KeyError and that we can
# still read from it.
pd.read_hdf(store, "k1")
def test_append_frame_column_oriented(self, setup_path):
with ensure_clean_store(setup_path) as store:
# column oriented
df = tm.makeTimeDataFrame()
_maybe_remove(store, "df1")
store.append("df1", df.iloc[:, :2], axes=["columns"])
store.append("df1", df.iloc[:, 2:])
tm.assert_frame_equal(store["df1"], df)
result = store.select("df1", "columns=A")
expected = df.reindex(columns=["A"])
tm.assert_frame_equal(expected, result)
# selection on the non-indexable
result = store.select("df1", ("columns=A", "index=df.index[0:4]"))
expected = df.reindex(columns=["A"], index=df.index[0:4])
tm.assert_frame_equal(expected, result)
# this isn't supported
with pytest.raises(TypeError):
store.select("df1", "columns=A and index>df.index[4]")
def test_append_with_different_block_ordering(self, setup_path):
# GH 4096; using same frames, but different block orderings
with ensure_clean_store(setup_path) as store:
for i in range(10):
df = DataFrame(np.random.randn(10, 2), columns=list("AB"))
df["index"] = range(10)
df["index"] += i * 10
df["int64"] = Series([1] * len(df), dtype="int64")
df["int16"] = Series([1] * len(df), dtype="int16")
if i % 2 == 0:
del df["int64"]
df["int64"] = Series([1] * len(df), dtype="int64")
if i % 3 == 0:
a = df.pop("A")
df["A"] = a
df.set_index("index", inplace=True)
store.append("df", df)
# test a different ordering but with more fields (like invalid
# combination)
with ensure_clean_store(setup_path) as store:
df = DataFrame(np.random.randn(10, 2), columns=list("AB"), dtype="float64")
df["int64"] = Series([1] * len(df), dtype="int64")
df["int16"] = Series([1] * len(df), dtype="int16")
store.append("df", df)
# store additional fields in different blocks
df["int16_2"] = Series([1] * len(df), dtype="int16")
with pytest.raises(ValueError):
store.append("df", df)
# store multiple additional fields in different blocks
df["float_3"] = Series([1.0] * len(df), dtype="float64")
with pytest.raises(ValueError):
store.append("df", df)
def test_append_with_strings(self, setup_path):
with ensure_clean_store(setup_path) as store:
with catch_warnings(record=True):
def check_col(key, name, size):
assert (
getattr(store.get_storer(key).table.description, name).itemsize
== size
)
# avoid truncation on elements
df = DataFrame([[123, "asdqwerty"], [345, "dggnhebbsdfbdfb"]])
store.append("df_big", df)
tm.assert_frame_equal(store.select("df_big"), df)
check_col("df_big", "values_block_1", 15)
# appending smaller string ok
df2 = DataFrame([[124, "asdqy"], [346, "dggnhefbdfb"]])
store.append("df_big", df2)
expected = concat([df, df2])
tm.assert_frame_equal(store.select("df_big"), expected)
check_col("df_big", "values_block_1", 15)
# avoid truncation on elements
df = DataFrame([[123, "asdqwerty"], [345, "dggnhebbsdfbdfb"]])
store.append("df_big2", df, min_itemsize={"values": 50})
tm.assert_frame_equal(store.select("df_big2"), df)
check_col("df_big2", "values_block_1", 50)
# bigger string on next append
store.append("df_new", df)
df_new = DataFrame(
[[124, "abcdefqhij"], [346, "abcdefghijklmnopqrtsuvwxyz"]]
)
with pytest.raises(ValueError):
store.append("df_new", df_new)
# min_itemsize on Series index (GH 11412)
df = tm.makeMixedDataFrame().set_index("C")
store.append("ss", df["B"], min_itemsize={"index": 4})
tm.assert_series_equal(store.select("ss"), df["B"])
# same as above, with data_columns=True
store.append(
"ss2", df["B"], data_columns=True, min_itemsize={"index": 4}
)
tm.assert_series_equal(store.select("ss2"), df["B"])
# min_itemsize in index without appending (GH 10381)
store.put("ss3", df, format="table", min_itemsize={"index": 6})
# just make sure there is a longer string:
df2 = df.copy().reset_index().assign(C="longer").set_index("C")
store.append("ss3", df2)
tm.assert_frame_equal(store.select("ss3"), pd.concat([df, df2]))
# same as above, with a Series
store.put("ss4", df["B"], format="table", min_itemsize={"index": 6})
store.append("ss4", df2["B"])
tm.assert_series_equal(
store.select("ss4"), pd.concat([df["B"], df2["B"]])
)
# with nans
_maybe_remove(store, "df")
df = tm.makeTimeDataFrame()
df["string"] = "foo"
df.loc[1:4, "string"] = np.nan
df["string2"] = "bar"
df.loc[4:8, "string2"] = np.nan
df["string3"] = "bah"
df.loc[1:, "string3"] = np.nan
store.append("df", df)
result = store.select("df")
tm.assert_frame_equal(result, df)
with ensure_clean_store(setup_path) as store:
def check_col(key, name, size):
assert (
getattr(store.get_storer(key).table.description, name).itemsize
== size
)
df = DataFrame(dict(A="foo", B="bar"), index=range(10))
# a min_itemsize that creates a data_column
_maybe_remove(store, "df")
store.append("df", df, min_itemsize={"A": 200})
check_col("df", "A", 200)
assert store.get_storer("df").data_columns == ["A"]
# a min_itemsize that creates a data_column2
_maybe_remove(store, "df")
store.append("df", df, data_columns=["B"], min_itemsize={"A": 200})
check_col("df", "A", 200)
assert store.get_storer("df").data_columns == ["B", "A"]
# a min_itemsize that creates a data_column2
_maybe_remove(store, "df")
store.append("df", df, data_columns=["B"], min_itemsize={"values": 200})
check_col("df", "B", 200)
check_col("df", "values_block_0", 200)
assert store.get_storer("df").data_columns == ["B"]
# infer the .typ on subsequent appends
_maybe_remove(store, "df")
store.append("df", df[:5], min_itemsize=200)
store.append("df", df[5:], min_itemsize=200)
tm.assert_frame_equal(store["df"], df)
# invalid min_itemsize keys
df = DataFrame(["foo", "foo", "foo", "barh", "barh", "barh"], columns=["A"])
_maybe_remove(store, "df")
with pytest.raises(ValueError):
store.append("df", df, min_itemsize={"foo": 20, "foobar": 20})
def test_append_with_empty_string(self, setup_path):
with ensure_clean_store(setup_path) as store:
# with all empty strings (GH 12242)
df = DataFrame({"x": ["a", "b", "c", "d", "e", "f", ""]})
store.append("df", df[:-1], min_itemsize={"x": 1})
store.append("df", df[-1:], min_itemsize={"x": 1})
tm.assert_frame_equal(store.select("df"), df)
def test_to_hdf_with_min_itemsize(self, setup_path):
with ensure_clean_path(setup_path) as path:
# min_itemsize in index with to_hdf (GH 10381)
df = tm.makeMixedDataFrame().set_index("C")
df.to_hdf(path, "ss3", format="table", min_itemsize={"index": 6})
# just make sure there is a longer string:
df2 = df.copy().reset_index().assign(C="longer").set_index("C")
df2.to_hdf(path, "ss3", append=True, format="table")
tm.assert_frame_equal(pd.read_hdf(path, "ss3"), pd.concat([df, df2]))
# same as above, with a Series
df["B"].to_hdf(path, "ss4", format="table", min_itemsize={"index": 6})
df2["B"].to_hdf(path, "ss4", append=True, format="table")
tm.assert_series_equal(
pd.read_hdf(path, "ss4"), pd.concat([df["B"], df2["B"]])
)
@pytest.mark.parametrize(
"format", [pytest.param("fixed", marks=td.xfail_non_writeable), "table"]
)
def test_to_hdf_errors(self, format, setup_path):
data = ["\ud800foo"]
ser = pd.Series(data, index=pd.Index(data))
with ensure_clean_path(setup_path) as path:
# GH 20835
ser.to_hdf(path, "table", format=format, errors="surrogatepass")
result = pd.read_hdf(path, "table", errors="surrogatepass")
tm.assert_series_equal(result, ser)
def test_append_with_data_columns(self, setup_path):
with ensure_clean_store(setup_path) as store:
df = tm.makeTimeDataFrame()
df.iloc[0, df.columns.get_loc("B")] = 1.0
_maybe_remove(store, "df")
store.append("df", df[:2], data_columns=["B"])
store.append("df", df[2:])
tm.assert_frame_equal(store["df"], df)
# check that we have indices created
assert store._handle.root.df.table.cols.index.is_indexed is True
assert store._handle.root.df.table.cols.B.is_indexed is True
# data column searching
result = store.select("df", "B>0")
expected = df[df.B > 0]
tm.assert_frame_equal(result, expected)
# data column searching (with an indexable and a data_columns)
result = store.select("df", "B>0 and index>df.index[3]")
df_new = df.reindex(index=df.index[4:])
expected = df_new[df_new.B > 0]
tm.assert_frame_equal(result, expected)
# data column selection with a string data_column
df_new = df.copy()
df_new["string"] = "foo"
df_new.loc[1:4, "string"] = np.nan
df_new.loc[5:6, "string"] = "bar"
_maybe_remove(store, "df")
store.append("df", df_new, data_columns=["string"])
result = store.select("df", "string='foo'")
expected = df_new[df_new.string == "foo"]
tm.assert_frame_equal(result, expected)
# using min_itemsize and a data column
def check_col(key, name, size):
assert (
getattr(store.get_storer(key).table.description, name).itemsize
== size
)
with ensure_clean_store(setup_path) as store:
_maybe_remove(store, "df")
store.append(
"df", df_new, data_columns=["string"], min_itemsize={"string": 30}
)
check_col("df", "string", 30)
_maybe_remove(store, "df")
store.append("df", df_new, data_columns=["string"], min_itemsize=30)
check_col("df", "string", 30)
_maybe_remove(store, "df")
store.append(
"df", df_new, data_columns=["string"], min_itemsize={"values": 30}
)
check_col("df", "string", 30)
with ensure_clean_store(setup_path) as store:
df_new["string2"] = "foobarbah"
df_new["string_block1"] = "foobarbah1"
df_new["string_block2"] = "foobarbah2"
_maybe_remove(store, "df")
store.append(
"df",
df_new,
data_columns=["string", "string2"],
min_itemsize={"string": 30, "string2": 40, "values": 50},
)
check_col("df", "string", 30)
check_col("df", "string2", 40)
check_col("df", "values_block_1", 50)
with ensure_clean_store(setup_path) as store:
# multiple data columns
df_new = df.copy()
df_new.iloc[0, df_new.columns.get_loc("A")] = 1.0
df_new.iloc[0, df_new.columns.get_loc("B")] = -1.0
df_new["string"] = "foo"
sl = df_new.columns.get_loc("string")
df_new.iloc[1:4, sl] = np.nan
df_new.iloc[5:6, sl] = "bar"
df_new["string2"] = "foo"
sl = df_new.columns.get_loc("string2")
df_new.iloc[2:5, sl] = np.nan
df_new.iloc[7:8, sl] = "bar"
_maybe_remove(store, "df")
store.append("df", df_new, data_columns=["A", "B", "string", "string2"])
result = store.select(
"df", "string='foo' and string2='foo' and A>0 and B<0"
)
expected = df_new[
(df_new.string == "foo")
& (df_new.string2 == "foo")
& (df_new.A > 0)
& (df_new.B < 0)
]
| tm.assert_frame_equal(result, expected, check_index_type=False) | pandas.util.testing.assert_frame_equal |
import datetime as dt
import numpy as np
import pandas as pd
# START USER INPUT
lgs_filepath = 'U:/CIO/#Data/output/investment/checker/lgs_table.csv'
jpm_filepath = 'U:/CIO/#Data/input/jpm/report/investment/LGSS Preliminary Performance 202005.xlsx'
lgs_dictionary_filepath = 'U:/CIO/#Data/input/lgs/dictionary/2020/06/New Dictionary_v10.xlsx'
FYTD = 11
report_date = dt.datetime(2020, 5, 31)
# END USER INPUT
# Reads LGS table
df_lgs = | pd.read_csv(lgs_filepath) | pandas.read_csv |
import pathlib
import sqlite3
import plt_tools
import xarray as xr
from atmPy.data_archives import arm
import datetime
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
class Data(object):
def __init__(self, kazr=True, ceilometer=True, timezone = -9, path2database = None, path2nsascience = '/mnt/telg/data/arm_data/OLI/tethered_balloon/nsascience/nsascience_avg1min/'):
self.timezone = timezone
self.send_message = lambda x: print(x)
self.path2database = pathlib.Path(path2database)
self.db_table_name = "vis_nsascience.0.1"
self.path2nsascience = pathlib.Path(path2nsascience)
self.path2kazr = pathlib.Path('/mnt/telg/data/arm_data/OLI/kazr_1min/')
self.path2ceil = pathlib.Path('/mnt/telg/data/arm_data/OLI/ceilometer_1min/')
# path2quicklooks = pathlib.Path('/mnt/telg/data/arm_data/OLI/tethered_balloon/nsascience/nsascience_quicklook/')
# get some database stuff
qu = "SELECT * FROM instrumet_performence_POPS_by_file"
with sqlite3.connect(self.path2database) as db:
self.inst_perform_pops = pd.read_sql(qu, db)
self.cm_meins = plt_tools.colormap.my_colormaps.relative_humidity(reverse=True)
self.cm_meins.set_bad('red')
self.lw_pgc = 3
# Preparations
##nasascience
df_nsa = pd.DataFrame(list(self.path2nsascience.glob('*.nc')), columns=['path'])
df_nsa['start_datetime'] = df_nsa.path.apply(lambda x: self.path2startdatetime(x, 1))
# df_nsa['start_datetime_AKst'] = df_nsa.start_datetime + pd.to_timedelta(self.timezone, unit='h')
df_nsa.sort_values('start_datetime', inplace=True)
# df_nsa['start_date_AKst'] = pd.DatetimeIndex(df_nsa.start_datetime_AKst).date
df_nsa['end_datetime'] = df_nsa.path.apply(lambda x: self.load_nsascience(x, return_end=True))
self._tp_df_nsa = df_nsa
# self.stakunique = np.unique(df_nsa.start_datetime)
# self.valid_dates = self.stakunique
self.valid_dates = np.sort(np.unique(np.concatenate((pd.DatetimeIndex(df_nsa.start_datetime + np.timedelta64(self.timezone, 'h')).date, pd.DatetimeIndex(df_nsa.end_datetime+ np.timedelta64(self.timezone, 'h')).date))))# + np.timedelta64(self.timezone, 'h')
self.valid_dates = pd.to_datetime(self.valid_dates)
self.df_nsa = df_nsa
## kazr
paths_kazr = list(self.path2kazr.glob('*'))
paths_kazr.sort()
df_kzar = | pd.DataFrame(paths_kazr, columns=['paths']) | pandas.DataFrame |
import datetime as dt
from itertools import product
import sys
import numpy as np
from numpy.random import RandomState
from numpy.testing import assert_allclose
import pandas as pd
from pandas.testing import assert_frame_equal
import pytest
from arch.data import sp500
from arch.tests.univariate.test_variance_forecasting import preserved_state
from arch.univariate import (
APARCH,
ARX,
EGARCH,
FIGARCH,
GARCH,
HARCH,
HARX,
ConstantMean,
ConstantVariance,
EWMAVariance,
MIDASHyperbolic,
RiskMetrics2006,
ZeroMean,
arch_model,
)
from arch.univariate.mean import _ar_forecast, _ar_to_impulse
SP500 = 100 * sp500.load()["Adj Close"].pct_change().dropna()
MEAN_MODELS = [
HARX(SP500, lags=[1, 5]),
ARX(SP500, lags=2),
ConstantMean(SP500),
ZeroMean(SP500),
]
VOLATILITIES = [
ConstantVariance(),
GARCH(),
FIGARCH(),
EWMAVariance(lam=0.94),
MIDASHyperbolic(),
HARCH(lags=[1, 5, 22]),
RiskMetrics2006(),
APARCH(),
EGARCH(),
]
MODEL_SPECS = list(product(MEAN_MODELS, VOLATILITIES))
IDS = [
f"{str(mean).split('(')[0]}-{str(vol).split('(')[0]}" for mean, vol in MODEL_SPECS
]
@pytest.fixture(params=MODEL_SPECS, ids=IDS)
def model_spec(request):
mean, vol = request.param
mean.volatility = vol
return mean
class TestForecasting(object):
@classmethod
def setup_class(cls):
cls.rng = RandomState(12345)
am = arch_model(None, mean="Constant", vol="Constant")
data = am.simulate(np.array([0.0, 10.0]), 1000)
data.index = pd.date_range("2000-01-01", periods=data.index.shape[0])
cls.zero_mean = data.data
am = arch_model(None, mean="AR", vol="Constant", lags=[1])
data = am.simulate(np.array([1.0, 0.9, 2]), 1000)
data.index = | pd.date_range("2000-01-01", periods=data.index.shape[0]) | pandas.date_range |
from datetime import (
datetime,
timedelta,
timezone,
)
import numpy as np
import pytest
import pytz
from pandas import (
Categorical,
DataFrame,
DatetimeIndex,
NaT,
Period,
Series,
Timedelta,
Timestamp,
date_range,
isna,
)
import pandas._testing as tm
class TestSeriesFillNA:
def test_fillna_nat(self):
series = Series([0, 1, 2, NaT.value], dtype="M8[ns]")
filled = series.fillna(method="pad")
filled2 = series.fillna(value=series.values[2])
expected = series.copy()
expected.values[3] = expected.values[2]
tm.assert_series_equal(filled, expected)
tm.assert_series_equal(filled2, expected)
df = DataFrame({"A": series})
filled = df.fillna(method="pad")
filled2 = df.fillna(value=series.values[2])
expected = DataFrame({"A": expected})
tm.assert_frame_equal(filled, expected)
tm.assert_frame_equal(filled2, expected)
series = Series([NaT.value, 0, 1, 2], dtype="M8[ns]")
filled = series.fillna(method="bfill")
filled2 = series.fillna(value=series[1])
expected = series.copy()
expected[0] = expected[1]
tm.assert_series_equal(filled, expected)
tm.assert_series_equal(filled2, expected)
df = DataFrame({"A": series})
filled = df.fillna(method="bfill")
filled2 = df.fillna(value=series[1])
expected = DataFrame({"A": expected})
tm.assert_frame_equal(filled, expected)
tm.assert_frame_equal(filled2, expected)
def test_fillna_value_or_method(self, datetime_series):
msg = "Cannot specify both 'value' and 'method'"
with pytest.raises(ValueError, match=msg):
datetime_series.fillna(value=0, method="ffill")
def test_fillna(self):
ts = Series([0.0, 1.0, 2.0, 3.0, 4.0], index=tm.makeDateIndex(5))
tm.assert_series_equal(ts, ts.fillna(method="ffill"))
ts[2] = np.NaN
exp = Series([0.0, 1.0, 1.0, 3.0, 4.0], index=ts.index)
tm.assert_series_equal(ts.fillna(method="ffill"), exp)
exp = Series([0.0, 1.0, 3.0, 3.0, 4.0], index=ts.index)
tm.assert_series_equal(ts.fillna(method="backfill"), exp)
exp = Series([0.0, 1.0, 5.0, 3.0, 4.0], index=ts.index)
tm.assert_series_equal(ts.fillna(value=5), exp)
msg = "Must specify a fill 'value' or 'method'"
with pytest.raises(ValueError, match=msg):
ts.fillna()
def test_fillna_nonscalar(self):
# GH#5703
s1 = Series([np.nan])
s2 = Series([1])
result = s1.fillna(s2)
expected = Series([1.0])
tm.assert_series_equal(result, expected)
result = s1.fillna({})
tm.assert_series_equal(result, s1)
result = s1.fillna(Series((), dtype=object))
tm.assert_series_equal(result, s1)
result = s2.fillna(s1)
tm.assert_series_equal(result, s2)
result = s1.fillna({0: 1})
tm.assert_series_equal(result, expected)
result = s1.fillna({1: 1})
tm.assert_series_equal(result, Series([np.nan]))
result = s1.fillna({0: 1, 1: 1})
tm.assert_series_equal(result, expected)
result = s1.fillna(Series({0: 1, 1: 1}))
tm.assert_series_equal(result, expected)
result = s1.fillna(Series({0: 1, 1: 1}, index=[4, 5]))
tm.assert_series_equal(result, s1)
def test_fillna_aligns(self):
s1 = Series([0, 1, 2], list("abc"))
s2 = Series([0, np.nan, 2], list("bac"))
result = s2.fillna(s1)
expected = Series([0, 0, 2.0], list("bac"))
tm.assert_series_equal(result, expected)
def test_fillna_limit(self):
ser = Series(np.nan, index=[0, 1, 2])
result = ser.fillna(999, limit=1)
expected = Series([999, np.nan, np.nan], index=[0, 1, 2])
tm.assert_series_equal(result, expected)
result = ser.fillna(999, limit=2)
expected = Series([999, 999, np.nan], index=[0, 1, 2])
tm.assert_series_equal(result, expected)
def test_fillna_dont_cast_strings(self):
# GH#9043
# make sure a string representation of int/float values can be filled
# correctly without raising errors or being converted
vals = ["0", "1.5", "-0.3"]
for val in vals:
ser = Series([0, 1, np.nan, np.nan, 4], dtype="float64")
result = ser.fillna(val)
expected = Series([0, 1, val, val, 4], dtype="object")
tm.assert_series_equal(result, expected)
def test_fillna_consistency(self):
# GH#16402
# fillna with a tz aware to a tz-naive, should result in object
ser = Series([Timestamp("20130101"), NaT])
result = ser.fillna(Timestamp("20130101", tz="US/Eastern"))
expected = Series(
[Timestamp("20130101"), Timestamp("2013-01-01", tz="US/Eastern")],
dtype="object",
)
tm.assert_series_equal(result, expected)
msg = "The 'errors' keyword in "
with tm.assert_produces_warning(FutureWarning, match=msg):
# where (we ignore the errors=)
result = ser.where(
[True, False], Timestamp("20130101", tz="US/Eastern"), errors="ignore"
)
tm.assert_series_equal(result, expected)
with tm.assert_produces_warning(FutureWarning, match=msg):
result = ser.where(
[True, False], Timestamp("20130101", tz="US/Eastern"), errors="ignore"
)
tm.assert_series_equal(result, expected)
# with a non-datetime
result = ser.fillna("foo")
expected = Series([Timestamp("20130101"), "foo"])
tm.assert_series_equal(result, expected)
# assignment
ser2 = ser.copy()
ser2[1] = "foo"
tm.assert_series_equal(ser2, expected)
def test_fillna_downcast(self):
# GH#15277
# infer int64 from float64
ser = Series([1.0, np.nan])
result = ser.fillna(0, downcast="infer")
expected = Series([1, 0])
tm.assert_series_equal(result, expected)
# infer int64 from float64 when fillna value is a dict
ser = Series([1.0, np.nan])
result = ser.fillna({1: 0}, downcast="infer")
expected = Series([1, 0])
tm.assert_series_equal(result, expected)
def test_timedelta_fillna(self, frame_or_series):
# GH#3371
ser = Series(
[
Timestamp("20130101"),
Timestamp("20130101"),
Timestamp("20130102"),
Timestamp("20130103 9:01:01"),
]
)
td = ser.diff()
obj = frame_or_series(td)
# reg fillna
result = obj.fillna(Timedelta(seconds=0))
expected = Series(
[
timedelta(0),
timedelta(0),
timedelta(1),
timedelta(days=1, seconds=9 * 3600 + 60 + 1),
]
)
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
# interpreted as seconds, no longer supported
msg = "value should be a 'Timedelta', 'NaT', or array of those. Got 'int'"
with pytest.raises(TypeError, match=msg):
obj.fillna(1)
result = obj.fillna(Timedelta(seconds=1))
expected = Series(
[
timedelta(seconds=1),
timedelta(0),
timedelta(1),
timedelta(days=1, seconds=9 * 3600 + 60 + 1),
]
)
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
result = obj.fillna(timedelta(days=1, seconds=1))
expected = Series(
[
timedelta(days=1, seconds=1),
timedelta(0),
timedelta(1),
timedelta(days=1, seconds=9 * 3600 + 60 + 1),
]
)
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
result = obj.fillna(np.timedelta64(10 ** 9))
expected = Series(
[
timedelta(seconds=1),
timedelta(0),
timedelta(1),
timedelta(days=1, seconds=9 * 3600 + 60 + 1),
]
)
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
result = obj.fillna(NaT)
expected = Series(
[
NaT,
timedelta(0),
timedelta(1),
timedelta(days=1, seconds=9 * 3600 + 60 + 1),
],
dtype="m8[ns]",
)
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
# ffill
td[2] = np.nan
obj = frame_or_series(td)
result = obj.ffill()
expected = td.fillna(Timedelta(seconds=0))
expected[0] = np.nan
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
# bfill
td[2] = np.nan
obj = frame_or_series(td)
result = obj.bfill()
expected = td.fillna(Timedelta(seconds=0))
expected[2] = timedelta(days=1, seconds=9 * 3600 + 60 + 1)
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
def test_datetime64_fillna(self):
ser = Series(
[
Timestamp("20130101"),
Timestamp("20130101"),
Timestamp("20130102"),
Timestamp("20130103 9:01:01"),
]
)
ser[2] = np.nan
# ffill
result = ser.ffill()
expected = Series(
[
Timestamp("20130101"),
Timestamp("20130101"),
Timestamp("20130101"),
Timestamp("20130103 9:01:01"),
]
)
tm.assert_series_equal(result, expected)
# bfill
result = ser.bfill()
expected = Series(
[
Timestamp("20130101"),
Timestamp("20130101"),
Timestamp("20130103 9:01:01"),
Timestamp("20130103 9:01:01"),
]
)
tm.assert_series_equal(result, expected)
def test_datetime64_fillna_backfill(self):
# GH#6587
# make sure that we are treating as integer when filling
msg = "containing strings is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
# this also tests inference of a datetime-like with NaT's
ser = Series([NaT, NaT, "2013-08-05 15:30:00.000001"])
expected = Series(
[
"2013-08-05 15:30:00.000001",
"2013-08-05 15:30:00.000001",
"2013-08-05 15:30:00.000001",
],
dtype="M8[ns]",
)
result = ser.fillna(method="backfill")
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("tz", ["US/Eastern", "Asia/Tokyo"])
def test_datetime64_tz_fillna(self, tz):
# DatetimeLikeBlock
ser = Series(
[
Timestamp("2011-01-01 10:00"),
NaT,
Timestamp("2011-01-03 10:00"),
NaT,
]
)
null_loc = Series([False, True, False, True])
result = ser.fillna(Timestamp("2011-01-02 10:00"))
expected = Series(
[
Timestamp("2011-01-01 10:00"),
Timestamp("2011-01-02 10:00"),
Timestamp("2011-01-03 10:00"),
Timestamp("2011-01-02 10:00"),
]
)
tm.assert_series_equal(expected, result)
# check s is not changed
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(Timestamp("2011-01-02 10:00", tz=tz))
expected = Series(
[
Timestamp("2011-01-01 10:00"),
Timestamp("2011-01-02 10:00", tz=tz),
Timestamp("2011-01-03 10:00"),
Timestamp("2011-01-02 10:00", tz=tz),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna("AAA")
expected = Series(
[
Timestamp("2011-01-01 10:00"),
"AAA",
Timestamp("2011-01-03 10:00"),
"AAA",
],
dtype=object,
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(
{
1: Timestamp("2011-01-02 10:00", tz=tz),
3: Timestamp("2011-01-04 10:00"),
}
)
expected = Series(
[
Timestamp("2011-01-01 10:00"),
Timestamp("2011-01-02 10:00", tz=tz),
Timestamp("2011-01-03 10:00"),
Timestamp("2011-01-04 10:00"),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(
{1: Timestamp("2011-01-02 10:00"), 3: Timestamp("2011-01-04 10:00")}
)
expected = Series(
[
Timestamp("2011-01-01 10:00"),
Timestamp("2011-01-02 10:00"),
Timestamp("2011-01-03 10:00"),
Timestamp("2011-01-04 10:00"),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
# DatetimeTZBlock
idx = DatetimeIndex(["2011-01-01 10:00", NaT, "2011-01-03 10:00", NaT], tz=tz)
ser = Series(idx)
assert ser.dtype == f"datetime64[ns, {tz}]"
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(Timestamp("2011-01-02 10:00"))
expected = Series(
[
Timestamp("2011-01-01 10:00", tz=tz),
Timestamp("2011-01-02 10:00"),
Timestamp("2011-01-03 10:00", tz=tz),
Timestamp("2011-01-02 10:00"),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(Timestamp("2011-01-02 10:00", tz=tz))
idx = DatetimeIndex(
[
"2011-01-01 10:00",
"2011-01-02 10:00",
"2011-01-03 10:00",
"2011-01-02 10:00",
],
tz=tz,
)
expected = Series(idx)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(Timestamp("2011-01-02 10:00", tz=tz).to_pydatetime())
idx = DatetimeIndex(
[
"2011-01-01 10:00",
"2011-01-02 10:00",
"2011-01-03 10:00",
"2011-01-02 10:00",
],
tz=tz,
)
expected = Series(idx)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna("AAA")
expected = Series(
[
Timestamp("2011-01-01 10:00", tz=tz),
"AAA",
Timestamp("2011-01-03 10:00", tz=tz),
"AAA",
],
dtype=object,
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(
{
1: Timestamp("2011-01-02 10:00", tz=tz),
3: Timestamp("2011-01-04 10:00"),
}
)
expected = Series(
[
Timestamp("2011-01-01 10:00", tz=tz),
Timestamp("2011-01-02 10:00", tz=tz),
Timestamp("2011-01-03 10:00", tz=tz),
Timestamp("2011-01-04 10:00"),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
result = ser.fillna(
{
1: Timestamp("2011-01-02 10:00", tz=tz),
3: Timestamp("2011-01-04 10:00", tz=tz),
}
)
expected = Series(
[
Timestamp("2011-01-01 10:00", tz=tz),
Timestamp("2011-01-02 10:00", tz=tz),
Timestamp("2011-01-03 10:00", tz=tz),
Timestamp("2011-01-04 10:00", tz=tz),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
# filling with a naive/other zone, coerce to object
result = ser.fillna(Timestamp("20130101"))
expected = Series(
[
Timestamp("2011-01-01 10:00", tz=tz),
Timestamp("2013-01-01"),
Timestamp("2011-01-03 10:00", tz=tz),
Timestamp("2013-01-01"),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
with tm.assert_produces_warning(FutureWarning, match="mismatched timezone"):
result = ser.fillna(Timestamp("20130101", tz="US/Pacific"))
expected = Series(
[
Timestamp("2011-01-01 10:00", tz=tz),
Timestamp("2013-01-01", tz="US/Pacific"),
Timestamp("2011-01-03 10:00", tz=tz),
Timestamp("2013-01-01", tz="US/Pacific"),
]
)
tm.assert_series_equal(expected, result)
tm.assert_series_equal(isna(ser), null_loc)
def test_fillna_dt64tz_with_method(self):
# with timezone
# GH#15855
ser = Series([Timestamp("2012-11-11 00:00:00+01:00"), NaT])
exp = Series(
[
Timestamp("2012-11-11 00:00:00+01:00"),
Timestamp("2012-11-11 00:00:00+01:00"),
]
)
tm.assert_series_equal(ser.fillna(method="pad"), exp)
ser = Series([NaT, Timestamp("2012-11-11 00:00:00+01:00")])
exp = Series(
[
Timestamp("2012-11-11 00:00:00+01:00"),
Timestamp("2012-11-11 00:00:00+01:00"),
]
)
tm.assert_series_equal(ser.fillna(method="bfill"), exp)
def test_fillna_pytimedelta(self):
# GH#8209
ser = Series([np.nan, Timedelta("1 days")], index=["A", "B"])
result = ser.fillna(timedelta(1))
expected = Series(Timedelta("1 days"), index=["A", "B"])
tm.assert_series_equal(result, expected)
def test_fillna_period(self):
# GH#13737
ser = Series([Period("2011-01", freq="M"), Period("NaT", freq="M")])
res = ser.fillna(Period("2012-01", freq="M"))
exp = Series([Period("2011-01", freq="M"), Period("2012-01", freq="M")])
tm.assert_series_equal(res, exp)
assert res.dtype == "Period[M]"
def test_fillna_dt64_timestamp(self, frame_or_series):
ser = Series(
[
Timestamp("20130101"),
Timestamp("20130101"),
Timestamp("20130102"),
Timestamp("20130103 9:01:01"),
]
)
ser[2] = np.nan
obj = frame_or_series(ser)
# reg fillna
result = obj.fillna(Timestamp("20130104"))
expected = Series(
[
Timestamp("20130101"),
Timestamp("20130101"),
Timestamp("20130104"),
Timestamp("20130103 9:01:01"),
]
)
expected = frame_or_series(expected)
tm.assert_equal(result, expected)
result = obj.fillna(NaT)
expected = obj
tm.assert_equal(result, expected)
def test_fillna_dt64_non_nao(self):
# GH#27419
ser = Series([Timestamp("2010-01-01"), NaT, Timestamp("2000-01-01")])
val = np.datetime64("1975-04-05", "ms")
result = ser.fillna(val)
expected = Series(
[Timestamp("2010-01-01"), Timestamp("1975-04-05"), Timestamp("2000-01-01")]
)
tm.assert_series_equal(result, expected)
def test_fillna_numeric_inplace(self):
x = Series([np.nan, 1.0, np.nan, 3.0, np.nan], ["z", "a", "b", "c", "d"])
y = x.copy()
return_value = y.fillna(value=0, inplace=True)
assert return_value is None
expected = x.fillna(value=0)
tm.assert_series_equal(y, expected)
# ---------------------------------------------------------------
# CategoricalDtype
@pytest.mark.parametrize(
"fill_value, expected_output",
[
("a", ["a", "a", "b", "a", "a"]),
({1: "a", 3: "b", 4: "b"}, ["a", "a", "b", "b", "b"]),
({1: "a"}, ["a", "a", "b", np.nan, np.nan]),
({1: "a", 3: "b"}, ["a", "a", "b", "b", np.nan]),
(Series("a"), ["a", np.nan, "b", np.nan, np.nan]),
(Series("a", index=[1]), ["a", "a", "b", np.nan, np.nan]),
(Series({1: "a", 3: "b"}), ["a", "a", "b", "b", np.nan]),
(Series(["a", "b"], index=[3, 4]), ["a", np.nan, "b", "a", "b"]),
],
)
def test_fillna_categorical(self, fill_value, expected_output):
# GH#17033
# Test fillna for a Categorical series
data = ["a", np.nan, "b", np.nan, np.nan]
ser = Series(Categorical(data, categories=["a", "b"]))
exp = Series(Categorical(expected_output, categories=["a", "b"]))
result = ser.fillna(fill_value)
tm.assert_series_equal(result, exp)
@pytest.mark.parametrize(
"fill_value, expected_output",
[
(Series(["a", "b", "c", "d", "e"]), ["a", "b", "b", "d", "e"]),
(Series(["b", "d", "a", "d", "a"]), ["a", "d", "b", "d", "a"]),
(
Series(
Categorical(
["b", "d", "a", "d", "a"], categories=["b", "c", "d", "e", "a"]
)
),
["a", "d", "b", "d", "a"],
),
],
)
def test_fillna_categorical_with_new_categories(self, fill_value, expected_output):
# GH#26215
data = ["a", np.nan, "b", np.nan, np.nan]
ser = Series(Categorical(data, categories=["a", "b", "c", "d", "e"]))
exp = Series(Categorical(expected_output, categories=["a", "b", "c", "d", "e"]))
result = ser.fillna(fill_value)
tm.assert_series_equal(result, exp)
def test_fillna_categorical_raises(self):
data = ["a", np.nan, "b", np.nan, np.nan]
ser = Series(Categorical(data, categories=["a", "b"]))
cat = ser._values
msg = "Cannot setitem on a Categorical with a new category"
with pytest.raises(TypeError, match=msg):
ser.fillna("d")
msg2 = "Length of 'value' does not match."
with pytest.raises(ValueError, match=msg2):
cat.fillna(Series("d"))
with pytest.raises(TypeError, match=msg):
ser.fillna({1: "d", 3: "a"})
msg = '"value" parameter must be a scalar or dict, but you passed a "list"'
with pytest.raises(TypeError, match=msg):
ser.fillna(["a", "b"])
msg = '"value" parameter must be a scalar or dict, but you passed a "tuple"'
with pytest.raises(TypeError, match=msg):
ser.fillna(("a", "b"))
msg = (
'"value" parameter must be a scalar, dict '
'or Series, but you passed a "DataFrame"'
)
with pytest.raises(TypeError, match=msg):
ser.fillna(DataFrame({1: ["a"], 3: ["b"]}))
@pytest.mark.parametrize("dtype", [float, "float32", "float64"])
@pytest.mark.parametrize("fill_type", tm.ALL_REAL_NUMPY_DTYPES)
def test_fillna_float_casting(self, dtype, fill_type):
# GH-43424
ser = Series([np.nan, 1.2], dtype=dtype)
fill_values = Series([2, 2], dtype=fill_type)
result = ser.fillna(fill_values)
expected = Series([2.0, 1.2], dtype=dtype)
tm.assert_series_equal(result, expected)
def test_fillna_f32_upcast_with_dict(self):
# GH-43424
ser = Series([np.nan, 1.2], dtype=np.float32)
result = ser.fillna({0: 1})
expected = Series([1.0, 1.2], dtype=np.float32)
tm.assert_series_equal(result, expected)
# ---------------------------------------------------------------
# Invalid Usages
def test_fillna_invalid_method(self, datetime_series):
try:
datetime_series.fillna(method="ffil")
except ValueError as inst:
assert "ffil" in str(inst)
def test_fillna_listlike_invalid(self):
ser = Series(np.random.randint(-100, 100, 50))
msg = '"value" parameter must be a scalar or dict, but you passed a "list"'
with pytest.raises(TypeError, match=msg):
ser.fillna([1, 2])
msg = '"value" parameter must be a scalar or dict, but you passed a "tuple"'
with pytest.raises(TypeError, match=msg):
ser.fillna((1, 2))
def test_fillna_method_and_limit_invalid(self):
# related GH#9217, make sure limit is an int and greater than 0
ser = Series([1, 2, 3, None])
msg = "|".join(
[
r"Cannot specify both 'value' and 'method'\.",
"Limit must be greater than 0",
"Limit must be an integer",
]
)
for limit in [-1, 0, 1.0, 2.0]:
for method in ["backfill", "bfill", "pad", "ffill", None]:
with pytest.raises(ValueError, match=msg):
ser.fillna(1, limit=limit, method=method)
def test_fillna_datetime64_with_timezone_tzinfo(self):
# https://github.com/pandas-dev/pandas/issues/38851
# different tzinfos representing UTC treated as equal
ser = Series(date_range("2020", periods=3, tz="UTC"))
expected = ser.copy()
ser[1] = NaT
result = ser.fillna(datetime(2020, 1, 2, tzinfo=timezone.utc))
tm.assert_series_equal(result, expected)
# but we don't (yet) consider distinct tzinfos for non-UTC tz equivalent
ts = Timestamp("2000-01-01", tz="US/Pacific")
ser2 = Series(ser._values.tz_convert("dateutil/US/Pacific"))
assert ser2.dtype.kind == "M"
with tm.assert_produces_warning(FutureWarning, match="mismatched timezone"):
result = ser2.fillna(ts)
expected = Series([ser[0], ts, ser[2]], dtype=object)
# TODO(2.0): once deprecation is enforced
# expected = Series(
# [ser2[0], ts.tz_convert(ser2.dtype.tz), ser2[2]],
# dtype=ser2.dtype,
# )
tm.assert_series_equal(result, expected)
def test_fillna_pos_args_deprecation(self):
# https://github.com/pandas-dev/pandas/issues/41485
srs = Series([1, 2, 3, np.nan], dtype=float)
msg = (
r"In a future version of pandas all arguments of Series.fillna "
r"except for the argument 'value' will be keyword-only"
)
with tm.assert_produces_warning(FutureWarning, match=msg):
result = srs.fillna(0, None, None)
expected = | Series([1, 2, 3, 0], dtype=float) | pandas.Series |
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
from pandas import DataFrame
def create_df2(raw_data_location2):
df1 = pd.DataFrame()
df2 = pd.DataFrame()
df3 = | pd.DataFrame() | pandas.DataFrame |
# @name: create_fake_data.py
# @summary: Creates a series of fake patients and "data" simulating CViSB data
# @description: For prototyping and testing out CViSB data management website, creating a series of fake patients and data files.
# @sources:
# @depends:
# @author: <NAME>
# @email: <EMAIL>
# @license: Apache-2.0
# @date: 16 March 2018
import pandas as pd
import numpy as np
import io
import dropbox
# [ Set up params ] ---------------------------------------------------------------------------------------
token = ""
dropbox_folder = "/CViSB_test"
expt_file = 'expt_list.csv'
# [ Set up fake patient generator ] -----------------------------------------------------------------------
def fakePatients(number = 25):
ids = np.arange(number)
patients = pd.DataFrame()
create_id = np.vectorize(lambda x: 'fakeid' + str(x).zfill(4))
patients['patient_id'] = create_id(ids)
patients['sex'] = patients.apply(create_sex, axis = 1)
patients['age'] = patients.apply(create_age, axis = 1)
patients['cohort'] = patients.apply(create_cohort, axis = 1)
patients['cohort_exposure'] = patients.apply(create_exposure, axis = 1)
patients['timepoints'] = patients.apply(create_timepts, axis = 1)
return patients
def create_sex(x):
if (np.random.rand() > 0.5):
return('male')
else:
return('female')
def create_age(x):
return round(np.random.rand()*100)
def create_cohort(x):
if (np.random.rand() > 0.67):
return('Ebola')
else:
return('Lassa')
def create_exposure(x):
rand_num = np.random.rand()
if (rand_num > 0.8):
return("exposed")
elif (rand_num > 0.1):
if (np.random.rand() > 0.2):
return("died")
else:
return("survived")
else:
return("community")
def create_timepts(x):
rand_num = np.random.rand()
timepts = [
[0, 1],
[0, 1, 2],
[0, 1, 2, 3],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4, 7],
[0, 1, 2, 3, 4, 7, 10]
]
if (rand_num < 0.4):
return(timepts[0])
elif (rand_num < 0.6):
return(timepts[1])
# NOTE: the original code repeated "< 0.6" here, which made these branches unreachable;
# the increasing cut-offs below are assumed values so each timepoint set can actually occur.
elif (rand_num < 0.7):
return(timepts[2])
elif (rand_num < 0.8):
return(timepts[3])
elif (rand_num < 0.9):
return(timepts[4])
else:
return(timepts[5])
# [ Create patients ] -------------------------------------------------------------------------------------
patients = fakePatients()
# --- Upload to dropbox ---
dbx = dropbox.Dropbox(token)
dbx.files_upload(patients.to_csv(index = False).encode('utf-8'), dropbox_folder + '/fakepatient_roster.csv')
# dbx.files_upload(patients.to_csv(sep = '\t', index = False).encode('utf-8'), '/fakepatient_roster.tsv')
# [ Create samples ] --------------------------------------------------------------------------------------
# Convert array of timepoints to wide dataframe of timepoints
# TODO: use the function I wrote
tp = pd.DataFrame(patients['timepoints'].values.tolist(), index = patients.patient_id).reset_index()
# Convert to long dataframe
tp = pd.melt(tp, id_vars = ['patient_id'], value_name = 'timepoint').drop('variable', axis = 1)
# Remove NAs
tp = tp.dropna(axis = 0, how='any')
# Create a sample for every timepoint
sample_list = pd.DataFrame(data = {'sample_id': ['plasma', 'PMBC', 'hDNA', 'vDNA', 'hRNA', 'vRNA'], 'description': ['raw blood plasma', 'raw peripheral blood mononuclear cells', 'extracted host DNA', 'extracted viral DNA', 'extracted host RNA', 'extracted viral RNA']})
sample_list['tmp'] = 1
tp['tmp'] = 1
# Merge on the temporary key, dropping timepoint 0 (no biological data taken at timepoint 0)
samples = pd.merge(tp[tp.timepoint > 0], sample_list, on='tmp').drop('tmp', axis = 1)
# Fill some fields to be inputted later.
samples['creation_date'] = np.NaN
samples['storage_loc'] = np.NaN
samples['invalid'] = False
dbx.files_upload(samples.to_csv(index = False).encode('utf-8'), dropbox_folder + '/fakesample_list.csv', mode=dropbox.files.WriteMode.overwrite)
# [ Generate file list ] ----------------------------------------------------------------------------------
tp.drop('tmp', axis = 1)
def gen_filelist(patient_timepts):
# read in the experiment file structure
md, res = dbx.files_download(dropbox_folder +"/" + expt_file)
if(res.status_code == 200):
expts = pd.read_csv(io.StringIO(res.content.decode('utf-8')))
expts
def array2long(df, var, id_var):
# remove any NAs in column
df = df[df[var].notnull()]
if(any(df[var].apply(lambda x: type(x)) == str)):
# Convert string to array
df[var] = df[var].apply(lambda x: x.replace(', ', ',').split(','))
# splay data frame wide
temp = pd.DataFrame(df[var].values.tolist(), index = df[id_var]).reset_index()
# Convert to long dataframe
temp = pd.melt(temp, id_vars = [id_var], value_name = var).drop('variable', axis = 1)
# Remove NAs
temp = temp.dropna(axis = 0, how='any')
return temp
# BUG: fix the .loc for NAN setting
ex_times = array2long(expts, 'timepts', 'expt_id')
ex_files = array2long(expts, 'file_types', 'expt_id')
ex_pars = array2long(expts, 'params', 'expt_id')
# Merge the long-format experiment timepoints, file types, and parameters back together
expt_files = pd.merge(ex_times, ex_files, on='expt_id')
expt_files = pd.merge(expt_files, ex_pars, on='expt_id', how='outer')
expt_files = pd.merge(expt_files, expts.drop(['timepts', 'file_types', 'params'], axis = 1), on='expt_id')
expt_files
pts = array2long(patients, var='timepoints', id_var = 'patient_id')
pts['timepts'] = pts.timepoints.apply(lambda x: str(round(x)))
pts.timepts[0]
files = pd.merge(pts, expt_files, left_on='timepts', right_on='timepts', how='left')
files.head()
def create_filename(row):
return row.patient_id + "_T" + row.timepts + "_" + row.expt_id + row.file_types
files['filename'] = files.apply(create_filename, axis = 1)
files.shape
# [ Create random dummy files ] ---------------------------------------------------------------------------
dummy_content = "This is not a real file."
np.random.seed(20180316)
ids = files.patient_id.unique()
ids
for patient_id in ids:
print('\n'+ patient_id)
# Filter by a specific patient_id
subset = files[files.patient_id == patient_id]
num_files = len(subset)
# For each patient, choose a random number of files to create
num2gen = np.random.randint(1, num_files)
file_idx = np.random.choice(num_files, size = num2gen, replace=False)
sel_files = subset.iloc[file_idx]
# loop over the selected file names and create fake files
for idx, row in sel_files.iterrows():
filename = row.dropbox + row.filename
print(filename)
dbx.files_upload(dummy_content.encode('utf-8'), filename, mode=dropbox.files.WriteMode.overwrite)
# [ Find if the files have already been uploaded ] --------------------------------------------------------
folders = []
for entry in dbx.files_list_folder(dropbox_folder + "/Data").entries:
print(entry)
if(type(entry) == dropbox.files.FolderMetadata):
folders.append(entry.path_display)
dbx.files_list_folder('').entries
fnames = []
fpaths = []
fdates = []
for folder in folders:
print(folder)
for entry in dbx.files_list_folder(folder).entries:
fnames.append(entry.name)
fpaths.append(entry.path_display)
fdates.append(entry.server_modified)
dbx_files = pd.DataFrame({'filename': fnames, 'date_modified': fdates})
files = pd.merge(files, dbx_files, on='filename', how='left')
files['status'] = | pd.notnull(files.date_modified) | pandas.notnull |
#####################################################.
# This file storesthe CSEARCH class #
# used in conformer generation #
#####################################################.
import math
import os
import sys
import time
import shutil
import subprocess
import glob
from pathlib import Path
import pandas as pd
import concurrent.futures as futures
import multiprocessing as mp
from progress.bar import IncrementalBar
from rdkit.Chem import AllChem as Chem
from rdkit.Chem import rdmolfiles, rdMolTransforms, PropertyMol, rdDistGeom, Lipinski
from aqme.filter import filters, ewin_filter, pre_E_filter, RMSD_and_E_filter
from aqme.csearch_utils import (
prepare_direct_smi,
prepare_smiles_files,
prepare_csv_files,
prepare_cdx_files,
prepare_com_files,
prepare_sdf_files,
prepare_pdb_files,
template_embed,
creation_of_dup_csv_csearch,
minimize_rdkit_energy,
com_2_xyz,
)
from aqme.fullmonte import generating_conformations_fullmonte, realign_mol
from aqme.utils import (
rules_get_charge,
substituted_mol,
load_variables,
set_metal_atomic_number,
getDihedralMatches,
smi_to_mol,
)
from aqme.crest import crest_opt
class csearch:
"""
Class containing all the functions from the CSEARCH module.
Parameters
----------
kwargs : argument class
Specify any arguments from the CSEARCH module (for a complete list of variables, visit the AQME documentation)
"""
def __init__(self, **kwargs):
start_time_overall = time.time()
# load default and user-specified variables
self.args = load_variables(kwargs, "csearch")
if self.args.program.lower() not in ["rdkit", "summ", "fullmonte", "crest"]:
self.args.log.write(
"\nx Program not supported for CSEARCH conformer generation! Specify: program='rdkit' (or summ, fullmonte, crest)"
)
self.args.log.finalize()
sys.exit()
if self.args.smi is None and self.args.input == "":
self.args.log.write(
"\nx Program requires either a SMILES or an input file to proceed! Please look up acceptable file formats. Specify: smi='CCC' (or input='filename.csv')"
)
self.args.log.finalize()
sys.exit()
os.chdir(self.args.w_dir_main)
# load files from AQME input
if self.args.smi is not None:
csearch_files = [self.args.name]
else:
csearch_files = glob.glob(self.args.input)
for csearch_file in csearch_files:
# load jobs for conformer generation
if self.args.smi is not None:
job_inputs = prepare_direct_smi(self.args)
else:
job_inputs = self.load_jobs(csearch_file)
self.args.log.write(
f"\nStarting CSEARCH with {len(job_inputs)} job(s) (SDF, XYZ, CSV, etc. files might contain multiple jobs/structures inside)\n"
)
# runs the conformer sampling with multiprocessors
self.run_csearch(job_inputs)
# store all the information into a CSV file
csearch_file_no_path = (
csearch_file.replace("/", "\\").split("\\")[-1].split(".")[0]
)
self.csearch_csv_file = self.args.w_dir_main.joinpath(
f"CSEARCH-Data-{csearch_file_no_path}.csv"
)
self.final_dup_data.to_csv(self.csearch_csv_file, index=False)
elapsed_time = round(time.time() - start_time_overall, 2)
self.args.log.write(f"\nTime CSEARCH: {elapsed_time} seconds\n")
self.args.log.finalize()
# this is added to avoid path problems in jupyter notebooks
os.chdir(self.args.initial_dir)
def load_jobs(self, csearch_file):
"""
Load information of the different molecules for conformer generation
"""
SUPPORTED_INPUTS = [
".smi",
".sdf",
".cdx",
".csv",
".com",
".gjf",
".mol",
".mol2",
".xyz",
".txt",
".yaml",
".yml",
".rtf",
".pdb",
]
file_format = os.path.splitext(csearch_file)[1]
# Checks
if file_format.lower() not in SUPPORTED_INPUTS:
self.args.log.write("\nx Input filetype not currently supported!")
self.args.log.finalize()
sys.exit()
if not os.path.exists(csearch_file):
self.args.log.write("\nx Input file not found!")
self.args.log.finalize()
sys.exit()
# if large system increase stack size
if self.args.stacksize != "1G":
os.environ["OMP_STACKSIZE"] = self.args.stacksize
smi_derivatives = [".smi", ".txt", ".yaml", ".yml", ".rtf"]
Extension2inputgen = dict()
for key in smi_derivatives:
Extension2inputgen[key] = prepare_smiles_files
Extension2inputgen[".csv"] = prepare_csv_files
Extension2inputgen[".cdx"] = prepare_cdx_files
Extension2inputgen[".gjf"] = prepare_com_files
Extension2inputgen[".com"] = prepare_com_files
Extension2inputgen[".xyz"] = prepare_com_files
Extension2inputgen[".sdf"] = prepare_sdf_files
Extension2inputgen[".mol"] = prepare_sdf_files
Extension2inputgen[".mol2"] = prepare_sdf_files
Extension2inputgen[".pdb"] = prepare_pdb_files
# Prepare the jobs
prepare_function = Extension2inputgen[file_format]
job_inputs = prepare_function(self.args, csearch_file)
return job_inputs
def run_csearch(self, job_inputs):
# create the dataframe to store the data
self.final_dup_data = creation_of_dup_csv_csearch(self.args.program)
bar = IncrementalBar(
"o Number of finished jobs from CSEARCH", max=len(job_inputs)
)
with futures.ProcessPoolExecutor(
max_workers=self.args.max_workers, mp_context=mp.get_context("spawn")
) as executor:
# Submit a set of asynchronous jobs
jobs = []
# Submit the Jobs
for job_input in job_inputs:
(
smi_,
name_,
charge_,
mult_,
constraints_atoms_,
constraints_dist_,
constraints_angle_,
constraints_dihedral_,
) = job_input
job = executor.submit(
self.compute_confs(
smi_,
name_,
charge_,
mult_,
constraints_atoms_,
constraints_dist_,
constraints_angle_,
constraints_dihedral_,
)
)
jobs.append(job)
bar.next()
bar.finish()
# removing temporary files
temp_files = [
"gfn2.out",
"xTB_opt.traj",
"ANI1_opt.traj",
"wbo",
"xtbrestart",
"ase.opt",
"xtb.opt",
"gfnff_topo",
]
for file in temp_files:
if os.path.exists(file):
os.remove(file)
def compute_confs(
self,
smi,
name,
charge,
mult,
constraints_atoms,
constraints_dist,
constraints_angle,
constraints_dihedral,
):
"""
Function to start conformer generation
"""
if self.args.smi is not None:
(
mol,
self.args.constraints_dist,
self.args.constraints_angle,
self.args.constraints_dihedral,
) = smi_to_mol(
smi,
name,
self.args.program,
self.args.log,
constraints_dist,
constraints_angle,
constraints_dihedral,
)
else:
if self.args.input.split(".")[1] in [
"smi",
"csv",
"cdx",
"txt",
"yaml",
"yml",
"rtf",
]:
(
mol,
self.args.constraints_dist,
self.args.constraints_angle,
self.args.constraints_dihedral,
) = smi_to_mol(
smi,
name,
self.args.program,
self.args.log,
constraints_dist,
constraints_angle,
constraints_dihedral,
)
else:
(
mol,
self.args.constraints_dist,
self.args.constraints_angle,
self.args.constraints_dihedral,
) = (smi, constraints_dist, constraints_angle, constraints_dihedral)
if mol is None:
self.args.log.write(
f"\nx Failed to convert the provided SMILES ({smi}) to RDkit Mol object! Please check the starting smiles."
)
self.args.log.finalize()
sys.exit()
if self.args.charge is None:
self.args.charge = charge
if self.args.mult is None:
self.args.mult = mult
if self.args.destination is None:
self.csearch_folder = Path(self.args.w_dir_main).joinpath(
f"CSEARCH/{self.args.program}"
)
else:
self.csearch_folder = Path(self.args.destination)
self.csearch_folder.mkdir(exist_ok=True, parents=True)
if self.args.program in ["crest"] and self.args.smi is None:
if self.args.input.split(".")[1] in ["pdb", "mol2", "mol", "sdf"]:
command_pdb = [
"obabel",
f'-i{self.args.input.split(".")[1]}',
f'{name}.{self.args.input.split(".")[1]}',
"-oxyz",
f"-O{name}_{self.args.program}.xyz",
]
subprocess.run(
command_pdb,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
elif self.args.input.split(".")[1] in ["gjf", "com"]:
xyz_file, _, _ = com_2_xyz(
f'{name}.{self.args.input.split(".")[1]}'
)
shutil.move(xyz_file, f"{name}_{self.args.program}.xyz")
elif self.args.input.split(".")[1] == "xyz":
shutil.copy(f"{name}.xyz", f"{name}_{self.args.program}.xyz")
else:
rdmolfiles.MolToXYZFile(mol, f"{name}_{self.args.program}.xyz")
# Converts each line to an RDKit mol object
if self.args.verbose:
self.args.log.write(f"\n -> Input Molecule {Chem.MolToSmiles(mol)}")
if self.args.metal_complex:
for _ in self.args.metal_atoms:
self.args.metal_idx.append(None)
self.args.complex_coord.append(None)
self.args.metal_sym.append(None)
(
self.args.metal_idx,
self.args.complex_coord,
self.args.metal_sym,
) = substituted_mol(self, mol, "I")
# get pre-determined geometries for metal complexes
accepted_complex_types = [
"squareplanar",
"squarepyramidal",
"linear",
"trigonalplanar",
]
if self.args.complex_type in accepted_complex_types:
count_metals = 0
for metal_idx_ind in self.args.metal_idx:
if metal_idx_ind is not None:
count_metals += 1
if count_metals == 1:
template_kwargs = dict()
template_kwargs["complex_type"] = self.args.complex_type
template_kwargs["metal_idx"] = self.args.metal_idx
template_kwargs["maxsteps"] = self.args.opt_steps_rdkit
template_kwargs["heavyonly"] = self.args.heavyonly
template_kwargs["maxmatches"] = self.args.max_matches_rmsd
template_kwargs["mol"] = mol
items = template_embed(self, **template_kwargs)
total_data = creation_of_dup_csv_csearch(self.args.program)
for mol_obj, name_in, coord_map, alg_map, template in zip(*items):
data = self.conformer_generation(
mol_obj,
name_in,
constraints_atoms,
constraints_dist,
constraints_angle,
constraints_dihedral,
coord_map,
alg_map,
template,
)
frames = [total_data, data]
total_data = pd.concat(frames, sort=True)
else:
self.args.log.write(
"\nx Cannot use templates for complexes involving more than 1 metal or for organic molecueles."
)
total_data = None
else:
total_data = self.conformer_generation(
mol,
name,
constraints_atoms,
constraints_dist,
constraints_angle,
constraints_dihedral,
)
else:
total_data = self.conformer_generation(
mol,
name,
constraints_atoms,
constraints_dist,
constraints_angle,
constraints_dihedral,
)
# Updates the dataframe with information about conformer generation
frames = [self.final_dup_data, total_data]
self.final_dup_data = | pd.concat(frames, ignore_index=True, sort=True) | pandas.concat |
# This is a sample Python program that trains a simple TensorFlow California Housing model and run Batch Transform job.
# This implementation will work on your *local computer* or in the *AWS Cloud*.
# To run training and inference *locally* set: `config = get_config(LOCAL_MODE)`
# To run training and inference on the *cloud* set: `config = get_config(CLOUD_MODE)` and set a valid IAM role value in get_config()
#
# Prerequisites:
# 1. Install required Python packages:
# `pip install -r requirements.txt`
# 2. Docker Desktop installed and running on your computer:
# `docker ps`
# 3. You should have AWS credentials configured on your local machine
# in order to be able to pull the docker image from ECR.
###############################################################################################
import os
import pandas as pd
import sklearn.model_selection
from sagemaker.tensorflow import TensorFlow
from sklearn.datasets import *
from sklearn.preprocessing import StandardScaler
DUMMY_IAM_ROLE = 'arn:aws:iam::111111111111:role/service-role/AmazonSageMaker-ExecutionRole-20200101T000001'
def download_training_and_eval_data():
if os.path.isfile('./data/train/x_train.csv') and \
os.path.isfile('./data/test/x_test.csv') and \
os.path.isfile('./data/train/y_train.csv') and \
os.path.isfile('./data/test/y_test.csv'):
print('Training and evaluation datasets exist. Skipping Download')
else:
print('Downloading training and evaluation dataset')
data_dir = os.path.join(os.getcwd(), 'data')
os.makedirs(data_dir, exist_ok=True)
train_dir = os.path.join(os.getcwd(), 'data/train')
os.makedirs(train_dir, exist_ok=True)
test_dir = os.path.join(os.getcwd(), 'data/test')
os.makedirs(test_dir, exist_ok=True)
input_dir = os.path.join(os.getcwd(), 'data/input')
os.makedirs(input_dir, exist_ok=True)
output_dir = os.path.join(os.getcwd(), 'data/output')
os.makedirs(output_dir, exist_ok=True)
data_set = fetch_california_housing()
X = pd.DataFrame(data_set.data, columns=data_set.feature_names)
Y = pd.DataFrame(data_set.target)
# We partition the dataset into 2/3 training and 1/3 test set.
x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(X, Y, test_size=0.33)
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
pd.DataFrame(x_train).to_csv(os.path.join(train_dir, 'x_train.csv'), header=None, index=False)
pd.DataFrame(x_test).to_csv(os.path.join(test_dir, 'x_test.csv'), header=None, index=False)
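# --- Illustrative sketch, not part of the original SageMaker sample ---
# The block above fits StandardScaler on the training split only and applies
# the same transform to the test split before writing headerless CSVs. A
# minimal, self-contained version of that pattern (out_dir is a hypothetical
# argument, not a path from the sample):
import os
import pandas as pd
from sklearn.preprocessing import StandardScaler

def _demo_scale_and_save(x_train, x_test, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    scaler = StandardScaler().fit(x_train)  # fit on the training split only to avoid leakage
    pd.DataFrame(scaler.transform(x_train)).to_csv(
        os.path.join(out_dir, "x_train.csv"), header=False, index=False)
    pd.DataFrame(scaler.transform(x_test)).to_csv(
        os.path.join(out_dir, "x_test.csv"), header=False, index=False)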
from datetime import datetime
import numpy as np
import pytest
import pandas as pd
from pandas import (
Categorical,
CategoricalIndex,
DataFrame,
Index,
MultiIndex,
Series,
qcut,
)
import pandas._testing as tm
def cartesian_product_for_groupers(result, args, names, fill_value=np.NaN):
"""Reindex to a cartesian product for the groupers,
preserving the nature (Categorical) of each grouper
"""
def f(a):
if isinstance(a, (CategoricalIndex, Categorical)):
categories = a.categories
a = Categorical.from_codes(
np.arange(len(categories)), categories=categories, ordered=a.ordered
)
return a
index = MultiIndex.from_product(map(f, args), names=names)
return result.reindex(index, fill_value=fill_value).sort_index()
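# --- Illustrative sketch, not part of the original test module ---
# The helper above expands a groupby result so that every combination of
# grouper levels appears, including combinations with no observations. The
# core of that is MultiIndex.from_product followed by reindex with a fill
# value, e.g.:
def _demo_cartesian_reindex():
    full_idx = MultiIndex.from_product([["a", "b"], [1, 2]], names=["k1", "k2"])
    partial = Series(
        [10], index=MultiIndex.from_tuples([("a", 1)], names=["k1", "k2"])
    )
    return partial.reindex(full_idx, fill_value=0).sort_index()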
_results_for_groupbys_with_missing_categories = {
# This maps the builtin groupby functions to their expected outputs for
# missing categories when they are called on a categorical grouper with
# observed=False. Some functions are expected to return NaN, some zero.
# These expected values can be used across several tests (i.e. they are
# the same for SeriesGroupBy and DataFrameGroupBy) but they should only be
# hardcoded in one place.
"all": np.NaN,
"any": np.NaN,
"count": 0,
"corrwith": np.NaN,
"first": np.NaN,
"idxmax": np.NaN,
"idxmin": np.NaN,
"last": np.NaN,
"mad": np.NaN,
"max": np.NaN,
"mean": np.NaN,
"median": np.NaN,
"min": np.NaN,
"nth": np.NaN,
"nunique": 0,
"prod": np.NaN,
"quantile": np.NaN,
"sem": np.NaN,
"size": 0,
"skew": np.NaN,
"std": np.NaN,
"sum": 0,
"var": np.NaN,
}
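# --- Illustrative sketch, not part of the original test module ---
# The expected values above only matter because of the observed= keyword:
# with observed=False a categorical grouper produces one row per *category*,
# including categories that never appear in the data, while observed=True
# keeps only the observed ones. A minimal standalone demonstration:
def _demo_observed_keyword():
    cats = Categorical(["a", "a", "b"], categories=["a", "b", "z"])
    df = DataFrame({"key": cats, "val": [1, 2, 3]})
    full = df.groupby("key", observed=False)["val"].sum()  # index: a, b, z (z -> 0)
    seen = df.groupby("key", observed=True)["val"].sum()   # index: a, b
    return full, seen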
def test_apply_use_categorical_name(df):
cats = qcut(df.C, 4)
def get_stats(group):
return {
"min": group.min(),
"max": group.max(),
"count": group.count(),
"mean": group.mean(),
}
result = df.groupby(cats, observed=False).D.apply(get_stats)
assert result.index.names[0] == "C"
def test_basic():
cats = Categorical(
["a", "a", "a", "b", "b", "b", "c", "c", "c"],
categories=["a", "b", "c", "d"],
ordered=True,
)
data = DataFrame({"a": [1, 1, 1, 2, 2, 2, 3, 4, 5], "b": cats})
exp_index = CategoricalIndex(list("abcd"), name="b", ordered=True)
expected = DataFrame({"a": [1, 2, 4, np.nan]}, index=exp_index)
result = data.groupby("b", observed=False).mean()
tm.assert_frame_equal(result, expected)
cat1 = Categorical(["a", "a", "b", "b"], categories=["a", "b", "z"], ordered=True)
cat2 = Categorical(["c", "d", "c", "d"], categories=["c", "d", "y"], ordered=True)
df = DataFrame({"A": cat1, "B": cat2, "values": [1, 2, 3, 4]})
# single grouper
gb = df.groupby("A", observed=False)
exp_idx = CategoricalIndex(["a", "b", "z"], name="A", ordered=True)
expected = DataFrame({"values": Series([3, 7, 0], index=exp_idx)})
result = gb.sum()
tm.assert_frame_equal(result, expected)
# GH 8623
x = DataFrame(
[[1, "John P. Doe"], [2, "Jane Dove"], [1, "John P. Doe"]],
columns=["person_id", "person_name"],
)
x["person_name"] = Categorical(x.person_name)
g = x.groupby(["person_id"], observed=False)
result = g.transform(lambda x: x)
tm.assert_frame_equal(result, x[["person_name"]])
result = x.drop_duplicates("person_name")
expected = x.iloc[[0, 1]]
tm.assert_frame_equal(result, expected)
def f(x):
return x.drop_duplicates("person_name").iloc[0]
result = g.apply(f)
expected = x.iloc[[0, 1]].copy()
expected.index = Index([1, 2], name="person_id")
expected["person_name"] = expected["person_name"].astype("object")
tm.assert_frame_equal(result, expected)
# GH 9921
# Monotonic
df = DataFrame({"a": [5, 15, 25]})
import copy
import json
import os
import warnings
from dataclasses import dataclass
from typing import Optional, Union
import numpy as np
import tiledb
from tiledb import TileDBError, libtiledb
def check_dataframe_deps():
pd_error = """Pandas version >= 1.0 required for dataframe functionality.
Please `pip install pandas>=1.0` to proceed."""
pa_error = """PyArrow version >= 1.0 is suggested for dataframe functionality.
Please `pip install pyarrow>=1.0`."""
from distutils.version import LooseVersion
try:
import pandas as pd
except ImportError:
raise Exception(pd_error)
if LooseVersion(pd.__version__) < LooseVersion("1.0"):
raise Exception(pd_error)
try:
import pyarrow as pa
if LooseVersion(pa.__version__) < LooseVersion("1.0"):
warnings.warn(pa_error)
except ImportError:
warnings.warn(pa_error)
# Note: 'None' is used to indicate optionality for many of these options
# For example, if the `sparse` argument is unspecified we will default
# to False (dense) unless the input has string or heterogeneous indexes.
TILEDB_KWARG_DEFAULTS = {
"ctx": None,
"sparse": None,
"index_dims": None,
"allows_duplicates": True,
"mode": "ingest",
"attr_filters": True,
"dim_filters": True,
"coords_filters": True,
"offsets_filters": True,
"full_domain": False,
"tile": None,
"row_start_idx": None,
"fillna": None,
"column_types": None,
"varlen_types": None,
"capacity": None,
"date_spec": None,
"cell_order": "row-major",
"tile_order": "row-major",
"timestamp": None,
"debug": None,
}
def parse_tiledb_kwargs(kwargs):
parsed_args = dict(TILEDB_KWARG_DEFAULTS)
for key in TILEDB_KWARG_DEFAULTS.keys():
if key in kwargs:
parsed_args[key] = kwargs.pop(key)
return parsed_args
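# --- Illustrative usage sketch, not part of the original module ---
# parse_tiledb_kwargs splits the recognized ingestion options away from the
# remaining keyword arguments: recognized keys are popped from the caller's
# dict (falling back to TILEDB_KWARG_DEFAULTS), unknown keys are left behind
# for later validation.
def _demo_parse_tiledb_kwargs():
    kwargs = {"sparse": True, "tile": 1000, "unrelated": 1}
    opts = parse_tiledb_kwargs(kwargs)
    # opts["sparse"] is True, opts["tile"] == 1000, and kwargs still
    # contains {"unrelated": 1}
    return opts, kwargs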
@dataclass(frozen=True)
class ColumnInfo:
dtype: np.dtype
repr: Optional[str] = None
nullable: bool = False
var: bool = False
@classmethod
def from_values(cls, array_like, varlen_types=()):
from pandas.api import types as pd_types
if pd_types.is_object_dtype(array_like):
# Note: this does a full scan of the column... not sure what else to do here
# because Pandas allows mixed string column types (and actually has
# problems w/ allowing non-string types in object columns)
inferred_dtype = pd_types.infer_dtype(array_like)
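# --- Illustrative sketch, not part of the original module ---
# infer_dtype is what lets the code above distinguish an object column that is
# really all strings from one holding arbitrary mixed Python objects, which the
# numpy dtype alone cannot tell you. A small standalone demonstration:
def _demo_infer_dtype():
    import pandas as pd
    from pandas.api import types as pd_types
    strings = pd.Series(["a", "b", "c"], dtype=object)
    mixed = pd.Series(["a", 1, 2.5], dtype=object)
    # e.g. 'string' for the first and a 'mixed*' kind for the second,
    # depending on the pandas version
    return pd_types.infer_dtype(strings), pd_types.infer_dtype(mixed)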
# Copyright 2019 Toyota Research Institute. All rights reserved.
"""
Module and scripts for generating descriptors (quantities listed
in cell_analysis.m) from cycle-level summary statistics.
Usage:
featurize [INPUT_JSON]
Options:
-h --help Show this screen
--version Show version
The `featurize` script will generate features according to the methods
contained in beep.featurize. It places output files corresponding to
features in `/data-share/features/`.
The input json must contain the following fields
* `file_list` - a list of processed cycler runs for which to generate features
The output json file will contain the following:
* `file_list` - a list of filenames corresponding to the locations of the features
Example:
```angular2
$ featurize '{"invalid_file_list": ["/data-share/renamed_cycler_files/FastCharge/FastCharge_0_CH33.csv",
"/data-share/renamed_cycler_files/FastCharge/FastCharge_1_CH44.csv"],
"file_list": ["/data-share/structure/FastCharge_2_CH29_structure.json"]}'
{"file_list": ["/data-share/features/FastCharge_2_CH29_full_model_features.json"]}
```
"""
import os
import json
import numpy as np
import pandas as pd
from docopt import docopt
from monty.json import MSONable
from monty.serialization import loadfn, dumpfn
from scipy.stats import skew, kurtosis
from beep.collate import scrub_underscore_suffix, add_suffix_to_filename
from beep.utils import KinesisEvents
from beep import logger, ENVIRONMENT, __version__
s = {'service': 'DataAnalyzer'}
class DegradationPredictor(MSONable):
"""
Object corresponding to feature matrix. Includes constructors
to initialize the feature vectors.
Attributes:
name (str): predictor object name.
X (pandas.DataFrame): data as records x features.
y (pandas.DataFrame): targets.
feature_labels (list): feature labels.
predict_only (bool): True/False to specify predict/train mode.
prediction_type (str): Type of regression - 'single' vs 'multi'.
predicted_quantity (str): 'cycle' or 'capacity'.
nominal_capacity (float):
"""
def __init__(self, name, X, feature_labels=None, y=None, nominal_capacity=1.1,
predict_only=False, predicted_quantity="cycle", prediction_type="multi"):
"""
Args:
name (str): predictor object name
X (pandas.DataFrame): features in DataFrame format.
name (str): name of method for featurization.
y (pandas.Dataframe or float): one or more outcomes.
predict_only (bool): True/False to specify predict/train mode.
predicted_quantity (str): 'cycle' or 'capacity'.
prediction_type (str): Type of regression - 'single' vs 'multi'.
"""
self.name = name
self.X = X
self.feature_labels = feature_labels
self.predict_only = predict_only
self.prediction_type = prediction_type
self.predicted_quantity = predicted_quantity
self.y = y
self.nominal_capacity = nominal_capacity
@classmethod
def from_processed_cycler_run_file(cls, path, features_label='full_model', predict_only=False,
predicted_quantity='cycle', prediction_type='multi',
diagnostic_features=False):
"""
Args:
path (str): string corresponding to file path with ProcessedCyclerRun object.
features_label (str): name of method for featurization.
predict_only (bool): True/False to specify predict/train mode.
predicted_quantity (str): 'cycle' or 'capacity'.
prediction_type (str): Type of regression - 'single' vs 'multi'.
diagnostic_features (bool): whether to compute diagnostic features.
"""
processed_cycler_run = loadfn(path)
if features_label == 'full_model':
return cls.init_full_model(processed_cycler_run, predict_only=predict_only,
predicted_quantity=predicted_quantity,
diagnostic_features=diagnostic_features,
prediction_type=prediction_type)
else:
raise NotImplementedError
@classmethod
def init_full_model(cls, processed_cycler_run, init_pred_cycle=10, mid_pred_cycle=91,
final_pred_cycle=100, predict_only=False, prediction_type='multi',
predicted_quantity="cycle", diagnostic_features=False):
"""
Generate features listed in early prediction manuscript
Args:
processed_cycler_run (beep.structure.ProcessedCyclerRun): information about cycler run
init_pred_cycle (int): index of initial cycle index used for predictions
mid_pred_cycle (int): index of intermediate cycle index used for predictions
final_pred_cycle (int): index of highest cycle index used for predictions
predict_only (bool): whether or not to include cycler life in the object
prediction_type (str): 'single': cycle life to reach 80% capacity.
'multi': remaining capacity at fixed cycles
predicted_quantity (str): quantity being predicted - cycles/capacity
diagnostic_features (bool): whether or not to compute diagnostic features
Returns:
beep.featurize.DegradationPredictor: DegradationPredictor corresponding to the ProcessedCyclerRun file.
"""
assert mid_pred_cycle > 10, 'Insufficient cycles for analysis'
assert final_pred_cycle > mid_pred_cycle, 'Must have final_pred_cycle > mid_pred_cycle'
ifinal = final_pred_cycle - 1 # python indexing
imid = mid_pred_cycle - 1
iini = init_pred_cycle - 1
summary = processed_cycler_run.summary
assert len(processed_cycler_run.summary) > final_pred_cycle, 'cycle count must exceed final_pred_cycle'
cycles_to_average_over = 40 # For nominal capacity, use median discharge capacity of first n cycles
interpolated_df = processed_cycler_run.cycles_interpolated
X = pd.DataFrame(np.zeros((1, 20)))
labels = []
# Discharge capacity, cycle 2 = Q(n=2)
X[0] = summary.discharge_capacity[1]
labels.append("discharge_capacity_cycle_2")
# Max discharge capacity - discharge capacity, cycle 2 = max_n(Q(n)) - Q(n=2)
X[1] = max(summary.discharge_capacity[np.arange(final_pred_cycle)] - summary.discharge_capacity[1])
labels.append("max_discharge_capacity_difference")
# Discharge capacity, cycle 100 = Q(n=100)
X[2] = summary.discharge_capacity[ifinal]
labels.append("discharge_capacity_cycle_100")
# Feature representing time-temperature integral over cycles 2 to 100
X[3] = np.nansum(summary.time_temperature_integrated[np.arange(final_pred_cycle)])
labels.append("integrated_time_temperature_cycles_1:100")
# Mean of charge times of first 5 cycles
X[4] = np.nanmean(summary.charge_duration[1:6])
labels.append("charge_time_cycles_1:5")
# Descriptors based on capacity loss between cycles 10 and 100.
Qd_final = interpolated_df.discharge_capacity[interpolated_df.cycle_index == ifinal]
Qd_10 = interpolated_df.discharge_capacity[interpolated_df.cycle_index == 9]
Vd = interpolated_df.voltage[interpolated_df.cycle_index == iini]
Qd_diff = Qd_final.values - Qd_10.values
X[5] = np.log10(np.abs(np.min(Qd_diff))) # Minimum
labels.append("abs_min_discharge_capacity_difference_cycles_2:100")
X[6] = np.log10(np.abs(np.mean(Qd_diff))) # Mean
labels.append("abs_mean_discharge_capacity_difference_cycles_2:100")
X[7] = np.log10(np.abs(np.var(Qd_diff))) # Variance
labels.append("abs_variance_discharge_capacity_difference_cycles_2:100")
X[8] = np.log10(np.abs(skew(Qd_diff))) # Skewness
labels.append("abs_skew_discharge_capacity_difference_cycles_2:100")
X[9] = np.log10(np.abs(kurtosis(Qd_diff))) # Kurtosis
labels.append("abs_kurtosis_discharge_capacity_difference_cycles_2:100")
X[10] = np.log10(np.abs(Qd_diff[0])) # First difference
labels.append("abs_first_discharge_capacity_difference_cycles_2:100")
X[11] = max(summary.temperature_maximum[list(range(1, final_pred_cycle))]) # Max T
labels.append("max_temperature_cycles_1:100")
X[12] = min(summary.temperature_minimum[list(range(1, final_pred_cycle))]) # Min T
labels.append("min_temperature_cycles_1:100")
# Slope and intercept of linear fit to discharge capacity as a fn of cycle #, cycles 2 to 100
X[13], X[14] = np.polyfit(
list(range(1, final_pred_cycle)),
summary.discharge_capacity[list(range(1, final_pred_cycle))], 1)
labels.append("slope_discharge_capacity_cycle_number_2:100")
labels.append("intercept_discharge_capacity_cycle_number_2:100")
# Slope and intercept of linear fit to discharge capacity as a fn of cycle #, cycles 91 to 100
X[15], X[16] = np.polyfit(
list(range(imid, final_pred_cycle)),
summary.discharge_capacity[list(range(imid, final_pred_cycle))], 1)
labels.append("slope_discharge_capacity_cycle_number_91:100")
labels.append("intercept_discharge_capacity_cycle_number_91:100")
IR_trend = summary.dc_internal_resistance[list(range(1, final_pred_cycle))]
if any(v == 0 for v in IR_trend):
IR_trend[IR_trend == 0] = np.nan
# Internal resistance minimum
X[17] = np.nanmin(IR_trend)
labels.append("min_internal_resistance_cycles_2:100")
# Internal resistance at cycle 2
X[18] = summary.dc_internal_resistance[1]
labels.append("internal_resistance_cycle_2")
# Internal resistance at cycle 100 - cycle 2
X[19] = summary.dc_internal_resistance[ifinal] - summary.dc_internal_resistance[1]
labels.append("internal_resistance_difference_cycles_2:100")
if diagnostic_features:
X_diagnostic, labels_diagnostic = init_diagnostic_features(processed_cycler_run)
X = pd.concat([X, X_diagnostic], axis=1, sort=False)
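# --- Illustrative sketch, not part of the original featurizer ---
# Appending the diagnostic block above is a plain column-wise concatenation:
# both frames share the same single-row index, so axis=1 simply widens the
# feature vector. Minimal standalone version with made-up feature blocks:
def _demo_feature_concat():
    base = pd.DataFrame(np.zeros((1, 3)), columns=["f0", "f1", "f2"])
    diagnostic = pd.DataFrame(np.ones((1, 2)), columns=["d0", "d1"])
    return pd.concat([base, diagnostic], axis=1, sort=False)  # shape (1, 5)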
"""Contains the pytest fixtures for running tests"""
from pvfactors.geometry.base import ShadeCollection, PVSegment, PVSurface
from pvfactors.geometry.pvrow import PVRowSide
from pvfactors.geometry.pvarray import OrderedPVArray
import pytest
import os
import pandas as pd
import pvlib
DIR_TEST = os.path.dirname(__file__)
DIR_TEST_DATA = os.path.join(DIR_TEST, 'test_files')
@pytest.fixture(scope='function')
def df_inputs_serial_calculation():
# Import simulation inputs for calculation
filename = "file_test_multiprocessing_inputs.csv"
df_inputs_simulation = pd.read_csv(os.path.join(DIR_TEST_DATA, filename),
index_col=0)
df_inputs_simulation.index = pd.DatetimeIndex(df_inputs_simulation.index)
idx_subset = 10
df_inputs_simulation = df_inputs_simulation.iloc[0:idx_subset, :]
yield df_inputs_simulation
@pytest.fixture(scope='function')
def df_perez_luminance():
""" Example of df_perez_luminance data to be used for tests """
fp = os.path.join(DIR_TEST_DATA, 'file_test_df_perez_luminance.csv')
df_perez_luminance = pd.read_csv(fp, header=[0], index_col=0)
df_perez_luminance.index = (pd.DatetimeIndex(df_perez_luminance.index)
.tz_localize('UTC').tz_convert('Etc/GMT+7')
.tz_localize(None))
yield df_perez_luminance
@pytest.fixture(scope='function')
def pvsegments(shade_collections):
seg_1 = PVSegment(
illum_collection=shade_collections[0])
seg_2 = PVSegment(
shaded_collection=shade_collections[1])
yield seg_1, seg_2
@pytest.fixture(scope='function')
def shade_collections():
illum_col = ShadeCollection([PVSurface([(0, 0), (1, 0)], shaded=False)])
shaded_col = ShadeCollection([PVSurface([(1, 0), (2, 0)], shaded=True)])
yield illum_col, shaded_col
@pytest.fixture(scope='function')
def pvrow_side(pvsegments):
side = PVRowSide(pvsegments)
yield side
@pytest.fixture(scope='function')
def params():
pvarray_parameters = {
'n_pvrows': 3,
'pvrow_height': 2.5,
'pvrow_width': 2.,
'surface_azimuth': 90., # east oriented modules
'axis_azimuth': 0., # axis of rotation towards North
'surface_tilt': 20.,
'gcr': 0.4,
'solar_zenith': 20.,
'solar_azimuth': 90., # sun located in the east
'rho_ground': 0.2,
'rho_front_pvrow': 0.01,
'rho_back_pvrow': 0.03
}
yield pvarray_parameters
@pytest.fixture(scope='function')
def discr_params():
"""Discretized parameters, should have 5 segments on front of first PV row,
and 3 segments on back of second PV row"""
params = {
'n_pvrows': 3,
'pvrow_height': 1.5,
'pvrow_width': 1.,
'surface_tilt': 20.,
'surface_azimuth': 180.,
'gcr': 0.4,
'solar_zenith': 20.,
'solar_azimuth': 90., # sun located in the east
'axis_azimuth': 0., # axis of rotation towards North
'rho_ground': 0.2,
'rho_front_pvrow': 0.01,
'rho_back_pvrow': 0.03,
'cut': {0: {'front': 5}, 1: {'back': 3, 'front': 2}}
}
yield params
@pytest.fixture(scope='function')
def params_direct_shading(params):
params.update({'gcr': 0.6, 'surface_tilt': 60, 'solar_zenith': 60})
yield params
@pytest.fixture(scope='function')
def ordered_pvarray(params):
pvarray = OrderedPVArray.fit_from_dict_of_scalars(params)
yield pvarray
@pytest.fixture(scope='function')
def params_serial():
arguments = {
'n_pvrows': 2,
'pvrow_height': 1.5,
'pvrow_width': 1.,
'axis_azimuth': 0.,
'surface_tilt': 20.,
'surface_azimuth': 90,
'gcr': 0.3,
'solar_zenith': 30.,
'solar_azimuth': 90.,
'rho_ground': 0.22,
'rho_front_pvrow': 0.01,
'rho_back_pvrow': 0.03
}
yield arguments
@pytest.fixture(scope='function')
def fn_report_example():
def fn_report(pvarray): return {
'qinc_front': pvarray.ts_pvrows[1].front.get_param_weighted('qinc'),
'qinc_back': pvarray.ts_pvrows[1].back.get_param_weighted('qinc'),
'qabs_back': pvarray.ts_pvrows[1].back.get_param_weighted('qabs'),
'iso_front': pvarray.ts_pvrows[1]
.front.get_param_weighted('isotropic'),
'iso_back': pvarray.ts_pvrows[1].back.get_param_weighted('isotropic')}
yield fn_report
def generate_tucson_clrsky_met_data():
"""Helper function to generate timeseries data, taken from pvlib
documentation"""
# Define site and timestamps
latitude, longitude, tz, altitude = 32.2, -111, 'US/Arizona', 700
times = pd.date_range(start='2019-01-01 01:00', end='2020-01-01',
freq='60Min', tz=tz)
gcr = 0.3
max_angle = 50
# Calculate MET data
solpos = pvlib.solarposition.get_solarposition(times, latitude, longitude)
apparent_zenith = solpos['apparent_zenith']
azimuth = solpos['azimuth']
airmass = pvlib.atmosphere.get_relative_airmass(apparent_zenith)
pressure = pvlib.atmosphere.alt2pres(altitude)
airmass = pvlib.atmosphere.get_absolute_airmass(airmass, pressure)
linke_turbidity = pvlib.clearsky.lookup_linke_turbidity(times, latitude,
longitude)
dni_extra = pvlib.irradiance.get_extra_radiation(times)
ineichen = pvlib.clearsky.ineichen(
apparent_zenith, airmass, linke_turbidity, altitude, dni_extra)
# Calculate single axis tracking data
trk = pvlib.tracking.singleaxis(apparent_zenith, azimuth,
max_angle=max_angle, gcr=gcr,
backtrack=True)
# Get outputs
df_inputs = pd.concat(
[ineichen[['dni', 'dhi']], solpos[['apparent_zenith', 'azimuth']],
trk[['surface_tilt', 'surface_azimuth']]],
axis=1).rename(columns={'apparent_zenith': 'solar_zenith',
'azimuth': 'solar_azimuth'})
print(df_inputs.head())
df_inputs.to_csv('file_test_inputs_MET_clearsky_tucson.csv')
@pytest.fixture(scope='function')
def df_inputs_clearsky_8760():
tz = 'US/Arizona'
fp = os.path.join(DIR_TEST_DATA,
'file_test_inputs_MET_clearsky_tucson.csv')
df = pd.read_csv(fp, index_col=0)
df.index = pd.DatetimeIndex(df.index)
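# --- Illustrative sketch, not part of the original conftest ---
# Reading a CSV loses the datetime dtype of the index, so the fixtures above
# rebuild it with pd.DatetimeIndex (optionally localizing and converting time
# zones afterwards). Minimal standalone version of that round trip:
def _demo_datetime_index_roundtrip():
    import io
    buf = io.StringIO("timestamp,ghi\n2019-01-01 01:00,0\n2019-01-01 02:00,5\n")
    df = pd.read_csv(buf, index_col=0)
    df.index = pd.DatetimeIndex(df.index)  # restore a datetime64 index
    return df.index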
#!/usr/bin/env python
# coding: utf-8
# In[1]:
# import shapefile
# import finoa
# import shapely
import numpy as np
import matplotlib.pyplot as plt
get_ipython().run_line_magic('matplotlib', 'inline')
import pandas as pd
# from pyproj import Proj, transform
# import stateplane
# from datetime import datetime
import pickle
# import multiprocessing as mp
import gc
# In[2]:
import shapefile
import geopandas as gpd
# # Create crash table
# In[20]:
def create_crash_table(year='15'):
crash_add_gra_final = pd.read_pickle('/home/andy/Documents/sync/PITA_new/Data/crash_' + year + '_keypoint.pkl')
crash_add_gra_final['keplist_0x'] = [i[0] if type(i) == list else -1 for i in
crash_add_gra_final.PennShkeyplist_grav.values]
crash_add_gra_final['keplist_0y'] = [i[1] if type(i) == list else -1 for i in
crash_add_gra_final.PennShkeyplist_grav.values]
crash_14 = pd.read_csv("../crashes/crash20" + year + "/CRASH.txt", low_memory=False)
crash_14_clean = crash_14[pd.notna(crash_14.CRASH_DATE) & pd.notna(crash_14.TIME_OF_DAY)].copy()
crash_14_clean['Time_qry'] = [str(int(i[0])) + str(int(i[1])).zfill(4) for i in
crash_14_clean[['CRASH_DATE', 'TIME_OF_DAY']].values]
crash_time_unique = crash_14_clean.drop_duplicates(['CRN', 'Time_qry'])
crash_severity = pd.read_csv("../crashes/crash20" + year + "/FLAG.txt", low_memory=False)
crash_severity_unique = crash_severity[['CRN', 'FATAL_OR_MAJ_INJ']].drop_duplicates(['CRN', 'FATAL_OR_MAJ_INJ'])
crash_table = crash_add_gra_final.merge(right=crash_time_unique[['CRN', 'Time_qry']], left_on='CRN', right_on='CRN',
how='left').merge(
right=crash_severity_unique, left_on='CRN', right_on='CRN', how='left')
crash_table.to_pickle(
'../CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/important/crash_table_' + year + '.pkl')
# In[25]:
create_crash_table(year='15')
create_crash_table(year='16')
create_crash_table(year='17')
# In[22]:
create_crash_table(year='14')
# In[23]:
crash_table = pd.concat(
[pd.read_pickle('../CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/important/crash_table_' + year + '.pkl') for
year in ['14', '15', '16', '17']], axis=0)
# # Export to db
# In[25]:
from sqlalchemy import Column, Integer, String, ForeignKey, Float, create_engine
crash_engine = create_engine(
'sqlite:////media/andy/b4a51c70-19cd-420f-91e4-c7adf2274c39/WorkZone/Data/CMU_rcrs_all_events_2010-2014-selected/RCRS_2015_17/important/crash_db.db',
echo=False)
# In[48]:
crash_db = crash_table[
['CRN', 'LANE_COUNT', 'RDWY_ORIENT', 'ROUTE', 'SPEED_LIMIT', 'PennShIDs_grav', 'Time_qry', 'keplist_0x',
'keplist_0y', 'FATAL_OR_MAJ_INJ']]
crash_db = crash_db[~crash_db['Time_qry'].astype('str').str.contains('99')]
crash_db['Time_stamp'] = pd.to_datetime(crash_db['Time_qry'], format='%Y%m%d%H%M')
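# --- Illustrative sketch, not part of the original notebook ---
# 'Time_qry' above is built by zero-padding the time of day to four digits and
# appending it to the YYYYMMDD crash date, which is why the explicit format
# string matters when parsing it back into timestamps:
def _demo_parse_time_qry():
    time_qry = pd.Series(["201501020830", "201501021745"])
    return pd.to_datetime(time_qry, format="%Y%m%d%H%M")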
import pandas as pd
import numpy as np
from datetime import datetime
from datetime import timedelta
import settings
import const
from src import util
from src.preprocess import times
from src.preprocess.preprocess import PreProcess
from src.feature_generators.hybrid_fg import HybridFG
from src.methods.hybrid import Hybrid
def predict(date_borders=None):
today = times.to_datetime(datetime.utcnow().date())
if date_borders is None:
date_borders = [times.to_datetime(today + timedelta(days=1))] # tomorrow
batch_folder = "batch\\" if len(date_borders) > 1 else ""
outputs = {d_border: list() for d_border in date_borders}
actuals = {d_border: list() for d_border in date_borders}
smape_total = {d_border: {const.BJ: 0, const.LD: 0} for d_border in date_borders}
smape_count = {d_border: {const.BJ: 0, const.LD: 0} for d_border in date_borders}
for city, pollutants in cases.items():
observed = pd.read_csv(config[getattr(const, city + "_OBSERVED")], sep=';', low_memory=True)
stations = pd.read_csv(config[getattr(const, city + "_STATIONS")], sep=';', low_memory=True)
# keep only a necessary time range for feature generation for only predictable stations
observed = times.select(df=observed, time_key=const.TIME, from_time='18-04-20 00')
observed[const.TIME] = pd.to_datetime(observed[const.TIME], format=const.T_FORMAT)
from hagelslag.data.ModelOutput import ModelOutput
from hagelslag.util.make_proj_grids import read_arps_map_file, read_ncar_map_file, make_proj_grids
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve
from scipy.stats import gamma, bernoulli
from scipy.interpolate import interp1d
from scipy.spatial import cKDTree
from skimage.morphology import disk
from netCDF4 import Dataset, date2num, num2date
import os
from glob import glob
from os.path import join, exists
import json
from datetime import timedelta
try:
from ncepgrib2 import Grib2Encode, dump
grib_support = True
except ImportError:
grib_support = False
class EnsembleMemberProduct(object):
def __init__(self, ensemble_name, model_name, member, run_date, variable, start_date, end_date, path, single_step,
map_file=None, condition_model_name=None, condition_threshold=0.5):
self.ensemble_name = ensemble_name
self.model_name = model_name
self.member = member
self.run_date = run_date
self.variable = variable
self.start_date = start_date
self.end_date = end_date
self.times = pd.DatetimeIndex(start=self.start_date, end=self.end_date, freq="1H")
self.forecast_hours = (self.times - self.run_date).astype('timedelta64[h]').values
self.path = path
self.single_step = single_step
self.track_forecasts = None
self.data = None
self.map_file = map_file
self.proj_dict = None
self.grid_dict = None
self.mapping_data = None
self.condition_model_name = condition_model_name
self.condition_threshold = condition_threshold
self.percentiles = None
self.num_samples = None
self.percentile_data = None
if self.map_file is not None:
if self.map_file[-3:] == "map":
self.proj_dict, self.grid_dict = read_arps_map_file(self.map_file)
else:
self.proj_dict, self.grid_dict = read_ncar_map_file(self.map_file)
self.mapping_data = make_proj_grids(self.proj_dict, self.grid_dict)
self.units = ""
self.nc_patches = None
self.hail_forecast_table = None
def load_data(self, num_samples=1000, percentiles=None):
"""
Args:
num_samples: Number of random samples at each grid point
percentiles: Which percentiles to extract from the random samples
Returns:
"""
self.percentiles = percentiles
self.num_samples = num_samples
if self.model_name.lower() in ["wrf"]:
mo = ModelOutput(self.ensemble_name, self.member, self.run_date, self.variable,
self.start_date, self.end_date, self.path, self.single_step)
mo.load_data()
self.data = mo.data[:]
if mo.units == "m":
self.data *= 1000
self.units = "mm"
else:
self.units = mo.units
else:
if self.track_forecasts is None:
self.load_track_data()
self.units = "mm"
self.data = np.zeros((self.forecast_hours.size,
self.mapping_data["lon"].shape[0],
self.mapping_data["lon"].shape[1]), dtype=np.float32)
if self.percentiles is not None:
self.percentile_data = np.zeros([len(self.percentiles)] + list(self.data.shape))
full_condition_name = "condition_" + self.condition_model_name.replace(" ", "-")
dist_model_name = "dist" + "_" + self.model_name.replace(" ", "-")
for track_forecast in self.track_forecasts:
times = track_forecast["properties"]["times"]
for s, step in enumerate(track_forecast["features"]):
forecast_params = step["properties"][dist_model_name]
if self.condition_model_name is not None:
condition = step["properties"][full_condition_name]
else:
condition = None
forecast_time = self.run_date + timedelta(hours=times[s])
if forecast_time in self.times:
t = np.where(self.times == forecast_time)[0][0]
mask = np.array(step["properties"]["masks"], dtype=int).ravel()
rankings = np.argsort(np.array(step["properties"]["timesteps"]).ravel()[mask==1])
i = np.array(step["properties"]["i"], dtype=int).ravel()[mask == 1][rankings]
j = np.array(step["properties"]["j"], dtype=int).ravel()[mask == 1][rankings]
if rankings.size > 0 and forecast_params[0] > 0.1 and 1 < forecast_params[2] < 100:
raw_samples = np.sort(gamma.rvs(forecast_params[0], loc=forecast_params[1],
scale=forecast_params[2],
size=(num_samples, rankings.size)),
axis=1)
if self.percentiles is None:
samples = raw_samples.mean(axis=0)
if condition >= self.condition_threshold:
self.data[t, i, j] = samples
else:
for p, percentile in enumerate(self.percentiles):
if percentile != "mean":
if condition >= self.condition_threshold:
self.percentile_data[p, t, i, j] = np.percentile(raw_samples, percentile,
axis=0)
else:
if condition >= self.condition_threshold:
self.percentile_data[p, t, i, j] = np.mean(raw_samples, axis=0)
samples = raw_samples.mean(axis=0)
if condition >= self.condition_threshold:
self.data[t, i, j] = samples
def load_track_data(self):
run_date_str = self.run_date.strftime("%Y%m%d")
print("Load track forecasts {0} {1}".format(self.ensemble_name, run_date_str))
track_files = sorted(glob(self.path + "/".join([run_date_str, self.member]) + "/*.json"))
if len(track_files) > 0:
self.track_forecasts = []
for track_file in track_files:
tfo = open(track_file)
self.track_forecasts.append(json.load(tfo))
tfo.close()
else:
self.track_forecasts = []
def load_forecast_csv_data(self, csv_path):
forecast_file = join(csv_path, "hail_forecasts_{0}_{1}_{2}.csv".format(self.ensemble_name,
self.member,
self.run_date.strftime("%Y%m%d")))
if exists(forecast_file):
self.hail_forecast_table = pd.read_csv(forecast_file)
return
def load_forecast_netcdf_data(self, nc_path):
nc_file = join(nc_path, "{0}_{1}_{2}_model_patches.nc".format(self.ensemble_name,
self.run_date.strftime("%Y%m%d%H"),
self.member))
print(nc_file)
nc_patches = Dataset(nc_file)
nc_times = pd.DatetimeIndex(num2date(nc_patches.variables["time"][:],
nc_patches.variables["time"].units))
time_indices = np.in1d(nc_times, self.times)
self.nc_patches = dict()
self.nc_patches["time"] = nc_times[time_indices]
self.nc_patches["forecast_hour"] = nc_patches.variables["time"][time_indices]
self.nc_patches["obj_values"] = nc_patches.variables[nc_patches.object_variable + "_curr"][time_indices]
self.nc_patches["masks"] = nc_patches.variables["masks"][time_indices]
self.nc_patches["i"] = nc_patches.variables["i"][time_indices]
self.nc_patches["j"] = nc_patches.variables["j"][time_indices]
nc_patches.close()
return
def quantile_match(self):
mask_indices = np.where(self.nc_patches["masks"] == 1)
obj_values = self.nc_patches["obj_values"][mask_indices]
percentiles = np.linspace(0.1, 99.9, 100)
obj_per_vals = np.percentile(obj_values, percentiles)
per_func = interp1d(obj_per_vals, percentiles / 100.0, bounds_error=False, fill_value=(0.1, 99.9))
obj_percentiles = np.zeros(self.nc_patches["masks"].shape)
obj_percentiles[mask_indices] = per_func(obj_values)
obj_hail_sizes = np.zeros(obj_percentiles.shape)
model_name = self.model_name.replace(" ", "-")
self.units = "mm"
self.data = np.zeros((self.forecast_hours.size,
self.mapping_data["lon"].shape[0],
self.mapping_data["lon"].shape[1]), dtype=np.float32)
sh = self.forecast_hours.min()
for p in range(obj_hail_sizes.shape[0]):
if self.hail_forecast_table.loc[p, self.condition_model_name.replace(" ", "-") + "_conditionthresh"] > 0.5:
patch_mask = np.where(self.nc_patches["masks"][p] == 1)
obj_hail_sizes[p,
patch_mask[0],
patch_mask[1]] = gamma.ppf(obj_percentiles[p,
patch_mask[0],
patch_mask[1]],
self.hail_forecast_table.loc[p,
model_name + "_shape"],
self.hail_forecast_table.loc[p,
model_name + "_location"],
self.hail_forecast_table.loc[p,
model_name + "_scale"])
self.data[self.nc_patches["forecast_hour"][p] - sh,
self.nc_patches["i"][p, patch_mask[0], patch_mask[1]],
self.nc_patches["j"][p, patch_mask[0], patch_mask[1]]] = obj_hail_sizes[p, patch_mask[0], patch_mask[1]]
return
def neighborhood_probability(self, threshold, radius):
"""
Calculate a probability based on the number of grid points in an area that exceed a threshold.
Args:
threshold:
radius:
Returns:
"""
weights = disk(radius, dtype=np.uint8)
thresh_data = np.zeros(self.data.shape[1:], dtype=np.uint8)
neighbor_prob = np.zeros(self.data.shape, dtype=np.float32)
for t in np.arange(self.data.shape[0]):
thresh_data[self.data[t] >= threshold] = 1
maximized = fftconvolve(thresh_data, weights, mode="same")
maximized[maximized > 1] = 1
maximized[maximized < 1] = 0
neighbor_prob[t] = fftconvolve(maximized, weights, mode="same")
thresh_data[:] = 0
neighbor_prob[neighbor_prob < 1] = 0
neighbor_prob /= weights.sum()
return neighbor_prob
def period_max_neighborhood_probability(self, threshold, radius):
weights = disk(radius, dtype=np.uint8)
thresh_data = np.zeros(self.data.shape[1:], dtype=np.uint8)
thresh_data[self.data.max(axis=0) >= threshold] = 1
maximized = fftconvolve(thresh_data, weights, mode="same")
maximized[maximized > 1] = 1
maximized[maximized < 1] = 0
neighborhood_prob = fftconvolve(maximized, weights, mode="same")
neighborhood_prob[neighborhood_prob < 1] = 0
neighborhood_prob /= weights.sum()
return neighborhood_prob
def period_surrogate_severe_prob(self, threshold, radius, sigma, stagger):
i_grid, j_grid = np.indices(self.data.shape[1:])
max_data = self.data.max(axis=0)
max_points = np.array(np.where(max_data >= threshold)).T
max_tree = cKDTree(max_points)
stagger_points = np.vstack((i_grid[::stagger, ::stagger].ravel(), j_grid[::stagger, ::stagger].ravel())).T
valid_stagger_points = np.zeros(stagger_points.shape[0])
stagger_tree = cKDTree(stagger_points)
hit_points = np.unique(np.concatenate(max_tree.query_ball_tree(stagger_tree, radius)))
valid_stagger_points[hit_points] += 1
surrogate_grid = valid_stagger_points.reshape(i_grid[::stagger, ::stagger].shape)
surrogate_grid = gaussian_filter(surrogate_grid, sigma)
return surrogate_grid
def encode_grib2_percentile(self):
"""
Encodes member percentile data to GRIB2 format.
Returns:
Series of GRIB2 messages
"""
lscale = 1e6
grib_id_start = [7, 0, 14, 14, 2]
gdsinfo = np.array([0, np.product(self.data.shape[-2:]), 0, 0, 30], dtype=np.int32)
lon_0 = self.proj_dict["lon_0"]
sw_lon = self.grid_dict["sw_lon"]
if lon_0 < 0:
lon_0 += 360
if sw_lon < 0:
sw_lon += 360
gdtmp1 = [1, 0, self.proj_dict['a'], 0, float(self.proj_dict['a']), 0, float(self.proj_dict['b']),
self.data.shape[-1], self.data.shape[-2], self.grid_dict["sw_lat"] * lscale,
sw_lon * lscale, 0, self.proj_dict["lat_0"] * lscale,
lon_0 * lscale,
self.grid_dict["dx"] * 1e3, self.grid_dict["dy"] * 1e3, 0b00000000, 0b01000000,
self.proj_dict["lat_1"] * lscale,
self.proj_dict["lat_2"] * lscale, -90 * lscale, 0]
pdtmp1 = np.array([1, # parameter category Moisture
31, # parameter number Hail
4, # Type of generating process Ensemble Forecast
0, # Background generating process identifier
31, # Generating process or model from NCEP
0, # Hours after reference time data cutoff
0, # Minutes after reference time data cutoff
1, # Forecast time units Hours
0, # Forecast time
1, # Type of first fixed surface Ground
1, # Scale value of first fixed surface
0, # Value of first fixed surface
1, # Type of second fixed surface
1, # Scale value of 2nd fixed surface
0, # Value of 2nd fixed surface
0, # Derived forecast type
self.num_samples # Number of ensemble members
], dtype=np.int32)
grib_objects = pd.Series(index=self.times, data=[None] * self.times.size, dtype=object)
drtmp1 = np.array([0, 0, 4, 8, 0], dtype=np.int32)
for t, time in enumerate(self.times):
time_list = list(self.run_date.utctimetuple()[0:6])
if grib_objects[time] is None:
grib_objects[time] = Grib2Encode(0, np.array(grib_id_start + time_list + [2, 1], dtype=np.int32))
grib_objects[time].addgrid(gdsinfo, gdtmp1)
pdtmp1[8] = (time.to_pydatetime() - self.run_date).total_seconds() / 3600.0
data = self.percentile_data[:, t] / 1000.0
masked_data = np.ma.array(data, mask=data <= 0)
for p, percentile in enumerate(self.percentiles):
print("GRIB {3} Percentile {0}. Max: {1} Min: {2}".format(percentile,
masked_data[p].max(),
masked_data[p].min(),
time))
if percentile in range(1, 100):
pdtmp1[-2] = percentile
grib_objects[time].addfield(6, pdtmp1[:-1], 0, drtmp1, masked_data[p])
else:
pdtmp1[-2] = 0
grib_objects[time].addfield(2, pdtmp1, 0, drtmp1, masked_data[p])
return grib_objects
def encode_grib2_data(self):
"""
Encodes member forecast data to GRIB2 format.
Returns:
Series of GRIB2 messages
"""
lscale = 1e6
grib_id_start = [7, 0, 14, 14, 2]
gdsinfo = np.array([0, np.product(self.data.shape[-2:]), 0, 0, 30], dtype=np.int32)
lon_0 = self.proj_dict["lon_0"]
sw_lon = self.grid_dict["sw_lon"]
if lon_0 < 0:
lon_0 += 360
if sw_lon < 0:
sw_lon += 360
gdtmp1 = [1, 0, self.proj_dict['a'], 0, float(self.proj_dict['a']), 0, float(self.proj_dict['b']),
self.data.shape[-1], self.data.shape[-2], self.grid_dict["sw_lat"] * lscale,
sw_lon * lscale, 0, self.proj_dict["lat_0"] * lscale,
lon_0 * lscale,
self.grid_dict["dx"] * 1e3, self.grid_dict["dy"] * 1e3, 0b00000000, 0b01000000,
self.proj_dict["lat_1"] * lscale,
self.proj_dict["lat_2"] * lscale, -90 * lscale, 0]
pdtmp1 = np.array([1, # parameter category Moisture
31, # parameter number Hail
4, # Type of generating process Ensemble Forecast
0, # Background generating process identifier
31, # Generating process or model from NCEP
0, # Hours after reference time data cutoff
0, # Minutes after reference time data cutoff
1, # Forecast time units Hours
0, # Forecast time
1, # Type of first fixed surface Ground
1, # Scale value of first fixed surface
0, # Value of first fixed surface
1, # Type of second fixed surface
1, # Scale value of 2nd fixed surface
0, # Value of 2nd fixed surface
0, # Derived forecast type
1 # Number of ensemble members
], dtype=np.int32)
grib_objects = pd.Series(index=self.times, data=[None] * self.times.size, dtype=object)
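# --- Illustrative sketch, not part of the original hagelslag module ---
# Both encode methods keep one GRIB2 encoder per valid time in an object-dtype
# Series so that messages can be looked up by timestamp. Minimal standalone
# version of that container pattern (a dict stands in for a Grib2Encode object):
def _demo_object_series_by_time():
    times = pd.DatetimeIndex(["2017-05-01 12:00", "2017-05-01 13:00"])
    holder = pd.Series(index=times, data=[None] * times.size, dtype=object)
    for t in times:
        holder[t] = {"valid_time": t}  # stand-in payload
    return holder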
# -*- coding: utf-8 -*-
from __future__ import print_function
from datetime import datetime, timedelta
import functools
import itertools
import numpy as np
import numpy.ma as ma
import numpy.ma.mrecords as mrecords
from numpy.random import randn
import pytest
from pandas.compat import (
PY3, PY36, OrderedDict, is_platform_little_endian, lmap, long, lrange,
lzip, range, zip)
from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike
from pandas.core.dtypes.common import is_integer_dtype
import pandas as pd
from pandas import (
Categorical, DataFrame, Index, MultiIndex, Series, Timedelta, Timestamp,
compat, date_range, isna)
from pandas.tests.frame.common import TestData
import pandas.util.testing as tm
MIXED_FLOAT_DTYPES = ['float16', 'float32', 'float64']
MIXED_INT_DTYPES = ['uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16',
'int32', 'int64']
class TestDataFrameConstructors(TestData):
def test_constructor(self):
df = DataFrame()
assert len(df.index) == 0
df = DataFrame(data={})
assert len(df.index) == 0
def test_constructor_mixed(self):
index, data = tm.getMixedTypeDict()
# TODO(wesm), incomplete test?
indexed_frame = DataFrame(data, index=index) # noqa
unindexed_frame = DataFrame(data) # noqa
assert self.mixed_frame['foo'].dtype == np.object_
def test_constructor_cast_failure(self):
foo = DataFrame({'a': ['a', 'b', 'c']}, dtype=np.float64)
assert foo['a'].dtype == object
# GH 3010, constructing with odd arrays
df = DataFrame(np.ones((4, 2)))
# this is ok
df['foo'] = np.ones((4, 2)).tolist()
# this is not ok
pytest.raises(ValueError, df.__setitem__, tuple(['test']),
np.ones((4, 2)))
# this is ok
df['foo2'] = np.ones((4, 2)).tolist()
def test_constructor_dtype_copy(self):
orig_df = DataFrame({
'col1': [1.],
'col2': [2.],
'col3': [3.]})
new_df = pd.DataFrame(orig_df, dtype=float, copy=True)
new_df['col1'] = 200.
assert orig_df['col1'][0] == 1.
def test_constructor_dtype_nocast_view(self):
df = DataFrame([[1, 2]])
should_be_view = DataFrame(df, dtype=df[0].dtype)
should_be_view[0][0] = 99
assert df.values[0, 0] == 99
should_be_view = DataFrame(df.values, dtype=df[0].dtype)
should_be_view[0][0] = 97
assert df.values[0, 0] == 97
def test_constructor_dtype_list_data(self):
df = DataFrame([[1, '2'],
[None, 'a']], dtype=object)
assert df.loc[1, 0] is None
assert df.loc[0, 1] == '2'
def test_constructor_list_frames(self):
# see gh-3243
result = DataFrame([DataFrame([])])
assert result.shape == (1, 0)
result = DataFrame([DataFrame(dict(A=lrange(5)))])
assert isinstance(result.iloc[0, 0], DataFrame)
def test_constructor_mixed_dtypes(self):
def _make_mixed_dtypes_df(typ, ad=None):
if typ == 'int':
dtypes = MIXED_INT_DTYPES
arrays = [np.array(np.random.rand(10), dtype=d)
for d in dtypes]
elif typ == 'float':
dtypes = MIXED_FLOAT_DTYPES
arrays = [np.array(np.random.randint(
10, size=10), dtype=d) for d in dtypes]
zipper = lzip(dtypes, arrays)
for d, a in zipper:
assert(a.dtype == d)
if ad is None:
ad = dict()
ad.update({d: a for d, a in zipper})
return DataFrame(ad)
def _check_mixed_dtypes(df, dtypes=None):
if dtypes is None:
dtypes = MIXED_FLOAT_DTYPES + MIXED_INT_DTYPES
for d in dtypes:
if d in df:
assert(df.dtypes[d] == d)
# mixed floating and integer coexist in the same frame
df = _make_mixed_dtypes_df('float')
_check_mixed_dtypes(df)
# add lots of types
df = _make_mixed_dtypes_df('float', dict(A=1, B='foo', C='bar'))
_check_mixed_dtypes(df)
# GH 622
df = _make_mixed_dtypes_df('int')
_check_mixed_dtypes(df)
def test_constructor_complex_dtypes(self):
# GH10952
a = np.random.rand(10).astype(np.complex64)
b = np.random.rand(10).astype(np.complex128)
df = DataFrame({'a': a, 'b': b})
assert a.dtype == df.a.dtype
assert b.dtype == df.b.dtype
def test_constructor_dtype_str_na_values(self, string_dtype):
# https://github.com/pandas-dev/pandas/issues/21083
df = DataFrame({'A': ['x', None]}, dtype=string_dtype)
result = df.isna()
expected = DataFrame({"A": [False, True]})
tm.assert_frame_equal(result, expected)
assert df.iloc[1, 0] is None
df = DataFrame({'A': ['x', np.nan]}, dtype=string_dtype)
assert np.isnan(df.iloc[1, 0])
def test_constructor_rec(self):
rec = self.frame.to_records(index=False)
if PY3:
# unicode error under PY2
rec.dtype.names = list(rec.dtype.names)[::-1]
index = self.frame.index
df = DataFrame(rec)
tm.assert_index_equal(df.columns, pd.Index(rec.dtype.names))
df2 = DataFrame(rec, index=index)
tm.assert_index_equal(df2.columns, pd.Index(rec.dtype.names))
tm.assert_index_equal(df2.index, index)
rng = np.arange(len(rec))[::-1]
df3 = DataFrame(rec, index=rng, columns=['C', 'B'])
expected = DataFrame(rec, index=rng).reindex(columns=['C', 'B'])
tm.assert_frame_equal(df3, expected)
def test_constructor_bool(self):
df = DataFrame({0: np.ones(10, dtype=bool),
1: np.zeros(10, dtype=bool)})
assert df.values.dtype == np.bool_
def test_constructor_overflow_int64(self):
# see gh-14881
values = np.array([2 ** 64 - i for i in range(1, 10)],
dtype=np.uint64)
result = DataFrame({'a': values})
assert result['a'].dtype == np.uint64
# see gh-2355
data_scores = [(6311132704823138710, 273), (2685045978526272070, 23),
(8921811264899370420, 45),
(long(17019687244989530680), 270),
(long(9930107427299601010), 273)]
dtype = [('uid', 'u8'), ('score', 'u8')]
data = np.zeros((len(data_scores),), dtype=dtype)
data[:] = data_scores
df_crawls = DataFrame(data)
assert df_crawls['uid'].dtype == np.uint64
@pytest.mark.parametrize("values", [np.array([2**64], dtype=object),
np.array([2**65]), [2**64 + 1],
np.array([-2**63 - 4], dtype=object),
np.array([-2**64 - 1]), [-2**65 - 2]])
def test_constructor_int_overflow(self, values):
# see gh-18584
value = values[0]
result = DataFrame(values)
assert result[0].dtype == object
assert result[0][0] == value
def test_constructor_ordereddict(self):
import random
nitems = 100
nums = lrange(nitems)
random.shuffle(nums)
expected = ['A%d' % i for i in nums]
df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems)))
assert expected == list(df.columns)
def test_constructor_dict(self):
frame = DataFrame({'col1': self.ts1,
'col2': self.ts2})
# col2 is padded with NaN
assert len(self.ts1) == 30
assert len(self.ts2) == 25
tm.assert_series_equal(self.ts1, frame['col1'], check_names=False)
exp = pd.Series(np.concatenate([[np.nan] * 5, self.ts2.values]),
index=self.ts1.index, name='col2')
tm.assert_series_equal(exp, frame['col2'])
frame = DataFrame({'col1': self.ts1,
'col2': self.ts2},
columns=['col2', 'col3', 'col4'])
assert len(frame) == len(self.ts2)
assert 'col1' not in frame
assert isna(frame['col3']).all()
# Corner cases
assert len(DataFrame({})) == 0
# mix dict and array, wrong size - no spec for which error should raise
# first
with pytest.raises(ValueError):
DataFrame({'A': {'a': 'a', 'b': 'b'}, 'B': ['a', 'b', 'c']})
# Length-one dict micro-optimization
frame = DataFrame({'A': {'1': 1, '2': 2}})
tm.assert_index_equal(frame.index, pd.Index(['1', '2']))
# empty dict plus index
idx = Index([0, 1, 2])
frame = DataFrame({}, index=idx)
assert frame.index is idx
# empty with index and columns
idx = Index([0, 1, 2])
frame = DataFrame({}, index=idx, columns=idx)
assert frame.index is idx
assert frame.columns is idx
assert len(frame._series) == 3
# with dict of empty list and Series
frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B'])
tm.assert_index_equal(frame.index, Index([], dtype=np.int64))
# GH 14381
# Dict with None value
frame_none = DataFrame(dict(a=None), index=[0])
frame_none_list = DataFrame(dict(a=[None]), index=[0])
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
assert frame_none.get_value(0, 'a') is None
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
assert frame_none_list.get_value(0, 'a') is None
tm.assert_frame_equal(frame_none, frame_none_list)
# GH10856
# dict with scalar values should raise error, even if columns passed
msg = 'If using all scalar values, you must pass an index'
with pytest.raises(ValueError, match=msg):
DataFrame({'a': 0.7})
with pytest.raises(ValueError, match=msg):
DataFrame({'a': 0.7}, columns=['a'])
@pytest.mark.parametrize("scalar", [2, np.nan, None, 'D'])
def test_constructor_invalid_items_unused(self, scalar):
# No error if invalid (scalar) value is in fact not used:
result = DataFrame({'a': scalar}, columns=['b'])
expected = DataFrame(columns=['b'])
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("value", [2, np.nan, None, float('nan')])
def test_constructor_dict_nan_key(self, value):
# GH 18455
cols = [1, value, 3]
idx = ['a', value]
values = [[0, 3], [1, 4], [2, 5]]
data = {cols[c]: Series(values[c], index=idx) for c in range(3)}
result = DataFrame(data).sort_values(1).sort_values('a', axis=1)
expected = DataFrame(np.arange(6, dtype='int64').reshape(2, 3),
index=idx, columns=cols)
tm.assert_frame_equal(result, expected)
result = DataFrame(data, index=idx).sort_values('a', axis=1)
tm.assert_frame_equal(result, expected)
result = DataFrame(data, index=idx, columns=cols)
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("value", [np.nan, None, float('nan')])
def test_constructor_dict_nan_tuple_key(self, value):
# GH 18455
cols = Index([(11, 21), (value, 22), (13, value)])
idx = Index([('a', value), (value, 2)])
values = [[0, 3], [1, 4], [2, 5]]
data = {cols[c]: Series(values[c], index=idx) for c in range(3)}
result = (DataFrame(data)
.sort_values((11, 21))
.sort_values(('a', value), axis=1))
expected = DataFrame(np.arange(6, dtype='int64').reshape(2, 3),
index=idx, columns=cols)
tm.assert_frame_equal(result, expected)
result = DataFrame(data, index=idx).sort_values(('a', value), axis=1)
tm.assert_frame_equal(result, expected)
result = DataFrame(data, index=idx, columns=cols)
tm.assert_frame_equal(result, expected)
@pytest.mark.skipif(not PY36, reason='Insertion order for Python>=3.6')
def test_constructor_dict_order_insertion(self):
# GH19018
# initialization ordering: by insertion order if python>= 3.6
d = {'b': self.ts2, 'a': self.ts1}
frame = DataFrame(data=d)
expected = DataFrame(data=d, columns=list('ba'))
tm.assert_frame_equal(frame, expected)
@pytest.mark.skipif(PY36, reason='order by value for Python<3.6')
def test_constructor_dict_order_by_values(self):
# GH19018
# initialization ordering: by value if python<3.6
d = {'b': self.ts2, 'a': self.ts1}
frame = DataFrame(data=d)
expected = DataFrame(data=d, columns=list('ab'))
tm.assert_frame_equal(frame, expected)
def test_constructor_multi_index(self):
# GH 4078
# construction error with mi and all-nan frame
tuples = [(2, 3), (3, 3), (3, 3)]
mi = MultiIndex.from_tuples(tuples)
df = DataFrame(index=mi, columns=mi)
assert pd.isna(df).values.ravel().all()
tuples = [(3, 3), (2, 3), (3, 3)]
mi = MultiIndex.from_tuples(tuples)
df = DataFrame(index=mi, columns=mi)
assert pd.isna(df).values.ravel().all()
def test_constructor_error_msgs(self):
msg = "Empty data passed with indices specified."
# passing an empty array with columns specified.
with pytest.raises(ValueError, match=msg):
DataFrame(np.empty(0), columns=list('abc'))
msg = "Mixing dicts with non-Series may lead to ambiguous ordering."
# mix dict and array, wrong size
with pytest.raises(ValueError, match=msg):
DataFrame({'A': {'a': 'a', 'b': 'b'},
'B': ['a', 'b', 'c']})
# wrong size ndarray, GH 3105
msg = r"Shape of passed values is \(3, 4\), indices imply \(3, 3\)"
with pytest.raises(ValueError, match=msg):
DataFrame(np.arange(12).reshape((4, 3)),
columns=['foo', 'bar', 'baz'],
index=pd.date_range('2000-01-01', periods=3))
# higher dim raise exception
with pytest.raises(ValueError, match='Must pass 2-d input'):
DataFrame(np.zeros((3, 3, 3)), columns=['A', 'B', 'C'], index=[1])
# wrong size axis labels
msg = ("Shape of passed values "
r"is \(3, 2\), indices "
r"imply \(3, 1\)")
with pytest.raises(ValueError, match=msg):
DataFrame(np.random.rand(2, 3), columns=['A', 'B', 'C'], index=[1])
msg = ("Shape of passed values "
r"is \(3, 2\), indices "
r"imply \(2, 2\)")
with pytest.raises(ValueError, match=msg):
DataFrame(np.random.rand(2, 3), columns=['A', 'B'], index=[1, 2])
msg = ("If using all scalar "
"values, you must pass "
"an index")
with pytest.raises(ValueError, match=msg):
DataFrame({'a': False, 'b': True})
def test_constructor_with_embedded_frames(self):
# embedded data frames
df1 = DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]})
df2 = DataFrame([df1, df1 + 10])
df2.dtypes
str(df2)
result = df2.loc[0, 0]
tm.assert_frame_equal(result, df1)
result = df2.loc[1, 0]
tm.assert_frame_equal(result, df1 + 10)
def test_constructor_subclass_dict(self):
# Test for passing dict subclass to constructor
data = {'col1': tm.TestSubDict((x, 10.0 * x) for x in range(10)),
'col2': tm.TestSubDict((x, 20.0 * x) for x in range(10))}
df = DataFrame(data)
refdf = DataFrame({col: dict(compat.iteritems(val))
for col, val in compat.iteritems(data)})
tm.assert_frame_equal(refdf, df)
data = tm.TestSubDict(compat.iteritems(data))
df = DataFrame(data)
tm.assert_frame_equal(refdf, df)
# try with defaultdict
from collections import defaultdict
data = {}
self.frame['B'][:10] = np.nan
for k, v in compat.iteritems(self.frame):
dct = defaultdict(dict)
dct.update(v.to_dict())
data[k] = dct
frame = DataFrame(data)
tm.assert_frame_equal(self.frame.sort_index(), frame)
def test_constructor_dict_block(self):
expected = np.array([[4., 3., 2., 1.]])
df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]},
columns=['d', 'c', 'b', 'a'])
tm.assert_numpy_array_equal(df.values, expected)
def test_constructor_dict_cast(self):
# cast float tests
test_data = {
'A': {'1': 1, '2': 2},
'B': {'1': '1', '2': '2', '3': '3'},
}
frame = DataFrame(test_data, dtype=float)
assert len(frame) == 3
assert frame['B'].dtype == np.float64
assert frame['A'].dtype == np.float64
frame = DataFrame(test_data)
assert len(frame) == 3
assert frame['B'].dtype == np.object_
assert frame['A'].dtype == np.float64
# can't cast to float
test_data = {
'A': dict(zip(range(20), tm.makeStringIndex(20))),
'B': dict(zip(range(15), randn(15)))
}
frame = DataFrame(test_data, dtype=float)
assert len(frame) == 20
assert frame['A'].dtype == np.object_
assert frame['B'].dtype == np.float64
def test_constructor_dict_dont_upcast(self):
d = {'Col1': {'Row1': 'A String', 'Row2': np.nan}}
df = DataFrame(d)
assert isinstance(df['Col1']['Row2'], float)
dm = DataFrame([[1, 2], ['a', 'b']], index=[1, 2], columns=[1, 2])
assert isinstance(dm[1][1], int)
def test_constructor_dict_of_tuples(self):
# GH #1491
data = {'a': (1, 2, 3), 'b': (4, 5, 6)}
result = DataFrame(data)
expected = DataFrame({k: list(v) for k, v in compat.iteritems(data)})
tm.assert_frame_equal(result, expected, check_dtype=False)
def test_constructor_dict_multiindex(self):
def check(result, expected):
return tm.assert_frame_equal(result, expected, check_dtype=True,
check_index_type=True,
check_column_type=True,
check_names=True)
d = {('a', 'a'): {('i', 'i'): 0, ('i', 'j'): 1, ('j', 'i'): 2},
('b', 'a'): {('i', 'i'): 6, ('i', 'j'): 5, ('j', 'i'): 4},
('b', 'c'): {('i', 'i'): 7, ('i', 'j'): 8, ('j', 'i'): 9}}
_d = sorted(d.items())
df = DataFrame(d)
expected = DataFrame(
[x[1] for x in _d],
index=MultiIndex.from_tuples([x[0] for x in _d])).T
expected.index = MultiIndex.from_tuples(expected.index)
check(df, expected)
d['z'] = {'y': 123., ('i', 'i'): 111, ('i', 'j'): 111, ('j', 'i'): 111}
_d.insert(0, ('z', d['z']))
expected = DataFrame(
[x[1] for x in _d],
index=Index([x[0] for x in _d], tupleize_cols=False)).T
expected.index = Index(expected.index, tupleize_cols=False)
df = DataFrame(d)
df = df.reindex(columns=expected.columns, index=expected.index)
check(df, expected)
def test_constructor_dict_datetime64_index(self):
# GH 10160
dates_as_str = ['1984-02-19', '1988-11-06', '1989-12-03', '1990-03-15']
def create_data(constructor):
return {i: {constructor(s): 2 * i}
for i, s in enumerate(dates_as_str)}
data_datetime64 = create_data(np.datetime64)
data_datetime = create_data(lambda x: datetime.strptime(x, '%Y-%m-%d'))
data_Timestamp = create_data(Timestamp)
expected = DataFrame([{0: 0, 1: None, 2: None, 3: None},
{0: None, 1: 2, 2: None, 3: None},
{0: None, 1: None, 2: 4, 3: None},
{0: None, 1: None, 2: None, 3: 6}],
index=[Timestamp(dt) for dt in dates_as_str])
result_datetime64 = DataFrame(data_datetime64)
result_datetime = DataFrame(data_datetime)
result_Timestamp = DataFrame(data_Timestamp)
tm.assert_frame_equal(result_datetime64, expected)
tm.assert_frame_equal(result_datetime, expected)
tm.assert_frame_equal(result_Timestamp, expected)
def test_constructor_dict_timedelta64_index(self):
# GH 10160
td_as_int = [1, 2, 3, 4]
def create_data(constructor):
return {i: {constructor(s): 2 * i}
for i, s in enumerate(td_as_int)}
data_timedelta64 = create_data(lambda x: np.timedelta64(x, 'D'))
data_timedelta = create_data(lambda x: timedelta(days=x))
data_Timedelta = create_data(lambda x: Timedelta(x, 'D'))
expected = DataFrame([{0: 0, 1: None, 2: None, 3: None},
{0: None, 1: 2, 2: None, 3: None},
{0: None, 1: None, 2: 4, 3: None},
{0: None, 1: None, 2: None, 3: 6}],
index=[Timedelta(td, 'D') for td in td_as_int])
result_timedelta64 = DataFrame(data_timedelta64)
result_timedelta = DataFrame(data_timedelta)
result_Timedelta = DataFrame(data_Timedelta)
tm.assert_frame_equal(result_timedelta64, expected)
tm.assert_frame_equal(result_timedelta, expected)
tm.assert_frame_equal(result_Timedelta, expected)
def test_constructor_period(self):
# PeriodIndex
a = pd.PeriodIndex(['2012-01', 'NaT', '2012-04'], freq='M')
b = pd.PeriodIndex(['2012-02-01', '2012-03-01', 'NaT'], freq='D')
df = pd.DataFrame({'a': a, 'b': b})
assert df['a'].dtype == a.dtype
assert df['b'].dtype == b.dtype
# list of periods
df = pd.DataFrame({'a': a.astype(object).tolist(),
'b': b.astype(object).tolist()})
assert df['a'].dtype == a.dtype
assert df['b'].dtype == b.dtype
def test_nested_dict_frame_constructor(self):
rng = pd.period_range('1/1/2000', periods=5)
df = DataFrame(randn(10, 5), columns=rng)
data = {}
for col in df.columns:
for row in df.index:
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
data.setdefault(col, {})[row] = df.get_value(row, col)
result = DataFrame(data, columns=rng)
tm.assert_frame_equal(result, df)
data = {}
for col in df.columns:
for row in df.index:
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
data.setdefault(row, {})[col] = df.get_value(row, col)
result = DataFrame(data, index=rng).T
tm.assert_frame_equal(result, df)
def _check_basic_constructor(self, empty):
        # mat: 2d matrix with shape (2, 3) to input. empty - makes sized
        # objects
mat = empty((2, 3), dtype=float)
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
assert len(frame.index) == 2
assert len(frame.columns) == 3
# 1-D input
frame = DataFrame(empty((3,)), columns=['A'], index=[1, 2, 3])
assert len(frame.index) == 3
assert len(frame.columns) == 1
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=np.int64)
assert frame.values.dtype == np.int64
# wrong size axis labels
msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)'
with pytest.raises(ValueError, match=msg):
DataFrame(mat, columns=['A', 'B', 'C'], index=[1])
msg = r'Shape of passed values is \(3, 2\), indices imply \(2, 2\)'
with pytest.raises(ValueError, match=msg):
DataFrame(mat, columns=['A', 'B'], index=[1, 2])
# higher dim raise exception
with pytest.raises(ValueError, match='Must pass 2-d input'):
DataFrame(empty((3, 3, 3)), columns=['A', 'B', 'C'],
index=[1])
# automatic labeling
frame = DataFrame(mat)
tm.assert_index_equal(frame.index, pd.Index(lrange(2)))
tm.assert_index_equal(frame.columns, pd.Index(lrange(3)))
frame = DataFrame(mat, index=[1, 2])
tm.assert_index_equal(frame.columns, pd.Index(lrange(3)))
frame = DataFrame(mat, columns=['A', 'B', 'C'])
tm.assert_index_equal(frame.index, pd.Index(lrange(2)))
# 0-length axis
frame = DataFrame(empty((0, 3)))
assert len(frame.index) == 0
frame = DataFrame(empty((3, 0)))
assert len(frame.columns) == 0
def test_constructor_ndarray(self):
self._check_basic_constructor(np.ones)
frame = DataFrame(['foo', 'bar'], index=[0, 1], columns=['A'])
assert len(frame) == 2
def test_constructor_maskedarray(self):
self._check_basic_constructor(ma.masked_all)
# Check non-masked values
mat = ma.masked_all((2, 3), dtype=float)
mat[0, 0] = 1.0
mat[1, 2] = 2.0
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
assert 1.0 == frame['A'][1]
assert 2.0 == frame['C'][2]
# what is this even checking??
mat = ma.masked_all((2, 3), dtype=float)
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
assert np.all(~np.asarray(frame == frame))
def test_constructor_maskedarray_nonfloat(self):
# masked int promoted to float
mat = ma.masked_all((2, 3), dtype=int)
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
assert len(frame.index) == 2
assert len(frame.columns) == 3
assert np.all(~np.asarray(frame == frame))
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=np.float64)
assert frame.values.dtype == np.float64
# Check non-masked values
mat2 = ma.copy(mat)
mat2[0, 0] = 1
mat2[1, 2] = 2
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
assert 1 == frame['A'][1]
assert 2 == frame['C'][2]
# masked np.datetime64 stays (use NaT as null)
mat = ma.masked_all((2, 3), dtype='M8[ns]')
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
assert len(frame.index) == 2
assert len(frame.columns) == 3
assert isna(frame).values.all()
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=np.int64)
assert frame.values.dtype == np.int64
# Check non-masked values
mat2 = ma.copy(mat)
mat2[0, 0] = 1
mat2[1, 2] = 2
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
assert 1 == frame['A'].view('i8')[1]
assert 2 == frame['C'].view('i8')[2]
# masked bool promoted to object
mat = ma.masked_all((2, 3), dtype=bool)
# 2-D input
frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2])
assert len(frame.index) == 2
assert len(frame.columns) == 3
assert np.all(~np.asarray(frame == frame))
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
index=[1, 2], dtype=object)
assert frame.values.dtype == object
# Check non-masked values
mat2 = ma.copy(mat)
mat2[0, 0] = True
mat2[1, 2] = False
frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2])
assert frame['A'][1] is True
assert frame['C'][2] is False
def test_constructor_mrecarray(self):
# Ensure mrecarray produces frame identical to dict of masked arrays
# from GH3479
assert_fr_equal = functools.partial(tm.assert_frame_equal,
check_index_type=True,
check_column_type=True,
check_frame_type=True)
arrays = [
('float', np.array([1.5, 2.0])),
('int', np.array([1, 2])),
('str', np.array(['abc', 'def'])),
]
for name, arr in arrays[:]:
arrays.append(('masked1_' + name,
np.ma.masked_array(arr, mask=[False, True])))
arrays.append(('masked_all', np.ma.masked_all((2,))))
arrays.append(('masked_none',
np.ma.masked_array([1.0, 2.5], mask=False)))
# call assert_frame_equal for all selections of 3 arrays
for comb in itertools.combinations(arrays, 3):
names, data = zip(*comb)
mrecs = mrecords.fromarrays(data, names=names)
# fill the comb
comb = {k: (v.filled() if hasattr(v, 'filled') else v)
for k, v in comb}
expected = DataFrame(comb, columns=names)
result = DataFrame(mrecs)
assert_fr_equal(result, expected)
# specify columns
expected = DataFrame(comb, columns=names[::-1])
result = DataFrame(mrecs, columns=names[::-1])
assert_fr_equal(result, expected)
# specify index
expected = DataFrame(comb, columns=names, index=[1, 2])
result = DataFrame(mrecs, index=[1, 2])
assert_fr_equal(result, expected)
def test_constructor_corner_shape(self):
df = DataFrame(index=[])
assert df.values.shape == (0, 0)
@pytest.mark.parametrize("data, index, columns, dtype, expected", [
(None, lrange(10), ['a', 'b'], object, np.object_),
(None, None, ['a', 'b'], 'int64', np.dtype('int64')),
(None, lrange(10), ['a', 'b'], int, np.dtype('float64')),
({}, None, ['foo', 'bar'], None, np.object_),
({'b': 1}, lrange(10), list('abc'), int, np.dtype('float64'))
])
def test_constructor_dtype(self, data, index, columns, dtype, expected):
df = DataFrame(data, index, columns, dtype)
assert df.values.dtype == expected
def test_constructor_scalar_inference(self):
data = {'int': 1, 'bool': True,
'float': 3., 'complex': 4j, 'object': 'foo'}
df = DataFrame(data, index=np.arange(10))
assert df['int'].dtype == np.int64
assert df['bool'].dtype == np.bool_
assert df['float'].dtype == np.float64
assert df['complex'].dtype == np.complex128
assert df['object'].dtype == np.object_
def test_constructor_arrays_and_scalars(self):
df = DataFrame({'a': randn(10), 'b': True})
exp = DataFrame({'a': df['a'].values, 'b': [True] * 10})
tm.assert_frame_equal(df, exp)
with pytest.raises(ValueError, match='must pass an index'):
DataFrame({'a': False, 'b': True})
def test_constructor_DataFrame(self):
df = DataFrame(self.frame)
tm.assert_frame_equal(df, self.frame)
df_casted = DataFrame(self.frame, dtype=np.int64)
assert df_casted.values.dtype == np.int64
def test_constructor_more(self):
# used to be in test_matrix.py
arr = randn(10)
dm = DataFrame(arr, columns=['A'], index=np.arange(10))
assert dm.values.ndim == 2
arr = randn(0)
dm = DataFrame(arr)
assert dm.values.ndim == 2
assert dm.values.ndim == 2
# no data specified
dm = DataFrame(columns=['A', 'B'], index=np.arange(10))
assert dm.values.shape == (10, 2)
dm = DataFrame(columns=['A', 'B'])
assert dm.values.shape == (0, 2)
dm = DataFrame(index=np.arange(10))
assert dm.values.shape == (10, 0)
# can't cast
mat = np.array(['foo', 'bar'], dtype=object).reshape(2, 1)
with pytest.raises(ValueError, match='cast'):
DataFrame(mat, index=[0, 1], columns=[0], dtype=float)
dm = DataFrame(DataFrame(self.frame._series))
tm.assert_frame_equal(dm, self.frame)
# int cast
dm = DataFrame({'A': np.ones(10, dtype=int),
'B': np.ones(10, dtype=np.float64)},
index=np.arange(10))
assert len(dm.columns) == 2
assert dm.values.dtype == np.float64
def test_constructor_empty_list(self):
df = DataFrame([], index=[])
expected = DataFrame(index=[])
tm.assert_frame_equal(df, expected)
# GH 9939
df = DataFrame([], columns=['A', 'B'])
expected = DataFrame({}, columns=['A', 'B'])
tm.assert_frame_equal(df, expected)
# Empty generator: list(empty_gen()) == []
def empty_gen():
return
yield
df = DataFrame(empty_gen(), columns=['A', 'B'])
tm.assert_frame_equal(df, expected)
def test_constructor_list_of_lists(self):
# GH #484
df = DataFrame(data=[[1, 'a'], [2, 'b']], columns=["num", "str"])
assert is_integer_dtype(df['num'])
assert df['str'].dtype == np.object_
# GH 4851
# list of 0-dim ndarrays
expected = DataFrame({0: np.arange(10)})
data = [np.array(x) for x in range(10)]
result = DataFrame(data)
tm.assert_frame_equal(result, expected)
def test_constructor_sequence_like(self):
# GH 3783
        # collections.Sequence like
class DummyContainer(compat.Sequence):
def __init__(self, lst):
self._lst = lst
def __getitem__(self, n):
return self._lst.__getitem__(n)
            def __len__(self):
return self._lst.__len__()
lst_containers = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])]
columns = ["num", "str"]
result = DataFrame(lst_containers, columns=columns)
expected = DataFrame([[1, 'a'], [2, 'b']], columns=columns)
tm.assert_frame_equal(result, expected, check_dtype=False)
# GH 4297
# support Array
import array
result = DataFrame({'A': array.array('i', range(10))})
expected = DataFrame({'A': list(range(10))})
tm.assert_frame_equal(result, expected, check_dtype=False)
expected = DataFrame([list(range(10)), list(range(10))])
result = DataFrame([array.array('i', range(10)),
array.array('i', range(10))])
tm.assert_frame_equal(result, expected, check_dtype=False)
def test_constructor_iterable(self):
# GH 21987
class Iter():
def __iter__(self):
for i in range(10):
yield [1, 2, 3]
expected = DataFrame([[1, 2, 3]] * 10)
result = DataFrame(Iter())
tm.assert_frame_equal(result, expected)
def test_constructor_iterator(self):
expected = DataFrame([list(range(10)), list(range(10))])
result = DataFrame([range(10), range(10)])
tm.assert_frame_equal(result, expected)
def test_constructor_generator(self):
# related #2305
gen1 = (i for i in range(10))
gen2 = (i for i in range(10))
expected = DataFrame([list(range(10)), list(range(10))])
result = DataFrame([gen1, gen2])
tm.assert_frame_equal(result, expected)
gen = ([i, 'a'] for i in range(10))
result = DataFrame(gen)
expected = DataFrame({0: range(10), 1: 'a'})
tm.assert_frame_equal(result, expected, check_dtype=False)
def test_constructor_list_of_dicts(self):
data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]),
OrderedDict([['a', 1.5], ['d', 6]]),
OrderedDict(),
OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]),
OrderedDict([['b', 3], ['c', 4], ['d', 6]])]
result = DataFrame(data)
expected = DataFrame.from_dict(dict(zip(range(len(data)), data)),
orient='index')
tm.assert_frame_equal(result, expected.reindex(result.index))
result = DataFrame([{}])
expected = DataFrame(index=[0])
tm.assert_frame_equal(result, expected)
def test_constructor_ordered_dict_preserve_order(self):
# see gh-13304
expected = DataFrame([[2, 1]], columns=['b', 'a'])
data = OrderedDict()
data['b'] = [2]
data['a'] = [1]
result = DataFrame(data)
tm.assert_frame_equal(result, expected)
data = OrderedDict()
data['b'] = 2
data['a'] = 1
result = DataFrame([data])
tm.assert_frame_equal(result, expected)
def test_constructor_ordered_dict_conflicting_orders(self):
# the first dict element sets the ordering for the DataFrame,
# even if there are conflicting orders from subsequent ones
row_one = OrderedDict()
row_one['b'] = 2
row_one['a'] = 1
row_two = OrderedDict()
row_two['a'] = 1
row_two['b'] = 2
row_three = {'b': 2, 'a': 1}
expected = DataFrame([[2, 1], [2, 1]], columns=['b', 'a'])
result = DataFrame([row_one, row_two])
tm.assert_frame_equal(result, expected)
expected = DataFrame([[2, 1], [2, 1], [2, 1]], columns=['b', 'a'])
result = DataFrame([row_one, row_two, row_three])
tm.assert_frame_equal(result, expected)
def test_constructor_list_of_series(self):
data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])]
sdict = OrderedDict(zip(['x', 'y'], data))
idx = Index(['a', 'b', 'c'])
# all named
data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'),
Series([1.5, 3, 6], idx, name='y')]
result = DataFrame(data2)
expected = DataFrame.from_dict(sdict, orient='index')
tm.assert_frame_equal(result, expected)
# some unnamed
data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'),
Series([1.5, 3, 6], idx)]
result = DataFrame(data2)
sdict = OrderedDict(zip(['x', 'Unnamed 0'], data))
expected = DataFrame.from_dict(sdict, orient='index')
tm.assert_frame_equal(result.sort_index(), expected)
# none named
data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]),
OrderedDict([['a', 1.5], ['d', 6]]),
OrderedDict(),
OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]),
OrderedDict([['b', 3], ['c', 4], ['d', 6]])]
data = [Series(d) for d in data]
result = DataFrame(data)
sdict = OrderedDict(zip(range(len(data)), data))
expected = DataFrame.from_dict(sdict, orient='index')
tm.assert_frame_equal(result, expected.reindex(result.index))
result2 = DataFrame(data, index=np.arange(6))
tm.assert_frame_equal(result, result2)
result = DataFrame([Series({})])
expected = DataFrame(index=[0])
tm.assert_frame_equal(result, expected)
data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])]
sdict = OrderedDict(zip(range(len(data)), data))
idx = Index(['a', 'b', 'c'])
data2 = [Series([1.5, 3, 4], idx, dtype='O'),
Series([1.5, 3, 6], idx)]
result = DataFrame(data2)
expected = DataFrame.from_dict(sdict, orient='index')
tm.assert_frame_equal(result, expected)
def test_constructor_list_of_series_aligned_index(self):
series = [pd.Series(i, index=['b', 'a', 'c'], name=str(i))
for i in range(3)]
result = pd.DataFrame(series)
expected = pd.DataFrame({'b': [0, 1, 2],
'a': [0, 1, 2],
'c': [0, 1, 2]},
columns=['b', 'a', 'c'],
index=['0', '1', '2'])
tm.assert_frame_equal(result, expected)
def test_constructor_list_of_derived_dicts(self):
class CustomDict(dict):
pass
d = {'a': 1.5, 'b': 3}
data_custom = [CustomDict(d)]
data = [d]
result_custom = DataFrame(data_custom)
result = DataFrame(data)
tm.assert_frame_equal(result, result_custom)
def test_constructor_ragged(self):
data = {'A': randn(10),
'B': randn(8)}
with pytest.raises(ValueError, match='arrays must all be same length'):
DataFrame(data)
def test_constructor_scalar(self):
idx = Index(lrange(3))
df = DataFrame({"a": 0}, index=idx)
expected = DataFrame({"a": [0, 0, 0]}, index=idx)
tm.assert_frame_equal(df, expected, check_dtype=False)
def test_constructor_Series_copy_bug(self):
df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A'])
df.copy()
def test_constructor_mixed_dict_and_Series(self):
data = {}
data['A'] = {'foo': 1, 'bar': 2, 'baz': 3}
data['B'] = Series([4, 3, 2, 1], index=['bar', 'qux', 'baz', 'foo'])
result = DataFrame(data)
assert result.index.is_monotonic
# ordering ambiguous, raise exception
with pytest.raises(ValueError, match='ambiguous ordering'):
DataFrame({'A': ['a', 'b'], 'B': {'a': 'a', 'b': 'b'}})
# this is OK though
result = DataFrame({'A': ['a', 'b'],
'B': Series(['a', 'b'], index=['a', 'b'])})
expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']},
index=['a', 'b'])
tm.assert_frame_equal(result, expected)
def test_constructor_tuples(self):
result = DataFrame({'A': [(1, 2), (3, 4)]})
expected = DataFrame({'A': Series([(1, 2), (3, 4)])})
tm.assert_frame_equal(result, expected)
def test_constructor_namedtuples(self):
# GH11181
from collections import namedtuple
named_tuple = namedtuple("Pandas", list('ab'))
tuples = [named_tuple(1, 3), named_tuple(2, 4)]
expected = DataFrame({'a': [1, 2], 'b': [3, 4]})
result = DataFrame(tuples)
tm.assert_frame_equal(result, expected)
# with columns
expected = DataFrame({'y': [1, 2], 'z': [3, 4]})
result = DataFrame(tuples, columns=['y', 'z'])
tm.assert_frame_equal(result, expected)
def test_constructor_orient(self):
data_dict = self.mixed_frame.T._series
recons = DataFrame.from_dict(data_dict, orient='index')
expected = self.mixed_frame.sort_index()
tm.assert_frame_equal(recons, expected)
# dict of sequence
a = {'hi': [32, 3, 3],
'there': [3, 5, 3]}
rs = DataFrame.from_dict(a, orient='index')
xp = DataFrame.from_dict(a).T.reindex(list(a.keys()))
tm.assert_frame_equal(rs, xp)
def test_from_dict_columns_parameter(self):
# GH 18529
# Test new columns parameter for from_dict that was added to make
# from_items(..., orient='index', columns=[...]) easier to replicate
result = DataFrame.from_dict(OrderedDict([('A', [1, 2]),
('B', [4, 5])]),
orient='index', columns=['one', 'two'])
expected = DataFrame([[1, 2], [4, 5]], index=['A', 'B'],
columns=['one', 'two'])
tm.assert_frame_equal(result, expected)
msg = "cannot use columns parameter with orient='columns'"
with pytest.raises(ValueError, match=msg):
DataFrame.from_dict(dict([('A', [1, 2]), ('B', [4, 5])]),
orient='columns', columns=['one', 'two'])
with pytest.raises(ValueError, match=msg):
DataFrame.from_dict(dict([('A', [1, 2]), ('B', [4, 5])]),
columns=['one', 'two'])
def test_constructor_Series_named(self):
a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x')
df = DataFrame(a)
assert df.columns[0] == 'x'
tm.assert_index_equal(df.index, a.index)
# ndarray like
arr = np.random.randn(10)
s = Series(arr, name='x')
df = DataFrame(s)
expected = DataFrame(dict(x=s))
tm.assert_frame_equal(df, expected)
s = Series(arr, index=range(3, 13))
df = DataFrame(s)
expected = DataFrame({0: s})
tm.assert_frame_equal(df, expected)
pytest.raises(ValueError, DataFrame, s, columns=[1, 2])
# #2234
a = Series([], name='x')
df = DataFrame(a)
assert df.columns[0] == 'x'
# series with name and w/o
s1 = Series(arr, name='x')
df = DataFrame([s1, arr]).T
expected = DataFrame({'x': s1, 'Unnamed 0': arr},
columns=['x', 'Unnamed 0'])
tm.assert_frame_equal(df, expected)
# this is a bit non-intuitive here; the series collapse down to arrays
df = DataFrame([arr, s1]).T
expected = DataFrame({1: s1, 0: arr}, columns=[0, 1])
tm.assert_frame_equal(df, expected)
def test_constructor_Series_named_and_columns(self):
# GH 9232 validation
s0 = Series(range(5), name=0)
s1 = Series(range(5), name=1)
# matching name and column gives standard frame
tm.assert_frame_equal(pd.DataFrame(s0, columns=[0]),
s0.to_frame())
tm.assert_frame_equal(pd.DataFrame(s1, columns=[1]),
s1.to_frame())
# non-matching produces empty frame
assert pd.DataFrame(s0, columns=[1]).empty
assert pd.DataFrame(s1, columns=[0]).empty
def test_constructor_Series_differently_indexed(self):
# name
s1 = Series([1, 2, 3], index=['a', 'b', 'c'], name='x')
# no name
s2 = Series([1, 2, 3], index=['a', 'b', 'c'])
other_index = Index(['a', 'b'])
df1 = DataFrame(s1, index=other_index)
exp1 = DataFrame(s1.reindex(other_index))
assert df1.columns[0] == 'x'
tm.assert_frame_equal(df1, exp1)
df2 = DataFrame(s2, index=other_index)
exp2 = DataFrame(s2.reindex(other_index))
assert df2.columns[0] == 0
tm.assert_index_equal(df2.index, other_index)
tm.assert_frame_equal(df2, exp2)
def test_constructor_manager_resize(self):
index = list(self.frame.index[:5])
columns = list(self.frame.columns[:3])
result = DataFrame(self.frame._data, index=index,
columns=columns)
tm.assert_index_equal(result.index, Index(index))
tm.assert_index_equal(result.columns, Index(columns))
def test_constructor_from_items(self):
items = [(c, self.frame[c]) for c in self.frame.columns]
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
recons = DataFrame.from_items(items)
tm.assert_frame_equal(recons, self.frame)
# pass some columns
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
recons = DataFrame.from_items(items, columns=['C', 'B', 'A'])
tm.assert_frame_equal(recons, self.frame.loc[:, ['C', 'B', 'A']])
# orient='index'
row_items = [(idx, self.mixed_frame.xs(idx))
for idx in self.mixed_frame.index]
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
recons = DataFrame.from_items(row_items,
columns=self.mixed_frame.columns,
orient='index')
tm.assert_frame_equal(recons, self.mixed_frame)
assert recons['A'].dtype == np.float64
msg = "Must pass columns with orient='index'"
with pytest.raises(TypeError, match=msg):
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
DataFrame.from_items(row_items, orient='index')
# orient='index', but thar be tuples
arr = construct_1d_object_array_from_listlike(
[('bar', 'baz')] * len(self.mixed_frame))
self.mixed_frame['foo'] = arr
row_items = [(idx, list(self.mixed_frame.xs(idx)))
for idx in self.mixed_frame.index]
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
recons = DataFrame.from_items(row_items,
columns=self.mixed_frame.columns,
orient='index')
tm.assert_frame_equal(recons, self.mixed_frame)
assert isinstance(recons['foo'][0], tuple)
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])],
orient='index',
columns=['one', 'two', 'three'])
xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 'B'],
columns=['one', 'two', 'three'])
        tm.assert_frame_equal(rs, xp)
import pandas as pd
from sklearn import linear_model
import statsmodels.api as sm
import numpy as np
from scipy import stats
# df_2018 = pd.read_csv("/mnt/nadavrap-students/STS/data/2018_2019.csv")
# df_2016 = pd.read_csv("/mnt/nadavrap-students/STS/data/2016_2017.csv")
# df_2014 = pd.read_csv("/mnt/nadavrap-students/STS/data/2014_2015.csv")
# df_2012 = pd.read_csv("/mnt/nadavrap-students/STS/data/2012_2013.csv")
# df_2010 = pd.read_csv("/mnt/nadavrap-students/STS/data/2010_2011.csv")
#
# print (df_2018.stsrcom.unique())
# print (df_2016.stsrcom.unique())
# print (df_2014.stsrcom.unique())
# print (df_2012.stsrcom.unique())
# print (df_2010.stsrcom.unique())
# print (df_2018.stsrcHospD.unique())
# print (df_2016.stsrcHospD.unique())
# print (df_2014.stsrcHospD.unique())
# print (df_2012.stsrcHospD.unique())
# print (df_2010.stsrcHospD.unique())
# # print (df_2018.columns.tolist())
# df_union = pd.concat([df_2010, df_2012,df_2014,df_2016,df_2018], ignore_index=True)
# print (df_union)
# print (df_union['surgyear'].value_counts())
# for col in df_union.columns:
# print("Column '{}' have :: {} missing values.".format(col,df_union[col].isna().sum()))
# df_union= pd.read_csv("df_union.csv")
# cols_to_remove = []
# samples = len(df_union)
# for col in df_union.columns:
# nan_vals = df_union[col].isna().sum()
# prec_missing_vals = nan_vals / samples
# print("Column '{}' have :: {} missing values. {}%".format(col, df_union[col].isna().sum(), round(prec_missing_vals,3)))
# print (cols_to_remove)
#
# df_union.drop(cols_to_remove, axis=1, inplace=True)
# print("Number of Features : ",len(df_union.columns))
# for col in df_union.columns:
# print("Column '{}' have :: {} missing values.".format(col,df_union[col].isna().sum()))
#
# df_union.to_csv("df union after remove.csv")
# df_2018_ = pd.read_csv("/mnt/nadavrap-students/STS/data/2018_2019.csv")
df_all= pd.read_csv("/tmp/pycharm_project_723/df_union.csv")
print (df_all.reoperation.unique())
print (df_all.stsrcHospD.unique())
print (df_all.stsrcom.unique())
# mask = df_2018_['surgyear'] == 2018
# df_all = df_2018_[mask]
# mask_reop = df_all['reoperation'] == 1
# df_reop = df_all[mask_reop]
# df_op = df_all[~mask_reop]
def create_df_for_bins_hospid(col_mort):
df1 = df_all.groupby(['hospid', 'surgyear'])['hospid'].count().reset_index(name='total')
df2 = df_all.groupby(['hospid', 'surgyear'])['reoperation'].apply(lambda x: (x == 1).sum()).reset_index(
name='Reop')
df3 = df_all.groupby(['hospid', 'surgyear'])['reoperation'].apply(lambda x: (x == 0).sum()).reset_index(
name='FirstOperation')
df_aggr = pd.read_csv("aggregate_csv.csv")
mask_reop = df_all['reoperation'] == 1
df_reop = df_all[mask_reop]
df_op = df_all[~mask_reop]
dfmort = df_all.groupby(['hospid', 'surgyear'])[col_mort].apply(lambda x: (x == 1).sum()).reset_index(
name='Mortality_all')
dfmortf = df_op.groupby(['hospid', 'surgyear'])[col_mort].apply(lambda x: (x == 1).sum()).reset_index(
name='Mortality_first')
dfmortr = df_reop.groupby(['hospid', 'surgyear'])[col_mort].apply(lambda x: (x == 1).sum()).reset_index(
name='Mortality_reop')
df_comp = df_all.groupby(['hospid', 'surgyear'])['complics'].apply(lambda x: (x == 1).sum()).reset_index(
name='Complics_all')
df_compr = df_reop.groupby(['hospid', 'surgyear'])['complics'].apply(lambda x: (x == 1).sum()).reset_index(
name='Complics_reop')
df_compf = df_op.groupby(['hospid', 'surgyear'])['complics'].apply(lambda x: (x == 1).sum()).reset_index(
name='Complics_FirstOperation')
d1 = pd.merge(df1, df3, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d2 = pd.merge(d1, df2, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
df5 = pd.merge(df_aggr, d2, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'],
how='inner') # how='left', on=['HospID','surgyear'])
del df5["Unnamed: 0"]
d3 = pd.merge(df5, dfmort, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d4 = pd.merge(d3, dfmortf, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d5 = pd.merge(d4, dfmortr, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d6 = pd.merge(d5, df_comp, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d7 = pd.merge(d6, df_compf, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d8 = pd.merge(d7, df_compr, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
# df_sum_all_Years_total = pd.merge(d8, df_19, on='HospID', how='outer')
d8.fillna(0, inplace=True)
d8['mort_rate_All'] = (d8['Mortality_all'] / d8['total']) * 100
d8['Mortality_First_rate'] = (d8['Mortality_first'] / d8['FirstOperation']) * 100
d8['Mortality_Reop_rate'] = (d8['Mortality_reop'] / d8['Reop']) * 100
d8['Complics_rate_All'] = (d8['Complics_all'] / d8['total']) * 100
d8['Complics_First_rate'] = (d8['Complics_FirstOperation'] / d8['FirstOperation']) * 100
d8['Complics_Reop_rate'] = (d8['Complics_reop'] / d8['Reop']) * 100
d8.to_csv('hospid_year_allyears.csv')
df_PredMort_all = df_all.groupby(['hospid', 'surgyear'])['predmort'].mean().reset_index(name='PredMort_All_avg')
df_PredMort_op = df_op.groupby(['hospid', 'surgyear'])['predmort'].mean().reset_index(name='PredMort_First_avg')
df_PredMort_reop = df_reop.groupby(['hospid', 'surgyear'])['predmort'].mean().reset_index(
name='PredMort_Reoperation_avg')
df_PredComp_all = df_all.groupby(['hospid', 'surgyear'])['predmm'].mean().reset_index(name='PredComp_All_avg')
df_PredComp_op = df_op.groupby(['hospid', 'surgyear'])['predmm'].mean().reset_index(name='PredComp_First_avg')
df_PredComp_reop = df_reop.groupby(['hospid', 'surgyear'])['predmm'].mean().reset_index(
name='PredComp_Reoperation_avg')
d19 = pd.merge(d8, df_PredMort_all, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d9 = pd.merge(d19, df_PredMort_op, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d10 = pd.merge(d9, df_PredMort_reop, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d14 = pd.merge(d10, df_PredComp_all, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d11 = pd.merge(d14, df_PredComp_op, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d12 = pd.merge(d11, df_PredComp_reop, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='outer')
d12.fillna(0, inplace=True)
d12['Mort_observe/expected_All'] = (d12['mort_rate_All'] / d12['PredMort_All_avg'])
d12['Mort_observe/expected_First'] = (d12['Mortality_First_rate'] / d12['PredMort_First_avg'])
d12['Mort_observe/expected_Reop'] = (d12['Mortality_Reop_rate'] / d12['PredMort_Reoperation_avg'])
d12[['log_All_Mort', 'log_First_Mort', 'log_Reoperation_Mort']] = np.log2(
d12[['Mort_observe/expected_All', 'Mort_observe/expected_First', 'Mort_observe/expected_Reop']].replace(0,
np.nan))
d12.fillna(0, inplace=True)
d12['Comp_observe/expected_All'] = (d12['Complics_rate_All'] / d12['PredComp_All_avg'])
d12['Comp_observe/expected_First'] = (d12['Complics_First_rate'] / d12['PredComp_First_avg'])
d12['Comp_observe/expected_Reop'] = (d12['Complics_Reop_rate'] / d12['PredComp_Reoperation_avg'])
d12[['log_All_Comp', 'log_First_Comp', 'log_Reoperation_Comp']] = np.log2(
d12[['Comp_observe/expected_All', 'Comp_observe/expected_First', 'Comp_observe/expected_Reop']].replace(0,
np.nan))
d12.fillna(0, inplace=True)
d12.to_csv("hospid_allyears_expec_hospid_stsrcHospD.csv")
print(d12.info())
print(d12.columns.tolist())
#create_df_for_bins_hospid('stsrcHospD')
def create_df_for_bins_surgid(col_mort):
df1 = df_all.groupby(['surgid', 'surgyear'])['surgid'].count().reset_index(name='total')
df2 = df_all.groupby(['surgid', 'surgyear'])['reoperation'].apply(lambda x: (x == 1).sum()).reset_index(
name='Reop')
df3 = df_all.groupby(['surgid', 'surgyear'])['reoperation'].apply(lambda x: (x == 0).sum()).reset_index(
name='FirstOperation')
df_aggr = pd.read_csv("/tmp/pycharm_project_723/aggregate_surgid_csv.csv")
mask_reop = df_all['reoperation'] == 1
df_reop = df_all[mask_reop]
df_op = df_all[~mask_reop]
dfmort = df_all.groupby(['surgid', 'surgyear'])[col_mort].apply(lambda x: (x == 1).sum()).reset_index(
name='Mortality_all')
dfmortf = df_op.groupby(['surgid', 'surgyear'])[col_mort].apply(lambda x: (x == 1).sum()).reset_index(
name='Mortality_first')
dfmortr = df_reop.groupby(['surgid', 'surgyear'])[col_mort].apply(lambda x: (x == 1).sum()).reset_index(
name='Mortality_reop')
df_comp = df_all.groupby(['surgid', 'surgyear'])['complics'].apply(lambda x: (x == 1).sum()).reset_index(
name='Complics_all')
df_compr = df_reop.groupby(['surgid', 'surgyear'])['complics'].apply(lambda x: (x == 1).sum()).reset_index(
name='Complics_reop')
df_compf = df_op.groupby(['surgid', 'surgyear'])['complics'].apply(lambda x: (x == 1).sum()).reset_index(
name='Complics_FirstOperation')
d1 = pd.merge(df1, df3, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d2 = pd.merge(d1, df2, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
df5 = pd.merge(df_aggr, d2, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'],how='inner')
# del df5["Unnamed: 0"]
d3 = pd.merge(df5, dfmort, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d4 = pd.merge(d3, dfmortf, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d5 = pd.merge(d4, dfmortr, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d6 = pd.merge(d5, df_comp, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d7 = pd.merge(d6, df_compf, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d8 = pd.merge(d7, df_compr, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
# df_sum_all_Years_total = pd.merge(d8, df_19, on='HospID', how='outer')
d8.fillna(0, inplace=True)
d8['mort_rate_All'] = (d8['Mortality_all'] / d8['total']) * 100
d8['Mortality_First_rate'] = (d8['Mortality_first'] / d8['FirstOperation']) * 100
d8['Mortality_Reop_rate'] = (d8['Mortality_reop'] / d8['Reop']) * 100
d8['Complics_rate_All'] = (d8['Complics_all'] / d8['total']) * 100
d8['Complics_First_rate'] = (d8['Complics_FirstOperation'] / d8['FirstOperation']) * 100
d8['Complics_Reop_rate'] = (d8['Complics_reop'] / d8['Reop']) * 100
d8.to_csv('surgid_year_allyears.csv')
df_PredMort_all = df_all.groupby(['surgid', 'surgyear'])['predmort'].mean().reset_index(name='PredMort_All_avg')
df_PredMort_op = df_op.groupby(['surgid', 'surgyear'])['predmort'].mean().reset_index(name='PredMort_First_avg')
df_PredMort_reop = df_reop.groupby(['surgid', 'surgyear'])['predmort'].mean().reset_index(
name='PredMort_Reoperation_avg')
df_PredComp_all = df_all.groupby(['surgid', 'surgyear'])['predmm'].mean().reset_index(name='PredComp_All_avg')
df_PredComp_op = df_op.groupby(['surgid', 'surgyear'])['predmm'].mean().reset_index(name='PredComp_First_avg')
df_PredComp_reop = df_reop.groupby(['surgid', 'surgyear'])['predmm'].mean().reset_index(
name='PredComp_Reoperation_avg')
d19 = pd.merge(d8, df_PredMort_all, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d9 = pd.merge(d19, df_PredMort_op, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d10 = pd.merge(d9, df_PredMort_reop, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d14 = pd.merge(d10, df_PredComp_all, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d11 = pd.merge(d14, df_PredComp_op, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d12 = pd.merge(d11, df_PredComp_reop, left_on=['surgid', 'surgyear'], right_on=['surgid', 'surgyear'], how='outer')
d12.fillna(0, inplace=True)
d12['Mort_observe/expected_All'] = (d12['mort_rate_All'] / d12['PredMort_All_avg'])
d12['Mort_observe/expected_First'] = (d12['Mortality_First_rate'] / d12['PredMort_First_avg'])
d12['Mort_observe/expected_Reop'] = (d12['Mortality_Reop_rate'] / d12['PredMort_Reoperation_avg'])
d12[['log_All_Mort', 'log_First_Mort', 'log_Reoperation_Mort']] = np.log2(
d12[['Mort_observe/expected_All', 'Mort_observe/expected_First', 'Mort_observe/expected_Reop']].replace(0,
np.nan))
d12.fillna(0, inplace=True)
d12['Comp_observe/expected_All'] = (d12['Complics_rate_All'] / d12['PredComp_All_avg'])
d12['Comp_observe/expected_First'] = (d12['Complics_First_rate'] / d12['PredComp_First_avg'])
d12['Comp_observe/expected_Reop'] = (d12['Complics_Reop_rate'] / d12['PredComp_Reoperation_avg'])
d12[['log_All_Comp', 'log_First_Comp', 'log_Reoperation_Comp']] = np.log2(
d12[['Comp_observe/expected_All', 'Comp_observe/expected_First', 'Comp_observe/expected_Reop']].replace(0,
np.nan))
d12.fillna(0, inplace=True)
d12.to_csv("surgid_allyears_expec_surgid_stsrcom.csv")
print(d12.info())
print(d12.columns.tolist())
# create_df_for_bins_surgid('stsrcom')
def add_Summary_Data_To_ImputedData(df):
df1 = df.groupby(['hospid', 'surgyear'])['hospid'].count().reset_index(name='HospID_total_CABG')
df2 = df.groupby(['hospid', 'surgyear'])['reoperation'].apply(lambda x: (x == 1).sum()).reset_index(name='HospID_Reop_CABG')
df_aggr = pd.read_csv("aggregate_csv.csv")
df3 = pd.merge(df1, df, left_on=['hospid','surgyear'], right_on=['hospid','surgyear'], how='outer')
df4 = pd.merge(df2, df3, left_on=['hospid','surgyear'], right_on=['hospid','surgyear'], how='outer')
    df5 = pd.merge(df_aggr, df4, left_on=['hospid', 'surgyear'], right_on=['hospid', 'surgyear'], how='inner')
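# The groupby/merge pattern used throughout this script can be exercised on toy data.
# A small self-contained sketch of the same observed/expected + log2 computation,
# reusing the pandas/numpy imports at the top of this file (the toy column names
# below are illustrative and not taken from the STS files):
toy = pd.DataFrame({'hospid': [1, 1, 2, 2],
                    'surgyear': [2018, 2018, 2018, 2018],
                    'mort': [1, 0, 0, 0],
                    'predmort': [0.02, 0.03, 0.01, 0.02]})
obs = toy.groupby(['hospid', 'surgyear'])['mort'].mean().mul(100).reset_index(name='obs_rate')
exp = toy.groupby(['hospid', 'surgyear'])['predmort'].mean().mul(100).reset_index(name='exp_rate')
oe = pd.merge(obs, exp, on=['hospid', 'surgyear'], how='outer')
oe['o_over_e'] = oe['obs_rate'] / oe['exp_rate']          # observed/expected ratio per group
oe['log2_oe'] = np.log2(oe['o_over_e'].replace(0, np.nan))  # log2 score, zeros treated as missing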
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import numpy as np
from dash.dependencies import Output, Input
# dash helps you initialize your application.
# dash_core_components allows you to create interactive components like graphs, dropdowns, or date ranges.
# dash_html_components lets you access HTML tags.
######################################## 1. Get/transform data ##############################################
# read the source data
data = pd.read_csv("data/avocado.csv")
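# A minimal sketch of how the imports above are typically wired together,
# assuming the avocado CSV exposes "Date" and "AveragePrice" columns (the column
# names are an assumption about the file, not verified here):
app = dash.Dash(__name__)
app.layout = html.Div(
    children=[
        html.H1(children="Avocado Analytics"),
        dcc.Graph(
            figure={
                "data": [{"x": data["Date"], "y": data["AveragePrice"],
                          "type": "scatter", "mode": "lines"}],
                "layout": {"title": "Average avocado price over time"},
            }
        ),
    ]
)
# app.run_server(debug=True)  # uncomment to serve this sketch locally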
#! -*- coding: utf-8 -*-
import ToolsNLP
import gensim
import numpy as np
from collections import Counter
import matplotlib.pyplot as plt
from wordcloud import WordCloud
import pandas as pd
import re
import site
import os
class TopicModelWrapper:
'''
Description::
        Run a topic model (LDA) over the input documents.
:param data:
        Input data
[[entry_id1, sentence1], [entry_id2, sentence2], ]
:param config_mw:
        Parameters passed to MecabWrapper
        Default settings
config_mw = {
'dicttype':'ipadic'
,'userdict': ''
,'stopword': ''
}
:param config_tn:
        Parameters passed to MecabWrapper.tokenize
        Default settings
config_tn = {
'pos_filter': [['名詞', '一般', '*']]
,'is_normalized': True
,'is_org': True
,'is_pos': False
}
:param kwargs:
        Topic model settings
        Default settings
kwargs = {
                # basic settings
'stop_term_toprate': 0.05
,'stop_term_limitcnt': 3
,'topic_doc_threshold': 0.01
                # corpus settings
,'is_1len': True
,'is_term_fq': True
                # LDA settings
,'num_topics': 100
,'iterations': 100
,'alpha': 'auto'
}
        # basic settings (stop_term_toprate = drop the top n% most frequent terms
                    ,stop_term_limitcnt = drop terms that occur n times or fewer
                    ,topic_doc_threshold = score threshold for linking a document to a topic)
        # corpus settings (is_1len = whether to drop single-character alphanumeric tokens after tokenization
                    ,is_term_fq = whether to apply the term-frequency filter)
        # LDA settings (num_topics = number of topics
                    ,iterations = number of iterations
                    ,alpha = the alpha hyperparameter)
Usage::
>>> import ToolsNLP
        # escaped for doctest; original code: `data = [i.strip('\n').split('\t') for i in open('data.tsv', 'r')]`
>>> data = [i.strip('\\n').split('\\t') for i in open('data.tsv', 'r')]
>>> # data[:1] output:[['http://news.livedoor.com/article/detail/4778030/', '友人代表のスピーチ、独女はどうこなしている? ...]]
>>>
>>> t = ToolsNLP.TopicModelWrapper(data=data, config_mw={'dicttype':'neologd'})
read data ...
documents count => 5,980
tokenize text ...
make corpus ...
all token count => 24,220
topic count => 100
make topic model ...
make output data ...
DONE
>>>
'''
def __init__(self, data, config_mw={}, config_tn={}, **kwargs):
sitedir = site.getsitepackages()[-1]
installdir = os.path.join(sitedir, 'ToolsNLP')
self._fpath = installdir + '/.fonts/ipaexg.ttf'
        # basic settings
self._stop_term_limitcnt = kwargs.get('stop_term_limitcnt', 3)
self._topic_doc_threshold = kwargs.get('topic_doc_threshold', 0.01)
        # corpus settings
self._is_1len = kwargs.get('is_1len', True)
self._is_term_fq = kwargs.get('is_term_fq', True)
        # LDA settings
self._num_topics = kwargs.get('num_topics', 100)
self._iterations = kwargs.get('iterations', 100)
self._alpha = kwargs.get('alpha', 'auto')
        # tokenizer (MeCab) settings
self._config_mw = config_mw
self._config_tn = config_tn
if 'pos_filter' not in self._config_tn:
self._config_tn['pos_filter'] = [['名詞', '一般', '*']]
self.m = ToolsNLP.MecabWrapper(**self._config_mw)
print('read data ...')
self._data = data
print('{}{:,}'.format('documents count => ',len(self._data)))
print('tokenize text ...')
self._texts = self.__tokenizer_text()
print('make corpus ...')
self._dictionary, self._corpus, self._texts_cleansing = self.__create_corpus()
print('{}{:,}'.format('topic count => ',self._num_topics))
print('make topic model ...')
self._lda = self.__create_topic_model()
self._lda_corpus = self._lda[self._corpus]
print('make output data ...')
self._topic_list, self._topic_df = self.__count_topic()
self._df_docweight = self.__proc_docweight()
print('DONE')
def __tokenizer_text(self):
return [self.m.tokenize(sentence=doc[1], is_list=True, **self._config_tn) for doc in self._data]
def __stop_term(self, is_1len, is_term_fq):
r = re.compile('^[ぁ-んァ-ン0-9a-zA-Z]$')
count = Counter(w for doc in self._texts for w in doc)
if is_1len and is_term_fq:
return [[w for w in doc if count[w] >= self._stop_term_limitcnt and not re.match(r,w)] for doc in self._texts]
elif is_1len and not is_term_fq:
return [[w for w in doc if count[w] >= self._stop_term_limitcnt] for doc in self._texts]
elif not is_1len and is_term_fq:
return [[w for w in doc if not re.match(r,w)] for doc in self._texts]
else:
return self._texts
def __create_corpus(self):
texts = self.__stop_term(self._is_1len, self._is_term_fq)
dictionary = gensim.corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
return dictionary, corpus, texts
def __create_topic_model(self):
return gensim.models.ldamodel.LdaModel(corpus=self._corpus
,id2word=self._dictionary
,num_topics=self._num_topics
,iterations=self._iterations
,alpha=self._alpha
,dtype=np.float64
)
def __count_topic(self):
        topic_counter = Counter(topic[0] for doc in self._lda_corpus for topic in doc
                                if topic[1] > self._topic_doc_threshold).most_common()
        topic_list = list(pd.DataFrame(topic_counter)[0])
        topic_df = pd.DataFrame(topic_counter, columns=['topic', 'cnt'])
        return topic_list, topic_df
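    # A rough, self-contained illustration of the frequency-based stop-term filter
    # implemented in __stop_term above (counts are taken over all tokenized
    # documents; tokens occurring fewer than `stop_term_limitcnt` times are dropped):
    #
    #   from collections import Counter
    #   texts = [['topic', 'model', 'lda'], ['topic', 'corpus'], ['topic']]
    #   count = Counter(w for doc in texts for w in doc)
    #   limit = 2
    #   filtered = [[w for w in doc if count[w] >= limit] for doc in texts]
    #   # filtered -> [['topic'], ['topic'], ['topic']]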
"""
Base and utility classes for pandas objects.
"""
import textwrap
import warnings
import numpy as np
import pandas._libs.lib as lib
import pandas.compat as compat
from pandas.compat import PYPY, OrderedDict, builtins, map, range
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
from pandas.util._decorators import Appender, Substitution, cache_readonly
from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.common import (
is_datetime64tz_dtype, is_datetimelike, is_extension_array_dtype,
is_extension_type, is_list_like, is_object_dtype, is_scalar)
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.core.dtypes.missing import isna
from pandas.core import algorithms, common as com
from pandas.core.accessor import DirNamesMixin
import pandas.core.nanops as nanops
_shared_docs = dict()
_indexops_doc_kwargs = dict(klass='IndexOpsMixin', inplace='',
unique='IndexOpsMixin', duplicated='IndexOpsMixin')
class StringMixin(object):
"""implements string methods so long as object defines a `__unicode__`
method.
Handles Python2/3 compatibility transparently.
"""
# side note - this could be made into a metaclass if more than one
# object needs
# ----------------------------------------------------------------------
# Formatting
def __unicode__(self):
raise AbstractMethodError(self)
def __str__(self):
"""
Return a string representation for a particular Object
Invoked by str(df) in both py2/py3.
Yields Bytestring in Py2, Unicode String in py3.
"""
if compat.PY3:
return self.__unicode__()
return self.__bytes__()
def __bytes__(self):
"""
Return a string representation for a particular object.
Invoked by bytes(obj) in py3 only.
Yields a bytestring in both py2/py3.
"""
from pandas.core.config import get_option
encoding = get_option("display.encoding")
return self.__unicode__().encode(encoding, 'replace')
def __repr__(self):
"""
Return a string representation for a particular object.
Yields Bytestring in Py2, Unicode String in py3.
"""
return str(self)
class PandasObject(StringMixin, DirNamesMixin):
"""baseclass for various pandas objects"""
@property
def _constructor(self):
"""class constructor (for this class it's just `__class__`"""
return self.__class__
def __unicode__(self):
"""
Return a string representation for a particular object.
Invoked by unicode(obj) in py2 only. Yields a Unicode String in both
py2/py3.
"""
# Should be overwritten by base classes
return object.__repr__(self)
def _reset_cache(self, key=None):
"""
Reset cached properties. If ``key`` is passed, only clears that key.
"""
if getattr(self, '_cache', None) is None:
return
if key is None:
self._cache.clear()
else:
self._cache.pop(key, None)
def __sizeof__(self):
"""
Generates the total memory usage for an object that returns
either a value or Series of values
"""
if hasattr(self, 'memory_usage'):
mem = self.memory_usage(deep=True)
if not is_scalar(mem):
mem = mem.sum()
return int(mem)
# no memory_usage attribute, so fall back to
# object's 'sizeof'
return super(PandasObject, self).__sizeof__()
class NoNewAttributesMixin(object):
"""Mixin which prevents adding new attributes.
Prevents additional attributes via xxx.attribute = "something" after a
    call to `self._freeze()`. Mainly used to prevent the user from using
    wrong attributes on an accessor (`Series.cat/.str/.dt`).
If you really want to add a new attribute at a later time, you need to use
`object.__setattr__(self, key, value)`.
"""
def _freeze(self):
"""Prevents setting additional attributes"""
object.__setattr__(self, "__frozen", True)
# prevent adding any attribute via s.xxx.new_attribute = ...
def __setattr__(self, key, value):
# _cache is used by a decorator
# We need to check both 1.) cls.__dict__ and 2.) getattr(self, key)
# because
# 1.) getattr is false for attributes that raise errors
# 2.) cls.__dict__ doesn't traverse into base classes
if (getattr(self, "__frozen", False) and not
(key == "_cache" or
key in type(self).__dict__ or
getattr(self, key, None) is not None)):
raise AttributeError("You cannot add any new attribute '{key}'".
format(key=key))
object.__setattr__(self, key, value)
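# For illustration, a minimal subclass that freezes itself after construction
# (the name `Locked` is hypothetical, shown only to make the behaviour concrete):
#
#   class Locked(NoNewAttributesMixin):
#       def __init__(self):
#           self.allowed = 1
#           self._freeze()
#
#   obj = Locked()
#   obj.allowed = 2   # fine: the attribute already exists
#   obj.other = 3     # AttributeError: You cannot add any new attribute 'other'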
class GroupByError(Exception):
pass
class DataError(GroupByError):
pass
class SpecificationError(GroupByError):
pass
class SelectionMixin(object):
"""
mixin implementing the selection & aggregation interface on a group-like
object sub-classes need to define: obj, exclusions
"""
_selection = None
_internal_names = ['_cache', '__setstate__']
_internal_names_set = set(_internal_names)
_builtin_table = OrderedDict((
(builtins.sum, np.sum),
(builtins.max, np.max),
(builtins.min, np.min),
))
_cython_table = OrderedDict((
(builtins.sum, 'sum'),
(builtins.max, 'max'),
(builtins.min, 'min'),
(np.all, 'all'),
(np.any, 'any'),
(np.sum, 'sum'),
(np.nansum, 'sum'),
(np.mean, 'mean'),
(np.nanmean, 'mean'),
(np.prod, 'prod'),
(np.nanprod, 'prod'),
(np.std, 'std'),
(np.nanstd, 'std'),
(np.var, 'var'),
(np.nanvar, 'var'),
(np.median, 'median'),
(np.nanmedian, 'median'),
(np.max, 'max'),
(np.nanmax, 'max'),
(np.min, 'min'),
(np.nanmin, 'min'),
(np.cumprod, 'cumprod'),
(np.nancumprod, 'cumprod'),
(np.cumsum, 'cumsum'),
(np.nancumsum, 'cumsum'),
))
@property
def _selection_name(self):
"""
return a name for myself; this would ideally be called
the 'name' property, but we cannot conflict with the
Series.name property which can be set
"""
if self._selection is None:
return None # 'result'
else:
return self._selection
@property
def _selection_list(self):
if not isinstance(self._selection, (list, tuple, ABCSeries,
ABCIndexClass, np.ndarray)):
return [self._selection]
return self._selection
@cache_readonly
def _selected_obj(self):
if self._selection is None or isinstance(self.obj, ABCSeries):
return self.obj
else:
return self.obj[self._selection]
@cache_readonly
def ndim(self):
return self._selected_obj.ndim
@cache_readonly
def _obj_with_exclusions(self):
if self._selection is not None and isinstance(self.obj,
ABCDataFrame):
return self.obj.reindex(columns=self._selection_list)
if len(self.exclusions) > 0:
return self.obj.drop(self.exclusions, axis=1)
else:
return self.obj
def __getitem__(self, key):
if self._selection is not None:
raise IndexError('Column(s) {selection} already selected'
.format(selection=self._selection))
if isinstance(key, (list, tuple, ABCSeries, ABCIndexClass,
np.ndarray)):
if len(self.obj.columns.intersection(key)) != len(key):
bad_keys = list(set(key).difference(self.obj.columns))
raise KeyError("Columns not found: {missing}"
.format(missing=str(bad_keys)[1:-1]))
return self._gotitem(list(key), ndim=2)
elif not getattr(self, 'as_index', False):
if key not in self.obj.columns:
raise KeyError("Column not found: {key}".format(key=key))
return self._gotitem(key, ndim=2)
else:
if key not in self.obj:
raise KeyError("Column not found: {key}".format(key=key))
return self._gotitem(key, ndim=1)
def _gotitem(self, key, ndim, subset=None):
"""
sub-classes to define
return a sliced object
Parameters
----------
key : string / list of selections
ndim : 1,2
requested ndim of result
subset : object, default None
subset to act on
"""
raise AbstractMethodError(self)
def aggregate(self, func, *args, **kwargs):
raise AbstractMethodError(self)
agg = aggregate
def _try_aggregate_string_function(self, arg, *args, **kwargs):
"""
if arg is a string, then try to operate on it:
- try to find a function (or attribute) on ourselves
- try to find a numpy function
- raise
"""
assert isinstance(arg, compat.string_types)
f = getattr(self, arg, None)
if f is not None:
if callable(f):
return f(*args, **kwargs)
# people may try to aggregate on a non-callable attribute
# but don't let them think they can pass args to it
assert len(args) == 0
assert len([kwarg for kwarg in kwargs
if kwarg not in ['axis', '_level']]) == 0
return f
f = getattr(np, arg, None)
if f is not None:
return f(self, *args, **kwargs)
raise ValueError("{arg} is an unknown string function".format(arg=arg))
def _aggregate(self, arg, *args, **kwargs):
"""
provide an implementation for the aggregators
Parameters
----------
arg : string, dict, function
*args : args to pass on to the function
**kwargs : kwargs to pass on to the function
Returns
-------
tuple of result, how
Notes
-----
how can be a string describe the required post-processing, or
None if not required
"""
is_aggregator = lambda x: isinstance(x, (list, tuple, dict))
is_nested_renamer = False
_axis = kwargs.pop('_axis', None)
if _axis is None:
_axis = getattr(self, 'axis', 0)
_level = kwargs.pop('_level', None)
if isinstance(arg, compat.string_types):
return self._try_aggregate_string_function(arg, *args,
**kwargs), None
if isinstance(arg, dict):
# aggregate based on the passed dict
if _axis != 0: # pragma: no cover
raise ValueError('Can only pass dict with axis=0')
obj = self._selected_obj
def nested_renaming_depr(level=4):
# deprecation of nested renaming
# GH 15931
warnings.warn(
("using a dict with renaming "
"is deprecated and will be removed in a future "
"version"),
FutureWarning, stacklevel=level)
# if we have a dict of any non-scalars
# eg. {'A' : ['mean']}, normalize all to
# be list-likes
if any(is_aggregator(x) for x in compat.itervalues(arg)):
new_arg = compat.OrderedDict()
for k, v in compat.iteritems(arg):
if not isinstance(v, (tuple, list, dict)):
new_arg[k] = [v]
else:
new_arg[k] = v
# the keys must be in the columns
# for ndim=2, or renamers for ndim=1
# ok for now, but deprecated
# {'A': { 'ra': 'mean' }}
# {'A': { 'ra': ['mean'] }}
# {'ra': ['mean']}
# not ok
# {'ra' : { 'A' : 'mean' }}
if isinstance(v, dict):
is_nested_renamer = True
if k not in obj.columns:
msg = ('cannot perform renaming for {key} with a '
'nested dictionary').format(key=k)
raise SpecificationError(msg)
nested_renaming_depr(4 + (_level or 0))
elif isinstance(obj, ABCSeries):
nested_renaming_depr()
elif (isinstance(obj, ABCDataFrame) and
k not in obj.columns):
raise KeyError(
"Column '{col}' does not exist!".format(col=k))
arg = new_arg
else:
# deprecation of renaming keys
# GH 15931
keys = list(compat.iterkeys(arg))
if (isinstance(obj, ABCDataFrame) and
len(obj.columns.intersection(keys)) != len(keys)):
nested_renaming_depr()
from pandas.core.reshape.concat import concat
def _agg_1dim(name, how, subset=None):
"""
aggregate a 1-dim with how
"""
colg = self._gotitem(name, ndim=1, subset=subset)
if colg.ndim != 1:
raise SpecificationError("nested dictionary is ambiguous "
"in aggregation")
return colg.aggregate(how, _level=(_level or 0) + 1)
def _agg_2dim(name, how):
"""
aggregate a 2-dim with how
"""
colg = self._gotitem(self._selection, ndim=2,
subset=obj)
return colg.aggregate(how, _level=None)
def _agg(arg, func):
"""
run the aggregations over the arg with func
return an OrderedDict
"""
result = compat.OrderedDict()
for fname, agg_how in compat.iteritems(arg):
result[fname] = func(fname, agg_how)
return result
# set the final keys
keys = list(compat.iterkeys(arg))
result = compat.OrderedDict()
# nested renamer
if is_nested_renamer:
result = list(_agg(arg, _agg_1dim).values())
if all(isinstance(r, dict) for r in result):
result, results = compat.OrderedDict(), result
for r in results:
result.update(r)
keys = list(compat.iterkeys(result))
else:
if self._selection is not None:
keys = None
# some selection on the object
elif self._selection is not None:
sl = set(self._selection_list)
# we are a Series like object,
# but may have multiple aggregations
if len(sl) == 1:
result = _agg(arg, lambda fname,
agg_how: _agg_1dim(self._selection, agg_how))
# we are selecting the same set as we are aggregating
elif not len(sl - set(keys)):
result = _agg(arg, _agg_1dim)
# we are a DataFrame, with possibly multiple aggregations
else:
result = _agg(arg, _agg_2dim)
# no selection
else:
try:
result = _agg(arg, _agg_1dim)
except SpecificationError:
# we are aggregating expecting all 1d-returns
# but we have 2d
result = _agg(arg, _agg_2dim)
# combine results
def is_any_series():
# return a boolean if we have *any* nested series
return any(isinstance(r, ABCSeries)
for r in compat.itervalues(result))
def is_any_frame():
            # return a boolean if we have *any* nested frames
return any(isinstance(r, ABCDataFrame)
for r in compat.itervalues(result))
if isinstance(result, list):
return concat(result, keys=keys, axis=1, sort=True), True
elif is_any_frame():
# we have a dict of DataFrames
# return a MI DataFrame
return concat([result[k] for k in keys],
keys=keys, axis=1), True
elif isinstance(self, ABCSeries) and is_any_series():
# we have a dict of Series
# return a MI Series
try:
result = concat(result)
except TypeError:
# we want to give a nice error here if
# we have non-same sized objects, so
# we don't automatically broadcast
raise ValueError("cannot perform both aggregation "
"and transformation operations "
"simultaneously")
return result, True
# fall thru
from pandas import DataFrame, Series
try:
result = DataFrame(result)
except ValueError:
# we have a dict of scalars
result = Series(result,
name=getattr(self, 'name', None))
return result, True
elif is_list_like(arg) and arg not in compat.string_types:
# we require a list, but not an 'str'
return self._aggregate_multiple_funcs(arg,
_level=_level,
_axis=_axis), None
else:
result = None
f = self._is_cython_func(arg)
if f and not args and not kwargs:
return getattr(self, f)(), None
# caller can react
return result, True
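    # Illustrative call shapes handled above (assuming a DataFrame-backed
    # subclass such as a groupby or resample object):
    #
    #   obj.agg('sum')                      # string  -> _try_aggregate_string_function
    #   obj.agg(['sum', 'mean'])            # list    -> _aggregate_multiple_funcs
    #   obj.agg({'A': 'sum', 'B': 'mean'})  # dict    -> per-column aggregation
    #   obj.agg({'A': {'ra': 'mean'}})      # nested renaming -> deprecated (GH 15931)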
def _aggregate_multiple_funcs(self, arg, _level, _axis):
from pandas.core.reshape.concat import concat
if _axis != 0:
raise NotImplementedError("axis other than 0 is not supported")
if self._selected_obj.ndim == 1:
obj = self._selected_obj
else:
obj = self._obj_with_exclusions
results = []
keys = []
# degenerate case
if obj.ndim == 1:
for a in arg:
try:
colg = self._gotitem(obj.name, ndim=1, subset=obj)
results.append(colg.aggregate(a))
# make sure we find a good name
name = com.get_callable_name(a) or a
keys.append(name)
except (TypeError, DataError):
pass
except SpecificationError:
raise
# multiples
else:
for index, col in enumerate(obj):
try:
colg = self._gotitem(col, ndim=1,
subset=obj.iloc[:, index])
results.append(colg.aggregate(arg))
keys.append(col)
except (TypeError, DataError):
pass
except ValueError:
# cannot aggregate
continue
except SpecificationError:
raise
# if we are empty
if not len(results):
raise ValueError("no results")
try:
return concat(results, keys=keys, axis=1, sort=False)
except TypeError:
# we are concatting non-NDFrame objects,
# e.g. a list of scalars
from pandas.core.dtypes.cast import is_nested_object
from pandas import Series
result = Series(results, index=keys, name=self.name)
if is_nested_object(result):
raise ValueError("cannot combine transform and "
"aggregation operations")
return result
def _shallow_copy(self, obj=None, obj_type=None, **kwargs):
"""
return a new object with the replacement attributes
"""
if obj is None:
obj = self._selected_obj.copy()
if obj_type is None:
obj_type = self._constructor
if isinstance(obj, obj_type):
obj = obj.obj
for attr in self._attributes:
if attr not in kwargs:
kwargs[attr] = getattr(self, attr)
return obj_type(obj, **kwargs)
def _is_cython_func(self, arg):
"""
if we define an internal function for this argument, return it
"""
return self._cython_table.get(arg)
def _is_builtin_func(self, arg):
"""
if we define an builtin function for this argument, return it,
otherwise return the arg
"""
return self._builtin_table.get(arg, arg)
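# Roughly, the two lookup helpers above map common callables onto internal names
# or numpy equivalents, e.g. (values follow from the tables defined in this class):
#
#   SelectionMixin()._is_cython_func(np.sum)         # -> 'sum'
#   SelectionMixin()._is_cython_func(builtins.max)   # -> 'max'
#   SelectionMixin()._is_builtin_func(builtins.sum)  # -> np.sum
#   SelectionMixin()._is_builtin_func(len)           # -> len (unknown callables pass through)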
class IndexOpsMixin(object):
""" common ops mixin to support a unified interface / docs for Series /
Index
"""
# ndarray compatibility
__array_priority__ = 1000
def transpose(self, *args, **kwargs):
"""
Return the transpose, which is by definition self.
"""
nv.validate_transpose(args, kwargs)
return self
T = property(transpose, doc="Return the transpose, which is by "
"definition self.")
@property
def _is_homogeneous_type(self):
"""
Whether the object has a single dtype.
By definition, Series and Index are always considered homogeneous.
A MultiIndex may or may not be homogeneous, depending on the
dtypes of the levels.
See Also
--------
DataFrame._is_homogeneous_type
MultiIndex._is_homogeneous_type
"""
return True
@property
def shape(self):
"""
Return a tuple of the shape of the underlying data.
"""
return self._values.shape
@property
def ndim(self):
"""
Number of dimensions of the underlying data, by definition 1.
"""
return 1
def item(self):
"""
Return the first element of the underlying data as a python scalar.
"""
try:
return self.values.item()
except IndexError:
# copy numpy's message here because Py26 raises an IndexError
raise ValueError('can only convert an array of size 1 to a '
'Python scalar')
@property
def data(self):
"""
Return the data pointer of the underlying data.
"""
warnings.warn("{obj}.data is deprecated and will be removed "
"in a future version".format(obj=type(self).__name__),
FutureWarning, stacklevel=2)
return self.values.data
@property
def itemsize(self):
"""
Return the size of the dtype of the item of the underlying data.
"""
warnings.warn("{obj}.itemsize is deprecated and will be removed "
"in a future version".format(obj=type(self).__name__),
FutureWarning, stacklevel=2)
return self._ndarray_values.itemsize
@property
def nbytes(self):
"""
Return the number of bytes in the underlying data.
"""
return self._values.nbytes
@property
def strides(self):
"""
Return the strides of the underlying data.
"""
warnings.warn("{obj}.strides is deprecated and will be removed "
"in a future version".format(obj=type(self).__name__),
FutureWarning, stacklevel=2)
return self._ndarray_values.strides
@property
def size(self):
"""
Return the number of elements in the underlying data.
"""
return self._values.size
@property
def flags(self):
"""
Return the ndarray.flags for the underlying data.
"""
warnings.warn("{obj}.flags is deprecated and will be removed "
"in a future version".format(obj=type(self).__name__),
FutureWarning, stacklevel=2)
return self.values.flags
@property
def base(self):
"""
Return the base object if the memory of the underlying data is shared.
"""
warnings.warn("{obj}.base is deprecated and will be removed "
"in a future version".format(obj=type(self).__name__),
FutureWarning, stacklevel=2)
return self.values.base
@property
def array(self):
# type: () -> Union[np.ndarray, ExtensionArray]
"""
The actual Array backing this Series or Index.
.. versionadded:: 0.24.0
Returns
-------
array : numpy.ndarray or ExtensionArray
This is the actual array stored within this object. This differs
from ``.values`` which may require converting the data
to a different form.
See Also
--------
Index.to_numpy : Similar method that always returns a NumPy array.
Series.to_numpy : Similar method that always returns a NumPy array.
Notes
-----
This table lays out the different array types for each extension
dtype within pandas.
================== =============================
dtype array type
================== =============================
category Categorical
period PeriodArray
interval IntervalArray
IntegerNA IntegerArray
datetime64[ns, tz] DatetimeArray
================== =============================
For any 3rd-party extension types, the array type will be an
ExtensionArray.
For all remaining dtypes ``.array`` will be the :class:`numpy.ndarray`
stored within. If you absolutely need a NumPy array (possibly with
copying / coercing data), then use :meth:`Series.to_numpy` instead.
.. note::
``.array`` will always return the underlying object backing the
Series or Index. If a future version of pandas adds a specialized
extension type for a data type, then the return type of ``.array``
for that data type will change from an object-dtype ndarray to the
new ExtensionArray.
Examples
--------
>>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> ser.array
[a, b, a]
Categories (2, object): [a, b]
"""
return self._values
def to_numpy(self, dtype=None, copy=False):
"""
A NumPy ndarray representing the values in this Series or Index.
.. versionadded:: 0.24.0
Parameters
----------
dtype : str or numpy.dtype, optional
The dtype to pass to :meth:`numpy.asarray`
copy : bool, default False
            Whether to ensure that the returned value is not a view on
            another array. Note that ``copy=False`` does not *ensure* that
            ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensures that
            a copy is made, even if not strictly necessary.
Returns
-------
numpy.ndarray
See Also
--------
Series.array : Get the actual data stored within.
Index.array : Get the actual data stored within.
DataFrame.to_numpy : Similar method for DataFrame.
Notes
-----
The returned array will be the same up to equality (values equal
in `self` will be equal in the returned array; likewise for values
that are not equal). When `self` contains an ExtensionArray, the
dtype may be different. For example, for a category-dtype Series,
``to_numpy()`` will return a NumPy array and the categorical dtype
will be lost.
For NumPy dtypes, this will be a reference to the actual data stored
in this Series or Index (assuming ``copy=False``). Modifying the result
in place will modify the data stored in the Series or Index (not that
we recommend doing that).
For extension types, ``to_numpy()`` *may* require copying data and
coercing the result to a NumPy type (possibly object), which may be
expensive. When you need a no-copy reference to the underlying data,
:attr:`Series.array` should be used instead.
This table lays out the different dtypes and return types of
``to_numpy()`` for various dtypes within pandas.
================== ================================
dtype array type
================== ================================
category[T] ndarray[T] (same dtype as input)
period ndarray[object] (Periods)
interval ndarray[object] (Intervals)
IntegerNA ndarray[object]
datetime64[ns, tz] ndarray[object] (Timestamps)
================== ================================
Examples
--------
>>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> ser.to_numpy()
array(['a', 'b', 'a'], dtype=object)
Specify the `dtype` to control how datetime-aware data is represented.
Use ``dtype=object`` to return an ndarray of pandas :class:`Timestamp`
objects, each with the correct ``tz``.
>>> ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
>>> ser.to_numpy(dtype=object)
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')],
dtype=object)
Or ``dtype='datetime64[ns]'`` to return an ndarray of native
datetime64 values. The values are converted to UTC and the timezone
info is dropped.
>>> ser.to_numpy(dtype="datetime64[ns]")
... # doctest: +ELLIPSIS
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00...'],
dtype='datetime64[ns]')
"""
if (is_extension_array_dtype(self.dtype) or
is_datetime64tz_dtype(self.dtype)):
# TODO(DatetimeArray): remove the second clause.
# TODO(GH-24345): Avoid potential double copy
result = np.asarray(self._values, dtype=dtype)
else:
result = self._values
if copy:
result = result.copy()
return result
@property
def _ndarray_values(self):
# type: () -> np.ndarray
"""
The data as an ndarray, possibly losing information.
The expectation is that this is cheap to compute, and is primarily
used for interacting with our indexers.
- categorical -> codes
"""
if is_extension_array_dtype(self):
return self.array._ndarray_values
return self.values
@property
def empty(self):
return not self.size
def max(self):
"""
Return the maximum value of the Index.
Returns
-------
scalar
Maximum value.
See Also
--------
Index.min : Return the minimum value in an Index.
Series.max : Return the maximum value in a Series.
DataFrame.max : Return the maximum values in a DataFrame.
Examples
--------
>>> idx = pd.Index([3, 2, 1])
>>> idx.max()
3
>>> idx = pd.Index(['c', 'b', 'a'])
>>> idx.max()
'c'
For a MultiIndex, the maximum is determined lexicographically.
>>> idx = pd.MultiIndex.from_product([('a', 'b'), (2, 1)])
>>> idx.max()
('b', 2)
"""
return nanops.nanmax(self.values)
def argmax(self, axis=None):
"""
Return a ndarray of the maximum argument indexer.
See Also
--------
numpy.ndarray.argmax
"""
return nanops.nanargmax(self.values)
def min(self):
"""
Return the minimum value of the Index.
Returns
-------
scalar
Minimum value.
See Also
--------
Index.max : Return the maximum value of the object.
Series.min : Return the minimum value in a Series.
DataFrame.min : Return the minimum values in a DataFrame.
Examples
--------
>>> idx = pd.Index([3, 2, 1])
>>> idx.min()
1
>>> idx = pd.Index(['c', 'b', 'a'])
>>> idx.min()
'a'
For a MultiIndex, the minimum is determined lexicographically.
>>> idx = pd.MultiIndex.from_product([('a', 'b'), (2, 1)])
>>> idx.min()
('a', 1)
"""
return nanops.nanmin(self.values)
def argmin(self, axis=None):
"""
Return a ndarray of the minimum argument indexer.
See Also
--------
numpy.ndarray.argmin
"""
return nanops.nanargmin(self.values)
def tolist(self):
"""
Return a list of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period)
See Also
--------
numpy.ndarray.tolist
"""
if is_datetimelike(self._values):
return [com.maybe_box_datetimelike(x) for x in self._values]
elif is_extension_array_dtype(self._values):
return list(self._values)
else:
return self._values.tolist()
to_list = tolist
def __iter__(self):
"""
Return an iterator of the values.
These are each a scalar type, which is a Python scalar
(for str, int, float) or a pandas scalar
(for Timestamp/Timedelta/Interval/Period)
"""
        # We are explicitly making element iterators.
if is_datetimelike(self._values):
return map(com.maybe_box_datetimelike, self._values)
elif is_extension_array_dtype(self._values):
return iter(self._values)
else:
return map(self._values.item, range(self._values.size))
@cache_readonly
def hasnans(self):
"""
        Return True if there are any NaNs; enables various performance speedups.
"""
return bool(isna(self).any())
def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None,
filter_type=None, **kwds):
""" perform the reduction type operation if we can """
func = getattr(self, name, None)
if func is None:
raise TypeError("{klass} cannot perform the operation {op}".format(
klass=self.__class__.__name__, op=name))
return func(**kwds)
def _map_values(self, mapper, na_action=None):
"""
An internal function that maps values using the input
correspondence (which can be a dict, Series, or function).
Parameters
----------
mapper : function, dict, or Series
The input correspondence object
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
mapping function
Returns
-------
applied : Union[Index, MultiIndex], inferred
The output of the mapping function applied to the index.
If the function returns a tuple with more than one element
a MultiIndex will be returned.
"""
# we can fastpath dict/Series to an efficient map
# as we know that we are not going to have to yield
# python types
if isinstance(mapper, dict):
if hasattr(mapper, '__missing__'):
# If a dictionary subclass defines a default value method,
# convert mapper to a lookup function (GH #15999).
dict_with_default = mapper
mapper = lambda x: dict_with_default[x]
else:
                # Dictionary does not have a default. Thus it's safe to
                # convert to a Series for efficiency.
# we specify the keys here to handle the
# possibility that they are tuples
from pandas import Series
mapper = Series(mapper)
if isinstance(mapper, ABCSeries):
# Since values were input this means we came from either
# a dict or a series and mapper should be an index
if is_extension_type(self.dtype):
values = self._values
else:
values = self.values
indexer = mapper.index.get_indexer(values)
new_values = algorithms.take_1d(mapper._values, indexer)
return new_values
# we must convert to python types
if is_extension_type(self.dtype):
values = self._values
if na_action is not None:
raise NotImplementedError
map_f = lambda values, f: values.map(f)
else:
values = self.astype(object)
values = getattr(values, 'values', values)
if na_action == 'ignore':
def map_f(values, f):
return lib.map_infer_mask(values, f,
isna(values).view(np.uint8))
else:
map_f = lib.map_infer
# mapper is a function
new_values = map_f(values, mapper)
return new_values
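    # A doctest-style sketch of the user-facing mapping this helper backs,
    # assuming a small object-dtype Series (illustration only):
    #
    #     >>> import pandas as pd
    #     >>> pd.Series(['cat', 'dog']).map({'cat': 'kitten', 'dog': 'puppy'})
    #     0    kitten
    #     1     puppy
    #     dtype: object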
def value_counts(self, normalize=False, sort=True, ascending=False,
bins=None, dropna=True):
"""
Return a Series containing counts of unique values.
The resulting object will be in descending order so that the
first element is the most frequently-occurring element.
Excludes NA values by default.
Parameters
----------
normalize : boolean, default False
If True then the object returned will contain the relative
frequencies of the unique values.
sort : boolean, default True
Sort by values.
ascending : boolean, default False
Sort in ascending order.
bins : integer, optional
Rather than count values, group them into half-open bins,
a convenience for ``pd.cut``, only works with numeric data.
dropna : boolean, default True
Don't include counts of NaN.
Returns
-------
counts : Series
See Also
--------
Series.count: Number of non-NA elements in a Series.
DataFrame.count: Number of non-NA elements in a DataFrame.
Examples
--------
>>> index = pd.Index([3, 1, 2, 3, 4, np.nan])
>>> index.value_counts()
3.0 2
4.0 1
2.0 1
1.0 1
dtype: int64
With `normalize` set to `True`, returns the relative frequency by
dividing all values by the sum of values.
>>> s = pd.Series([3, 1, 2, 3, 4, np.nan])
>>> s.value_counts(normalize=True)
3.0 0.4
4.0 0.2
2.0 0.2
1.0 0.2
dtype: float64
**bins**
        Bins can be useful for going from a continuous variable to a
        categorical variable; instead of counting unique
        occurrences of values, divide the index into the specified
        number of half-open bins.
>>> s.value_counts(bins=3)
(2.0, 3.0] 2
(0.996, 2.0] 2
(3.0, 4.0] 1
dtype: int64
**dropna**
With `dropna` set to `False` we can also see NaN index values.
>>> s.value_counts(dropna=False)
3.0 2
NaN 1
4.0 1
2.0 1
1.0 1
dtype: int64
"""
from pandas.core.algorithms import value_counts
result = value_counts(self, sort=sort, ascending=ascending,
normalize=normalize, bins=bins, dropna=dropna)
return result
def unique(self):
values = self._values
if hasattr(values, 'unique'):
result = values.unique()
else:
from pandas.core.algorithms import unique1d
result = unique1d(values)
return result
def nunique(self, dropna=True):
"""
Return number of unique elements in the object.
Excludes NA values by default.
Parameters
----------
dropna : boolean, default True
Don't include NaN in the count.
Returns
-------
nunique : int
"""
uniqs = self.unique()
n = len(uniqs)
if dropna and isna(uniqs).any():
n -= 1
return n
@property
def is_unique(self):
"""
Return boolean if values in the object are unique.
Returns
-------
is_unique : boolean
"""
return self.nunique() == len(self)
@property
def is_monotonic(self):
"""
Return boolean if values in the object are
monotonic_increasing.
.. versionadded:: 0.19.0
Returns
-------
is_monotonic : boolean
"""
from pandas import Index
return Index(self).is_monotonic
is_monotonic_increasing = is_monotonic
@property
def is_monotonic_decreasing(self):
"""
Return boolean if values in the object are
monotonic_decreasing.
.. versionadded:: 0.19.0
Returns
-------
is_monotonic_decreasing : boolean
"""
from pandas import Index
return Index(self).is_monotonic_decreasing
def memory_usage(self, deep=False):
"""
Memory usage of the values
Parameters
----------
deep : bool
Introspect the data deeply, interrogate
`object` dtypes for system-level memory consumption
Returns
-------
bytes used
See Also
--------
numpy.ndarray.nbytes
Notes
-----
Memory usage does not include memory consumed by elements that
are not components of the array if deep=False or if used on PyPy
"""
if hasattr(self.array, 'memory_usage'):
return self.array.memory_usage(deep=deep)
v = self.array.nbytes
if deep and is_object_dtype(self) and not PYPY:
v += lib.memory_usage_of_objects(self.array)
return v
@Substitution(
values='', order='', size_hint='',
sort=textwrap.dedent("""\
sort : boolean, default False
Sort `uniques` and shuffle `labels` to maintain the
relationship.
"""))
@ | Appender(algorithms._shared_docs['factorize']) | pandas.util._decorators.Appender |
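# Usage sketch for the Series/Index ops defined above (value_counts, nunique,
# to_numpy, memory_usage); a minimal, self-contained illustration with a toy
# Series rather than project data:
import numpy as np
import pandas as pd
demo = pd.Series([3, 1, 2, 3, 4, np.nan])
print(demo.value_counts())              # counts of unique values, NaN excluded
print(demo.value_counts(dropna=False))  # include the NaN bucket
print(demo.nunique())                   # unique non-NA values -> 4
print(demo.to_numpy())                  # plain NumPy ndarray of the data
print(demo.memory_usage(deep=True))     # bytes used by index + values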
import numpy as np
import pandas as pd
'''
# load data
source_data = pd.read_csv('../data/exo_sample_data_with_targets.csv')
# columns normalization
def NormalData(df):
newDataFrame = pd.DataFrame(index=df.index)
columns = df.columns.tolist()
for c in columns:
d = df[c]
MAX = d.max()
MIN = d.min()
newDataFrame[c] = ((d-MIN) / (MAX-MIN)).tolist() # ratio
return newDataFrame
normal_data = NormalData(source_data)
# print(normal_data)
# normal_data.to_csv('../data/normal_data.csv')
'''
normal_data = | pd.read_csv('../data/normal_data.csv') | pandas.read_csv |
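# A minimal sketch of the same per-column min-max scaling that the commented-out
# NormalData block above implements, using a toy DataFrame instead of the
# project CSVs (the toy values are illustrative only):
import pandas as pd
toy = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [10.0, 20.0, 40.0]})
scaled = (toy - toy.min()) / (toy.max() - toy.min())
print(scaled)  # every column now spans [0, 1]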
#!/usr/bin/env python
# coding: utf-8
# In[361]:
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
# Python magic
get_ipython().run_line_magic('matplotlib', 'inline')
# Base packages
import gc, sys, re, os
from time import strptime, mktime
# Data processing/preprocessing/modeling packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statistics as stat
from sklearn.preprocessing import *
# Modeling settings
plt.rc("font", size=14)
sns.set(style="white")
sns.set(style="whitegrid", color_codes=True)
# Testing & Validation packages
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, roc_auc_score
# KNN
from sklearn.neighbors import KNeighborsClassifier
# Logistic Regression
from sklearn.linear_model import LogisticRegression
# Random Forest
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import export_graphviz
from six import StringIO
from IPython.display import Image
from pydotplus import *
# SVM
from sklearn.svm import SVC
# In[534]:
X_train1 = pd.read_csv('1/TrainData1.txt', delimiter='\s+', header=None)
X_train2 = pd.read_csv('1/TrainData2.txt', delimiter='\s+', header=None)
X_train3 = pd.read_csv('1/TrainData3.txt', delimiter='\s+', header=None)
X_train4 = pd.read_csv('1/TrainData4.txt', delimiter='\s+', header=None)
X_train5 = pd.read_csv('1/TrainData5.txt', delimiter='\s+', header=None)
X_train6 = | pd.read_csv('1/TrainData6.txt', delimiter='\s+', header=None) | pandas.read_csv |
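# A minimal sketch of a typical next step for matrices loaded as above: a
# train/validation split plus feature scaling. Synthetic data stands in for
# the TrainData*/label files, which are not shown here.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X_demo = pd.DataFrame(np.random.rand(100, 5))           # stand-in for X_train1
y_demo = pd.Series(np.random.randint(0, 2, size=100))   # stand-in class labels
X_tr, X_val, y_tr, y_val = train_test_split(X_demo, y_demo, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)                      # fit on training data only
X_tr_scaled, X_val_scaled = scaler.transform(X_tr), scaler.transform(X_val)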
import re
import sys
import copy
import fire
import pandas as pd
from HTSeq import GFF_Reader
from collections import OrderedDict
TR_PATTERN = re.compile(r'(\S+).p\d+$')
CDS_PATTERN = re.compile(r'(\S+).p\d+$')
EXON_PATTERN = re.compile(r'(\S+).p\d+(.exon\w+)')
UTR5_PATTERN = re.compile(r'(\S+).p\d+(.utr5p\w+)')
UTR3_PATTERN = re.compile(r'(\S+).p\d+(.utr3p\w+)')
ID_PATTERN = {
'mRNA': TR_PATTERN,
'five_prime_UTR': UTR5_PATTERN,
'exon': EXON_PATTERN,
'CDS': CDS_PATTERN,
'three_prime_UTR': UTR3_PATTERN,
}
ATTR_FILTER = ('fixed', 'tr_id', 'gene_id', 'geneID')
def rename_id(gff_attr, old_pre='MSTRG', new_pre='BMSK10'):
name_pattern = re.compile(f'(.*){old_pre}.(\d+)(.*)')
for key in gff_attr:
val = gff_attr[key]
if name_pattern.match(str(val)):
pre, id_idx, sfx = name_pattern.match(val).groups()
gff_attr[key] = f'{pre}{new_pre}{id_idx:0>6}{sfx}'
return gff_attr
def gffline(gffreader_item, rename, name_prefix,
id_filter=ATTR_FILTER):
gff_attr = gffreader_item.attr.copy()
if rename:
gff_attr = rename_id(gff_attr, new_pre=name_prefix)
attr_inf = [f'{key}={val}' for key, val
in gff_attr.items()
if key not in id_filter]
attr_line = ';'.join(attr_inf)
basic_inf = gffreader_item.get_gff_line().split('\t')[:-1]
basic_inf.append(attr_line)
return '\t'.join(basic_inf)
def gtfline(gff_item, rename, name_prefix):
gff_attr = gff_item.attr.copy()
if rename:
gff_attr = rename_id(gff_attr, new_pre=name_prefix)
gene_id = gff_attr.get('gene_id')
if gff_item.type == 'gene':
gene_id = gff_attr.get('ID')
tr_id = gff_attr.get('tr_id')
attr_line = f'gene_id "{gene_id}";'
if gff_item.type != 'gene':
attr_line = f'{attr_line} transcript_id "{tr_id}";'
basic_inf = gff_item.get_gff_line().split('\t')[:-1]
basic_inf.append(attr_line)
return '\t'.join(basic_inf)
def update_gene_inf(gene_inf, gff_item):
if gene_inf is None:
gene_inf = copy.deepcopy(gff_item)
attr_dict = {'ID': gff_item.attr['Parent']}
gene_inf.type = 'gene'
gene_inf.attr = attr_dict
else:
if not gene_inf.attr.get('fixed'):
if gene_inf.iv.start > gff_item.iv.start:
gene_inf.iv.start = gff_item.iv.start
if gene_inf.iv.end < gff_item.iv.end:
gene_inf.iv.end = gff_item.iv.end
return gene_inf
def fix_id(input_id, id_type, flag=True):
if flag:
id_match = ID_PATTERN.get(id_type).match(input_id)
return ''.join(id_match.groups())
else:
return input_id
def gff2dict(gff, fix_id_flag=False):
by_gene_dict = OrderedDict()
by_tr_dict = OrderedDict()
gene_entry_dict = dict()
tr2gene = dict()
for eachline in GFF_Reader(gff):
if eachline.type == 'gene':
gene_id = eachline.attr['ID']
eachline.attr['gene_id'] = gene_id
gene_entry_dict[gene_id] = eachline
gene_entry_dict[gene_id].attr['fixed'] = True
continue
if 'geneID' in eachline.attr:
parent = eachline.attr['geneID']
eachline.attr['Parent'] = parent
else:
parent = eachline.attr['Parent']
if eachline.type in ["transcript", "mRNA"]:
tr_id = fix_id(eachline.attr['ID'], eachline.type, fix_id_flag)
eachline.attr['ID'] = tr_id
gene_id = parent
tr2gene[tr_id] = parent
else:
if 'ID' in eachline.attr:
eachline.attr['ID'] = fix_id(
eachline.attr.get('ID'), eachline.type, fix_id_flag)
tr_id = fix_id(parent, 'mRNA', fix_id_flag)
eachline.attr['Parent'] = tr_id
gene_id = tr2gene[tr_id]
eachline.attr['tr_id'] = tr_id
eachline.attr['gene_id'] = gene_id
by_gene_dict.setdefault(gene_id, []).append(eachline)
by_tr_dict.setdefault(tr_id, []).append(eachline)
gene_entry_dict[gene_id] = update_gene_inf(
gene_entry_dict.get(gene_id), eachline)
return by_gene_dict, by_tr_dict, gene_entry_dict, tr2gene
def wirte_gff_inf(gff_inf, out_pre, fmt='gff',
rename=True, name_prefix='Novel',
zero_len=6):
out_file = f'{out_pre}.{fmt}'
with open(out_file, 'w') as out_inf:
for line in gff_inf:
if fmt == 'gff':
line_str = gffline(line, rename, name_prefix)
elif fmt == 'gtf':
line_str = gtfline(line, rename, name_prefix)
else:
raise ValueError(f'Invalid format {fmt}')
out_inf.write(f'{line_str}\n')
def add_by_tr(raw_tr, f_tr, raw_gene_entry, raw_tr2gene,
outprefix, rename, name_prefix):
out_inf_list = []
for tr_id in raw_tr:
gene_id = raw_tr2gene[tr_id]
if gene_id in raw_gene_entry:
out_inf_list.append(raw_gene_entry.pop(gene_id))
if tr_id in f_tr:
out_inf_list.extend(f_tr[tr_id])
else:
out_inf_list.extend(raw_tr[tr_id])
wirte_gff_inf(out_inf_list, outprefix, fmt='gff',
rename=rename, name_prefix=name_prefix)
wirte_gff_inf(out_inf_list, outprefix, fmt='gtf',
rename=rename, name_prefix=name_prefix)
def add_by_gene(raw_gene, f_gene, raw_gene_entry,
f_gene_entry, outprefix, rename,
name_prefix, rm_gene):
if rm_gene:
rm_gene_df = pd.read_csv(rm_gene, sep='\t',
header=None, index_col=0)
else:
rm_gene_df = | pd.DataFrame([]) | pandas.DataFrame |
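# A standalone sketch of the suffix-stripping that ID_PATTERN / fix_id perform
# above, using plain `re` and an example TransDecoder-style ID (the sample IDs
# here are assumptions for illustration):
import re
exon_pattern = re.compile(r'(\S+).p\d+(.exon\w+)')  # same expression as EXON_PATTERN
tr_pattern = re.compile(r'(\S+).p\d+$')             # same expression as TR_PATTERN
print(''.join(exon_pattern.match('MSTRG.1.1.p1.exon3').groups()))  # MSTRG.1.1.exon3
print(''.join(tr_pattern.match('MSTRG.1.1.p1').groups()))          # MSTRG.1.1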
import pandas as pd
df = | pd.DataFrame() | pandas.DataFrame |
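# A generic sketch of how an empty frame like `df` above is usually populated:
# build the rows first and construct the DataFrame once, rather than appending
# inside a loop (the column names here are illustrative):
rows = [{'id': 1, 'value': 0.5}, {'id': 2, 'value': 1.5}]
df_filled = pd.DataFrame(rows, columns=['id', 'value'])
print(df_filled)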
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun May 10 17:52:28 2020
@author: ministudio
"""
from binance.client import Client
import pandas as pd
import sys
from datetime import datetime
import smtplib
import time
from decimal import Decimal as D, ROUND_DOWN
client = Client()
def execOrder(user, user_name, multi, sym, Ncrt, maxValue, ticker, side, mailingList, strategyName, attempts = 1):
'''user, multi, symbol, Ncrt, maxValue, ticker, side, mailingList, strategyName, attempts'''
amount = Ncrt
filled = False
try:
if side == 'buy':
order = user.order_market_buy(symbol=sym,quantity=amount)
if side == 'sell':
order = user.order_market_sell(symbol=sym,quantity=amount)
if order['status'] == 'FILLED':
print(f'{datetime.now()} FILLED {amount} @{ticker} {sym} for {user_name}')
#logging.info(f'{datetime.now()} {side} @ {ticker} for {user}')
filled = True
return filled
except:
error = sys.exc_info()[1]
if error.code == -2010:
print('insufficient balance... buy/sell-ing as much as possible')
try:
if side == 'buy':
toBuy = round(float(user.get_asset_balance(sym[3:])['free']),6)
newQnty = round(Ncrt/(maxValue/toBuy),6)
order = user.order_market_buy(symbol=sym,quantity=newQnty)
if side == 'sell':
free = float(user.get_asset_balance(sym[:3])['free'])
newQnty = float(D(free).quantize(D('0.000001'), rounding=ROUND_DOWN))# - 0.000001
print(free, newQnty)
order = user.order_market_sell(symbol=sym,quantity=newQnty)
amount = newQnty
print('STATUS ->',order['status'],'at',order['executedQty'])
filled = True
return filled
except:
print(error)
return False
else:
print(datetime.now(),error.message)
finally:
if filled:
thBNB = 0.1
if feeBNB(user,thBNB):
print(user_name,f'W A R N I N G - - - BNB < {thBNB}')
                # send a warning email
                #mailReport(mailingList,f'{strategyName}',f'{user_name} -- BNB below {thBNB}')
return filled
else:
if attempts < 2:
time.sleep(0.1)
                # retry once and return the retry's result so callers see the final status
                return execOrder(user, user_name, multi, sym, Ncrt, maxValue, ticker, side, mailingList, strategyName, attempts+1)
else:
                #mailReport(mailingList,f'{strategyName}',f'{user_name} -- could not execute order {sym}')
return False
def getCandles(timeframe, sym, length = 1, reverse= True, exchange='binance'):
    ''' return the DataFrame of the requested candles '''
dataCandles = [ 'MTS', 'OPEN', 'HIGH', 'LOW', 'CLOSE', 'VOLUME', 'CLOSETIME',
'QUOTE_ASSET_VOLUME', 'N_TRADES', 'TAKER_BUY_BASE', 'TAKER_BUY_QUOTE', 'TO_IGNORE']
timeframeStandard = ['1m','5m','15m','30m','1h','2h','4h','6h','8h','12h','1d','1M']
if timeframe in timeframeStandard:
candlesBIN = client.get_klines(symbol=sym, interval=timeframe)
candles = pd.DataFrame(candlesBIN, columns = dataCandles)
candles.MTS = pd.to_numeric(candles.MTS/1000,downcast='integer')
candles.HIGH = pd.to_numeric(candles.HIGH)
candles.LOW = | pd.to_numeric(candles.LOW) | pandas.to_numeric |
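# A minimal offline sketch of the same DataFrame/to_numeric conversion that
# getCandles applies, using one hand-made kline row in Binance's 12-column
# layout instead of a live API call (the numbers are illustrative):
import pandas as pd
demo_cols = ['MTS', 'OPEN', 'HIGH', 'LOW', 'CLOSE', 'VOLUME', 'CLOSETIME',
             'QUOTE_ASSET_VOLUME', 'N_TRADES', 'TAKER_BUY_BASE', 'TAKER_BUY_QUOTE', 'TO_IGNORE']
demo_row = [[1589100000000, '9500.0', '9600.0', '9450.0', '9550.0', '12.3',
             1589103599999, '117000.0', 150, '6.0', '57000.0', '0']]
demo_candles = pd.DataFrame(demo_row, columns=demo_cols)
demo_candles.MTS = pd.to_numeric(demo_candles.MTS / 1000, downcast='integer')
for col in ['OPEN', 'HIGH', 'LOW', 'CLOSE', 'VOLUME']:
    demo_candles[col] = pd.to_numeric(demo_candles[col])
print(demo_candles.dtypes)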
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
__all__ = ['plot_local_direction', 'plot_variable_importance',
'plot_local_direction_histogram', 'plot_single_direction']
def label(x, color, label):
ax = plt.gca()
ax.text(0, 0.2, label, color='black', #fontweight="bold",
ha="left", va="center", transform=ax.transAxes)
def plot_variable_importance(importances, plot_type='bar', normalize=False,
names=None, xlabel=None, title=None,
figsize=(8, 6), ax=None):
"""Plot global variable importances."""
if ax is None:
fig, ax = plt.subplots(figsize=figsize)
else:
fig = None
n_features = importances.shape[0]
if normalize:
importances = importances / np.sum(importances)
    # re-order from largest to smallest
order = np.argsort(importances)
if names is None:
names = ['Feature {}'.format(i + 1) for i in order]
else:
names = names[order]
margin = 0.1 if plot_type == 'lollipop' else 0.0
ax.set_xlim(0, importances.max() + margin)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
if plot_type == 'bar':
ax.barh(y=np.arange(n_features), width=importances[order],
color='gray', tick_label=names, height=0.5)
elif plot_type == 'lollipop':
for k in range(n_features):
ax.hlines(y=k, xmin=0, xmax=importances[order[k]],
color='k', linewidth=2)
ax.plot(importances[order[k]], k, 'o', color='k')
ax.axvline(x=0, ymin=0, ymax=1, color='k', linestyle='--')
ax.set_yticks(range(n_features))
ax.set_yticklabels(names)
else:
raise ValueError(
"Unrecognized plot_type. Should be 'bar' or 'lollipop'")
if xlabel is not None:
ax.set_xlabel(xlabel)
else:
ax.set_xlabel('Variable Importance')
if title:
ax.set_title(title)
return fig, ax
def plot_local_direction(importances, sort_features=False, feature_names=None,
figsize=(10, 6), palette='Set3', scale='count',
inner='quartile'):
n_features = importances.shape[1]
if feature_names is None:
feature_names = ["Feature {}".format(i + 1) for i in range(n_features)]
feature_names = np.asarray(feature_names)
if sort_features:
order = np.argsort(np.var(importances, axis=0))[::-1]
importances = importances[:, order]
feature_names = feature_names[order]
    fig, ax = plt.subplots(figsize=figsize)
data = pd.melt( | pd.DataFrame(importances, columns=feature_names) | pandas.DataFrame |
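# A quick usage sketch for plot_variable_importance defined above (assumes this
# module's functions and imports are in scope; the importances are random):
rng = np.random.default_rng(0)
fake_importances = rng.random(6)
fig, ax = plot_variable_importance(fake_importances, plot_type='lollipop',
                                   normalize=True, xlabel='Share of importance')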
if __name__ == "__main__":
from tensorflow.python import keras
import tensorflow as tf
import numpy as np
import triplet_loss
import pickle
import io
from sklearn.neighbors import KDTree
import platform
import pathlib
import pandas
log_dir = 'C:\\tmp\\tensorflow_logdir' if platform.system() == "Windows" else '/tmp/tensorflow_logdir'
num_cat = 15
num_img = 700
training_size = int(num_img/7*5)
test_size = int(num_img/7)
validation_size = num_img - training_size - test_size
pd_columns = ["filename", "class"]
train_images_labels = pandas.DataFrame(columns=pd_columns)
val_images_labels = pandas.DataFrame(columns=pd_columns)
test_images_labels = | pandas.DataFrame(columns=pd_columns) | pandas.DataFrame |
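# A minimal sketch of one way the filename/class tables above are filled,
# assuming a hypothetical layout with one sub-folder per class under `root`
# (the glob pattern and path are assumptions):
import pathlib
import pandas
def list_images(root):
    rows = []
    for img_path in pathlib.Path(root).glob('*/*.jpg'):
        rows.append({'filename': str(img_path), 'class': img_path.parent.name})
    return pandas.DataFrame(rows, columns=['filename', 'class'])
# train_images_labels = list_images('data/train')  # path is hypothetical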