Model description

[More Information Needed]

Intended uses & limitations

[More Information Needed]

Training Procedure

[More Information Needed]

Hyperparameters

Hyperparameter Value
memory
steps [('preprocessor', TextPreprocessor()), ('tfidf', TfidfVectorizer(max_features=2000, min_df=2, ngram_range=(1, 2))), ('classifier', XGBClassifier(base_score=None, booster=None, callbacks=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, device=None, early_stopping_rounds=None, enable_categorical=False, eval_metric='mlogloss', feature_types=None, gamma=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=None, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, multi_strategy=None, n_estimators=None, n_jobs=None, num_class=8, num_parallel_tree=None, ...))]
verbose False
preprocessor TextPreprocessor()
tfidf TfidfVectorizer(max_features=2000, min_df=2, ngram_range=(1, 2))
classifier XGBClassifier(base_score=None, booster=None, callbacks=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, device=None, early_stopping_rounds=None, enable_categorical=False, eval_metric='mlogloss', feature_types=None, gamma=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=None, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, multi_strategy=None, n_estimators=None, n_jobs=None, num_class=8, num_parallel_tree=None, ...)
tfidf__analyzer word
tfidf__binary False
tfidf__decode_error strict
tfidf__dtype <class 'numpy.float64'>
tfidf__encoding utf-8
tfidf__input content
tfidf__lowercase True
tfidf__max_df 1.0
tfidf__max_features 2000
tfidf__min_df 2
tfidf__ngram_range (1, 2)
tfidf__norm l2
tfidf__preprocessor
tfidf__smooth_idf True
tfidf__stop_words
tfidf__strip_accents
tfidf__sublinear_tf False
tfidf__token_pattern (?u)\b\w\w+\b
tfidf__tokenizer
tfidf__use_idf True
tfidf__vocabulary
classifier__objective multi:softmax
classifier__base_score
classifier__booster
classifier__callbacks
classifier__colsample_bylevel
classifier__colsample_bynode
classifier__colsample_bytree
classifier__device
classifier__early_stopping_rounds
classifier__enable_categorical False
classifier__eval_metric mlogloss
classifier__feature_types
classifier__gamma
classifier__grow_policy
classifier__importance_type
classifier__interaction_constraints
classifier__learning_rate
classifier__max_bin
classifier__max_cat_threshold
classifier__max_cat_to_onehot
classifier__max_delta_step
classifier__max_depth
classifier__max_leaves
classifier__min_child_weight
classifier__missing nan
classifier__monotone_constraints
classifier__multi_strategy
classifier__n_estimators
classifier__n_jobs
classifier__num_parallel_tree
classifier__random_state 42
classifier__reg_alpha
classifier__reg_lambda
classifier__sampling_method
classifier__scale_pos_weight
classifier__subsample
classifier__tree_method
classifier__validate_parameters
classifier__verbosity
classifier__num_class 8
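
The hyperparameters above fully describe the pipeline structure. Below is a minimal sketch of how an equivalent pipeline could be constructed; note that TextPreprocessor is a custom transformer that is not published with this card, so the class shown here is a hypothetical stand-in that lowercases and lemmatizes text, matching the behaviour described in the model description below.

import re

import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

nltk.download("wordnet")  # needed once for the WordNet lemmatizer


class TextPreprocessor(BaseEstimator, TransformerMixin):
    """Hypothetical stand-in: lowercase and lemmatize each document."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        lemmatizer = WordNetLemmatizer()
        cleaned = []
        for doc in X:
            tokens = re.findall(r"\b\w+\b", doc.lower())
            cleaned.append(" ".join(lemmatizer.lemmatize(token) for token in tokens))
        return cleaned


pipeline = Pipeline(steps=[
    ("preprocessor", TextPreprocessor()),
    ("tfidf", TfidfVectorizer(max_features=2000, min_df=2, ngram_range=(1, 2))),
    ("classifier", XGBClassifier(
        objective="multi:softmax",  # classifier__objective
        eval_metric="mlogloss",     # classifier__eval_metric
        num_class=8,                # classifier__num_class
        random_state=42,            # classifier__random_state
    )),
])

# pipeline.fit(train_texts, train_labels) would then train the full pipeline.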

Model Plot

Pipeline(steps=[('preprocessor', TextPreprocessor()),
                ('tfidf', TfidfVectorizer(max_features=2000, min_df=2, ngram_range=(1, 2))),
                ('classifier', XGBClassifier(base_score=None, booster=None, callbacks=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, device=None, early_stopping_rounds=None, enable_categorical=False, eval_metric='mlogloss', feature_types=None, gamma=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=None, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, multi_strategy=None, n_estimators=None, n_jobs=None, num_class=8, num_parallel_tree=None, ...))])

Evaluation Results

[More Information Needed]

How to Get Started with the Model

[More Information Needed]
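
No loading example is provided with this card. The snippet below is a minimal sketch, assuming the fitted Pipeline was serialized with joblib and uploaded to the Hugging Face Hub; the repository id and file name are hypothetical placeholders, and the custom TextPreprocessor class must be importable before unpickling.

import joblib
from huggingface_hub import hf_hub_download

# Hypothetical repository id and file name; replace with the actual values for this model.
model_path = hf_hub_download(repo_id="theterryzhang/<repo-name>", filename="model.joblib")

# The custom TextPreprocessor class must be defined or importable here,
# exactly as it was when the pipeline was pickled.
pipeline = joblib.load(model_path)

print(pipeline.predict(["example document to classify"]))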

Model Card Authors

This model card was written by the following authors:

[More Information Needed]

Model Card Contact

You can contact the model card authors through the following channels: [More Information Needed]

Citation

Below you can find citation information for this model.

BibTeX:

[More Information Needed]

Limitations

This model is a poorly performing XGBClassifier, uploaded only for testing purposes.

Model Card Author

theterryzhang

Model Description

This model applies basic text preprocessing (lowercasing and lemmatization), TF-IDF vectorization, and then fits an XGBClassifier.
