---
title: "Predicting customer behaviour using scikit-learn"
author: Anurag Patil
format:
  html:
    grid:
      body-width: 1000px
      margin-width: 150px
    code-fold: true
    code-tools: true
    theme: solar
    mainfont: sans-serif
    monofont: monospace
    page-layout: full
    toc: true
    toc-depth: 2
    toc-expand: true
panel: center
jupyter: data_science
---
<br>
#### Before getting into the report
1. All the code cells are collapsed so the reader can focus on the analysis.
2. If you are interested in the code at a certain point, you can unfold that code block by clicking the **"Code"** button provided.
3. If you want to see all the code, you can unfold every code cell at once using the option in the top right-hand corner. Alternatively, you can visit the GitHub repository where all the code is hosted:
[repository](https://github.com/ANP-Oxy/GoalZone)
---
- This is a revamped version of the original case study that I worked on for DataCamp's **"Data Scientist Associate certification"**.
- The dataset as well as the problem scenario were provided by DataCamp.
- In this report I will be working through a fictitious business scenario dealing with a customer behaviour classification problem.
- I will clean the data, validate it, do some visual inspection, and then work on modelling it.
- The tools used for this analysis are Python, NumPy, pandas, Matplotlib, seaborn, scikit-learn, and imbalanced-learn; the report was generated with Quarto.
## 1. Scenario
- GoalZone is a fitness club chain in Canada.
- GoalZone offers a range of fitness classes in two capacities: 25 and 15 spots.
- Some classes are always fully booked. Fully booked classes often have a low attendance rate.
- GoalZone wants to increase the number of spaces available for classes.
- They want to do this by predicting whether a member will attend the class or not.
- If they can predict that a member will not attend the class, they can make another space available.
---
## 2. Data
```{python}
# Importing the necessary modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("seaborn-v0_8-whitegrid")
# let's get the data
url = "https://s3.amazonaws.com/talent-assets.datacamp.com/fitness_class_2212.csv"
# this will fetch the data and put it into a dataframe.
data = pd.read_csv(url)
data.head()
```
**The data has 8 columns and 1,500 rows.**<br>
The basic descriptions of the columns are:<br>
1. `booking_id` - the unique identifier of the booking
2. `months_as_member` - the number of months the person has been a member of the fitness club
3. `weight` - the member's weight
4. `days_before` - the number of days before the class that the member registered
5. `day_of_week` - the day of the week of the class
6. `time` - the time of the class (AM or PM)
7. `category` - the category of the fitness class
8. `attended` - whether the member attended the class, with 0 corresponding to "No" and 1 to "Yes"
## 3. Defining the problem
- **The business wants to predict whether members will attend a class using the data provided.**
- Even though the specific task is predicting whether a member will attend the class or not, we should keep in mind that the end goal is to increase the number of members attending classes.
- Keeping the big picture in the back of our mind before diving into the analysis is necessary.
- Since our target variable is binary, this is essentially a **Binary Classification** problem.
- We have to predict whether the member who has registered for the class will attend it or not.

Now that we have taken a glimpse at the data and decided what our end goal is, let's get into the actual analysis. We will go through this step by step: first validating the data, then doing some exploratory analysis, and finally training some machine learning models.
## 4. Data cleaning and validation.
<br>
Let's take a look at some information about the columns:
1. "months_as_member" : 1 is the minimum valid value for this column
2. "weight" : 40.00 is the minimum valid value
3. "days_before" : 1 is the minimum valid value
4. "day_of_week" : "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" are valid values
5. "time" : "AM", "PM" are valid values
6. "category" : "Yoga", "Aqua", "Strength", "HIIT", "Cycling", "unknown" are valid values
7. "attended" : 1, 0 are valid values

Below I'm creating a Python dictionary that contains this metadata about the data. It will be used later on to validate the integrity of the data.
```{python}
data_dict = {"booking_id" : "The unique identifier of the booking.",
"months_as_member" : 1, # minimum value for this column
"weight" : 40.00, # minimum value
"days_before" : 1, # minimum value
"day_of_week" : ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], # valid categories
"time" : ["AM" , "PM"], # valid categories
"category" : ["Yoga", "Aqua" , "Strength", "HIIT", "Cycling", "unknown"], # valid categories
"attended" : [1, 0] # valid categories
}
```
#### Let's take a look at what kind of problems the data has. <br>
First we will check the categorical columns.
```{python, attr.output='.details summary="Output"'}
#| output: false
# Select the categorical columns
# Print value counts for each of those columns
for column in data.select_dtypes("object").columns:
    print(f"{column} :\n{data[column].value_counts()} \n")
```
**I have disabled the output here because it is very long, but you can execute the code and take a look at it.**<br>
**The problems identified in the categorical columns are:**
1. The "days_before" column is not completely numeric; some cells have the string "days" written alongside the number of days. This is a data entry inconsistency and needs to be fixed.
2. The "day_of_week" column also has data entry errors, and the weekday names aren't consistent.
3. The "time" column seems fine.
4. The "category" column has "-" as the value in a few rows, which will need to be replaced.
**I will perform a series of operations on the data and fix the problems mentioned above.**
1) First we fix the "days_before" column by eliminating string characters from it.
2) Next we fix the "day_of_week" column using the helper function defined in the code cell below.
3) Then we fix the "category" column by replacing "-" with "unknown".
4) After that we change the datatypes of the columns.
5) Lastly we drop the "booking_id" column as it is not useful for modelling.
```{python}
# This function fixes inconsistent weekday names in the day_of_week column
def fix_dow(df):
    df['day_of_week'] = df['day_of_week'].str.replace('Fri.', "Fri")
    df['day_of_week'] = df['day_of_week'].str.replace('Monday', "Mon")
    df['day_of_week'] = df['day_of_week'].str.replace('Wednesday', "Wed")
    return df['day_of_week']

# This function carries out the cleaning steps outlined above
def clean_data(data=data):
    return (data
            .assign(days_before=data['days_before'].str.extract(r'(\d+)'),
                    day_of_week=fix_dow,
                    category=data['category'].replace("-", "unknown"))
            .astype({"days_before": "int", "day_of_week": "category", "time": "category",
                     "category": "category", "attended": "category"})
            .drop("booking_id", axis=1)  # drop the booking_id column
            )

# let's store the clean dataframe in a new variable
clean_data = clean_data()
clean_data.head()
```
We can look at the output of the cleaned data above.
Let's validate this data using a utility function I have written.
It checks each column to see if it contains only valid categories or values in the valid numerical range. A minimal sketch of what such a function could look like is shown after the output below.
```{python}
from utils import validate
validate(clean_data, data_dict)
```
As we can see, the data has passed the validation tests that were defined. <br>
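For reference, here is a minimal sketch of what such a `validate` helper could look like. The actual implementation lives in `utils.py`, so the function name and logic below are assumptions rather than the original code.

```{python}
# A minimal sketch of a validate()-style helper (assumption; the real one is in utils.py).
def validate_sketch(df, data_dict):
    for column, rule in data_dict.items():
        if column not in df.columns:
            continue  # e.g. booking_id is dropped during cleaning
        if isinstance(rule, (int, float)):
            # A numeric rule stores the minimum allowed value
            assert df[column].min() >= rule, f"{column} falls below the minimum of {rule}"
        elif isinstance(rule, list):
            # A categorical rule stores the set of allowed values
            assert df[column].isin(rule).all(), f"{column} contains invalid categories"
    print("All validation checks passed.")
```

Calling something like `validate_sketch(clean_data, data_dict)` would raise an `AssertionError` naming the offending column if any check failed.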
#### Does the data have missing values?
```{python}
print(clean_data.isna().sum())
```
- It looks like the weight column has a few missing values.
- We will have to impute these values later, before we train the model.
- Let's keep going for now. <br>
**This ends the section on data validation and cleaning. Now we will start analyzing the data.**
## 5. Visual inspection
- Now that we have cleaned the data, it is time to start analyzing it.
- Let's take a look at basic summary statistics for the numerical variables.
```{python}
clean_data.describe()
```
It doesn't look like there are any anomalies in this data, so we can keep going.
#### Let's see the distribution of our target variable.
```{python}
#| fig-align: center
ax = sns.countplot(data = clean_data, x = 'attended', width=0.5)
ax.set_xticklabels(labels=['No', 'Yes'])
ax.set_yticks([])
ax.bar_label(ax.containers[0], label_type='center', color='black')
ax.set_xlabel("Attended the class or not?");
```
- We can see that the number of people who did not attend the class is more than twice the number who did attend.
- So the classes of our target variable are imbalanced.
- We have to be careful because of this.
- A model that predicts every member as someone not attending the class will still have about 70% accuracy (a quick check of this baseline is shown below).
- Is machine learning even the right approach to solve this problem?
- If the percentage of members who attend the class is fairly consistent, you could simply oversell the spots.
- There are many points of view from which you can think about this problem; for now we will simply continue the analysis.
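As a quick sanity check of that ~70% baseline (this cell is an illustrative addition, not part of the original pipeline), the accuracy of a model that always predicts "did not attend" is simply the proportion of the majority class:

```{python}
# Class proportions: the larger share is the accuracy of a "model" that
# always predicts the majority class (0 = did not attend).
print(clean_data["attended"].value_counts(normalize=True))
```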
#### Next we will take a look at the "months_as_member" column
```{python}
#| fig-align: center
sns.histplot(data = clean_data, x = 'months_as_member', kde=True)
plt.title('Distribution of "months as member"');
```
- The distribution is right skewed.
- It looks like most people have been members for between 1 and 20 months, and there is a smaller proportion who have been members for longer than that.
- There are a few outliers who have been members a lot longer than everyone else, which gives the distribution a long right tail.
#### We can look at this distribution with a log scale on the x-axis
```{python}
#| fig-align: center
sns.histplot(data = clean_data, x = 'months_as_member', kde=True)
plt.xscale('log')
plt.title('Distribution of "months as member (log scale)"');
```
#### Let's look at the relationship between "months_as_member" and whether the person attends the class or not
```{python}
#| fig-align: center
with plt.style.context('dark_background'):
    fig, ax = plt.subplots(1, 2, figsize=(12, 5))
    sns.stripplot(data=clean_data, x='attended',
                  y='months_as_member', color='steelblue',
                  alpha=0.5, ax=ax[0])
    sns.pointplot(data=clean_data, x='attended',
                  estimator='mean', y='months_as_member',
                  color='white', ax=ax[0])
    attended = clean_data['attended'].replace({0: "No", 1: "Yes"})
    sns.ecdfplot(data=clean_data, x='months_as_member', hue=attended, ax=ax[1])
    plt.setp(ax[0].lines, zorder=100)
    plt.setp(ax[0].collections, zorder=100, label="")
    ax[0].grid(False)
    ax[1].grid(False)
    ax[0].margins(x=0.1)
    ax[0].set_ylim(0)
    ax[0].annotate("Average for both categories", xy=(0.4, 20), xytext=(0.2, 100),
                   arrowprops=dict(facecolor='black', shrink=0.05))
    ax[0].set_ylabel("Number of months as member of club")
    ax[0].set_xlabel("Attended the class?")
    ax[0].set_xticklabels(['No', "Yes"])
    ax[0].annotate(xy=(0, 0.95), xytext=(0, 0.85), textcoords='axes fraction',
                   text="The people who attend the class are \n more likely to have been member of \n the club for longer time");
```
#### OK, there is a lot to unpack here :)
1. In the left graph we can see how many months each person has been a member of the club and whether they attended the class or not.
2. The points for members who did not attend are closely stacked together, whereas the points for people who did attend are more spread out and cover a longer range.
3. **The white points joined by a line show the average of _months as member_ for both categories.**
4. We can see a clear difference there. It's not very big, but it is definitely noticeable.
5. The right graph shows an ECDF for both categories.
6. We can see that people who attended the class are more likely to have a higher number of months as members of the club.
<br>
OK! That was just some quick analysis. Before we go any further:
- We will split the data into a train and a test set first.
- We don't want to learn patterns from the test set, so before doing a deeper analysis we will separate them.
```{python}
#| code-fold: false
# Obtaining a train test split
from sklearn.model_selection import train_test_split
X, y = clean_data.drop("attended", axis=1), clean_data["attended"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, stratify=y, random_state=241)
```
#### First of all, let's look at the relationship between all the features and our target variable.
I'm going to go ahead and write a function that will provide us with these plots (a rough sketch of what it could look like appears after the observations below).
```{python}
#| fig-align: center
from utils import get_relations
get_relations(dataframe = X_train, y = y_train, which="numerics")
```
- We can see here that the months_as_member and weight columns have some correlation with the target variable.
- Depending on whether the target variable is "yes" or "no", the distribution of these features differs slightly.
- On the other hand, the days_before column doesn't seem to have any correlation with the target variable, as the distributions for both categories of the target look pretty similar.
- One other observation is that the months_as_member and weight columns seem to have a few outliers that might affect model performance later.
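For context, here is a minimal sketch of what the numeric branch of such a `get_relations`-style helper could look like. The real implementation lives in `utils.py`, so the function below is an assumption, not the original code.

```{python}
# A rough sketch of a numeric feature-vs-target plotting helper
# (assumption; the real get_relations is defined in utils.py).
def plot_numeric_relations(dataframe, y):
    numeric_cols = dataframe.select_dtypes("number").columns
    fig, axes = plt.subplots(1, len(numeric_cols), figsize=(5 * len(numeric_cols), 4))
    for ax, col in zip(np.ravel(axes), numeric_cols):
        # Overlay the feature's distribution for each target class
        sns.kdeplot(data=dataframe.assign(attended=y.values), x=col,
                    hue="attended", common_norm=False, ax=ax)
        ax.set_title(f"{col} split by attendance")
    plt.tight_layout()
```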
#### Now let's take a look at categorical columns <br>
We will look at how the proportions of the categorical variables are distributed across our target classes.
```{python}
#| fig-align: center
get_relations(dataframe = X_train, y = y_train, which="categoricals")
```
- What we can see in this plot is how the variables "day_of_week", "time" and "category" are distributed across our target classes.
- I normalized the counts so we can look at the proportions and compare them to see whether any of these categorical features are correlated with the target.
- Looking at these plots, it doesn't look like there is any clear correlation.
#### Next, we will take a look at how the numeric features are distributed and whether they have any correlation with each other
```{python}
#| fig-align: center
sns.pairplot(data = X_train);
```
- We already know that the "months_as_member" and "weight" features have some correlation with the target.
- Here it looks like they are also slightly related to each other.
- Let's take a closer look at these two columns.
```{python}
#| fig-align: center
sns.scatterplot(data=clean_data, x='months_as_member', y="weight", hue=y_train)
plt.ylim(0)
plt.xscale('log')
```
- Since the months_as_member feature has a skewed distribution, I have put it on a **log scale**.
- Here we can see that there seems to be some negative correlation between these two features.
- It makes sense that people who have been members longer are likely to have a lower weight, as they are more likely to work out regularly.
- Another thing we can see is that members with more months as members are more likely to attend the class they registered for.
## 6. Preprocessing and Building Pipeline.
- We have looked at the distributions of the data.
- We also looked at how the various features are related to the target variable, as well as the relationships between the features themselves.
- The next step is to prepare the data for training a machine learning model.
- We will create a pipeline that does all of this for us.
- I will also include a step for **removing outliers** from the data inside the pipeline. Since scikit-learn doesn't allow for this functionality, I will use **imbalanced-learn** to implement it. You can find out more about it [here](https://imbalanced-learn.org/stable/auto_examples/applications/plot_outlier_rejections.html).
#### The process of removing outliers
- I will write a custom function that does the following (a rough sketch of what it could look like is shown below):
  1. It takes X_train and y_train.
  2. It removes any rows corresponding to outliers from X_train and y_train.
  3. The outliers here are any observations that lie outside 3 standard deviations of the distribution.
- Around the outlier removal, the pipeline also performs the other preprocessing steps:
  1. Imputing missing values for the weight column.
  2. Encoding the categorical variables.
  3. After that, outlier rejection takes place.
  4. And lastly we scale and standardize the data.
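Here is a minimal sketch of what such an `outlier_rejection` function could look like. The real implementation lives in `utils.py`, so the code below is an assumption; `FunctionSampler` calls the function as `func(X, y)` and expects the filtered `(X, y)` back.

```{python}
import numpy as np

# A rough sketch of a 3-standard-deviation outlier filter
# (assumption; the real outlier_rejection is defined in utils.py).
def outlier_rejection_sketch(X, y):
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    std = X.std(axis=0)
    std[std == 0] = 1.0  # guard against constant (e.g. one-hot) columns
    z_scores = np.abs((X - X.mean(axis=0)) / std)
    keep = (z_scores < 3).all(axis=1)  # keep rows within 3 standard deviations on every feature
    return X[keep], y[keep]
```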
```{python}
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import OrdinalEncoder
from imblearn.pipeline import Pipeline
from imblearn import FunctionSampler
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
imputer = SimpleImputer(strategy='mean')
onehot = OneHotEncoder()
ordEnc = OrdinalEncoder()
models = {"Logistic": LogisticRegression(),
          "SVC": SVC(),
          "KNN": KNeighborsClassifier()}

preprocessor = ColumnTransformer(transformers=[
        ("Imputer", SimpleImputer(strategy='mean'), ['weight']),
        ("CatEncoder", onehot, ["category", "day_of_week"]),
        ("OrdEnc", ordEnc, ["time"])],
    remainder="passthrough")
```
- What I have done here is import all the necessary estimators and transformers that will be used for preprocessing and model training.
- I have created an instance of ColumnTransformer that operates on the categorical and numeric columns separately and outputs the cleaned features.
### 6.1 Models
- As can be seen, I have created instances of three different models for training:
  1. **Logistic Regression** - a linear classifier that is simple to explain and train.
  2. **Support Vector Machine** - a model that can fit both linear and non-linear patterns in the data, depending on the kernel.
  3. **K-Neighbours Classifier** - for a different approach, this model classifies data points based on the Euclidean or Manhattan distance in N-dimensional space.

We will train these three models and see which one performs well enough, and whether this data is even good enough to make predictions from.
```{python}
from utils import outlier_rejection

def get_scores(model_dict, X, y, metric):
    '''
    This function returns the training score and
    cross validation score for various models
    _____________________________________________________
    params
    model_dict: (dict) model name and model instance
    X: X_train array
    y: y_train array
    metric: scoring metric passed to cross_val_score
    '''
    model_names = []
    training_score = []
    cross_validation_scores = []
    for name, model in model_dict.items():
        pipeline = Pipeline(steps=[("preprocesser", preprocessor),
                                   ("outlier_rejection",
                                    FunctionSampler(func=outlier_rejection)),
                                   ("scaler", StandardScaler()),
                                   ("model", model)])
        pipeline.fit(X, y.values)
        model_names.append(name)
        training_score.append(pipeline.score(X, y.values))
        avg_cross_val_score = np.mean(cross_val_score(pipeline, X, y.values,
                                                      cv=5, scoring=metric))
        cross_validation_scores.append(avg_cross_val_score)
    return pd.DataFrame({"training_score": training_score,
                         "cross_val_score": cross_validation_scores},
                        index=model_names)
```
### 6.2 Pipeline
- In the above code cell I have defined a function that uses the pipeline we have created to train the models we defined earlier.
- It calculates the training score and the cross validation score of each model for us.
- The basic steps in this pipeline are:
  1. Column transformation, which includes imputation and categorical variable encoding.
  2. Removing outliers from the training data. **Note that outliers are only removed from the training data and not from the testing data.**
  3. After this, scaling and model training take place.
### 6.3 Model evaluation
```{python}
#| fig-align: center
scores = get_scores(model_dict = models, X = X_train, y = y_train, metric="accuracy")
scores.plot.bar()
```
- Here we can see that Logistic Regression has the best cross validation score, and it's the simplest model.
- SVC gives pretty similar results, and it doesn't look like it's overfitting the training data.
- The KNN model has the highest training accuracy but the lowest cross validation accuracy, which means it's overfitting the training data.
- **The thing to keep in mind when looking at these metrics is that accuracy is not a good metric to judge a classification model by, especially when the target variable is highly imbalanced.**
- Let's calculate ROC AUC scores for each model using cross validation to see how they are really performing.
```{python}
roc_auc = get_scores(model_dict = models, X = X_train, y = y_train, metric="roc_auc")
roc_auc[["cross_val_score"]]
```
- Once again, Logistic Regression has the best performance out of the three models.
- These ROC AUC scores are pretty similar to what we saw with accuracy.
- But this does tell us that the model is not good just by random chance; it has some real predictive ability.
- **An even more informative view of the models would be a confusion matrix.**
- Let's see the confusion matrix for each model on the training data.
```{python}
#| fig-align: center
from yellowbrick.classifier import ConfusionMatrix
# Let's reuse a subset of the pipeline steps from the function above to do this.
def confusion():
    pipeline = Pipeline(steps=[("preprocesser", preprocessor),
                               ("outlier_rejection",
                                FunctionSampler(func=outlier_rejection)),
                               ])
    x_train_c, y_train_c = pipeline.fit_resample(X_train, y_train.values)
    x_train_scale = StandardScaler().fit_transform(x_train_c)
    for model in models.values():
        fig, ax = plt.subplots(figsize=(4, 4))
        cm = ConfusionMatrix(model, encoder={1: "Yes", 0: "No"}, ax=ax)
        cm.fit(x_train_scale, y_train_c)
        cm.score(x_train_scale, y_train_c)
        cm.show()

confusion()
```
- Here we get a very granular view of how the models fit the data.
- As we can see, all three models have good accuracy when it comes to classifying people who do "not" attend the class.
- But the catch is that the models misclassify about half of the people who actually attend the class as people who will not.
- For this reason the models have a high prediction error on the positive class.
- Alright, those were the base models. Now let's do some hyperparameter tuning on these models and check out the results.
## 7. Hyperparameter Tuning
```{python}
params = {"Logistic":
              {"model__solver": ["newton-cg", "lbfgs", "liblinear", "sag", "saga"],
               "model__penalty": ["l1", "l2", "elasticnet"],
               "model__C": [0.1, 1, 10, 100, 1000]},
          "SVC":
              {'model__kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
               'model__C': [100, 10, 1.0, 0.1, 0.001]},
          "KNN":
              {"model__n_neighbors": np.arange(1, 21, 1),
               "model__metric": ['euclidean', 'manhattan', 'minkowski'],
               "model__weights": ['uniform', 'distance']}
          }
```
**In the above code cell I have created a nested dictionary that contains the hyperparameter grid for each model.**
I know that's a lot of parameters, but we can afford an exhaustive search because the dataset is very small.
Now I will write a custom function that uses **GridSearchCV** from scikit-learn to give us the best parameters for each model.
```{python}
#| output: false
from sklearn.model_selection import GridSearchCV

hyperparameters = []
best_scores = []

def tune_model(model, param_grid):
    pipeline = Pipeline(steps=[("preprocesser", preprocessor),
                               ("outlier_rejection", FunctionSampler(func=outlier_rejection)),
                               ("scaler", StandardScaler()),
                               ("model", model)])
    GSC = GridSearchCV(pipeline, param_grid=param_grid,
                       cv=5, scoring='roc_auc')
    GSC.fit(X_train, y_train.values)
    hyperparameters.append(GSC.best_params_)
    best_scores.append(GSC.best_score_)

for name, model in models.items():
    estimator = model
    tune_model(estimator, params[name])
```
```{python}
for i in hyperparameters:
    print(i, '\n')
print(best_scores)
```
- Here we can see the hyperparameters chosen for each model and the best ROC AUC score obtained with those parameters.
- Overall, the Logistic Regression model has performed most consistently during this analysis, and that's the one we will use as the final model here.
- Let's train a final model and test it on the test set.
## 8. Final Training.
```{python}
logreg = Pipeline(steps=[("preprocesser", preprocessor),
                         ("outlier_rejection",
                          FunctionSampler(func=outlier_rejection)),
                         ("scaler", StandardScaler()),
                         ("LogisticRegression",
                          LogisticRegression(C=0.1, penalty='l1', solver='liblinear'))])
logreg.fit(X_train, y_train.values)
logreg.score(X_test, y_test.values)
```
- Finally, we can see the performance of our model on the test data.
- This doesn't tell us much on its own though, so let's look at some more metrics.
```{python}
#| fig-align: center
from yellowbrick.classifier import ClassificationReport
cr = ClassificationReport(logreg, encoder={0:"No", 1:"Yes"})
cr.fit(X_train, y_train.values)
cr.score(X_test, y_test.values)
cr.show()
```
## 9. Final Thoughts
1. Logistic Regression and the SVM achieved fairly similar results on the training set.
2. Logistic Regression is slightly better than the SVM and seems like the best fit as the solution for GoalZone.
3. A likely reason both of these models performed better than K nearest neighbours is that they use linear decision boundaries.
```{python}
#| fig-align: center
from yellowbrick.classifier import class_prediction_error
class_prediction_error(logreg, X_train, y_train.values, X_test, y_test.values, encoder={0:"No", 1:"Yes"});
```
- We can see the prediction error for both of the classes here.
- Overall the model performs better than random guessing, but it's hard to tell whether the difference is really significant.
- More investigation should be done into this, and different approaches need to be tried.
- This problem could also be looked at through the lens of time series, to see if it's possible to predict class attendance over time.
- The data provided was very small, so collecting more data with better features is also a good idea.
- In this dataset there weren't many features that explained the target variable well besides weight and months_as_member.
- If GoalZone wants to use this model for freeing up more spaces in their classes, about 65-75% of spaces could be freed according to the model. Again, there might be some time series components here that need to be looked at, but if the prediction error is kept in mind the business should be able to free up more spaces.
---
**References**
1. [DataCamp](https://app.datacamp.com/)
2. [pandas](https://pandas.pydata.org/)
3. [NumPy](https://numpy.org/)
4. [scikit-learn](https://scikit-learn.org/stable/)
5. [Matplotlib](https://matplotlib.org/)
6. [Imbalanced-learn](https://imbalanced-learn.org/stable/)
7. [Quarto](https://quarto.org/)