Note

Description:

Hey friend! Imagine you're a data analyst at an e-commerce company. The marketing team wants to know which products are driving sales, how customer behavior changes over time, and which demographics are most valuable. The problem is, the data is scattered across multiple sources! Your task is to use your Pandas magic to unite the data, answer the marketing team's questions, and become the hero of the company.

Tasks

  • 1: Product Performance: Find the top 10 products with the highest total revenue, considering both sales data and product prices. The data includes columns for 'product_id', 'product_price', and 'units_sold'.
  • 2: Customer Journey: Analyze how customer behavior changes over their first five purchases. The data includes columns for 'customer_id', 'purchase_date', 'product_id', and 'units_sold'.

Bonus Question:

  • Demographic Insights: Calculate the average order value for each age group and gender. The data includes columns for 'customer_id', 'age', 'gender', and 'order_value'.

Import Libraries

import pandas as pd
import numpy as np
import sys
 
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Numpy version ' + np.__version__)
Python version 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
Pandas version 2.2.1
Numpy version 1.26.4

The Data

The generated dataset simulates e-commerce data with 3 tables: products, customers, and orders.

Column Descriptions:

  • product_id: unique product identifier
  • product_price: product price
  • customer_id: unique customer identifier
  • age: customer age
  • gender: customer gender (male or female)
  • purchase_date: order date
  • units_sold: number of units sold
# set the seed
np.random.seed(0)

# generate sample products
products = pd.DataFrame({
    'product_id': np.arange(100),
    'product_price': np.random.randint(10, 100, size=100)
})

# customer data
customers = pd.DataFrame({
    'customer_id': np.arange(200, dtype=int),
    'age': np.random.randint(18, 65, size=200),
    'gender': np.random.choice(['M', 'F'], size=200)
})

# sales orders
orders = pd.DataFrame({
    'customer_id': np.random.choice(customers['customer_id'], size=1000),
    'product_id': np.random.choice(products['product_id'], size=1000),
    'purchase_date': pd.date_range('1/1/2022', periods=1000),
    'units_sold': np.random.randint(1, 10, size=1000)
})
 
# merge dataframes
data = pd.merge(orders, products, on='product_id')
data = pd.merge(data, customers, on='customer_id')
 
# introduce missing values
data.loc[data.sample(frac=0.1).index, 'product_price'] = np.nan
data.loc[data.sample(frac=0.05).index, 'gender'] = np.nan
 
# introduce duplicates
data = pd.concat([data, data.sample(n=50)], ignore_index=True)
 
data
      customer_id  product_id purchase_date  units_sold  product_price  age gender
0             104          52    2022-01-01           1           44.0   62      M
1              91          90    2022-01-02           7           68.0   31      F
2              43          80    2022-01-03           2           78.0   56      F
3              63          46    2022-01-04           5           84.0   20      M
4             159          96    2022-01-05           8           59.0   42      M
...           ...         ...           ...         ...            ...  ...    ...
1045          183          16    2022-04-18           8           49.0   36      M
1046           47          55    2023-01-20           5           46.0   58      F
1047           49          98    2024-01-26           1           51.0   48      F
1048            0          46    2022-02-27           5           84.0   18      F
1049            0          23    2023-03-07           5           87.0   18      F

1050 rows × 7 columns

Let's take a look at the datatypes to ensure all of the columns are of the correct type.

Take note of the output below; it also shows which columns contain null values (gender and product_price).

data.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1050 entries, 0 to 1049
Data columns (total 7 columns):
 #   Column         Non-Null Count  Dtype
---  ------         --------------  -----
 0   customer_id    1050 non-null   int32
 1   product_id     1050 non-null   int32
 2   purchase_date  1050 non-null   datetime64[ns]
 3   units_sold     1050 non-null   int32
 4   product_price  945 non-null    float64
 5   age            1050 non-null   int32
 6   gender         997 non-null    object
dtypes: datetime64[ns](1), float64(1), int32(4), object(1)
memory usage: 41.1+ KB

Missing Values

To keep things simple, we can drop the rows that have missing prices, since price is an important column and we don't want to make guesses here. The gender column is a different story. For this tutorial we are not doing any filtering by gender (except in the bonus question), so for now let us ignore the missing values in the gender column.

# only drop rows where product_price is null
data = data.dropna(subset='product_price')
 
data.info()

<class 'pandas.core.frame.DataFrame'>
Index: 945 entries, 0 to 1049
Data columns (total 7 columns):
 #   Column         Non-Null Count  Dtype
---  ------         --------------  -----
 0   customer_id    945 non-null    int32
 1   product_id     945 non-null    int32
 2   purchase_date  945 non-null    datetime64[ns]
 3   units_sold     945 non-null    int32
 4   product_price  945 non-null    float64
 5   age            945 non-null    int32
 6   gender         894 non-null    object
dtypes: datetime64[ns](1), float64(1), int32(4), object(1)
memory usage: 44.3+ KB
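
If you did need gender downstream, an alternative to ignoring the nulls is to label them explicitly. A minimal sketch, not applied in this tutorial (the 'Unknown' label is our own choice):

# optional alternative (not used here): flag missing genders instead of ignoring them
data_alt = data.copy()
data_alt['gender'] = data_alt['gender'].fillna('Unknown')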

Duplicates

Let's pretend you took a look at the specific duplicate rows and determined they were in fact duplicates that needed to be removed.

# identify the dupes
data[data.duplicated()].sort_values(by=['customer_id','product_id']).head()
      customer_id  product_id purchase_date  units_sold  product_price  age gender
1049            0          23    2023-03-07           5           87.0   18      F
1048            0          46    2022-02-27           5           84.0   18      F
1038            3           4    2024-02-29           2           77.0   48      M
1025            7          20    2024-09-07           4           91.0   47      F
1003            7          83    2022-09-23           6           57.0   47      F
# let's look at one of the dupes. Yep, the two rows are identical, so we should drop them
data[(data['customer_id'] == 0) & (data['product_id'] == 23)]
      customer_id  product_id purchase_date  units_sold  product_price  age gender
430             0          23    2023-03-07           5           87.0   18      F
1049            0          23    2023-03-07           5           87.0   18      F
# drop dupes
data = data.drop_duplicates()
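
As a quick sanity check, we can confirm that no duplicates remain; with this seed the cleaned frame should be down to 900 rows:

# sanity check: no duplicate rows should remain after the drop
assert not data.duplicated().any()
print(len(data))  # 900 with this seed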

Product Performance:

Find the top 10 products with the highest total revenue, considering both sales data and product prices. The data includes columns for 'product_id', 'product_price', and 'units_sold'.

# let's start by calculating the revenue
data.loc[:,'revenue'] = data['product_price'] * data['units_sold']
 
data
     customer_id  product_id purchase_date  units_sold  product_price  age gender  revenue
0            104          52    2022-01-01           1           44.0   62      M     44.0
1             91          90    2022-01-02           7           68.0   31      F    476.0
2             43          80    2022-01-03           2           78.0   56      F    156.0
3             63          46    2022-01-04           5           84.0   20      M    420.0
4            159          96    2022-01-05           8           59.0   42      M    472.0
..           ...         ...           ...         ...            ...  ...    ...      ...
994          116          67    2024-09-21           3           51.0   41      M    153.0
995          104          30    2024-09-22           6           57.0   62      M    342.0
996          126          37    2024-09-23           1           29.0   58    NaN     29.0
998           11          65    2024-09-25           9           19.0   41      M    171.0
999           16          59    2024-09-26           2           27.0   28      F     54.0

900 rows × 8 columns

To determine the products with the highest total revenue, we will make use of Pandas' groupby method.

After calculating the total revenue per product, we sort the data from highest to lowest and then select the top ten rows.

# create group object
group = data[['product_id','revenue']].groupby('product_id')
 
# get top ten
group.sum().sort_values(by='revenue', ascending=False).head(10)
            revenue
product_id
72           6808.0
94           6650.0
17           6111.0
29           5874.0
78           5452.0
11           5194.0
71           5152.0
23           5133.0
43           5092.0
97           4898.0
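
As a side note, the same top-ten table can be produced in one line with nlargest; a minimal equivalent sketch:

# equivalent one-liner: total revenue per product, then the 10 largest
data.groupby('product_id')['revenue'].sum().nlargest(10)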

Customer Journey:

Analyze how customer behavior changes over their first 5 purchases. The data includes columns for 'customer_id', 'purchase_date', 'product_id', and 'units_sold'.

I wanted to start by looking at the units sold across the whole dataset. We might be able to spot a pattern here, but since we are working with randomly generated data, I don't see much. Do you?

# create group object
group = data[['units_sold','purchase_date']].groupby('purchase_date')
 
# let's take a peek at the year 2022
group.sum()[:'2022'].plot(figsize=(10, 5));
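
If the daily totals look too noisy to read, one option (a sketch, not part of this tutorial's output) is to aggregate to monthly totals before plotting:

# optional: monthly totals smooth out the day-to-day noise
monthly = data.set_index('purchase_date').sort_index()['units_sold'].resample('ME').sum()
monthly.plot(figsize=(10, 5));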

# create group object
group = data[['units_sold','purchase_date','customer_id']].sort_values(by='purchase_date').groupby(['customer_id'])
 
def first_five(df):
    ''' return the first 5 '''
    return df.head(5)
 
# apply the function to each group
top5 = group.apply(first_five, include_groups=False)
 
top5
                 units_sold purchase_date
customer_id
0           57            5    2022-02-27
            67            5    2022-03-09
            331           8    2022-11-28
            430           5    2023-03-07
            849           6    2024-04-29
...                     ...           ...
199         206           4    2022-07-26
            546           1    2023-07-01
            623           4    2023-09-16
            815           5    2024-03-26
            940           2    2024-07-29

780 rows × 2 columns
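
Incidentally, the same trim can be done without a helper function; a minimal equivalent sketch using groupby's head method (note it keeps the flat index and all columns rather than the MultiIndex above):

# equivalent: sort by date, then keep the first 5 rows per customer
top5_alt = data.sort_values(by='purchase_date').groupby('customer_id').head(5)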

Now that we have trimmed the data to the first 5 purchases per customer, letā€™s generate the chart again.

As you can see below, we are still in the same boat. At this point, you may want to further filter the data by gender or age and see if you find any patterns there; a sketch of the gender split follows the chart.

# create group object
group = top5.reset_index()[['units_sold','purchase_date']].groupby('purchase_date')
 
# this time, look at the full date range through 2024
group.sum()[:'2024'].plot(figsize=(10, 5));
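
As one example of that kind of follow-up, here is a sketch of the gender split (the pivot layout is our own choice, not part of the original analysis):

# optional: break daily units sold out by gender to look for patterns
by_gender = data.pivot_table(index='purchase_date', columns='gender',
                             values='units_sold', aggfunc='sum')
by_gender.plot(figsize=(10, 5));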

Can You Solve the Bonus Question?

Calculate the average order value for each age group and gender. The data includes columns for 'customer_id', 'age', 'gender', and 'order_value'.
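
If you want to check your approach, here is one possible sketch. Two assumptions to note: our generated data has no literal 'order_value' column, so we treat the revenue column as the order value, and the age bins below are an arbitrary choice:

# assumption: 'revenue' stands in for order_value, which our generated data lacks
# assumption: the age bins are an illustrative choice, not given by the task
data['age_group'] = pd.cut(data['age'], bins=[18, 30, 45, 65], include_lowest=True)
data.groupby(['age_group', 'gender'], observed=True)['revenue'].mean()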