Performance Analysis

Evaluation of the Ex-Post Performance of Securities and Rules-Based Strategies on an Absolute, Relative and Risk-Adjusted Basis.

Paul McAteer, MSc, MBA

pcm353@stern.nyu.edu


Contents

  1. Import libraries.
  2. Specify investment universe and historic time period.
  3. Define parameters for raw data import and storage.
  4. Import price data over sample period, save and perform preliminary inspection.
  5. Pass price data to dataframe. Clean data structure.
  6. Visualize asset price evolution. Calculate and inspect daily and monthly returns.
  7. Construct cap-weighted benchmark.
  8. Construct equal-weighted benchmark.
  9. Compute expected return vector and sample covariance matrix.
  10. Compute idiosyncratic risk statistics.
  11. Construct efficient frontier based on classical Markowitz model.
  12. Shrink covariance matrix.
  13. Construct efficient frontier based on robust covariance estimate.
  14. Design Risk Parity portfolio.
  15. Design Black-Litterman optimized portfolio.
  16. Create security weighting scheme for Black-Litterman portfolios.
  17. Generate strategy backtest returns applying security weighting schemes.
  18. Generate risk-adjusted performance metrics for strategies.

1) Import libraries.

In [7]:
# Import the core Python libraries for data handling, math and plotting
import pandas as pd
import numpy as np
import math
from datetime import datetime
import matplotlib.pyplot as plt

2) Specify investment universe and historic time period.

In [8]:
# Investment universe: 30 large-cap US equities (space-separated tickers)
Securities = "MSFT AAPL AMZN GOOG NVDA BRK-A JNJ V PG JPM UNH MA INTC VZ HD T PFE MRK PEP WMT BAC XOM DIS KO CVX CSCO CMCSA WFC BA ADBE"
Start = "2016-05-30"
End   = "2020-06-30"

#Start = "2010-06-30"
#End   = "2020-06-30"
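
The universe is held as a single space-separated string; yfinance accepts either that string or a Python list of tickers, and the list form is also convenient for counting and labelling the securities later on. A small illustrative conversion (the tickers variable below is not used elsewhere in the notebook):

tickers = Securities.split()          # ['MSFT', 'AAPL', ..., 'ADBE']
print(len(tickers), "tickers in the universe")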

3) Define parameters for raw data import and storage.

In [9]:
# Select the file type for storage of the security data
filetype = ".csv"
# Specify the local storage location
path = r"C:\Users\delga\Desktop\NYU\CQF_Work\Portfolio_Management"
# Collect the data parameters into a tuple for file-naming purposes
Sec_Dates = (Securities, Start, End)
def convertTuple(tup):
    return '_'.join(tup)
conv = convertTuple(Sec_Dates)
# Converted data parameters + filetype = filename
filename = conv + filetype
# Join path and filename into a single file reference "File"
import os
File = os.path.join(path, filename)
print(File)
C:\Users\delga\Desktop\NYU\CQF_Work\Portfolio_Management\MSFT AAPL AMZN GOOG NVDA BRK-A JNJ V PG JPM UNH MA INTC VZ HD T PFE MRK PEP WMT BAC XOM DIS KO CVX CSCO CMCSA WFC BA ADBE_2016-05-30_2020-06-30.csv

4) Import price data over sample period, save and perform preliminary inspection.

In [10]:
import yfinance as yf
data = yf.download(Securities, start=Start, end=End)
[*********************100%***********************]  30 of 30 completed
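
Because the downloaded prices are also written to disk in the next cell, later runs could skip the network call whenever the cached CSV already exists. A minimal sketch of such a guard, reusing the File path built above (an optional convenience, not a step this notebook takes):

import os
import yfinance as yf

if os.path.exists(File):
    # Reuse the cached download (same layout as the file written below)
    data = pd.read_csv(File, header=[0, 1], index_col=0, parse_dates=True)
else:
    data = yf.download(Securities, start=Start, end=End)
    data.to_csv(File)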
In [11]:
# View data
data
Out[11]:
Adj Close ... Volume
AAPL ADBE AMZN BA BAC BRK-A CMCSA CSCO CVX DIS ... PEP PFE PG T UNH V VZ WFC WMT XOM
Date
2016-05-31 23.396452 99.470001 722.789978 114.231476 13.455158 211695.0 28.980476 25.153624 83.339264 93.608437 ... 6368700 34547300 6705000 27940000 3634100 8639600 15834500 19725000 8390200 13178800
2016-06-01 23.068445 100.040001 719.440002 114.466904 13.564697 212360.0 29.177340 25.023746 83.446510 92.948021 ... 4741400 24361200 6184100 17055700 2618000 4507000 10670800 16774300 7220900 7994800
2016-06-02 22.895067 100.169998 728.239990 114.865326 13.637724 212355.0 29.145294 25.179604 82.951439 93.136711 ... 3618200 31807700 5186300 12301700 4015000 5627500 8721100 14206700 6508700 9837500
2016-06-03 22.941923 98.699997 725.539978 115.345253 13.163051 211615.0 29.099510 25.222895 83.058708 93.165016 ... 4511700 28377600 6306700 19244900 3625600 7628000 9500700 25452600 6374500 9367100
2016-06-06 23.108265 99.120003 726.729980 119.438217 13.254335 212880.0 29.026255 25.196920 83.479523 93.193321 ... 2944200 21526800 5292300 16739100 2959900 7620100 10075600 14528500 9018000 8573000
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2020-06-23 91.310051 440.549988 2764.409912 187.880005 24.466021 272700.0 38.772812 44.702293 88.833359 116.589996 ... 3625600 37690400 5340400 38913000 4023400 9866800 20716400 47279100 6575900 18916600
2020-06-24 89.698242 431.679993 2734.399902 176.690002 23.498827 266440.0 38.139790 43.994450 85.141685 112.070000 ... 4853500 30817700 6547800 65596000 3399900 9453100 24794600 57380900 6871600 27626900
2020-06-25 90.889038 436.950012 2754.580078 174.880005 24.396935 267840.0 38.149681 44.456512 86.666924 111.360001 ... 4442000 27418700 5489300 36571300 3112500 9936600 17122500 45102200 6856600 19353600
2020-06-26 88.096405 426.920013 2692.870117 170.009995 22.847452 263400.0 38.228809 45.528111 83.995323 109.099998 ... 7372700 42154000 23201700 76576700 5043200 11836000 30211900 81172300 8997200 31518600
2020-06-29 90.126732 424.200012 2680.379883 194.490005 23.084316 265219.0 38.515648 45.370815 85.170822 111.519997 ... 4173900 23956300 6866900 37534500 2356700 8020400 15930300 43769200 5503800 17052100

1028 rows × 180 columns

In [12]:
# Save data to named file
data.to_csv(File)
In [13]:
# Inspect the first 3 lines (the header rows) of the saved CSV file
f = open(File, "r")
f.readlines()[:3]
Out[13]:
[',Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Adj Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,Close,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,High,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Low,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Open,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume,Volume\n',
 ',AAPL,ADBE,AMZN,BA,BAC,BRK-A,CMCSA,CSCO,CVX,DIS,GOOG,HD,INTC,JNJ,JPM,KO,MA,MRK,MSFT,NVDA,PEP,PFE,PG,T,UNH,V,VZ,WFC,WMT,XOM,AAPL,ADBE,AMZN,BA,BAC,BRK-A,CMCSA,CSCO,CVX,DIS,GOOG,HD,INTC,JNJ,JPM,KO,MA,MRK,MSFT,NVDA,PEP,PFE,PG,T,UNH,V,VZ,WFC,WMT,XOM,AAPL,ADBE,AMZN,BA,BAC,BRK-A,CMCSA,CSCO,CVX,DIS,GOOG,HD,INTC,JNJ,JPM,KO,MA,MRK,MSFT,NVDA,PEP,PFE,PG,T,UNH,V,VZ,WFC,WMT,XOM,AAPL,ADBE,AMZN,BA,BAC,BRK-A,CMCSA,CSCO,CVX,DIS,GOOG,HD,INTC,JNJ,JPM,KO,MA,MRK,MSFT,NVDA,PEP,PFE,PG,T,UNH,V,VZ,WFC,WMT,XOM,AAPL,ADBE,AMZN,BA,BAC,BRK-A,CMCSA,CSCO,CVX,DIS,GOOG,HD,INTC,JNJ,JPM,KO,MA,MRK,MSFT,NVDA,PEP,PFE,PG,T,UNH,V,VZ,WFC,WMT,XOM,AAPL,ADBE,AMZN,BA,BAC,BRK-A,CMCSA,CSCO,CVX,DIS,GOOG,HD,INTC,JNJ,JPM,KO,MA,MRK,MSFT,NVDA,PEP,PFE,PG,T,UNH,V,VZ,WFC,WMT,XOM\n',
 'Date,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,\n']

5) Pass price data to dataframe. Clean data structure.

In [14]:
# Read the saved CSV back into the daily price dataframe:
#   header=[0, 1]     -> treat the first two rows as a multilevel header
#   index_col=0       -> use the first column (Date) as the index
#   parse_dates=True  -> parse the index values as datetimes
df_csv = pd.read_csv(File, header=[0, 1], index_col=0, parse_dates=True)
df_csv.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1028 entries, 2016-05-31 to 2020-06-29
Columns: 180 entries, ('Adj Close', 'AAPL') to ('Volume', 'XOM')
dtypes: float64(150), int64(30)
memory usage: 1.4 MB
In [15]:
df_csv
Out[15]:
Adj Close ... Volume
AAPL ADBE AMZN BA BAC BRK-A CMCSA CSCO CVX DIS ... PEP PFE PG T UNH V VZ WFC WMT XOM
Date
2016-05-31 23.396452 99.470001 722.789978 114.231476 13.455158 211695.0 28.980476 25.153624 83.339264 93.608437 ... 6368700 34547300 6705000 27940000 3634100 8639600 15834500 19725000 8390200 13178800
2016-06-01 23.068445 100.040001 719.440002 114.466904 13.564697 212360.0 29.177340 25.023746 83.446510 92.948021 ... 4741400 24361200 6184100 17055700 2618000 4507000 10670800 16774300 7220900 7994800
2016-06-02 22.895067 100.169998 728.239990 114.865326 13.637724 212355.0 29.145294 25.179604 82.951439 93.136711 ... 3618200 31807700 5186300 12301700 4015000 5627500 8721100 14206700 6508700 9837500
2016-06-03 22.941923 98.699997 725.539978 115.345253 13.163051 211615.0 29.099510 25.222895 83.058708 93.165016 ... 4511700 28377600 6306700 19244900 3625600 7628000 9500700 25452600 6374500 9367100
2016-06-06 23.108265 99.120003 726.729980 119.438217 13.254335 212880.0 29.026255 25.196920 83.479523 93.193321 ... 2944200 21526800 5292300 16739100 2959900 7620100 10075600 14528500 9018000 8573000
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2020-06-23 91.310051 440.549988 2764.409912 187.880005 24.466021 272700.0 38.772812 44.702293 88.833359 116.589996 ... 3625600 37690400 5340400 38913000 4023400 9866800 20716400 47279100 6575900 18916600
2020-06-24 89.698242 431.679993 2734.399902 176.690002 23.498827 266440.0 38.139790 43.994450 85.141685 112.070000 ... 4853500 30817700 6547800 65596000 3399900 9453100 24794600 57380900 6871600 27626900
2020-06-25 90.889038 436.950012 2754.580078 174.880005 24.396935 267840.0 38.149681 44.456512 86.666924 111.360001 ... 4442000 27418700 5489300 36571300 3112500 9936600 17122500 45102200 6856600 19353600
2020-06-26 88.096405 426.920013 2692.870117 170.009995 22.847452 263400.0 38.228809 45.528111 83.995323 109.099998 ... 7372700 42154000 23201700 76576700 5043200 11836000 30211900 81172300 8997200 31518600
2020-06-29 90.126732 424.200012 2680.379883 194.490005 23.084316 265219.0 38.515648 45.370815 85.170822 111.519997 ... 4173900 23956300 6866900 37534500 2356700 8020400 15930300 43769200 5503800 17052100

1028 rows × 180 columns

In [16]:
# Count the securities in the portfolio (tickers are space-separated)
# and reduce the dataframe to the daily Adj Close columns only
string = Securities
substring = " "
Sec_count = string.count(substring) + 1

df_csv = df_csv.iloc[:, 0:Sec_count]
df_csv
Out[16]:
Adj Close
AAPL ADBE AMZN BA BAC BRK-A CMCSA CSCO CVX DIS ... PEP PFE PG T UNH V VZ WFC WMT XOM
Date
2016-05-31 23.396452 99.470001 722.789978 114.231476 13.455158 211695.0 28.980476 25.153624 83.339264 93.608437 ... 87.944633 27.820210 70.942688 30.285034 124.059410 76.602112 41.695221 43.575153 63.739830 71.519508
2016-06-01 23.068445 100.040001 719.440002 114.466904 13.564697 212360.0 29.177340 25.023746 83.446510 92.948021 ... 88.954414 27.860296 71.599236 29.960131 124.857597 76.873802 41.310215 43.815708 63.487690 71.696259
2016-06-02 22.895067 100.169998 728.239990 114.865326 13.637724 212355.0 29.145294 25.179604 82.951439 93.136711 ... 89.331017 27.956507 71.739296 30.045225 126.259026 77.582169 41.588726 43.910206 63.892910 71.125847
2016-06-03 22.941923 98.699997 725.539978 115.345253 13.163051 211615.0 29.099510 25.222895 83.058708 93.165016 ... 89.795204 27.812195 72.194511 30.331446 127.001511 77.572487 41.711597 43.119812 63.820892 70.997276
2016-06-06 23.108265 99.120003 726.729980 119.438217 13.254335 212880.0 29.026255 25.196920 83.479523 93.193321 ... 89.944069 28.004612 72.457123 30.432009 128.217346 78.154709 41.539574 43.377552 63.982983 71.776596
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2020-06-23 91.310051 440.549988 2764.409912 187.880005 24.466021 272700.0 38.772812 44.702293 88.833359 116.589996 ... 129.499603 30.476723 116.345726 29.196730 295.316254 197.373779 53.757511 26.994461 120.135170 44.717655
2020-06-24 89.698242 431.679993 2734.399902 176.690002 23.498827 266440.0 38.139790 43.994450 85.141685 112.070000 ... 127.725098 29.937311 115.051125 28.395628 286.960846 190.833527 52.769249 25.894064 119.371117 42.611038
2020-06-25 90.889038 436.950012 2754.580078 174.880005 24.396935 267840.0 38.149681 44.456512 86.666924 111.360001 ... 129.548904 30.058214 116.503845 28.685183 293.946838 193.395782 53.111717 27.133249 118.785667 43.252594
2020-06-26 88.096405 426.920013 2692.870117 170.009995 22.847452 263400.0 38.228809 45.528111 83.995323 109.099998 ... 127.104004 29.797808 113.875122 28.067467 284.678497 188.699966 52.015823 25.120810 117.406403 41.768391
2020-06-29 90.126732 424.200012 2680.379883 194.490005 23.084316 265219.0 38.515648 45.370815 85.170822 111.519997 ... 129.223572 30.355820 116.276558 28.868567 287.536407 190.803619 53.512894 25.477695 118.140686 42.438679

1028 rows × 30 columns

In [17]:
# Check Column names
for col in df_csv.columns: 
    print(col)
('Adj Close', 'AAPL')
('Adj Close', 'ADBE')
('Adj Close', 'AMZN')
('Adj Close', 'BA')
('Adj Close', 'BAC')
('Adj Close', 'BRK-A')
('Adj Close', 'CMCSA')
('Adj Close', 'CSCO')
('Adj Close', 'CVX')
('Adj Close', 'DIS')
('Adj Close', 'GOOG')
('Adj Close', 'HD')
('Adj Close', 'INTC')
('Adj Close', 'JNJ')
('Adj Close', 'JPM')
('Adj Close', 'KO')
('Adj Close', 'MA')
('Adj Close', 'MRK')
('Adj Close', 'MSFT')
('Adj Close', 'NVDA')
('Adj Close', 'PEP')
('Adj Close', 'PFE')
('Adj Close', 'PG')
('Adj Close', 'T')
('Adj Close', 'UNH')
('Adj Close', 'V')
('Adj Close', 'VZ')
('Adj Close', 'WFC')
('Adj Close', 'WMT')
('Adj Close', 'XOM')
In [18]:
# Flatten the multilevel ('Adj Close', ticker) header into a single level of ticker names
df_csv.columns = df_csv.columns.map('|'.join).str.strip('|')
df_csv.columns = df_csv.columns.str.replace(r'Adj Close|$', '', regex=True)
df_csv.columns = df_csv.columns.str.lstrip('|')  # strip the remaining '|' at the left end
# Check column names
print(df_csv.columns)
Index(['AAPL', 'ADBE', 'AMZN', 'BA', 'BAC', 'BRK-A', 'CMCSA', 'CSCO', 'CVX',
       'DIS', 'GOOG', 'HD', 'INTC', 'JNJ', 'JPM', 'KO', 'MA', 'MRK', 'MSFT',
       'NVDA', 'PEP', 'PFE', 'PG', 'T', 'UNH', 'V', 'VZ', 'WFC', 'WMT', 'XOM'],
      dtype='object')
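
Since every retained column shares the 'Adj Close' top level, an equivalent and arguably simpler way to flatten the header is to keep only the ticker level of the MultiIndex. An alternative sketch, to be used instead of the cell above while the columns are still a MultiIndex:

# Keep only the second (ticker) level of the column MultiIndex
df_csv.columns = df_csv.columns.get_level_values(1)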
In [19]:
# View Adj Close Dataframe
df_csv
Out[19]:
AAPL ADBE AMZN BA BAC BRK-A CMCSA CSCO CVX DIS ... PEP PFE PG T UNH V VZ WFC WMT XOM
Date
2016-05-31 23.396452 99.470001 722.789978 114.231476 13.455158 211695.0 28.980476 25.153624 83.339264 93.608437 ... 87.944633 27.820210 70.942688 30.285034 124.059410 76.602112 41.695221 43.575153 63.739830 71.519508
2016-06-01 23.068445 100.040001 719.440002 114.466904 13.564697 212360.0 29.177340 25.023746 83.446510 92.948021 ... 88.954414 27.860296 71.599236 29.960131 124.857597 76.873802 41.310215 43.815708 63.487690 71.696259
2016-06-02 22.895067 100.169998 728.239990 114.865326 13.637724 212355.0 29.145294 25.179604 82.951439 93.136711 ... 89.331017 27.956507 71.739296 30.045225 126.259026 77.582169 41.588726 43.910206 63.892910 71.125847
2016-06-03 22.941923 98.699997 725.539978 115.345253 13.163051 211615.0 29.099510 25.222895 83.058708 93.165016 ... 89.795204 27.812195 72.194511 30.331446 127.001511 77.572487 41.711597 43.119812 63.820892 70.997276
2016-06-06 23.108265 99.120003 726.729980 119.438217 13.254335 212880.0 29.026255 25.196920 83.479523 93.193321 ... 89.944069 28.004612 72.457123 30.432009 128.217346 78.154709 41.539574 43.377552 63.982983 71.776596
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2020-06-23 91.310051 440.549988 2764.409912 187.880005 24.466021 272700.0 38.772812 44.702293 88.833359 116.589996 ... 129.499603 30.476723 116.345726 29.196730 295.316254 197.373779 53.757511 26.994461 120.135170 44.717655
2020-06-24 89.698242 431.679993 2734.399902 176.690002 23.498827 266440.0 38.139790 43.994450 85.141685 112.070000 ... 127.725098 29.937311 115.051125 28.395628 286.960846 190.833527 52.769249 25.894064 119.371117 42.611038
2020-06-25 90.889038 436.950012 2754.580078 174.880005 24.396935 267840.0 38.149681 44.456512 86.666924 111.360001 ... 129.548904 30.058214 116.503845 28.685183 293.946838 193.395782 53.111717 27.133249 118.785667 43.252594
2020-06-26 88.096405 426.920013 2692.870117 170.009995 22.847452 263400.0 38.228809 45.528111 83.995323 109.099998 ... 127.104004 29.797808 113.875122 28.067467 284.678497 188.699966 52.015823 25.120810 117.406403 41.768391
2020-06-29 90.126732 424.200012 2680.379883 194.490005 23.084316 265219.0 38.515648 45.370815 85.170822 111.519997 ... 129.223572 30.355820 116.276558 28.868567 287.536407 190.803619 53.512894 25.477695 118.140686 42.438679

1028 rows × 30 columns

In [20]:
# View metadata
df_csv.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1028 entries, 2016-05-31 to 2020-06-29
Data columns (total 30 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   AAPL    1028 non-null   float64
 1   ADBE    1028 non-null   float64
 2   AMZN    1028 non-null   float64
 3   BA      1028 non-null   float64
 4   BAC     1028 non-null   float64
 5   BRK-A   1028 non-null   float64
 6   CMCSA   1028 non-null   float64
 7   CSCO    1028 non-null   float64
 8   CVX     1028 non-null   float64
 9   DIS     1028 non-null   float64
 10  GOOG    1028 non-null   float64
 11  HD      1028 non-null   float64
 12  INTC    1028 non-null   float64
 13  JNJ     1028 non-null   float64
 14  JPM     1028 non-null   float64
 15  KO      1028 non-null   float64
 16  MA      1028 non-null   float64
 17  MRK     1028 non-null   float64
 18  MSFT    1028 non-null   float64
 19  NVDA    1028 non-null   float64
 20  PEP     1028 non-null   float64
 21  PFE     1028 non-null   float64
 22  PG      1028 non-null   float64
 23  T       1028 non-null   float64
 24  UNH     1028 non-null   float64
 25  V       1028 non-null   float64
 26  VZ      1028 non-null   float64
 27  WFC     1028 non-null   float64
 28  WMT     1028 non-null   float64
 29  XOM     1028 non-null   float64
dtypes: float64(30)
memory usage: 249.0 KB
In [21]:
# Identify null values in dataset
df_csv.isnull().any()
Out[21]:
AAPL     False
ADBE     False
AMZN     False
BA       False
BAC      False
BRK-A    False
CMCSA    False
CSCO     False
CVX      False
DIS      False
GOOG     False
HD       False
INTC     False
JNJ      False
JPM      False
KO       False
MA       False
MRK      False
MSFT     False
NVDA     False
PEP      False
PFE      False
PG       False
T        False
UNH      False
V        False
VZ       False
WFC      False
WMT      False
XOM      False
dtype: bool
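
No nulls are present over this sample. Had there been gaps (for example, tickers with shorter histories), forward-filling the last available price before dropping any remaining leading NaNs is a common alternative to discarding whole rows. A minimal sketch of that option (an assumption for illustration, not a step this notebook takes; df_filled is a hypothetical name):

# Carry the last observed price forward, then drop rows still empty at the start of the sample
df_filled = df_csv.ffill().dropna()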
In [22]:
# Drop any rows containing null values and round prices to 4 decimal places
df_csv = df_csv.dropna().round(4)
df_csv.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1028 entries, 2016-05-31 to 2020-06-29
Data columns (total 30 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   AAPL    1028 non-null   float64
 1   ADBE    1028 non-null   float64
 2   AMZN    1028 non-null   float64
 3   BA      1028 non-null   float64
 4   BAC     1028 non-null   float64
 5   BRK-A   1028 non-null   float64
 6   CMCSA   1028 non-null   float64
 7   CSCO    1028 non-null   float64
 8   CVX     1028 non-null   float64
 9   DIS     1028 non-null   float64
 10  GOOG    1028 non-null   float64
 11  HD      1028 non-null   float64
 12  INTC    1028 non-null   float64
 13  JNJ     1028 non-null   float64
 14  JPM     1028 non-null   float64
 15  KO      1028 non-null   float64
 16  MA      1028 non-null   float64
 17  MRK     1028 non-null   float64
 18  MSFT    1028 non-null   float64
 19  NVDA    1028 non-null   float64
 20  PEP     1028 non-null   float64
 21  PFE     1028 non-null   float64
 22  PG      1028 non-null   float64
 23  T       1028 non-null   float64
 24  UNH     1028 non-null   float64
 25  V       1028 non-null   float64
 26  VZ      1028 non-null   float64
 27  WFC     1028 non-null   float64
 28  WMT     1028 non-null   float64
 29  XOM     1028 non-null   float64
dtypes: float64(30)
memory usage: 249.0 KB

6) Visualize asset price evolution. Calculate and inspect daily and monthly returns.

In [23]:
# Plot Daily Price Evolution
df_csv.plot(figsize=(12, 60), subplots=True);
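
The per-share price levels differ by orders of magnitude (BRK-A trades above $200,000 while most other names trade in the tens or hundreds of dollars), so the subplots above are hard to compare directly. An alternative single-axis view rebases every series to 100 on the first trading day of the sample; a brief sketch:

# Rebase each price series to 100 at the start of the sample and plot on a single axis
(df_csv / df_csv.iloc[0] * 100).plot(figsize=(12, 8));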
In [24]:
# Calculate and plot daily returns
returns_daily = df_csv.pct_change()
returns_daily.plot(figsize=(12, 60), subplots=True);
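
The section heading also calls for monthly returns; they can be derived from the same adjusted-close dataframe by resampling to calendar month-end before computing percentage changes. A minimal sketch (the variable names are illustrative):

# Resample adjusted closes to calendar month-end, then compute month-over-month returns
prices_monthly = df_csv.resample('M').last()
returns_monthly = prices_monthly.pct_change().dropna()
returns_monthly.head()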