This Jupyter Notebook describes a metabolomics data analysis and visualisation workflow for partial least squares regression (a.k.a. projection to latent structures) with a binary classification outcome.
This computational workflow is demonstrated using a previously published plasma dataset from Ganna et al. (2014) and Ganna et al. (2015). The study compared plasma metabolomic profiles across a large prospective epidemiological study of men (n=485) and women (n=483) at age 70 living in Uppsala, Sweden. For the purpose of this computational workflow, we compare only the males (Class=1) and females (Class=0) in a binary discriminant analysis. The deconvolved and annotated data from this study are deposited on MetaboLights and can be accessed directly via the Study ID: MTBLS90. The Excel file used in this workflow can be accessed via the following link: MTBLS90.xlsx.
This computational workflow requires a dataset to be in, or converted to, a previously described standardised Excel file format (Mendez et al. 2019). This format uses the Tidy Data framework (Wickham, 2014), where each row represents an observation (e.g. a sample) and each column represents a variable (e.g. age or a metabolite). Each Excel file (per study) contains two sheets: a Data sheet and a Peak sheet. The Data sheet contains the metabolite concentrations together with the metadata associated with each observation (requiring the inclusion of the columns: Idx, SampleID, and Class). The Peak sheet contains the additional metadata that pertains to the metabolites in the Data sheet (requiring the inclusion of the columns: Idx, Name, and Label). The standardisation of this format allows for the efficient re-use of this computational workflow.
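As an illustration of this format, a miniature Data sheet and Peak sheet can be constructed with pandas (the sample IDs, concentration values, and metabolite labels below are hypothetical, chosen only to show the required columns):

```python
import pandas as pd

# Hypothetical miniature Data sheet: one row per observation (sample)
data_sheet = pd.DataFrame({
    'Idx': [1, 2],
    'SampleID': ['S1', 'S2'],
    'Class': [1, 0],                  # binary outcome (e.g. Male=1, Female=0)
    'M1': [0.52, 0.61],               # metabolite concentrations
    'M2': [1.10, 0.95],
})

# Hypothetical miniature Peak sheet: one row per metabolite in the Data sheet
peak_sheet = pd.DataFrame({
    'Idx': [1, 2],
    'Name': ['M1', 'M2'],             # must match the metabolite columns in the Data sheet
    'Label': ['Metabolite A', 'Metabolite B'],
})
```

The key constraint is that every entry in the Peak sheet's Name column corresponds to a metabolite column in the Data sheet.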
Packages provide additional tools that extend beyond the basic functionality of the Python programming language. Prior to usage, packages need to be imported into the Jupyter environment. The following packages need to be imported for this computational workflow:
numpy: a standard package primarily used for the manipulation of arrays
pandas: a standard package primarily used for the manipulation of data tables
cimcb: a library of helpful functions and tools provided by the authors
import numpy as np
import pandas as pd
import cimcb as cb
print('All packages successfully loaded')
The CIMCB helper function load_dataXL() loads the Data and Peak sheets from an Excel file. In addition, this helper function checks that the data is in the standardised Excel file format described above. After the initial checks, load_dataXL() outputs two individual Pandas DataFrames (i.e. tables), called DataTable and PeakTable, from the Excel file MTBLS90.xlsx. This helper function requires values for the following parameters:
filename: the name of the Excel file (.xlsx file)
DataSheet: the name of the data sheet in the file
PeakSheet: the name of the peak sheet in the file
home = 'data/'
file = 'MTBLS90.xlsx'
DataTable,PeakTable = cb.utils.load_dataXL(home + file, DataSheet='Data', PeakSheet='Peak')
The following steps are needed to: (a) extract the binary outcome (i.e. 1/0 or Male/Female) and (b) extract, transform, and scale the metabolite data matrix, with missing values imputed.
Create a subset of DataTable, called DataTable2, containing only the samples in Class "1" or "0" (Male/Female)
Set Y to a list (or 1D array) of binary outcomes (1/0) based on the Class column from DataTable2
Create a variable peaklist to hold the names (M1...Mn) of the metabolites to be used
Using peaklist, extract all corresponding columns (i.e. metabolite data) from DataTable2, and place them in matrix X
Log-transform the values in X
Using cb.utils.scale(), scale the log-transformed data to unit variance (a.k.a. auto scaling)
Using cb.utils.knnimpute(), impute missing values to give the final matrix, XTknn
# Select Subset of Data
DataTable2 = DataTable[(DataTable.Class == 1) | (DataTable.Class == 0)]
# Create a Binary Y Vector
Outcomes = DataTable2['Class']
Y = Outcomes.values
# Extract and Scale Metabolite Data
peaklist = PeakTable['Name']
XT = DataTable2[peaklist]
XTlog = np.log(XT)
XTscale = cb.utils.scale(XTlog, method='auto')
XTknn = cb.utils.knnimpute(XTscale, k=3)
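For intuition, the log transform and unit-variance (auto) scaling can be sketched with plain numpy. This is a simplified stand-in for cb.utils.scale(), not its implementation; whether cimcb uses the sample or population standard deviation is an assumption here (the sketch uses the sample standard deviation, ddof=1):

```python
import numpy as np

# Toy concentration matrix: 3 samples x 2 metabolites on very different scales
X = np.array([[1.0, 10.0],
              [2.0, 100.0],
              [4.0, 1000.0]])

Xlog = np.log(X)                                   # log transform
mean, sd = Xlog.mean(axis=0), Xlog.std(axis=0, ddof=1)
Xauto = (Xlog - mean) / sd                         # auto scaling: mean 0, unit variance per metabolite
```

After auto scaling, every metabolite contributes on the same scale regardless of its original concentration range.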
The CIMCB helper function cb.cross_val.kfold() is used to carry out k-fold cross-validation (k=5) on a set of PLS-DA models with a varying number of latent variables (1 to 6) to determine the optimal number. In k-fold cross-validation, the original dataset is randomly split into k equal-sized folds and the model is then trained for k iterations, each time training on k-1 folds and testing on the remaining fold (Kohavi 1995). This helper function requires values for the following parameters:
model: the class of model used by the function, cb.model.PLS_NIPALS
X: the metabolite data matrix, XTknn
Y: the binary outcome vector, Y
param_dict: a dictionary, param_dict, that describes all key:value pairs to search, with the key name corresponding to the hyperparameter in the model class and the value as the list of possible values
folds: the number of folds in the k-fold cross-validation
n_mc: the number of Monte Carlo repetitions of the k-fold CV
# Parameter Dictionary
param_dict = {'n_components': [1, 2, 3, 4, 5, 6]}
# Initialise
cv = cb.cross_val.kfold(model=cb.model.PLS_NIPALS,
X=XTknn,
Y=Y,
param_dict=param_dict,
folds=5,
n_mc=100)
# Run
cv.run()
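The fold structure underlying this cross-validation can be illustrated with a minimal numpy sketch (not the cimcb implementation): sample indices are shuffled and partitioned into k folds, and each fold is held out once while the remaining k-1 folds form the training set.

```python
import numpy as np

def kfold_indices(n, k, rng):
    """Shuffle sample indices and split them into k near-equal folds."""
    return np.array_split(rng.permutation(n), k)

rng = np.random.default_rng(0)
folds = kfold_indices(20, 5, rng)     # 20 samples, 5 folds of 4

# One CV iteration: fold 0 is the test set; folds 1..4 form the training set
test_idx = folds[0]
train_idx = np.concatenate(folds[1:])
```

In a Monte Carlo repetition scheme (n_mc=100), this random split is simply redrawn 100 times and the metrics are averaged.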
When cv.plot(metric='r2q2', method='ratio') is run, two plots of the $R^2$ and $Q^2$ statistics are displayed: (a) | ($R^2 - Q^2$) / $R^2$ | vs. $Q^2$, and (b) $R^2$ and $Q^2$ against the number of latent variables. Alternatively, if method='standard', plot (a) shows the absolute difference | $R^2 - Q^2$ |. The optimal number of latent variables is selected based on the point of inflection in figure (b), or, if a clear inflection point is not present, where | ($R^2 - Q^2$) / $R^2$ | = 0.2. Note, $R^2$ is the mean coefficient of determination for the full dataset, and $Q^2$ is the mean coefficient of determination for the cross-validated prediction dataset over the 100 Monte Carlo repetitions. The following parameters of cv.plot() can be altered:
metric: the metric used for the plots (default = 'r2q2'). Alternative metrics include 'auc', 'acc', 'f1score', 'prec', 'sens', and 'spec'
method: the type of plots displayed (default = 'ratio'). Alternative value is 'standard'
ci: the confidence interval in figure (b) (default = 95)
legend: whether to show the legend (default = True). Alternative value is False
cv.plot(metric='r2q2',
method='ratio',
ci=95,
legend=True)
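The | (R² - Q²) / R² | <= 0.2 selection rule can be applied numerically. The R² and Q² values below are hypothetical, chosen only to illustrate the calculation, not taken from this dataset:

```python
import numpy as np

# Hypothetical mean R2 and Q2 for models with 1..6 latent variables
r2 = np.array([0.30, 0.45, 0.52, 0.56, 0.59, 0.61])
q2 = np.array([0.28, 0.40, 0.44, 0.43, 0.41, 0.38])

ratio = np.abs((r2 - q2) / r2)
# Largest model whose ratio still satisfies the 0.2 threshold
n_lv = int(np.where(ratio <= 0.2)[0].max()) + 1
```

R² always rises as latent variables are added, so the rule picks the largest model before the gap between fit (R²) and cross-validated prediction (Q²) signals overfitting.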
When cv.plot_projections() is run, an n x n grid of plots is displayed, where n is the number of latent variables (LVs) to interrogate. These plots include score plots, distribution plots, and receiver operating characteristic (ROC) curves.
There are C(n,2) score plots (i.e. a score plot for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). Each score plot includes the full scores (as circles) and CV scores (as crosses) coloured by group, as well as the 95% confidence interval ellipses for the full scores (as solid lines) and CV scores (as dashed lines). Additionally, the optimal line of separation (dashed grey line) and orthogonal line (solid grey line) are shown.
There are n distribution plots (a distribution plot for each LV's scores), showing the distributions of the full and CV scores for each corresponding group (i.e. 4 discrete distributions overlaid for 2 groups). Each distribution is calculated using kernel density estimation, a standard non-parametric method for estimating a probability density function from a set of data points (Silverman 1986).
There are C(n,2) ROC curves (a ROC curve for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). As the ROC curves are for every combination of 2 LVs, the discrimination is calculated based on optimal separation (i.e. the grey line from the corresponding score plot). Each ROC curve plot includes a ROC curve for the full model (green) and a ROC curve for the CV model with 95% confidence intervals (yellow). Additionally, the equal distribution line (dashed black line) is shown.
components: LVs to plot (default = "all"; plot all components). Alternatively, list the components to plot, e.g. [1,3,4]
plot: data to show (default = 'ci'; plot only 95% confidence interval ellipses). Alternative values include 'meanci', 'full', 'cv', and 'all'
label: add labels to groups in the score plots (default = None; refers to groups as 0/1)
legend: show legends for plots (default = 'all'). Alternative values are 'scatter', 'dist', 'roc', and 'none'
cv.plot_projections(components=[1,2,3,4],
plot="ci",
label=DataTable2.Sex,
legend='all')
A PLS-DA model using cb.model.PLS_NIPALS is created and initialised using the optimal hyperparameter value determined in step 4 (i.e. the number of latent variables). Following this initialisation, the PLS-DA model is trained using the .train(X, Y) method, where the X matrix is XTknn and the Y vector is Y. The implementation of PLS in the cb.model.PLS_NIPALS class uses PLSRegression from scikit-learn.
# Build Model
model = cb.model.PLS_NIPALS(n_components=3)
# Train Model
Ypred = model.train(XTknn, Y)
#Ypred_test = model.test(XTknn) # To test model
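To make the algorithm concrete, a single PLS component for a univariate outcome can be computed with a minimal NIPALS-style sketch in numpy. This is an illustrative simplification, not the cb.model.PLS_NIPALS implementation; for a single centred y vector the weight vector is obtained directly, and the data are random toy values:

```python
import numpy as np

def pls1_component(X, y):
    """One PLS latent variable for a single (centred) y vector."""
    w = X.T @ y
    w = w / np.linalg.norm(w)      # weights (unit length)
    t = X @ w                      # scores
    p = X.T @ t / (t @ t)          # X loadings
    q = y @ t / (t @ t)            # y loading
    return w, t, p, q

rng = np.random.default_rng(42)
X = rng.normal(size=(30, 5))       # 30 samples, 5 toy metabolites
y = rng.normal(size=30)
w, t, p, q = pls1_component(X, y)
# Deflation for the next component: X <- X - outer(t, p), y <- y - t * q
```

Further components are extracted by repeating the procedure on the deflated matrices, which is why the scores of successive components are orthogonal.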
After a model has been trained, the .permutation_test() method can be used to assess the reliability of the trained model (after selecting the number of latent variables). For the permutation test, the metabolite data matrix is randomised (permuted or 'shuffled') while the Y (i.e. outcome) vector is fixed, and the model is subsequently trained and tested on this randomised data (Szymańska et al. 2012). This process is repeated (in this case, n=100) to construct a distribution against which to fairly assess the model. For a dataset with features that have no meaningful contribution, we would expect $R^2$ and $Q^2$ values similar to those of a randomised dataset, while for a dataset with features that have a meaningful contribution, we would expect $R^2$ and $Q^2$ values significantly higher than those of the randomised dataset. When .permutation_test() is run, two plots are displayed: (a) $R^2$ and $Q^2$ against the correlation of the permuted data with the original data, and (b) probability density functions for $R^2$ and $Q^2$, with the $R^2$ and $Q^2$ values of the model trained on the original data presented as ball-and-stick. The following parameter values of .permutation_test() can be altered:
metric: the metric used for the plots (default = 'r2q2'). Alternative metrics include 'auc', 'acc', 'f1score', 'prec', 'sens', and 'spec'. Multiple metrics can be plotted using a list, e.g. ['r2q2', 'auc']
nperm: the number of permutations (default = 100)
model.permutation_test(metric='auc',
nperm=100)
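The logic of a permutation test can be sketched with a simpler statistic (correlation) in numpy: shuffling breaks the relationship between data and outcome, so the observed statistic is compared against the null distribution of shuffled statistics. The data here are synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = (x > 0.5).astype(float)                 # outcome strongly related to x

obs = abs(np.corrcoef(x, y)[0, 1])          # statistic on the original data
perm = np.array([abs(np.corrcoef(x, rng.permutation(y))[0, 1])
                 for _ in range(200)])      # null distribution under shuffling

# Empirical p-value (with the standard +1 correction)
p_value = (np.sum(perm >= obs) + 1) / (len(perm) + 1)
```

A small p-value indicates the observed association is unlikely to arise from a randomised dataset, mirroring how the permuted R²/Q² distributions are read in plot (b).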
Bootstrap resampling is a resampling method based on random sampling with replacement, commonly used to provide an estimate of the sampling distribution of a test statistic (Efron, 1982). In the context of this workflow, the PLS model from step 5, with its fixed hyperparameter values (i.e. number of LVs = 3), is retrained on the resampled-with-replacement data (in-bag) and evaluated on the unused data (out-of-bag) for 100 resamples. After the model is evaluated for each bootstrap, metrics including the predicted values (ypred), LV scores, LV loadings, and feature importance (VIP and coefficients) are stored and used to calculate 95% confidence intervals. To calculate the 95% confidence intervals, the common bias-corrected (BC) bootstrap method is used (as opposed to the basic percentile method), where the percentiles are adjusted to account for the bias of the bootstrap distribution relative to the original distribution. For details on the methodology behind the BC method, refer to Efron (1982). To create and run the bootmodel, the following parameter values need to be set:
model: a model with fixed hyperparameter values for bootstrap resampling
bootnum: the number of bootstrap resamples (default = 100)
bootmodel = cb.bootstrap.BC(model, bootnum=100)
bootmodel.run()
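The bias-corrected percentile adjustment can be sketched using the standard normal CDF from the Python standard library. This is a simplified illustration of the BC idea from Efron (1982), not the cimcb implementation, and the bootstrap statistics are toy values:

```python
import numpy as np
from statistics import NormalDist

def bc_interval(boot_stats, theta_hat, alpha=0.05):
    """Bias-corrected bootstrap CI: shift the percentiles by the bias term z0."""
    nd = NormalDist()
    # Proportion of bootstrap statistics below the original estimate
    prop = float(np.clip(np.mean(boot_stats < theta_hat), 1e-6, 1 - 1e-6))
    z0 = nd.inv_cdf(prop)                          # bias-correction term
    lo = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))    # adjusted lower percentile
    hi = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    return np.quantile(boot_stats, [lo, hi])

rng = np.random.default_rng(7)
boot_stats = rng.normal(loc=0.0, scale=1.0, size=2000)   # toy bootstrap distribution
ci = bc_interval(boot_stats, theta_hat=0.0)
```

When the original estimate sits at the centre of the bootstrap distribution, z0 is near zero and the BC interval collapses to the ordinary percentile interval; the adjustment only matters when the distribution is biased.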
After the bootmodel has been run, the .evaluate() method can be used to provide an estimate of the robustness and a measure of the generalised predictability of the model. Three plots are produced when this method is run: a violin plot, a probability density function, and a ROC curve. The violin plot shows the distribution of the median predicted scores for the in-bag and out-of-bag (i.e. train and test) samples by group. The distribution plot shows the probability density function of the median predicted scores for the in-bag and out-of-bag (i.e. train and test) samples by group. The ROC curve plot shows the initial model's ROC curve with the 95% CI for the in-bag (green), and the median and 95% CI for the out-of-bag (yellow). There are three options for visualising the 95% CI for the in-bag: 'null', where a bias-corrected 95% CI is used; 'parametric', where the upper limit mirrors the lower limit; and 'nonparametric', where the upper limit mirrors the lower limit, with the exception that the upper limit remains constant or increases (i.e. does not decrease) as 1 - Specificity increases. These options can be selected by altering the following parameters:
bc: the method used to calculate the 95% CI for the ROC curve (default = 'nonparametric'). Alternative values are 'parametric' and 'null'
label: add labels to groups (default = None; refers to groups as 0/1)
legend: show legends for plots (default = 'all'). Alternative values are 'roc', 'dist', 'violin', and 'none'
bootmodel.evaluate(bc='nonparametric',
label=DataTable2.Sex,
legend='all')
After the bootmodel has been run, the .plot_projections() method can be used to visualise the latent variable (LV) scores. When this method is run, an n x n grid of plots is displayed, where n is the number of LVs. These plots include score plots, distribution plots, and receiver operating characteristic (ROC) curves.
There are C(n,2) score plots (i.e. a score plot for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). Each score plot includes the in-bag scores (as circles) and out-of-bag scores (as crosses) coloured by group, as well as the 95% confidence interval ellipses for the in-bag scores (as solid lines) and out-of-bag scores (as dashed lines). Additionally, the optimal line of separation (dashed grey line) and orthogonal line (solid grey line) are shown.
There are n distribution plots (a distribution plot for each LV's scores), showing the distributions of the in-bag and out-of-bag scores for each corresponding group (i.e. 4 discrete distributions overlaid for 2 groups). Each distribution is calculated using kernel density estimation, a standard non-parametric method for estimating a probability density function from a set of data points (Silverman 1986).
There are C(n,2) ROC curves (a ROC curve for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). As the ROC curves are for every combination of 2 LVs, the discrimination is calculated based on optimal separation (i.e. the grey line from the corresponding score plot). Each ROC curve plot includes a ROC curve for the initial model's LV scores with 95% confidence intervals from the in-bag LV scores (green), and a ROC curve for the out-of-bag LV scores with 95% confidence intervals (yellow). Additionally, the equal distribution line (dashed black line) is shown.
plot: data to show in the plots (default = 'ci'; plot only 95% confidence interval ellipses). Alternative values include 'meanci', 'ib', 'oob', and 'all'
label: add labels to groups in the score plots (default = None; refers to groups as 0/1)
bc: the method used to calculate the 95% CI for the ROC curve (default = 'nonparametric'). Alternative values are 'parametric' and 'null'
legend: show legends for plots (default = 'all'). Alternative values are 'scatter', 'dist', 'roc', and 'none'
bootmodel.plot_projections(plot='ib',
label=DataTable2.Sex,
bc='nonparametric',
legend='all')
After the bootmodel has been run, the .plot_loadings() method can be used to visualise the latent variable (LV) loadings. When this method is run, n plots are displayed, where n is the number of LVs. The circles in each loadings plot represent the LV loadings for the initial model. The 95% confidence intervals are calculated using the bias-corrected (BC) bootstrap method from step 6. Any metabolite loading with a confidence interval crossing the zero line is considered non-significant for that latent variable. This method requires values for the following parameters:
PeakTable: the cleaned PeakTable from step 3
peaklist: peaks to include in the plot (default = None; include all peaks)
ylabel: the name of the column in PeakTable to use as the ylabel (default = 'Label')
sort: whether to sort the plots in absolute descending order (default = True)
bootmodel.plot_loadings(PeakTable,
peaklist,
ylabel='Label', # change ylabel to 'Name'
sort=False) # change sort to True
After the bootmodel has been run, the .plot_featureimportance() method can be used to visualise the feature importance metrics. When this method is run, 2 plots are displayed: the Coefficient plot and the Variable Importance in Projection (VIP) plot. The circles represent the values in the initial model. The 95% confidence intervals are calculated using the bias-corrected (BC) bootstrap method from step 6.
The coefficients (in the Coefficient plot) contain information about the overall contribution of each metabolite. A coefficient value can be either positive or negative, and therefore a metabolite can contribute positively or negatively to the model. Any metabolite coefficient with a confidence interval crossing the zero line is considered non-significant to the model.
The values in the VIP plot also contain information about the overall contribution of each metabolite. Unlike the coefficient values, the VIP is absolute, with higher values representing a higher importance to the model. Typically, metabolites with a VIP greater than 1 are considered "important" in the model.
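The VIP statistic can be computed from the PLS weights, scores, and y-loadings. The sketch below uses a generic VIP formulation (not necessarily identical to the cimcb implementation) with randomly generated toy matrices, purely to demonstrate a defining property: the mean squared VIP is 1, which is why VIP > 1 marks above-average importance.

```python
import numpy as np

def vip(W, T, q):
    """VIP per feature from weights W (p x A), scores T (n x A), y-loadings q (A,)."""
    p = W.shape[0]
    ss = (q ** 2) * np.sum(T ** 2, axis=0)       # y-variance explained per LV
    Wn = W / np.linalg.norm(W, axis=0)           # column-normalised weights
    return np.sqrt(p * (Wn ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 3))                      # 8 toy metabolites, 3 LVs
T = rng.normal(size=(30, 3))
q = rng.normal(size=3)
vips = vip(W, T, q)
```

Each metabolite's VIP aggregates its weight in every latent variable, weighted by how much of the outcome variance that latent variable explains.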
This method also exports the feature importance metrics as a pandas DataFrame (table). This method requires values for the following parameters:
PeakTable: the cleaned PeakTable from step 3
peaklist: peaks to include in the plot (default = None; include all peaks)
ylabel: the name of the column in PeakTable to use as the ylabel (default = 'Label')
sort: whether to sort the plots in absolute descending order (default = True)
feature_importance = bootmodel.plot_featureimportance(PeakTable,
peaklist,
ylabel='Label', # change ylabel to 'Name'
sort=False) # change sort to True
The feature importance table created in step 8.3 can be exported using the inbuilt .to_excel() method of a pandas DataFrame. This method requires the name of the file to create, which can include directories using the '/' symbol. In the cell below, the table feature_importance is exported as an Excel file called 'PLSDA_MTBLS90.xlsx' in the 'results' folder.
export_folder = 'results/'
export_file = 'PLSDA_MTBLS90.xlsx'
feature_importance.to_excel(export_folder + export_file)
print("Done!")