Now that we've completed a trial run through WGCNA with a single crab (Crab A), as much of the pipeline as is possible with one crab, we will scale up to include all individual libraries of ambient-temperature crabs over days 0, 2, and 17.
We will include transcripts from both C. bairdi and Hematodinium by using kallisto alignments to an unfiltered transcriptome (cbai_transcriptomev2.0, AKA cbaihemat_transcriptomev2.0).
Table of crabs and libraries included in analysis:
Crab ID | Treatment Group | Day 0 Sample ID | Day 2 Sample ID | Day 17 Sample ID |
---|---|---|---|---|
A | ambient | 178 | 359 | 463 |
B | ambient | 118 | 349 | 481 |
C | ambient | 132 | 334 | 485 |
Again, this script is based largely on Yaamini's script, which in turn is based largely on the official WGCNA tutorial.
We will first extract the TPM (transcripts per million) counts from the kallisto libraries created earlier in the pipeline. We will then filter and transform those counts (this time with a variance-stabilizing transformation rather than logTPM counts) and begin the WGCNA analysis.
library(tidyverse)
library(WGCNA)
library(DESeq2)
This portion of the script is based largely on 21_obtaining_TPM_for_DEGs.Rmd.
First, we set all variables.
# Path to kallisto libraries
kallisto_path <- "../output/kallisto_libraries/cbaihemat_transcriptomev2.0/"
# Libraries we want to read in to our TPM matrix
libraries <- c("178", "118", "132", "359", "349", "334", "463", "481", "485")
# For each row, crab and day should correspond to the order of libraries (ex: 4th row of crabTraits should match libraries[4])
crabTraits <- data.frame("crab" = rep(c("A", "B", "C"), times = 3),
"day" = factor(c(rep(0, times = 3),
rep(2, times = 3),
rep(17, times = 3))))
# Create clinical data trait matrix. Same rules as above, but both crab and day are numeric. Crab A will be noted as 1, B as 2, and C as 3
crabClinicalData <- data.frame("crab" = rep(c(1, 2, 3), times = 3),
"day" = c(rep(0, times = 3),
rep(2, times = 3),
rep(17, times = 3)))
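Because the trait matrices are matched to the libraries purely by position, a quick sanity check (not part of the original script) can confirm that the ordering lines up with the table of crabs above.
# Sanity check: each library ID should pair with the crab and day listed in the table above
cbind(library = paste0("id", libraries), crabTraits)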
Then, we begin creating our TPM matrix for all transcripts
# Create character vector with all filenames for our libraries
kallisto_files <- paste0(kallisto_path, "id", libraries, "/abundance.tsv")
# Read first kallisto file in to start data frame
TPMcounts <- read.delim(file = kallisto_files[1],
header = TRUE,
sep = "\t")
# Eliminate all columns except transcript ID and TPM
TPMcounts <- TPMcounts %>%
select(target_id, tpm)
# Rename columns for consistency and to ID TPM counts
colnames(TPMcounts)[1:2] <- c("Transcript_ID",
paste0("id", libraries[1], "_TPM"))
# Loop through the remaining kallisto files, full-joining each one to the table we started above
for (i in 2:length(kallisto_files)){
idnum <- str_extract(kallisto_files[i], "id[0-9]+")
kallisto_output <- read.delim(file = kallisto_files[i],
header = TRUE,
sep = "\t")
# Select only transcript ID and TPM (transcripts per million) columns
kallisto_output <- kallisto_output %>%
select(target_id, tpm)
# Rename kallisto column names to give ID to count column
colnames(kallisto_output)[1:2] <- c("Transcript_ID",
paste0(idnum, "_TPM"))
# Add this library's TPM counts to the running table of transcripts
# Perform full join, keeping all transcript IDs
TPMcounts <- full_join(TPMcounts, kallisto_output, by = "Transcript_ID")
}
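As a quick check on the joins (not in the original script), the assembled table should contain one TPM column per library plus the Transcript_ID column.
# Should return TRUE: one TPM column per library, plus Transcript_ID
ncol(TPMcounts) == length(libraries) + 1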
WGCNA has several recommendations when it comes to RNA-seq data, available in the FAQ here. First, they suggest removing all transcripts with counts below 10 in over 90% of samples. Since we have only 9 samples total, we will remove transcripts whose counts fall below the cutoff in all samples.
They also suggest either a variance-stabilizing transformation or log-transforming the counts using log2(x+1). Unlike in our trial run with Crab A, we will be able to create a DESeq object and perform a variance-stabilizing transformation.
We will adjust our data to fit both of these recommendations. Afterwards, we will transpose the data frame so that samples are rows and transcripts are columns.
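For reference, a literal reading of those two FAQ suggestions might look like the sketch below, shown on a small toy matrix since it is for illustration only; the filtering and transformation actually used in this run follow in the next chunks.
# Illustration only: a toy counts matrix with 4 transcripts (rows) and 3 samples (columns)
toyCounts <- matrix(c(0, 5, 12, 200,
                      3, 8, 15, 120,
                      1, 9, 20, 90), nrow = 4)
# FAQ filter: keep transcripts with a count of 10 or more in at least 10% of samples
keep <- rowSums(toyCounts >= 10) >= ceiling(0.1 * ncol(toyCounts))
toyFiltered <- toyCounts[keep, ]
# FAQ alternative to a variance-stabilizing transformation: log2(x + 1)
toyLog <- log2(toyFiltered + 1)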
# Move transcript IDs to rownames
TPMcounts <- TPMcounts %>%
column_to_rownames(var = "Transcript_ID")
# Get initial dimensions of data frame
dim(TPMcounts)
## [1] 1412254 9
# Keep only transcripts with a count greater than 80 in at least one sample. The FAQ cutoff is 10; testing a stricter cutoff here
TPMcounts <- TPMcounts %>%
filter_all(any_vars(. > 80))
# See how many transcripts we have left
dim(TPMcounts)
## [1] 3657 9
# Round all counts to the nearest integer (DESeq2 requires integer counts)
TPMcounts <- round(TPMcounts, digits = 0)
# Normalize raw counts with DESeq()
crab.dds <- DESeqDataSetFromMatrix(countData = TPMcounts,
colData = crabTraits,
design = ~day)
## converting counts to integer mode
crab.dds <- DESeq(crab.dds)
## estimating size factors
## estimating dispersions
## gene-wise dispersion estimates
## mean-dispersion relationship
## final dispersion estimates
## fitting model and testing
# Perform vst on DESeq object
vsd <- getVarianceStabilizedData(crab.dds)
# Transpose dataframe to format for WGCNA
CrabExpr0 <- as.data.frame(t(vsd))
# Check dataframe was transposed correctly
dim(CrabExpr0)
## [1] 9 3657
We will now begin the analysis with WGCNA. Again, our script follows Yaamini's WGCNA script, which in turn follows the official WGCNA tutorial.
# Check for genes and samples with too many missing values
gsg <- goodSamplesGenes(CrabExpr0, verbose = 3)
## Flagging genes and samples with too many missing values...
## ..step 1
gsg$allOK # should return TRUE if all genes pass test
## [1] TRUE
# Cluster samples to check how they group by expression
sampleTree <- hclust(dist(CrabExpr0), method = "average")
plot(sampleTree)
# Print the crabTraits matrix we made earlier
head(crabTraits)
## crab day
## 1 A 0
## 2 B 0
## 3 C 0
## 4 A 2
## 5 B 2
## 6 C 2
# Use same rownames as expression data to create analogous matrix
rownames(crabTraits) <- rownames(CrabExpr0)
# Make sure it looks good
head(crabTraits)
## crab day
## id178_TPM A 0
## id118_TPM B 0
## id132_TPM C 0
## id359_TPM A 2
## id349_TPM B 2
## id334_TPM C 2
# Create a dendrogram to look at sample and trait clustering
sampleTree2 <- hclust(dist(CrabExpr0), method = "average")
traitColors <- numbers2colors(crabClinicalData, signed = FALSE)
# Plot dendrogram
plotDendroAndColors(sampleTree2, traitColors,
groupLabels = names(crabTraits))
# Create set of soft-thresholding powers
powers <- c(c(1:10), seq(from = 12, to = 20, by = 2))
# Use network topology analysis function to eval soft-thresholding power vals
sft <- pickSoftThreshold(CrabExpr0, powerVector = powers, verbose = 5)
## pickSoftThreshold: will use block size 3657.
## pickSoftThreshold: calculating connectivity for given powers...
## ..working on genes 1 through 3657 of 3657
## Warning: executing %dopar% sequentially: no parallel backend registered
## Power SFT.R.sq slope truncated.R.sq mean.k. median.k. max.k.
## 1 1 0.78100 2.2800 0.729 2040 2220 2660
## 2 2 0.67800 0.7160 0.639 1410 1550 2160
## 3 3 0.28500 0.2420 0.514 1080 1160 1840
## 4 4 0.00469 0.0238 0.461 867 893 1620
## 5 5 0.10900 -0.1110 0.597 723 707 1450
## 6 6 0.31800 -0.2190 0.651 619 570 1320
## 7 7 0.50100 -0.2950 0.777 541 469 1220
## 8 8 0.59400 -0.3600 0.776 479 390 1140
## 9 9 0.68300 -0.4120 0.817 429 331 1070
## 10 10 0.72400 -0.4540 0.806 388 282 1010
## 11 12 0.78700 -0.5370 0.834 324 218 906
## 12 14 0.82700 -0.5970 0.844 277 189 825
## 13 16 0.84100 -0.6460 0.835 240 153 757
## 14 18 0.85400 -0.6870 0.840 212 126 700
## 15 20 0.84700 -0.7230 0.821 188 106 650
# Plot scale-free topology fit as function of soft-thresholding power
plot(sft$fitIndices[,1], -sign(sft$fitIndices[,3])*sft$fitIndices[,2],
xlab = "Soft Threshold (power)",
ylab = "Scale Free Topology Model Fit, signed R^2",
type = "n",
main = paste("Scale independence"))
text(sft$fitIndices[,1], -sign(sft$fitIndices[,3])*sft$fitIndices[,2],
labels = powers,
cex = 1,
col = "red")
# Plot mean connectivity as function of soft-thresholding power
plot(sft$fitIndices[,1],sft$fitIndices[,5],
xlab = "Soft Threshold (power)",
ylab = "Mean Connectivity",
type = "n",
main = paste("Mean connectivity"))
# Add sft values
text(sft$fitIndices[,1], sft$fitIndices[,5],
labels = powers,
cex = 1,
col = "red")
Typically, we would choose the lowest power that reaches an R^2 value of 0.8 or higher. Here, however, the scale-free topology fit index stays well below 0.8 at the lower powers and only creeps above it at high powers, while mean connectivity remains high. According to the WGCNA FAQ, this indicates the following:
“If the scale-free topology fit index fails to reach values above 0.8 for reasonable powers (less than 15 for unsigned or signed hybrid networks, and less than 30 for signed networks) and the mean connectivity remains relatively high (in the hundreds or above) [note: which our data does], chances are that the data exhibit a strong driver that makes a subset of the samples globally different from the rest. The difference causes high correlation among large groups of genes which invalidates the assumption of the scale-free topology approximation.”
I chose to use an unsigned network, since the direction of correlation could be quite interesting (e.g. associating up-regulated C. bairdi transcripts with down-regulated Hematodinium transcripts). For unsigned networks with fewer than 20 samples, the WGCNA FAQ recommends a soft-thresholding power of 9, so that is what we will use.
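For reference, the signed R^2 values plotted above can be tabulated directly from sft$fitIndices. The snippet below is a small convenience addition (not part of the original script) and assumes the column order shown in the pickSoftThreshold output.
# Tabulate the quantity plotted above (signed R^2) next to mean connectivity
data.frame(Power = sft$fitIndices[, 1],
           signedR2 = -sign(sft$fitIndices[, 3]) * sft$fitIndices[, 2],
           meanK = sft$fitIndices[, 5])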
softPower <- 9
adjacency <- adjacency(CrabExpr0, power = softPower)
# Minimize noise and spurious associations by transforming adjacency into TOM
TOM <- TOMsimilarity(adjacency)
## ..connectivity..
## ..matrix multiplication (system BLAS)..
## ..normalization..
## ..done.
# Calculate dissimilarity matrix
dissTOM <- 1 - TOM
# Clustering using TOM
# Create hierarchical clustering object
geneTree <- hclust(as.dist(dissTOM), method = "average")
# Plot initial dendrogram. Dissimilarity is based on topological overlap
plot(geneTree, xlab = "", sub = "",
main = "Gene clustering on TOM-based dissimilarity",
labels = FALSE,
hang = 0.04)
# Set minimum module size, AKA num of genes that need to be in a module. Here, using WGCNA default
minModuleSize <- 30
# Cut branches of dendrogram to ID WGCNA modules
dynamicMods <- cutreeDynamic(dendro = geneTree,
distM = dissTOM,
deepSplit = 2,
pamRespectsDendro = FALSE,
minClusterSize = minModuleSize)
## ..cutHeight not given, setting it to 0.981 ===> 99% of the (truncated) height range in dendro.
## ..done.
# Look at table of modules
table(dynamicMods)
## dynamicMods
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
## 927 836 326 236 218 190 168 131 128 101 89 81 78 61 48 39
# Convert module numbers into colors
dynamicColors <- labels2colors(dynamicMods)
# Plot dendrogram with module colors
plotDendroAndColors(geneTree, dynamicColors, "Dynamic Tree Cut",
dendroLabels = FALSE,
hang = 0.03,
addGuide = TRUE,
guideHang = 0.05,
main = "Gene dendrogram and module colors")
Merging lets us combine modules whose genes are highly co-expressed. To do this, we create and cluster the module eigengenes.
# Calculate eigengenes
MElist <- moduleEigengenes(CrabExpr0, colors = dynamicColors)
# Save eigengenes as new object
MEs <- MElist$eigengenes
# Calculate dissimilarity of eigengenes
MEDiss <- 1-cor(MEs)
# Create cluster object
METree <- hclust(as.dist(MEDiss), method = "average")
# Plot dendrogram of clustered eigengenes
plot(METree, main = "Clustering of module eigengenes",
xlab = "",
sub = "")
# ID cut height based on sample number
dynamicMergeCut(9)
## [1] 0.5278481
# The threshold actually used below comes from dynamicMergeCut(3), which has too few observations and falls back to 0.35 (see warning)
MEDissThres <- dynamicMergeCut(3)
## Warning in function dynamicMergeCut: too few observations for the dynamic assignment of the merge threshold.
## Will set the threshold to .35
# Add the merge threshold as a line on the eigengene dendrogram
abline(h = MEDissThres, col = "red")
merge <- mergeCloseModules(CrabExpr0, dynamicColors,
cutHeight = MEDissThres,
verbose = 3)
## mergeCloseModules: Merging modules whose distance is less than 0.35
## multiSetMEs: Calculating module MEs.
## Working on set 1 ...
## moduleEigengenes: Calculating 16 module eigengenes in given set.
## multiSetMEs: Calculating module MEs.
## Working on set 1 ...
## moduleEigengenes: Calculating 8 module eigengenes in given set.
## Calculating new MEs...
## multiSetMEs: Calculating module MEs.
## Working on set 1 ...
## moduleEigengenes: Calculating 8 module eigengenes in given set.
# Extract merged colors and eigengenes
mergedColors <- merge$colors
mergedMEs <- merge$newMEs
# Plot dendrogram with original and merged eigengenes
plotDendroAndColors(geneTree, cbind(dynamicColors, mergedColors),
c("Dynamic Tree Cut", "Merged dynamic"),
dendroLabels = FALSE,
hang = 0.03,
addGuide = TRUE,
guideHang = 0.05)
# Rename and save variables for subsequent analysis
moduleColors <- mergedColors
colorOrder <- c("grey", standardColors(50)) # Determine color order
moduleLabels <- match(moduleColors, colorOrder)-1 # Construct numerical labels based on colors
MEs <- mergedMEs # Replace unmerged MEs
# Count the number of genes and samples
nGenes <- ncol(CrabExpr0)
nSamples <- nrow(CrabExpr0)
# Recalculate MEs with color labels, order MEs based on MEs0
MEs0 <- moduleEigengenes(CrabExpr0, moduleColors)$eigengenes
MEs <- orderMEs(MEs0)
# Calculate trait correlations and obtain p-values
moduleTraitCor <- cor(MEs, crabClinicalData, use = "p")
moduleTraitPvalue <- corPvalueStudent(moduleTraitCor, nSamples)
moduleTraitPvalue
## crab day
## MEpurple 0.044081506 0.3141518
## MEgreenyellow 0.859521292 0.2440461
## MEblue 0.538320482 0.3017231
## MEmagenta 0.004083427 0.6531787
## MEred 0.622277077 0.7560722
## MEmidnightblue 0.249997544 0.2126666
## MEbrown 0.725538577 0.5812702
## MEgreen 0.548942523 0.1990975
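To make the table above easier to scan, a small follow-up (not in the original script) can flag which module-trait pairs fall below a conventional p < 0.05 cutoff.
# Flag module-trait pairs with p < 0.05 (an arbitrary but conventional cutoff)
which(moduleTraitPvalue < 0.05, arr.ind = TRUE)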
# Create text matrix for correlations and their p-values
textMatrix <- paste(signif(moduleTraitCor, 2), "\n(",
signif(moduleTraitPvalue, 1), ")", sep = "")
dim(textMatrix)
## NULL
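dim() returns NULL here because paste() produces a plain character vector rather than a matrix. In the WGCNA tutorial, this vector is reshaped to the dimensions of the correlation matrix before plotting; a sketch of that next step (not run here) would be:
# Reshape the display text so it lines up with the module-trait correlation matrix
dim(textMatrix) <- dim(moduleTraitCor)
# textMatrix can then be supplied to labeledHeatmap() alongside moduleTraitCor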