There has been some confusion about the species of the bacteriophage genome we are working on, which needs to be clarified. The genome we were assigned was supposed to belong to an Enterotoxigenic Escherichia coli (ETEC) p7 bacteriophage. This made me believe that the phage was a bacteriophage P7, which belongs to the Myoviridae.
When BLASTing the assembled genome against the NCBI database, the results showed that the phage vB_EcoP_SU10 (SU10) had the closest identity (90%) to our assigned phage genome. SU10 is closely related to the Podoviridae (according to Professor Nilsson). PhageTerm also classified the genome as belonging to a bacteriophage T7, which is a podovirus. This was the source of the confusion: I believed our assigned phage genome should belong to the Myoviridae, but all the analyses showed that it was closer to, or even belonged to, the Podoviridae.
After clarification from Professor Nilsson, it turns out that ETEC p7 is not the same phage as bacteriophage P7. ETEC p7 is distantly related to the Podoviridae (according to Professor Nilsson). This should explain why the software placed ETEC p7 close to the Podoviridae. So it seems we are on the right track.
Next up is to try to predict the coding sequences of the ETEC p7 genome with Glimmer.
This post is just a short description of, and reflection on, the coverage of the data, since it is good to know if there are any “weak spots” in the assembly.
I downloaded and installed Geneious Prime with a 14-day trial license and aligned the paired-end reads to the assembled contig. Then I used the built-in tool in Geneious Prime for calculating coverage (with default settings). The output showed regions of high coverage and regions of low coverage. As seen in the image below, the high-coverage region (yellow part) includes the region of the terminal repeats that was found with PhageTerm (and is now confirmed in Geneious Prime). The regions of low coverage (red part) are just at the ends of the assembly, where lower coverage is expected due to the lower quality of the reads there. Overall the coverage seems to be good, with no gaps.
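The coverage idea itself is simple to sketch. This is not the Geneious algorithm, just a minimal Python illustration (with toy positions, not our real data) of how per-base coverage can be computed from read alignment intervals:

```python
# Sketch of the per-base coverage idea behind the coverage report (not the
# Geneious implementation): accumulate read intervals with a difference array.
def coverage_profile(genome_length, alignments):
    """alignments: list of (start, end) 0-based half-open read positions."""
    diff = [0] * (genome_length + 1)
    for start, end in alignments:
        diff[start] += 1
        diff[end] -= 1
    profile, depth = [], 0
    for delta in diff[:-1]:
        depth += delta
        profile.append(depth)
    return profile

# Toy example: three reads over a 10 bp "genome".
profile = coverage_profile(10, [(0, 5), (2, 8), (4, 10)])
print(profile)  # [1, 1, 2, 2, 3, 2, 2, 2, 1, 1]
```

Regions where the profile sits well above or below its typical value would then be the kind of high- and low-coverage regions the report highlights.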
The P7 bacteriophage belongs to the order Caudovirales, whose members contain a single linear double-stranded DNA (dsDNA) molecule and have a tail. This order has three known families: Siphoviridae, Myoviridae and Podoviridae. The difference between these families is that they have different types of tails. The P7 bacteriophage belongs to the Myoviridae, which have a complex contractile tail. The mechanisms for DNA replication and packaging into the procapsid can differ between different species of Caudovirales. By analyzing and determining the nature of the chromosome ends, light can be shed on the replication strategy of the bacteriophage.
Caudovirales have six known types of terminal ends. Phages use these different terminal ends to recognize their own DNA rather than the DNA of their host. Most phages from this order package the DNA into a procapsid from concatemeric (repeating) DNA molecules that are frequently the result of rolling-circle replication. For P7 bacteriophages (which belong to the same species as the P1 bacteriophages) the packaging mechanism is called headful packaging, using a pac site. The pac site is where the terminase can initiate packaging. This leads to phages whose chromosomes are terminally redundant and circularly permuted. An analysis of the termini should confirm this.
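A toy sketch (a made-up ten-letter "genome", not real phage data) can illustrate why headful packaging from a concatemer yields terminally redundant, circularly permuted chromosomes:

```python
# Toy illustration: headful packaging cuts successive capsid-fulls from a
# concatemer, so each chromosome starts at a different point of the circular
# genome (circular permutation) and repeats its first bases at its end
# (terminal redundancy).
genome = "ABCDEFGHIJ"          # circular "genome" of 10 bases
concatemer = genome * 5        # rolling-circle replication yields a concatemer
headful = 12                   # assume the capsid holds ~120% of a genome

chromosomes = [concatemer[i:i + headful]
               for i in range(0, len(concatemer) - headful, headful)]
for chrom in chromosomes:
    redundant = chrom[:headful - len(genome)] == chrom[len(genome):]
    print(chrom, "terminally redundant:", redundant)
```

Each printed chromosome begins at a different position of the genome, and the redundancy check is true for all of them, which is exactly the chromosome structure a terminal analysis should detect.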
After some research it seems there are two approaches to characterizing the termini of phages. The first one, which was also recommended by Professor Nilsson, is to use the software Geneious to look for regions of higher coverage. Since the terminal ends are repeats, these regions are expected to have higher coverage. This should be combined with comparing the phage genome to a similar bacteriophage that has already been characterized, in order to pinpoint the terminal repeats.
The second approach is to use the software PhageTerm. This software is freely available and uses the same principle as described above, looking for regions of the data with a significantly higher number of reads than the rest of the genome. The advantage is that, unlike Geneious, which requires experience to determine the termini, PhageTerm uses a theoretical and statistical framework to determine the terminal repeats. Other advantages of PhageTerm are that it has been specifically validated with Illumina technologies, tested on a range of de novo assembled bacteriophages and developed for dsDNA bacteriophages.
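The intuition behind this kind of analysis can be sketched as follows. This is a heavily simplified stand-in for PhageTerm's actual statistical framework: it merely flags positions where far more reads start than the background would suggest, which is the kind of signal a fixed physical genome end leaves in the data:

```python
# Simplified sketch of the idea PhageTerm builds on (not its real statistics):
# positions with an unusually high number of read starts hint at a terminus.
from statistics import mean, stdev

def candidate_termini(starts_per_position, z_cutoff=3.0):
    """Return positions whose read-start count is a strong outlier."""
    mu, sigma = mean(starts_per_position), stdev(starts_per_position)
    return [pos for pos, n in enumerate(starts_per_position)
            if sigma > 0 and (n - mu) / sigma > z_cutoff]

# Toy data: a background of 1-3 read starts per position, one spike.
starts = [2, 1, 2, 3, 2, 40, 2, 1, 3, 2, 2, 1]
print(candidate_termini(starts))  # [5]
```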
PhageTerm was developed by researchers at the Pasteur Institute, and the institute also hosts PhageTerm behind a Galaxy wrapper. This instance of PhageTerm was used to analyze the termini of the assembled phage genome. The paired-end data and the assembled genome were given as inputs, with the default settings (seed length = 20, peak surrounding region = 20, limit coverage = 250). This resulted in a report [PDF] that put the starting position of the terminal repeats at 13344 and the ending position at 13592, which makes the terminal repeats 248 bp long. PhageTerm also classifies the ends as redundant and non-permuted. If I understand the report correctly it identifies the genome as belonging to a T7 bacteriophage, but this needs to be discussed with Professor Nilsson, since the information we were given was that the genome should belong to a P7 bacteriophage. The difference between P1/P7 bacteriophages and T7 bacteriophages is that the chromosome ends of P1/P7 phages are permuted, while those of T7 phages are not.
PhageTerm also generates a file containing the phage genome sequence reorganized according to the termini positions. It is unclear whether we should proceed with this reorganized genome file or continue with the genome that was assembled with SPAdes. This also needs to be discussed with Professor Nilsson.
This post will only be a summary of some of the knowledge the team has gained during the past week or so about the phage we were assigned and its genome.
First of all, out of the ten assemblies we decided to continue with the one that was assembled with SPAdes, where all reads were given as input and the careful setting was used. This resulted in an assembly with 34 contigs, where two of the contigs make up the majority of the total size. One of these contigs is 90000 bp and the other is 76000 bp. The rest of the contigs are about 3000 bp or shorter. This made us think that one of the two larger contigs might be the phage genome. It was later confirmed by Anders Nilsson that both of these contigs are phage genomes, from two different phages. The smaller contig is the genome that the research group was looking for, so in the continued work of this project we have started to characterize and annotate this genome. Anders also mentioned that this phage is virulent, and so we expect, and have to some extent confirmed by BLASTing against other phage genomes, that this phage has its own enzymes and mechanisms for replication. We were also informed by Anders that this phage belongs to the family of P7 phages. Further annotation should thus proceed by BLASTing against genomes belonging to this family of phages, as far as possible.
We also performed BLAST searches against the human and E. coli genomes and found matches that could not be found in bacteriophages. We thus concluded that there is contamination from these species in the samples. This is nonetheless irrelevant, as we have been given confirmation that the 76000 bp contig is the genome of the phage of interest. But this contig should be BLASTed against E. coli and human to assess whether reads from these species might have been incorporated into the contig.
We decided to divide the work among us: one of us would research the biology of our phage of interest and compare it to other phages as one means of characterizing our phage genome, one would research the ORFs of the phage genome to try to predict genes unique to this phage, and one would do the research and testing needed to find the terminal repeats of the genome. For my part, I was given the task of finding the terminal repeats. This work will be conducted by searching the literature for phages of the same family whose termini have already been found, for clues about the terminal repeats. The coverage of the reads mapped back to the genome should also give clues about the position of the termini, since the terminal repeats can be assumed to have higher coverage than the rest of the genome.
We have been looking for a tool to visualize the contigs and let us work with the genome in a visual manner, and Anders recommended the software Geneious. It is commercial software, but a 14-day free trial version is available. This is the software I will use to explore the terminal repeats.
Then we proceeded to trim and filter the reads with FastX. Trimming is when bases of poor quality are removed, and filtering is when entire reads of poor quality are removed from the dataset, either due to poor average quality, ambiguous base calling or short length. FastX is a collection of command-line tools for pre-processing short reads. Using FastX to filter the data did not result in any changes to the quality of the data. The file sizes of the FASTQ files before and after remained the same, and analyzing the files with FastQC confirmed this, since there were no changes in the plots compared to before filtering. The per-base quality plot below is given as an example of no change from before the filtering.
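Conceptually, the difference between trimming and filtering can be sketched like this (an illustration of the definitions above, not the FastX implementation; the thresholds are assumptions):

```python
# Illustration of trimming vs filtering (not the FastX code itself).
def trim_read(seq, quals, min_qual=20):
    """Trimming: remove low-quality bases from the 3' end of a read."""
    end = len(seq)
    while end > 0 and quals[end - 1] < min_qual:
        end -= 1
    return seq[:end], quals[:end]

def keep_read(seq, quals, min_len=36, min_mean_qual=20):
    """Filtering: drop whole reads that are too short or of poor average quality."""
    return len(seq) >= min_len and sum(quals) / max(len(quals), 1) >= min_mean_qual

seq, quals = trim_read("ACGTACGT", [30, 30, 30, 30, 30, 10, 8, 5])
print(seq)                               # ACGTA
print(keep_read(seq, quals, min_len=5))  # True
```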
We decided to continue with the unfiltered reads.
Next we tried assembling the reads into contigs with two different assemblers: Velvet and SPAdes. After discussion with Anders Nilsson and consulting documentation from Illumina, it was decided to assemble the reads with Velvet using three different k-mer sizes: 21, 41 and 61. SPAdes has its own algorithm for choosing the k-mer size. The literature also recommends assembling with a lower coverage than is common when assembling phage genomes. This is because phage genomes are small, and with high coverage there is a risk that systematic errors would be treated as natural variation. For this reason the reads were also assembled with only 10% of the reads, for both Velvet and SPAdes. In addition, when the reads were assembled with SPAdes they were assembled with the “careful” setting both on and off. In total, ten assemblies were made using SPAdes and Velvet.
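The 10% subsampling step can be sketched as follows. The toy read ids are made up, and a real run would stream gzipped FASTQ files rather than in-memory lists, but the important point is that read pairs are kept together:

```python
# Sketch of subsampling ~10% of paired-end reads while keeping pairs intact.
import random

def subsample_pairs(pairs, fraction=0.10, seed=42):
    rng = random.Random(seed)   # fixed seed so the subsample is reproducible
    return [pair for pair in pairs if rng.random() < fraction]

# Toy data: 1000 read pairs represented by their ids.
pairs = [(f"read{i}/1", f"read{i}/2") for i in range(1000)]
subset = subsample_pairs(pairs)
print(len(subset))  # roughly 100
```

Deciding per pair (not per read) matters: subsampling R1 and R2 files independently would break the pairing that the assemblers rely on.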
The software Quast was used to analyze all assemblies and acquire metrics for them. The results reported by Quast were (only contigs > 500 bp reported):
Velvet 21 k-mer:
Number of contigs: 32
Largest contig: 2926 bp
N50: 749 bp
Total length: 24391 bp
Velvet 41 k-mer:
Number of contigs: 43
Largest contig: 1730 bp
N50: 711 bp
Total length: 32159 bp
Velvet 61 k-mer:
Number of contigs: 63
Largest contig: 1241 bp
N50: 660 bp
Total length: 42473 bp
Velvet 21 k-mer (10% coverage):
Number of contigs: 63
Largest contig: 5390 bp
N50: 1862 bp
Total length: 85004 bp
Velvet 41 k-mer (10% coverage):
Number of contigs: 64
Largest contig: 7011 bp
N50: 2777 bp
Total length: 100054 bp
Velvet 61 k-mer (10% coverage):
Number of contigs: 60
Largest contig: 14272 bp
N50: 2194 bp
Total length: 103048 bp
SPAdes (careful):
Number of contigs: 34
Largest contig: 90035 bp
N50: 76572 bp
Total length: 193763 bp
SPAdes (uncareful):
Number of contigs: 34
Largest contig: 90035 bp
N50: 76700 bp
Total length: 193891 bp
SPAdes (10% coverage and careful):
Number of contigs: 8
Largest contig: 90035 bp
N50: 90035 bp
Total length: 169704 bp
SPAdes (10% coverage and uncareful):
Number of contigs: 7
Largest contig: 90035 bp
N50: 90035 bp
Total length: 169705 bp
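For reference, the metrics above follow standard definitions that are easy to reproduce from a list of contig lengths. This is a sketch of those definitions (with made-up contig lengths), not Quast itself:

```python
# Sketch of Quast-style summary metrics from a list of contig lengths.
def assembly_metrics(lengths, min_len=500):
    lengths = sorted((l for l in lengths if l >= min_len), reverse=True)
    total = sum(lengths)
    running, n50 = 0, 0
    for l in lengths:
        running += l
        if running >= total / 2:   # N50: length at which half the total is reached
            n50 = l
            break
    return {"contigs": len(lengths), "largest": lengths[0],
            "N50": n50, "total": total}

# Toy contig lengths (not one of our real assemblies):
print(assembly_metrics([90035, 76572, 3000, 2000, 600, 400]))
```

Note that the 400 bp contig is excluded, mirroring the "only contigs > 500 bp reported" cutoff in the Quast output above.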
We are unsure how to interpret which of these assemblies is the best based on these metrics, but the genome of the phage is expected to be 80-90 kb in total. None of the SPAdes assemblies falls within this range. Of the Velvet assemblies, only the 21 k-mer with 10% coverage falls within the expected genome size. But the best assembly still remains to be discussed.
Since we realized that we need to quality control the reads, remove adapters, and trim and filter, we have found the software FastQC, which is a quality control tool for high-throughput data. FastQC does not modify the reads; it just produces different kinds of graphs that report the quality of the reads.
We analyzed the original dataset of reads with FastQC, and the quality of the data is reported as good, even though some categories give failures and warnings. The per-base sequence quality shows that the general quality of the bases is good, even though it starts to drop towards the end of the reads.
The category for sequence duplication levels is the only category that gives a failure. This may indicate some kind of enrichment.
The category for over-represented sequences gives a warning. A sequence is regarded as over-represented, and the software raises a warning, if the sequence makes up more than 0.1% of the total number of sequences. An over-represented sequence may be due to biological importance or to contamination. In the table it can be seen that the over-representation of two of the sequences is due to the Illumina adapter sequence.
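The 0.1% rule is easy to sketch (toy reads, not our dataset):

```python
# Sketch of the over-representation rule: flag any sequence making up more
# than 0.1% of all sequences.
from collections import Counter

def overrepresented(seqs, threshold=0.001):
    counts = Counter(seqs)
    return {s: n for s, n in counts.items() if n / len(seqs) > threshold}

# Toy data: one adapter-like sequence repeated 5 times among 2000 reads.
reads = [f"UNIQUE{i}" for i in range(1995)] + ["AGATCGGAAGAGC"] * 5
print(overrepresented(reads))  # {'AGATCGGAAGAGC': 5}
```

Here 5/2000 = 0.25% crosses the threshold, while each unique read (0.05%) does not.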
The category for adapter content also gives a warning. The graph shows, though, that the source of the warning is a significant number of Illumina adapter sequences.
The analysis of the raw reads shows that there is a significant number of Illumina adapter sequences in the dataset, and thus adapter removal should be performed. This was previously done with Trimmomatic, and the resulting reads were again analyzed with FastQC.
The per-base quality improved drastically compared to before the adapter removal, as seen in the image below.
Before the adapter removal, the distribution of sequence lengths had a perfect score, since all the sequences were 300 nucleotides long. After the adapters were removed the distribution of sequence lengths changed, as expected. Most of the sequences are still very close to 300 nucleotides. According to the FastQC manual, the software raises a warning if all the sequences are not the same length. But this should not be a big issue in this case.
The sequence duplication levels did not change much after the removal of the adapters and still raise a failure, indicating an enrichment bias. The software raises a failure if more than 50% of the total number of sequences are non-unique. I’m not sure if this will cause an issue with the assembly, but we decided to continue with the next steps without looking closer into this.
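The failure rule can be sketched as (toy sequences, not the real reads):

```python
# Sketch of the duplication failure rule described above: fail when more than
# 50% of the sequences are non-unique (appear more than once).
from collections import Counter

def duplication_fails(seqs, cutoff=0.5):
    counts = Counter(seqs)
    non_unique = sum(n for n in counts.values() if n > 1)
    return non_unique / len(seqs) > cutoff

print(duplication_fails(["A", "A", "B", "B", "C"]))  # True: 4/5 non-unique
print(duplication_fails(["A", "B", "C", "C"]))       # False: 2/4 non-unique
```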
In the table of over-represented sequences it can be seen that the adapter sequences have been removed, as expected. The rest of the sequences from before the removal are still present; their sources are still unknown, and they should probably be BLASTed to find out more about their importance.
Finally, as expected, the graph for adapter content shows that all the adapters were removed. This category turned from a warning to a pass.
In conclusion, the adapter removal improved the data quality, and we decided to continue with the adapter-trimmed reads.
The group met today and we added quality control of the reads to the project plan. We looked at using either Trimmomatic or Cutadapt. Trimmomatic would be the preferred option, since it is a trimming tool made for Illumina NGS data. The adapter sequences to be removed are also distributed with the software, unlike with Cutadapt, where the user has to specify the adapter sequences to be removed.
According to the Trimmomatic manual, a FASTA file containing the adapter sequences (and PCR sequences etc.) should be specified in addition to the dataset. This file is distributed with Trimmomatic and contains the Illumina adapter sequences. It does not really make sense to me that the path to this file needs to be specified, since it is distributed with the software, and since the software only works with data from Illumina sequencing machines there are not many options for the user to choose between. Finding this file on a distributed system like Uppmax is what took the most time when trying to use this software. The solution was instead to find this FASTA file with the Illumina sequences on the internet and upload it to the same folder as the files with the reads.
The options used for Trimmomatic were the default options specified in the example on the Trimmomatic webpage (for paired-end data):
TrimmomaticPE: Started with arguments:
-phred33 ETECp7_TCCGCGAA-CAGGACGT_L001_R1_001.fastq.gz ETECp7_TCCGCGAA-CAGGACGT_L001_R2_001.fastq.gz output_forward_paired.fq.gz output_forward_unpaired.fq.gz output_reverse_paired.fq.gz output_reverse_unpaired.fq.gz ILLUMINACLIP:TruSeq3-PE-2.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36
Using PrefixPair: ‘TACACTCTTTCCCTACACGACGCTCTTCCGATCT’ and ‘GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT’
Using Long Clipping Sequence: ‘AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA’
Using Long Clipping Sequence: ‘AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC’
Using Long Clipping Sequence: ‘GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT’
Using Long Clipping Sequence: ‘TACACTCTTTCCCTACACGACGCTCTTCCGATCT’
ILLUMINACLIP: Using 1 prefix pairs, 4 forward/reverse sequences, 0 forward only sequences, 0 reverse only sequences
Input Read Pairs: 799754 Both Surviving: 718152 (89.80%) Forward Only Surviving: 77273 (9.66%) Reverse Only Surviving: 631 (0.08%) Dropped: 3698 (0.46%)
TrimmomaticPE: Completed successfully
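One way to start interpreting the summary is simply to check that the four survival categories add up to the number of input pairs and to recompute the percentages from the numbers in the log above:

```python
# Sanity check of the survival numbers in the Trimmomatic summary line.
total = 799754                                    # Input Read Pairs
both, fwd_only, rev_only, dropped = 718152, 77273, 631, 3698

# Every input pair ends up in exactly one of the four categories.
assert both + fwd_only + rev_only + dropped == total
print(f"both surviving: {both / total:.2%}")      # both surviving: 89.80%
```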
I can’t really interpret the summary of the output yet. Are the results good or bad? I will look into it tomorrow.
I have been doing literature research to find out more about the general approach to assembly and the software tools used in each step. One recent paper gives an overview of approaches to assembling viral genomes (R.J. Orton et al.).
The steps recommended for the de novo assembly and annotation of a viral genome, according to R.J. Orton et al., are first of all to put the raw reads through quality control to remove primers/adapters from the reads. Cutadapt and Trimmomatic are two widely used tools for removing adapters. The reads are also usually trimmed to remove poor-quality bases from the ends of reads. In addition to trimming, the reads are also filtered, which means the complete removal of some reads because of low quality, short length or ambiguous base calling. For de novo assembly it is also recommended to remove exact read duplicates. Two widely used tools for filtering and trimming are Trim Galore! and PRINSEQ. Because phage samples are often contaminated with the host genome, it is also recommended to “run a host sequence depletion step”. This means that the reads are first aligned to the host genome and only the unmapped reads are used for de novo assembly. But in the meeting with Anders Nilsson he said that phage genomes might contain sequences that are identical to the host genome, so a host sequence depletion step can probably not be performed thoughtlessly.
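The exact-duplicate removal mentioned above is straightforward to sketch (toy reads; a real pipeline would typically also take the paired mate into account):

```python
# Sketch of exact-duplicate removal before de novo assembly: keep only the
# first occurrence of each identical read.
def remove_exact_duplicates(reads):
    seen, unique = set(), []
    for read in reads:
        if read not in seen:
            seen.add(read)
            unique.append(read)
    return unique

print(remove_exact_duplicates(["ACGT", "TTGA", "ACGT", "GGCC"]))
# ['ACGT', 'TTGA', 'GGCC']
```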
The next step is the assembly. For this step R.J. Orton et al. emphasize the importance of removing adapters and trimming bases of low quality, since only a very small amount of the DNA will be viral and it is therefore important to have high-quality reads. The most common algorithms for de novo assembly are overlap-layout-consensus (OLC) and de Bruijn graphs. They mention the assemblers MIRA (OLC), Edena (OLC), ABySS (de Bruijn) and Velvet (de Bruijn). One big issue with de novo assemblies is that they consist of a multitude of contigs and not the complete genome. This is because of “sequencing errors, repeat regions and areas with low coverage”. The recommended way of joining contigs is to align them to a related reference genome. This will probably not be possible in this case, though, since phages evolve too fast, which makes it impossible to use a reference genome. In discussions with Anders it was advised that this strategy might be possible for some of the genes, but not for any longer stretches of the phage genome. If a reference genome is not available, R.J. Orton et al. recommend using paired-end reads or mate-pair reads to scaffold the contigs into the correct linear order. This should be possible in this case, since the data is paired-end. If the assembler does not do the scaffolding itself, there are stand-alone scaffolders such as Bambus2 and BESST. For paired-end data, gap-filling software such as IMAGE and GapFiller may also be used to close some of the gaps.
After the draft genome assembly is completed it is recommended to inspect the draft, for example by mapping the reads back to it and looking for issues such as miscalled bases, indels and regions of no coverage. Tools such as iCORN2 exist to help in this inspection process.
SPAdes is a recommended tool that can perform most of the steps of de novo assembly as well as the subsequent quality control and correction steps.