
      A heat-shock response regulated by the PfAP2-HS transcription factor protects human malaria parasites from febrile temperatures


          Abstract

Periodic fever is a characteristic clinical feature of human malaria, but how parasites survive febrile episodes is not known. Although Plasmodium spp. genomes encode a full set of chaperones, they lack the conserved eukaryotic transcription factor HSF1, which activates the expression of chaperones upon heat-shock. Here, we show that PfAP2-HS, a transcription factor in the ApiAP2 family, regulates the protective heat-shock response in Plasmodium falciparum. PfAP2-HS activates transcription of hsp70-1 and hsp90 at elevated temperatures. The main binding site of PfAP2-HS in the entire genome coincides with a tandem G-box DNA motif in the hsp70-1 promoter. Engineered parasites lacking PfAP2-HS have reduced heat-shock survival and severe growth defects at 37°C, but not at 35°C. Parasites lacking PfAP2-HS also have increased sensitivity to imbalances in protein homeostasis (proteostasis) produced by artemisinin, the frontline antimalarial drug, or by the proteasome inhibitor epoxomicin. We propose that PfAP2-HS contributes to maintenance of proteostasis under basal conditions and upregulates specific chaperone-encoding genes at febrile temperatures to protect the parasite against protein damage.


Most cited references (72)


          Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles

          Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.
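The core of the method, the running-sum enrichment statistic, can be sketched as follows (the gene names and ranking are hypothetical, and the full method additionally weights hits by expression correlation and assesses significance by permutation):

```python
# Minimal sketch of the Kolmogorov-Smirnov-style running statistic behind GSEA.
# Gene names below are hypothetical placeholders, not data from the paper.

def enrichment_score(ranked_genes, gene_set):
    """Walk down the ranked list, stepping up at genes in the set and
    down otherwise; the ES is the maximum deviation of the running sum."""
    hits = [g in gene_set for g in ranked_genes]
    n_hits = sum(hits)
    n_miss = len(ranked_genes) - n_hits
    up = 1.0 / n_hits      # increment for a gene in the set
    down = 1.0 / n_miss    # decrement for a gene outside the set
    running, best = 0.0, 0.0
    for hit in hits:
        running += up if hit else -down
        if abs(running) > abs(best):
            best = running
    return best

ranked = ["TP53", "MYC", "EGFR", "BRCA1", "GAPDH", "ACTB"]  # toy ranking
print(enrichment_score(ranked, {"TP53", "MYC", "BRCA1"}))
```

A set whose members cluster at the top of the ranking drives the running sum high early, giving a large positive ES; a randomly scattered set hovers near zero.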

            Model-based Analysis of ChIP-Seq (MACS)

            Background The determination of the 'cistrome', the genome-wide set of in vivo cis-elements bound by trans-factors [1], is necessary to determine the genes that are directly regulated by those trans-factors. Chromatin immunoprecipitation (ChIP) [2] coupled with genome tiling microarrays (ChIP-chip) [3,4] and sequencing (ChIP-Seq) [5-8] have become popular techniques to identify cistromes. Although early ChIP-Seq efforts were limited by sequencing throughput and cost [2,9], tremendous progress has been achieved in the past year in the development of next generation massively parallel sequencing. Tens of millions of short tags (25-50 bases) can now be simultaneously sequenced at less than 1% the cost of traditional Sanger sequencing methods. Technologies such as Illumina's Solexa or Applied Biosystems' SOLiD™ have made ChIP-Seq a practical and potentially superior alternative to ChIP-chip [5,8]. While providing several advantages over ChIP-chip, such as less starting material, lower cost, and higher peak resolution, ChIP-Seq also poses challenges (or opportunities) in the analysis of data. First, ChIP-Seq tags represent only the ends of the ChIP fragments, instead of precise protein-DNA binding sites. Although tag strand information and the approximate distance to the precise binding site could help improve peak resolution, a good tag to site distance estimate is often unknown to the user. Second, ChIP-Seq data exhibit regional biases along the genome due to sequencing and mapping biases, chromatin structure and genome copy number variations [10]. These biases could be modeled if matching control samples are sequenced deeply enough. However, among the four recently published ChIP-Seq studies [5-8], one did not have a control sample [5] and only one of the three with control samples systematically used them to guide peak finding [8]. 
That method requires peaks to contain significantly enriched tags in the ChIP sample relative to the control, although a small ChIP peak region often contains too few control tags to robustly estimate the background biases. Here, we present Model-based Analysis of ChIP-Seq data, MACS, which addresses these issues and gives robust and high resolution ChIP-Seq peak predictions. We conducted ChIP-Seq of FoxA1 (hepatocyte nuclear factor 3α) in MCF7 cells for comparison with FoxA1 ChIP-chip [1] and identification of features unique to each platform. When applied to three human ChIP-Seq datasets to identify binding sites of FoxA1 in MCF7 cells, NRSF (neuron-restrictive silencer factor) in Jurkat T cells [8], and CTCF (CCCTC-binding factor) in CD4+ T cells [5] (summarized in Table S1 in Additional data file 1), MACS gives results superior to those produced by other published ChIP-Seq peak finding algorithms [8,11,12]. Results Modeling the shift size of ChIP-Seq tags ChIP-Seq tags represent the ends of fragments in a ChIP-DNA library and are often shifted towards the 3' direction to better represent the precise protein-DNA interaction site. The size of the shift is, however, often unknown to the experimenter. Since ChIP-DNA fragments are equally likely to be sequenced from both ends, the tag density around a true binding site should show a bimodal enrichment pattern, with Watson strand tags enriched upstream of binding and Crick strand tags enriched downstream. MACS takes advantage of this bimodal pattern to empirically model the shifting size to better locate the precise binding sites. Given a sonication size (bandwidth) and a high-confidence fold-enrichment (mfold), MACS slides 2× bandwidth windows across the genome to find regions with tags more than mfold enriched relative to a random tag genome distribution.
MACS randomly samples 1,000 of these high-quality peaks, separates their Watson and Crick tags, and aligns them by the midpoint between their Watson and Crick tag centers (Figure 1a) if the Watson tag center is to the left of the Crick tag center. The distance between the modes of the Watson and Crick peaks in the alignment is defined as 'd', and MACS shifts all the tags by d/2 toward the 3' ends to the most likely protein-DNA interaction sites. Figure 1 MACS model for FoxA1 ChIP-Seq. (a,b) The 5' ends of strand-separated tags from a random sample of 1,000 model peaks, aligned by the center of their Watson and Crick peaks (a) and by the FKHR motif (b). (c) The tag count in ChIP versus control in 10 kb windows across the genome. Each dot represents a 10 kb window; red dots are windows containing ChIP peaks and black dots are windows containing control peaks used for FDR calculation. (d) Tag density profile in control samples around FoxA1 ChIP-Seq peaks. (e,f) MACS improves the motif occurrence in the identified peak centers (e) and the spatial resolution (f) for FoxA1 ChIP-Seq through tag shifting and λlocal. Peaks are ranked by p-value. The motif occurrence is calculated as the percentage of peaks with the FKHR motif within 50 bp of the peak summit. The spatial resolution is calculated as the average distance from the summit to the nearest FKHR motif. Peaks with no FKHR motif within 150 bp of the peak summit are removed from the spatial resolution calculation. When applied to FoxA1 ChIP-Seq, which was sequenced with 3.9 million uniquely mapped tags, MACS estimates the d to be only 126 bp (Figure 1a; suggesting a tag shift size of 63 bp), despite a sonication size (bandwidth) of around 500 bp and Solexa size-selection of around 200 bp. Since the FKHR motif sequence dictates the precise FoxA1 binding location, the true distribution of d could be estimated by aligning the tags by the FKHR motif (122 bp; Figure 1b), which gives a similar result to the MACS model. 
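The shift-size model described above can be sketched as follows (the tag positions are synthetic, and the real implementation fits the strand modes over 1,000 sampled model peaks rather than a single region):

```python
from collections import Counter

# Sketch of the MACS shift-size idea: around a true binding site, Watson (+)
# tags pile up upstream and Crick (-) tags downstream. The distance d between
# the two strand modes estimates the fragment span, and shifting every tag by
# d/2 toward its 3' end moves it onto the binding site. Positions are made up.

def estimate_d(watson_tags, crick_tags):
    """d = distance between the modal Watson and Crick tag positions."""
    watson_mode = Counter(watson_tags).most_common(1)[0][0]
    crick_mode = Counter(crick_tags).most_common(1)[0][0]
    return crick_mode - watson_mode

def shift_tags(watson_tags, crick_tags, d):
    """Shift each tag by d/2 toward its 3' end (+ strand right, - strand left)."""
    return ([t + d // 2 for t in watson_tags] +
            [t - d // 2 for t in crick_tags])

watson = [940, 950, 950, 960]     # 5' ends of + strand tags
crick = [1060, 1070, 1070, 1080]  # 5' ends of - strand tags
d = estimate_d(watson, crick)     # 1070 - 950 = 120
print(d, sorted(shift_tags(watson, crick, d)))
```

After the shift, tags from both strands converge on the same coordinates, which is exactly why the shift sharpens peak resolution.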
When applied to NRSF and CTCF ChIP-Seq, MACS also estimates a reasonable d solely from the tag distribution: for NRSF ChIP-Seq the MACS model estimated d as 96 bp compared to the motif estimate of 70 bp; applied to CTCF ChIP-Seq data the MACS model estimated a d of 76 bp compared to the motif estimate of 62 bp. Peak detection For experiments with a control, MACS linearly scales the total control tag count to be the same as the total ChIP tag count. Sometimes the same tag can be sequenced repeatedly, more times than expected from a random genome-wide tag distribution. Such tags might arise from biases during ChIP-DNA amplification and sequencing library preparation, and are likely to add noise to the final peak calls. Therefore, MACS removes duplicate tags in excess of what is warranted by the sequencing depth (binomial distribution p-value < 10^-5). Among the remaining 34.6% of ChIP-Seq unique peaks, 1,045 (13.3%) were not tiled or only partially tiled on the arrays due to the array design. Therefore, only 21.4% of ChIP-Seq peaks are indeed specific to the sequencing platform. Furthermore, ChIP-chip targets with higher fold-enrichments are more likely to be reproducibly detected by ChIP-Seq with a higher tag count (Figure 3b). Meanwhile, although the signals of array probes at the ChIP-Seq specific peak regions are below the peak-calling cutoff, they show moderate signal enrichments that are significantly higher than the genomic background (Wilcoxon test). Figure 3 Comparison of FoxA1 ChIP-chip (MAT) and ChIP-Seq (MACS; FDR <1%). (a) Shown are the numbers of regions detected by both platforms (that is, having at least 1 bp in common) or unique to each platform. (b) The distributions of ChIP-Seq tag number and ChIP-chip MATscore [13] for FoxA1 binding sites identified by both platforms. (c) MATscore distributions of FoxA1 ChIP-chip at ChIP-Seq/chip overlapping peaks, ChIP-Seq unique peaks, and genome background.
For each peak, the mean MATscore for all probes within the 300 bp region centered at the ChIP-Seq peak summit is used. Genome background is based on MATscores of all array probes in the FoxA1 ChIP-chip data. (d) Width distributions of FoxA1 ChIP-Seq/chip overlapping peaks and ChIP-Seq unique peaks at different fold-enrichments (less than 25, 25 to 50, and larger than 50). (e) Spatial resolution for FoxA1 ChIP-chip and ChIP-Seq peaks. The Wilcoxon test was used to calculate the p-values for (d) and (e). (f) Motif occurrence within the central 200 bp regions for FoxA1 ChIP-Seq/chip overlapping peaks and platform unique peaks. Error bars showing standard deviation were calculated from random sampling of 500 peaks ten times for each category. Background motif occurrences are based on 100,000 randomly selected 200 bp regions in the human genome, excluding regions in genome assembly gaps (containing 'N'). Comparing the difference between ChIP-chip and ChIP-Seq peaks, we find that the average peak width from ChIP-chip is twice as large as that from ChIP-Seq. The average distance from peak summit to motif is significantly smaller in ChIP-Seq than ChIP-chip (Figure 3e), demonstrating the superior resolution of ChIP-Seq. Under the same 1% FDR cutoff, the FKHR motif occurrence within the central 200 bp from ChIP-chip or ChIP-Seq specific peaks is comparable with that from the overlapping peaks (Figure 3f). This suggests that most of the platform-specific peaks are genuine binding sites. A comparison between NRSF ChIP-Seq and ChIP-chip (Figure S3 in Additional data file 1) yields similar results, although the overlapping peaks for NRSF are of much better quality than the platform-specific peaks. Discussion ChIP-Seq users are often curious as to whether they have sequenced deep enough to saturate all the binding sites. In principle, sequencing saturation should be dependent on the fold-enrichment, since higher-fold peaks are saturated earlier than lower-fold ones. 
In addition, due to different cost and throughput considerations, different users might be interested in recovering sites at different fold-enrichment cutoffs. Therefore, MACS produces a saturation table to report, at different fold-enrichments, the proportion of sites that could still be detected when using 90% to 20% of the tags. Such tables produced for FoxA1 (3.9 million tags) and NRSF (2.2 million tags) ChIP-Seq data sets (Figure S4 in Additional data file 1; CTCF does not have a control to robustly estimate fold-enrichment) show that while peaks with over 60-fold enrichment have been saturated, deeper sequencing could still recover more sites less than 40-fold enriched relative to the chromatin input DNA. As sequencing technologies improve their throughput, researchers are gradually increasing their sequencing depth, so this question could be revisited in the future. For now, we leave it up to individual users to make an informed decision on whether to sequence more based on the saturation at different fold-enrichment levels. The d modeled by MACS suggests that some short read sequencers such as Solexa may preferentially sequence shorter fragments in a ChIP-DNA pool. This may contribute to the superior resolution observed in ChIP-Seq data, especially for activating transcription and epigenetic factors in open chromatin. However, for repressive factors targeting relatively compact chromatin, the target regions might be harder to sonicate into the soluble extract. Furthermore, in the resulting ChIP-DNA, the true targets may tend to be longer than the background DNA in open chromatin, making them unfavorable for size-selection and sequencing. This implies that epigenetic markers of closed chromatin may be harder to ChIP, and even harder to ChIP-Seq. To assess this potential bias, examining the histone mark ChIP-Seq results from Mikkelsen et al. 
[7], we find that while the ChIP-Seq efficiency of the active mark H3K4me3 remains high as pluripotent cells differentiate, that of repressive marks H3K27me3 and H3K9me3 becomes lower with differentiation (Table S2 in Additional data file 1), even though it is likely that there are more targets for these repressive marks as cells differentiate. We caution ChIP-Seq users to adopt measures to compensate for this bias when ChIPing repressive marks, such as more vigorous sonication, size-selecting slightly bigger fragments for library preparation, or sonicating the ChIP-DNA further between decrosslinking and library preparation. MACS calculates the FDR based on the number of peaks from control over ChIP that are called at the same p-value cutoff. This FDR estimate is more robust than calculating the FDR from randomizing tags along the genome. However, we notice that when tag counts from ChIP and controls are not balanced, the sample with more tags often gives more peaks even though MACS normalizes the total tag counts between the two samples (Figure S5 in Additional data file 1). While we await more available ChIP-Seq data with deeper coverage to understand and overcome this bias, we suggest to ChIP-Seq users that if they sequence more ChIP tags than controls, the FDR estimate of their ChIP peaks might be overly optimistic. Conclusion As developments in sequencing technology popularize ChIP-Seq, we propose a novel algorithm, MACS, for its data analysis. MACS offers four important utilities for predicting protein-DNA interaction sites from ChIP-Seq. First, MACS improves the spatial resolution of the predicted sites by empirically modeling the distance d and shifting tags by d/2. Second, MACS uses a dynamic λlocal parameter to capture local biases in the genome and improves the robustness and specificity of the prediction. 
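The λlocal test and the control-swap FDR described above can be sketched roughly as follows (all counts, rates, and window choices are made up for illustration; the real implementation estimates the local rates from the control track at several window sizes around each candidate region):

```python
import math

# Sketch of two MACS ideas: a dynamic local Poisson rate (lambda_local) for
# scoring one candidate region, and the empirical FDR from swapping ChIP and
# control samples. All numbers below are illustrative, not from the paper.

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

def peak_pvalue(chip_count, local_rates, lambda_bg):
    # Use the most conservative (largest) of the genome background rate and
    # the rates estimated in windows around the candidate region.
    lambda_local = max([lambda_bg] + local_rates)
    return poisson_sf(chip_count, lambda_local)

def empirical_fdr(n_control_peaks, n_chip_peaks):
    """FDR at a cutoff = control-over-ChIP peaks / ChIP-over-control peaks."""
    return n_control_peaks / n_chip_peaks

p = peak_pvalue(chip_count=30, local_rates=[4.0, 6.5, 5.2], lambda_bg=3.0)
print(p < 1e-5, empirical_fdr(8, 1000))
```

Taking the maximum rate is what makes λlocal robust: a region sitting on a locally noisy stretch is judged against that elevated local background rather than the genome-wide average.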
It is worth noting that in addition to ChIP-Seq, λlocal can potentially be applied to other high throughput sequencing applications, such as copy number variation and digital gene expression, to capture regional biases and estimate robust fold-enrichment. Third, MACS can be applied to ChIP-Seq experiments without controls, and to those with controls with improved performance. Last but not least, MACS is easy to use and provides detailed information for each peak, such as genome coordinates, p-value, FDR, fold_enrichment, and summit (peak center). Materials and methods Dataset ChIP-Seq data for three factors, NRSF, CTCF, and FoxA1, were used in this study. ChIP-chip and ChIP-Seq (2.2 million ChIP and 2.8 million control uniquely mapped reads, simplified as 'tags') data for NRSF in Jurkat T cells were obtained from Gene Expression Omnibus (GSM210637) and Johnson et al. [8], respectively. ChIP-Seq (2.9 million ChIP tags) data for CTCF in CD4+ T cells were derived from Barski et al. [5]. ChIP-chip data for FoxA1 and controls in MCF7 cells were previously published [1], and their corresponding ChIP-Seq data were generated specifically for this study. Around 3 ng FoxA1 ChIP DNA and 3 ng control DNA were used for library preparation, each consisting of an equimolar mixture of DNA from three independent experiments. Libraries were prepared as described in [8] using a PCR preamplification step and size selection for DNA fragments between 150 and 400 bp. FoxA1 ChIP and control DNA were each sequenced with two lanes by the Illumina/Solexa 1G Genome Analyzer, and yielded 3.9 million and 5.2 million uniquely mapped tags, respectively. Software implementation MACS is implemented in Python and freely available with an open source Artistic License at [16]. 
It runs from the command line and takes the following parameters: -t for treatment file (ChIP tags, this is the ONLY required parameter for MACS) and -c for control file containing mapped tags; --format for input file format in BED or ELAND (output) format (default BED); --name for name of the run (for example, FoxA1, default NA); --gsize for mappable genome size to calculate λBG from tag count (default 2.7G bp, approximately the mappable human genome size); --tsize for tag size (default 25); --bw for bandwidth, which is half of the estimated sonication size (default 300); --pvalue for p-value cutoff to call peaks (default 1e-5); --mfold for high-confidence fold-enrichment to find model peaks for MACS modeling (default 32); --diag for generating the table to evaluate sequence saturation (default off). In addition, the user has the option to shift tags by an arbitrary number (--shiftsize) without the MACS model (--nomodel), to use a global lambda (--nolambda) to call peaks, and to show debugging and warning messages (--verbose). If a user has replicate files for ChIP or control, it is recommended to concatenate all replicates into one input file. The output includes one BED file containing the peak chromosome coordinates, and one xls file containing the genome coordinates, summit, p-value, fold_enrichment and FDR (if control is available) of each peak. For FoxA1 ChIP-Seq in MCF7 cells with 3.9 million and 5.2 million ChIP and control tags, respectively, it takes MACS 15 seconds to model the ChIP-DNA size distribution and less than 3 minutes to detect peaks on a 2 GHz CPU Linux computer with 2 GB of RAM. Figure S6 in Additional data file 1 illustrates the whole process with a flow chart. Abbreviations ChIP, chromatin immunoprecipitation; CTCF, CCCTC-binding factor; FDR, false discovery rate; FoxA1, hepatocyte nuclear factor 3α; MACS, Model-based Analysis of ChIP-Seq data; NRSF, neuron-restrictive silencer factor. 
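Assembled from the flags listed above, a typical run might look like the following sketch (the file names are placeholders, the values are the defaults quoted in the text, and the command is only echoed so nothing here requires MACS to be installed):

```shell
# Hypothetical MACS invocation built from the parameters described above.
# foxa1_chip.bed / foxa1_control.bed are placeholder file names.
cmd="macs -t foxa1_chip.bed -c foxa1_control.bed --format BED --name FoxA1 --tsize 25 --bw 300 --pvalue 1e-5 --mfold 32 --diag"
echo "$cmd"
```

Only `-t` is required; dropping `-c` runs the no-control mode, and `--nomodel` with `--shiftsize` substitutes a fixed shift for the empirical model.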
Authors' contributions XSL, WL and YZ conceived the project and wrote the paper. YZ, TL and CAM designed the algorithm, performed the research and implemented the software. JE, DSJ, BEB, CN, RMM and MB performed FoxA1 ChIP-Seq experiments and contributed to ideas. All authors read and approved the final manuscript. Additional data files The following additional data are available. Additional data file 1 contains supporting Figures S1-S6, and supporting Tables S1 and S2. Supplementary Material Additional data file 1 Figures S1-S6, and Tables S1 and S2. Click here for file

              Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega

Introduction Multiple sequence alignments (MSAs) are essential in most bioinformatics analyses that involve comparing homologous sequences. The exact way of computing an optimal alignment between N sequences has a computational complexity of O(L^N) for N sequences of length L, making it prohibitive for even small numbers of sequences. Most automatic methods are based on the ‘progressive alignment' heuristic (Hogeweg and Hesper, 1984), which aligns sequences in larger and larger subalignments, following the branching order in a ‘guide tree.' With a complexity of roughly O(N^2), this approach can routinely make alignments of a few thousand sequences of moderate length, but it is tough to make alignments much bigger than this. The progressive approach is a ‘greedy algorithm' where mistakes made at the initial alignment stages cannot be corrected later. To counteract this effect, the consistency principle was developed (Notredame et al, 2000). This has allowed the production of a new generation of more accurate aligners (e.g. T-Coffee (Notredame et al, 2000)) but at the expense of ease of computation. These methods give 5–10% more accurate alignments, as measured on benchmarks, but are confined to a few hundred sequences. In this report, we introduce a new program called Clustal Omega, which is accurate but also allows alignments of almost any size to be produced. We have used it to generate alignments of over 190 000 sequences on a single processor in a few hours. In benchmark tests, it is distinctly more accurate than most widely used, fast methods and comparable in accuracy to some of the intensive slow methods. It also has powerful features for allowing users to reuse their alignments so as to avoid recomputing an entire alignment every time new sequences become available. The key to making the progressive alignment approach scale is the method used to make the guide tree.
Normally, this involves aligning all N sequences to each other, giving time and memory requirements of O(N^2). Protein families with >50 000 sequences are appearing and will become common from various wide-scale genome sequencing projects. Currently, the only method that can routinely make alignments of more than about 10 000 sequences is MAFFT/PartTree (Katoh and Toh, 2007). It is very fast but leads to a loss in accuracy, which has to be compensated for by iteration and other heuristics. With Clustal Omega, we use a modified version of mBed (Blackshields et al, 2010), which has complexity of O(N log N), and which produces guide trees that are just as accurate as those from conventional methods. mBed works by ‘emBedding' each sequence in a space of n dimensions where n is proportional to log N. Each sequence is then replaced by an n element vector, where each element is simply the distance to one of n ‘reference sequences.' These vectors can then be clustered extremely quickly by standard methods such as K-means or UPGMA. In Clustal Omega, the alignments are then computed using the very accurate HHalign package (Söding, 2005), which aligns two profile hidden Markov models (Eddy, 1998). Clustal Omega has a number of features for adding sequences to existing alignments or for using existing alignments to help align new sequences. One innovation is to allow users to specify a profile HMM that is derived from an alignment of sequences that are homologous to the input set. The sequences are then aligned to these ‘external profiles' to help align them to the rest of the input set. There are already widely available collections of HMMs from many sources such as Pfam (Finn et al, 2009) and these can now be used to help users to align their sequences. Results Alignment accuracy The standard method for measuring the accuracy of multiple alignment algorithms is to use benchmark test sets of reference alignments, generated with reference to three-dimensional structures.
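The embedding step described above can be sketched as follows (the k-mer distance, the toy sequences, and the choice of reference sequences are illustrative stand-ins; mBed computes its distances differently, picks roughly log N references, and clusters the resulting vectors with K-means or UPGMA):

```python
# Sketch of an mBed-style embedding: replace each sequence by a short vector
# of distances to a few reference sequences, so clustering operates on small
# vectors instead of an O(N^2) all-against-all distance matrix.
# The distance function and sequences below are toy stand-ins.

def kmer_distance(a, b, k=2):
    """1 - Jaccard similarity of k-mer sets: a crude sequence distance."""
    ka = {a[i:i + k] for i in range(len(a) - k + 1)}
    kb = {b[i:i + k] for i in range(len(b) - k + 1)}
    return 1.0 - len(ka & kb) / len(ka | kb)

def embed(sequences, references):
    """One coordinate per reference sequence, as in the mBed idea."""
    return [[kmer_distance(s, r) for r in references] for s in sequences]

seqs = ["MKVLAA", "MKVLGA", "GGHHEE", "GGHHED"]  # toy protein fragments
refs = [seqs[0], seqs[2]]                        # toy reference choice
vectors = embed(seqs, refs)
print(vectors)
```

Because each vector has only as many coordinates as there are references, N sequences cost O(N log N) distance computations rather than O(N^2), which is what lets the guide-tree step scale.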
Here, we present results from a range of packages tested on three benchmarks: BAliBASE (Thompson et al, 2005), Prefab (Edgar, 2004) and an extended version of HomFam (Blackshields et al, 2010). For these tests, we just report results using the default settings for all programs but with two exceptions, which were needed to allow MUSCLE (Edgar, 2004) and MAFFT to align the biggest test cases in HomFam. First, for test cases with >3000 sequences, we run MUSCLE with the --maxiter parameter set to 2, in order to finish the alignments in reasonable times. Second, we have run several different programs from the MAFFT package. MAFFT (Katoh et al, 2002) consists of a series of programs that can be run separately or called automatically from a script with the --auto flag set. This flag chooses to run a slow, consistency-based program (L-INS-i) when the number and lengths of sequences are small. When the numbers exceed inbuilt thresholds, a conventional progressive aligner is used (FFT-NS-2). The latter is also the program that is run by default if MAFFT is called with no flags set. For very large data sets, the --parttree flag must be set on the command line and a very fast guide tree calculation is then used. The results for the BAliBASE benchmark tests are shown in Table I. BAliBASE is divided into six ‘references.' Average scores are given for each reference, along with total run times and average total column (TC) scores, which give the proportion of the total alignment columns that is recovered. A score of 1.0 indicates perfect agreement with the benchmark. There are two rows for the MAFFT package: MAFFT (auto) and MAFFT default. In most (203 out of 218) BAliBASE test cases, the number of sequences is small and the script runs L-INS-i, which is the slow accurate program that uses the consistency heuristic (Notredame et al, 2000) that is also used by MSAprobs (Liu et al, 2010), Probalign, Probcons (Do et al, 2005) and T-Coffee.
These programs are all restricted to small numbers of sequences but tend to give accurate alignments. This is clearly reflected in the times and average scores in Table I. The times range from 25 min up to 22 h for these packages and the accuracies range from 55 to 61% of columns correct. Clustal Omega only takes 9 min for the same runs but has an accuracy level that is similar to that of Probcons and T-Coffee. The rest of the table is mainly taken by the programs that use progressive alignment. Some of these are very fast, but this speed is matched by a considerable drop in accuracy compared with the consistency-based programs and Clustal Omega. The weakest program here is Clustal W (Larkin et al, 2007), followed by PRANK (Löytynoja and Goldman, 2008). PRANK is not designed for aligning distantly related sequences but for giving good alignments for phylogenetic work, with special attention to gaps. These gap positions are not included in these tests as they tend not to be structurally conserved. Dialign (Morgenstern et al, 1998) does not use consistency or progressive alignment but is based on finding best local multiple alignments. FSA (Bradley et al, 2009) uses sampling of pairwise alignments and ‘sequence annealing' and has been shown to deliver good nucleotide sequence alignments in the past. The Prefab benchmark test results are shown in Table II. Here, the results are divided into five groups according to the percent identity of the sequences. The overall scores range from 53 to 73% of columns correct. The consistency-based programs MSAprobs, MAFFT L-INS-i, Probalign, Probcons and T-Coffee are again the most accurate but with long run times. Clustal Omega is close to the consistency programs in accuracy but is much faster. There is then a gap to the faster progressive based programs of MUSCLE, MAFFT, Kalign (Lassmann and Sonnhammer, 2005) and Clustal W. Results from testing large alignments with up to 50 000 sequences are given in Table III using HomFam.
Here, each alignment is made up of a core of a Homstrad (Mizuguchi et al, 1998) structure-based alignment of at least five sequences. These sequences are then inserted into a test set of sequences from the corresponding, homologous, Pfam domain. This gives very large sets of sequences to be aligned, but the testing is only carried out on the sequences with known structures. Only some programs are able to deliver alignments at all with data sets of this size. We restricted the comparisons to Clustal Omega, MAFFT, MUSCLE and Kalign. MAFFT with default settings has a limit of 20 000 sequences, and we only use MAFFT with --parttree for the last section of Table III. MUSCLE becomes increasingly slow when you get over 3000 sequences. Therefore, for >3000 sequences we used MUSCLE with the faster but less accurate setting of --maxiters 2, which restricts the number of iterations to two. Overall, Clustal Omega is easily the most accurate program in Table III. The run times show MAFFT default and Kalign to be exceptionally fast on the smaller test cases and MAFFT --parttree to be very fast on the biggest families. Clustal Omega does scale well, however, with increasing numbers of sequences. This scaling is described in more detail in the Supplementary Information. We do have two further test cases with >50 000 sequences, but it was not possible to get results for these from MUSCLE or Kalign. These are described in the Supplementary Information as well. Table III gives overall run times for the four programs evaluated with HomFam. Figure 1 resolves these run times case by case. Kalign is very fast for small families but does not scale as well. Overall, MAFFT is faster than the other programs over all test case sizes but Clustal Omega scales similarly. Points in Figure 1 represent different families with different average sequence lengths and pairwise identities. Therefore, the scalability trend is fuzzy, with larger dots occurring generally above smaller dots.
Supplementary Figure S3 shows scalability data, where subsets of increasing size are sampled from one large family only. This reduces variability in pairwise identity and sequence length. External profile alignment Clustal Omega can read extra information from a profile HMM derived from preexisting alignments. For example, if a user wishes to align a set of globin sequences and has an existing globin alignment, this alignment can be converted to a profile HMM and used as well as the sequence input file. This HMM is here referred to as an ‘external profile' and its use in this way as ‘external profile alignment' (EPA). During EPA, each sequence in the input set is aligned to the external profile. Pseudocount information from the external profile is then transferred, position by position, to the input sequence. Ideally, this would be used with large curated alignments of particular proteins or domains of interest such as are used in metagenomics projects. Rather than taking the input sequences and aligning them from scratch, every time new sequences are found, the alignment should be carefully maintained and used as an external profile for EPA. Clustal Omega also can align sequences to existing alignments using conventional alignment methods. Users can add sequences to an alignment, one by one or align a set of aligned sequences to the alignment. In this paper, we demonstrate the EPA approach with two examples. First, we take the 94 HomFam test cases from the previous section and use the corresponding Pfam HMM for EPA. Before EPA, the average accuracy for the test cases was 0.627 of correctly aligned Homstrad positions but after EPA it rises to 0.653. This is plotted, test case for test case in Figure 2A. Each dot is one test case with the TC score for Clustal Omega plotted against the score using EPA. The second example is illustrated in Figure 2B. 
Here, we take all the BAliBASE reference sets and align them as normal using Clustal Omega, obtaining the benchmark result of 0.554 of columns correctly aligned, as already reported in Table I. For EPA, we then use the benchmark reference alignments themselves as external profiles, and the result jumps to 0.857 of columns correct. This is a gain of over 30 percentage points; while it is not a valid measure of Clustal Omega accuracy for comparison with other programs, it does illustrate the potential power of EPA to exploit information in external alignments.

Iteration

EPA can also be used in a simple iteration scheme. Once an MSA has been made from a set of input sequences, it can be converted into an HMM and used for EPA to help realign the input sequences. This can be combined with a full recalculation of the guide tree. In Figure 3, we show the results of one and two iterations on every test case from HomFam. The graph plots a running average TC score over all test cases with N or fewer sequences, where N runs along the horizontal axis on a log scale. For some smaller test cases, iteration actually has a detrimental effect, but once there are roughly 1000 or more sequences a clear trend emerges: the more sequences, the more beneficial the effect of iteration, and with the biggest test cases it becomes increasingly worthwhile to apply two iterations. This result confirms the usefulness of EPA as a general strategy. It also confirms the difficulty of aligning extremely large numbers of sequences, but offers a partial solution: a very simple yet effective iteration scheme, not just for guide tree iteration, as used in many packages, but for iteration of the alignment itself.

Discussion

The main breakthroughs in MSA methods since the mid 1980s have been progressive alignment and the use of consistency. Otherwise, most recent work has concerned refinements for speed or for accuracy on benchmark test sets.
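The iteration scheme reduces to a short loop: align, build an HMM from the result, realign with that HMM as an external profile. The control-flow sketch below stubs out the aligner and HMM builder as injected functions; both names are hypothetical placeholders for Clustal Omega's internal steps (the progressive aligner, optionally with a recomputed guide tree, and HMM construction from an MSA).

```python
# Sketch of the EPA-based iteration scheme: align once without a profile,
# then repeatedly convert the current MSA into an HMM and realign the input
# sequences with that HMM as an external profile. `align` and `build_hmm`
# are placeholders, not real Clustal Omega API calls.

def iterate_alignment(sequences, align, build_hmm, iterations=2):
    """align(sequences, hmm=None) -> alignment; build_hmm(alignment) -> hmm."""
    alignment = align(sequences, hmm=None)     # initial, profile-free pass
    for _ in range(iterations):
        hmm = build_hmm(alignment)             # current MSA -> external profile
        alignment = align(sequences, hmm=hmm)  # realign with EPA guidance
    return alignment
```

As Figure 3 shows, one pass of this loop helps for large families and a second pass helps for the largest ones, while very small families can be hurt by it.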
The speed increases have been dramatic but, with just two major exceptions, the methods are still basically O(N²) and incapable of being extended to data sets of >10 000 sequences. The two exceptions are mBed, used here, and MAFFT PartTree. PartTree is faster, but at the expense of accuracy, at least as judged by the benchmarking here.

The second group of recent developments concerns accuracy. This work has tended to focus on results from benchmarking, a potentially contentious issue (Aniba et al, 2010; Edgar, 2010). The benchmark test sets that we have are limited in scope and heavily biased toward single-domain globular proteins, which risks producing methods that behave well on benchmarks but are not so flexible or useful in real-world situations. One development to improve accuracy has been the recruitment of extra homologs to bulk up input data sets. This seems to work well with the consistency-based methods and for small data sets. It appears, however, that there is a limit to the extra accuracy that can be obtained this way without further development: the extra sequences may also bring in noise and dramatically increase the complexity of the computational problem. This can be partly fixed by iteration, but EPA against a high-quality reference alignment might be a better solution. It also raises the need for methods to visualize such large alignments, in order to detect problems. A second major focus for development has been the use of external information such as RNA structure (Wilm et al, 2008) or protein structure predictions (Pirovano et al, 2008). EPA is a new approach that allows users to exploit information in their own alignments or in publicly available ones; it does not force new sequences to follow the older alignment exactly.
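mBed's escape from O(N²) can be sketched in a few lines: represent each sequence as a vector of distances to a small set of seed sequences, so that clustering (e.g. with k-means++) and guide-tree construction operate on short vectors instead of a full N×N distance matrix. The k-mer distance below is a crude illustrative stand-in for the real k-tuple distance, and `embed`/`n_seeds` are hypothetical names, not Clustal Omega's API.

```python
# Sketch of the mBed embedding idea: each sequence becomes a short vector
# of distances to a few seed sequences, avoiding all-against-all distance
# computation. The distance function is a rough Jaccard k-mer measure,
# chosen only for illustration.

def kmer_distance(a, b, k=2):
    """1 - Jaccard similarity of k-mer sets; an illustrative stand-in."""
    ka = {a[i:i + k] for i in range(len(a) - k + 1)}
    kb = {b[i:i + k] for i in range(len(b) - k + 1)}
    union = ka | kb
    return 1.0 - (len(ka & kb) / len(union) if union else 1.0)

def embed(sequences, n_seeds=4):
    """Map each sequence to its distances against evenly spaced seeds."""
    step = max(1, len(sequences) // n_seeds)
    seeds = sequences[::step][:n_seeds]
    return [[kmer_distance(s, seed) for seed in seeds] for s in sequences]
```

The resulting vectors can be fed to any standard vector clustering routine; similar sequences land close together in the embedded space because they sit at similar distances from every seed.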
The new sequences are aligned to each other using progressive alignment, but the external profile helps indicate which amino acids are most likely to occur at each position in a sequence. Most methods attempt to predict this from general models of protein evolution, with secondary structure prediction as a refinement. In this paper, we have shown that even using the mass-produced alignments from Pfam as external profiles provides a small increase in accuracy for a large general set of test cases. This opens up new possibilities for users to exploit the information contained in large, publicly available alignments, and creates an incentive for database providers to make very high-quality alignments available.

One of the reasons for the great success of Clustal X was its very user-friendly graphical user interface (GUI). This is less critical now than in the past, thanks to the widespread availability of web-based services where the GUI is provided by the web front-end. Further, several very high-quality alignment viewers and editors, such as Jalview (Clamp et al, 2004) and Seaview (Gouy et al, 2010), read Clustal Omega output or can call Clustal Omega directly.

Materials and methods

Clustal Omega is licensed under the GNU Lesser General Public License. Source code as well as precompiled binaries for Linux, FreeBSD, Windows and Mac (Intel and PowerPC) are available at http://www.clustal.org. Clustal Omega is available as a command line program only; it uses GNU-style command line options and also accepts ClustalW-style command options for backwards compatibility and easy integration into existing pipelines. Clustal Omega is written in C and C++ and makes use of a number of excellent free software packages. We used a modified version of Sean Eddy's Squid library (http://selab.janelia.org/software.html) for sequence I/O, allowing the use of a wide variety of file formats.
We use David Arthur's k-means++ code (Arthur and Vassilvitskii, 2007) for fast clustering of sequence vectors. Code for fast UPGMA and guide tree handling routines was adopted from MUSCLE (Edgar, 2004). We use the OpenMP library to enable multithreaded computation of pairwise distances and alignment match states. The documentation for Clustal Omega's API is part of the source code and is also available from http://www.clustal.org/omega/clustalo-api/. Full details of all algorithms are given in the accompanying Supplementary Information.

The benchmarks used were BAliBASE 3 (Thompson et al, 2005), PREFAB 4.0 (posted March 2005) (Edgar, 2010) and a newly constructed data set (HomFam) built from sequences in Pfam (version 25) and Homstrad (as of 2011-06-13) (Mizuguchi et al, 1998). The programs that were compared can be obtained from:

ClustalW2 v2.1 (http://www.clustal.org)
DIALIGN 2.2.1 (http://dialign.gobics.de/)
FSA 1.15.5 (http://sourceforge.net/projects/fsa/)
Kalign 2.04 (http://msa.sbc.su.se/cgi-bin/msa.cgi)
MAFFT 6.857 (http://mafft.cbrc.jp/alignment/software/source.html)
MSAProbs 0.9.4 (http://sourceforge.net/projects/msaprobs/files/)
MUSCLE 3.8.31, posted 1 May 2010 (http://www.drive5.com/muscle/downloads.htm)
PRANK v.100802, 2 August 2010 (http://www.ebi.ac.uk/goldman-srv/prank/src/prank/)
Probalign v1.4 (http://cs.njit.edu/usman/probalign/)
PROBCONS 1.12 (http://probcons.stanford.edu/download.html)
T-Coffee 8.99 (http://www.tcoffee.org/Projects_home_page/t_coffee_home_page.html#DOWNLOAD)

Supplementary Material

Supplementary Information
Supplementary Figures S1-S3
Review Process File

                Author and article information

                Journal
                Nat Microbiol (Nature Microbiology), ISSN 2058-5276
                September 2021; 6(9): 1163-1174
                Dates on record: 16 July 2021; 16 August 2021; 16 February 2022
                Affiliations
                [1 ]ISGlobal, Hospital Clínic - Universitat de Barcelona, Barcelona 08036, Catalonia, Spain
                [2 ]Department of Biochemistry & Molecular Biology and Huck Center for Malaria Research, Pennsylvania State University, University Park 16802, PA, USA
                [3 ]Department of Infection Biology, London School of Hygiene and Tropical Medicine, London, WC1E 7HT, UK
                [4 ]School of Biological Sciences, Nanyang Technological University, Singapore 637551, Singapore
                [5 ]Department of Chemistry, Pennsylvania State University, University Park 16802, PA, USA
                [6 ]ICREA, Barcelona 08010, Catalonia, Spain
                Author notes
                [#] Equal contribution as second author

                AUTHOR CONTRIBUTIONS

                E.T.-F. performed all experiments except for those presented in Extended Data Fig. 1, Western blot and ChIP-seq experiments. L.M.-T., E.T.-F., T.J.R. and A.C. performed the bioinformatics analysis. N.C.-V. performed Western blot experiments. T.J.R. performed and M.L. supervised ChIP-seq experiments. Z.B. provided microarray hybridizations for experiments presented in Extended Data Fig. 1. D.J.C. advised on clinical isolates and provided Line 1 from The Gambia. E.T.-F. and A.C. conceived the project, designed and interpreted the experiments, and wrote the manuscript (with input from all authors and major input from M.L. and D.J.C.).

                [* ]Correspondence: alfred.cortes@isglobal.org (Alfred Cortés)
                Article
                NIHMS1717925
                DOI: 10.1038/s41564-021-00940-w
                PMCID: PMC8390444
                PMID: 34400833

                Users may view, print, copy, and download text and data-mine the content in such documents, for the purposes of academic research, subject always to the full Conditions of use: https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms

                Categories
                Article
