We directly examined the capacity of NE to facilitate reovirus infection by using the irreversible elastase inhibitor N-(methoxysuccinyl)-Ala-Ala-Pro-Val-chloromethyl ketone [53]. This inhibitor is highly specific for NE and does not inhibit the activity of the related serine protease, Cat G [53]. First, we established the efficacy and specificity of inhibitor treatment under our experimental conditions. U937 cells were treated with the NE inhibitor, E64, Baf or NH4Cl for either 3 h or 2 d, and NE activity in cell lysates was examined using a colorimetric substrate. As shown in Table 1, the NE inhibitor was active at both time points. In cells treated with the specific inhibitor, NE activity was less than 9% of that in untreated U937 cells.
In contrast, in U937 cells treated with E64, Baf or NH4Cl, NE activity was only modestly reduced, remaining above 80% even after 2 d. These results are consistent with the capacity of NE to function at neutral pH. To verify the specificity of the NE inhibitor, we also examined its effect on Cat L/B activity using the fluorogenic substrate Z-Phe-Arg-MCA. As expected, Cat L/B activity was completely inhibited by E64 but largely unaffected by the NE inhibitor.
To examine the effect of the NE inhibitor on reovirus replication in U937 cells, we pre-treated them for 3 h with E64 in the presence or absence of the NE inhibitor, infected them with Lang virions or ISVPs at an MOI of 3, and quantified viral yields at 2 d p.i. A representative experiment is shown in Fig. 4. Consistent with the results shown in Fig. 1, virion replication was not blocked in E64-treated U937 cells. However, in the presence of both E64 and the NE inhibitor, yields were significantly reduced. ISVPs replicated to high yields in treated cells, indicating that the combination of inhibitors was not toxic to U937 cells. These results demonstrate that NE plays a critical role in reovirus infection of U937 cells when cysteine proteases are inhibited.
NE, like many cellular proteases, is expressed as a proenzyme that becomes activated only after its pro-region is removed [54] . We envisioned two models by which NE could facilitate reovirus infection of U937 cells. In the first, NE could directly mediate σ3 degradation, leading to the generation of an ISVP-like particle. In the second, NE could act indirectly by activating another protease. To try to distinguish between these models, we examined the capacity of purified NE to directly mediate σ3 removal from Lang virions in vitro. Purified Lang virions were treated with NE for 1 and 4 h and the treated virus particles were analyzed by SDS-PAGE. As shown in Fig. 5A , NE efficiently removed σ3 from Lang virions; after 1 h very little intact σ3 remained on viral particles. After 4 h of NE treatment, σ3 was completely removed and the underlying µ1C was cleaved to the δ and φ fragments (φ was not retained on the gel). When we assayed the infectivity of the resultant particles by plaque assay we found that NE treatment did not negatively affect the titer of Lang particles (data not shown).
To determine if NE-generated SVPs required further proteolytic processing of σ3, L929 cells were pre-treated with E64 to block cysteine protease activity and infected at an MOI of 3 with Lang virions, ISVPs or NE-generated subviral particles (NE-SVPs). Viral yields were determined at 1 d p.i. As expected, E64 blocked infection of L929 cells by virions. In contrast, both ISVPs and NE-SVPs replicated efficiently in the presence of the cysteine protease inhibitor (Fig. 5B). Because virion disassembly in L929 cells requires acidic pH [10], we also examined the capacity of NE-SVPs to infect L929 cells treated with Baf, NH4Cl or monensin, three agents that raise vesicular pH by distinct mechanisms. Cells were treated with these agents and then infected with virions, ISVPs or NE-SVPs at an MOI of 10. At 18 hours post infection (h p.i.), cell lysates were harvested and expression of the reovirus non-structural protein µNS was analyzed by immunoblotting (Fig. 5C). As expected, when treated cells were infected with virions, viral protein expression was blocked. In contrast, µNS expression was evident even in the presence of agents that raise pH when infections were initiated with ISVPs or NE-SVPs (Fig. 5C). Together, these results demonstrate that NE can directly mediate σ3 removal from virions to generate infectious particles that do not require further proteolytic processing by acid-dependent cysteine proteases in L929 cells.

Table 1 footnotes: (a) U937 cells were treated with the indicated inhibitors for 3 h or 2 d. (b) NE activity was assessed using the colorimetric substrate MeOSuc-Ala-Ala-Pro-Val-pNA, and percent activity relative to untreated cells was calculated. (c) Cathepsin L and B activities were assessed using the fluorogenic substrate Z-Phe-Arg-MCA, and percent activity relative to untreated cells was calculated.
Serine proteases are involved in reovirus infection in the mammalian intestinal tract [31], and in this report we provide evidence that they can mediate uncoating and promote infection in U937 cells. This expands the range of proteases that promote reovirus infection in cell culture to include NE as well as the cysteine proteases Cat L, Cat B, and Cat S. Several lines of evidence now support the notion that protease expression is a cell-specific host factor that can impact reovirus infection. For example, some reovirus strains are inefficiently uncoated by Cat S and thus do not replicate to high yield in P388D macrophages [3]. In this report we demonstrate that PMA-induced differentiation influences the type of protease that mediates reovirus uncoating in U937 cells. In these cells, PMA treatment is reported to increase Cat L expression [55] and decrease expression of the serine proteases NE and Cat G [56, 57]. Accordingly, when we used PMA to induce U937 cell cultures to differentiate, reovirus infection became sensitive to the cysteine protease inhibitor E64. We suspect that Cat L is largely responsible for uncoating in these PMA-differentiated cells, but the acid-independent protease Cat S may also play a role. We are currently addressing this question by analyzing infection in PMA-differentiated cells treated with either Baf or NH4Cl.
Our data do not completely resolve this question. Cat G is expressed by U937 cells and, like NE, it is down-regulated by PMA treatment. Furthermore, we found that in vitro treatment of reovirus virions with purified Cat G generates SVPs that behave like NE-SVPs in that they are infectious in the absence of further proteolytic processing (data not shown). Results of our experiment with the NE-specific inhibitor suggest that NE is largely responsible for the E64-resistant infection in U937 cells. While this inhibitor is reported not to inhibit Cat G [53] , we have not independently confirmed this. Another approach to assess the role of Cat G in reovirus infection of U937 cells would be to examine the effect of Cat G-specific inhibitors on infection. We tried one such inhibitor, Cathepsin G Inhibitor I (Calbiochem) [58] , but found that it was cytotoxic to U937 cell cultures. Given that both NE and Cat G can generate infectious reovirus SVPs, more work needs to be done in order to understand the role that these two proteases play in infection in these cells.
Previously, we reported that virion uncoating mediated by Cat S does not require acidic pH [3]. These results were consistent with the acid-independence of Cat S activity [37]. Together, the results in Fig. 2 and Fig. 4 reveal that, like Cat S, NE mediates infection in an acid-independent manner. This finding thus provides further support for a model in which the requirement for acidic pH during reovirus infection of some cell types reflects the requirement for acid-dependent protease activity in those cells rather than some other requisite acid-dependent aspect of cell entry. The small effect of Baf and NH4Cl on E64-resistant reovirus growth (Fig. 2) may reflect the participation of one or more acid-dependent proteases (such as Cat D) in the activation of NE.
Elastase is stored in azurophilic granules that are the major source of acid-dependent hydrolases in neutrophils [59] . Although these granules do not contain LAMP-1 or LAMP-2 [60] they contain the lysosomal markers LAMP-3 [61] and CD68 [62] and are accessible to endocytosed fluid-phase markers under conditions of cellular stimulation [63] . NE can be released from neutrophils during degranulation [64] and its cell surface expression can be induced upon PMA treatment [65] . However, studies in U937 cells have shown that NE is predominantly retained intracellularly and that little if any activity is present in the extracellular medium [45] . Consistent with this, we have been unable to generate ISVP-like particles by treatment of virions with U937 culture supernatants (data not shown). This observation, together with our finding that PMA treatment decreases the capacity of E64-treated U937 cells to support reovirus infection, leads us to favor a model in which NE-mediated virion uncoating in U937 cell cultures occurs intracellularly.
In vivo, a number of viruses, including dengue and respiratory syncytial virus, induce the release of IL-8, a cytokine that serves as a chemoattractant for neutrophils and promotes their degranulation [66, 67]. Reovirus replication in the rat lung results in neutrophilic invasion [35, 43], and studies in cell culture indicate that reovirus infection can induce IL-8 expression [68]. Thus, the capacity of reovirus to induce IL-8 secretion in vivo might facilitate the release of neutrophilic lysosomal hydrolases, including NE, into the extracellular milieu. In this report, we have shown that mammalian reovirus can utilize this acid-independent serine protease for uncoating. Our data suggest that, in vivo, one consequence of reovirus-induced IL-8 expression would be the generation of infectious NE-SVPs. Like ISVPs, these particles would be predicted to have an expanded cellular host range because they can infect cells that restrict intracellular uncoating [2]. Thus, inflammation might be predicted to exacerbate reovirus infection by promoting viral spread. Future studies using mice with deletions in the NE gene will be required to elucidate the role this protease plays during reovirus infection in the respiratory tract and other tissues. Finally, given the recent finding that endosomal proteolysis of the Ebola virus glycoprotein is necessary for infection [30], our results raise the interesting possibility that NE or other neutrophil proteases may play a role in cell entry of other viruses.

Virions were purified as described previously [Furlong, 1988]. ISVPs were prepared by treating purified virions with chymotrypsin as described elsewhere [Nibert, 1992].
Cysteine protease activity was measured as described previously [23]. Samples were frozen and thawed three times and titrated by plaque assay on L929 cells as described elsewhere [69]. Viral yields were calculated as log10(PFU/ml) at t = x h minus log10(PFU/ml) at t = 0, and are reported ± standard deviation (SD).
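The yield formula above reduces to a difference of base-10 logarithms of titers. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def viral_yield(pfu_per_ml_t, pfu_per_ml_0):
    """Log10 yield: log10(PFU/ml) at t = x h minus log10(PFU/ml) at t = 0."""
    return math.log10(pfu_per_ml_t) - math.log10(pfu_per_ml_0)

# e.g. a titer rising from 1e5 to 1e8 PFU/ml over the time course
print(viral_yield(1e8, 1e5))  # 3.0 log10 units
```

In practice the mean and SD would be taken over replicate infections.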
To analyze NE expression, cell lysates were generated from U937 cells, either treated or untreated for 48 h with 150 nM PMA, as described for the analysis of viral protein expression. Lysate from the equivalent of 1 × 10⁶ cells was run on SDS-12% polyacrylamide gels and transferred to nitrocellulose. Membranes were blocked overnight in TBST containing 10% nonfat dry milk. NE expression was analyzed using a polyclonal antibody against NE (1:400 in TBST) (Santa Cruz Biotechnology Inc, Santa Cruz, CA). Membranes were washed with TBST and incubated with a horseradish peroxidase-conjugated anti-goat IgG (1:5000 in TBST). Bound antibody was detected by treating the nitrocellulose filters with enhanced chemiluminescence (ECL) detection reagents (Amersham) and exposing them to Full Speed Blue X-ray film (Henry Schein, Melville, NY).
Cells were plated at 10⁶/well in a 6-well plate 18-24 h prior to infection. Virus was allowed to adsorb to cells for 1.5 h at 4°C. At this temperature, virus binds to cells but is not internalized [70]. After adsorption, the cultures were incubated at 37°C in fresh medium. Prior to some infections, cells were pre-treated for 3 h with 300 µM E64, 100 nM Baf, 25 µM monensin (Sigma), or 20 mM NH4Cl.
In those instances inhibitors were also included in the post-adsorption culture medium. At the indicated times p.i., cells were collected by centrifugation at 179 × g, washed twice in chilled PBS and lysed in TLB. After centrifugation at 179 × g to remove cellular debris, samples were resuspended in sample buffer. Protein samples (representing 1 × 10⁵ cells) were analyzed by electrophoresis on SDS-12% polyacrylamide gels and transferred to nitrocellulose membranes for 2 h at 100 V in 25 mM Tris-192 mM glycine-20% methanol. Nitrocellulose membranes (Bio-Rad Laboratories, Hercules, Calif.) were blocked overnight at 4°C in TBST (10 mM Tris [pH 8.0], 150 mM NaCl and 0.05% Tween) containing 5% nonfat dry milk, rinsed with TBST, and incubated with a rabbit anti-µNS polyclonal antiserum [71] (1:12500 in TBST) for 1 h. Membranes were subsequently washed with TBST and incubated for 1 h with horseradish peroxidase-conjugated anti-rabbit immunoglobulin G (IgG) (1:7500 in TBST) (Amersham, Arlington Heights, Ill.). Bound antibody was detected by treating the nitrocellulose filters with enhanced chemiluminescence (ECL) detection reagents (Amersham) and exposing the filters to Full Speed Blue X-ray film (Eastman Kodak, Rochester, N.Y.).
Purified virions (1.4 × 10¹¹) were incubated with 25 µg/ml of purified neutrophil elastase (Calbiochem) in 40 µL of VDB at 37°C for 3 h. Reactions were terminated by adding 1 mM PMSF and 200 µM NE inhibitor to the reaction mixture. Proteins from 5.0 × 10¹⁰ particles were run on SDS-12% polyacrylamide gels and stained with Coomassie Brilliant Blue to confirm the removal of σ3. Viral infectivity was determined by plaque assay on L929 cell monolayers.
Purified Lang virions (1.4 × 10¹¹) were treated with 25 µg/ml of NE in 40 µL of VDB at 37°C for the times indicated. Reactions were terminated as described above. To verify σ3 removal, the proteins from 5.0 × 10¹⁰ particles were separated on SDS-12% polyacrylamide gels and visualized with Coomassie Brilliant Blue staining. Viral infectivity for each time point was determined by plaque assay on L929 cell monolayers.

The influence of locked nucleic acid residues on the thermodynamic properties of 2′-O-methyl RNA/RNA heteroduplexes

The influence of locked nucleic acid (LNA) residues on the thermodynamic properties of 2′-O-methyl RNA/RNA heteroduplexes is reported. Optical melting studies indicate that LNA incorporated into an otherwise 2′-O-methyl RNA oligonucleotide usually, but not always, enhances the stabilities of complementary duplexes formed with RNA. Several trends are apparent, including: (i) a 3′ terminal U LNA and 5′ terminal LNAs are less stabilizing than interior and other 3′ terminal LNAs; (ii) most of the stability enhancement is achieved when LNA nucleotides are separated by at least one 2′-O-methyl nucleotide; and (iii) the effects of LNA substitutions are approximately additive when the LNA nucleotides are separated by at least one 2′-O-methyl nucleotide. An equation is proposed to approximate the stabilities of complementary duplexes formed with RNA when at least one 2′-O-methyl nucleotide separates LNA nucleotides. The sequence dependence of 2′-O-methyl RNA/RNA duplexes appears to be similar to that of RNA/RNA duplexes, and preliminary nearest-neighbor free energy increments at 37°C are presented for 2′-O-methyl RNA/RNA duplexes. Internal mismatches with LNA nucleotides significantly destabilize duplexes with RNA.

Understanding the thermodynamics of nucleic acid duplexes is important for many reasons.
For example, such knowledge facilitates design of ribozymes (1), antisense and RNAi oligonucleotides (2-9), diagnostic probes including those employed on microarrays (10-23) and structures useful for nanotechnology (24-27). Many modified residues have been developed for such applications. Examples include propynylated bases (28-30), peptide nucleic acids (5,31-33), N3′→P5′ phosphoramidates (34-38) and 2′-O-alkyl RNA (39-43). A modification that is particularly stabilizing in DNA and RNA duplexes (44-51) is a methylene bridge between the 2′ oxygen and 4′ carbon of ribose to form a 'locked nucleic acid' or LNA, as shown in Figure 1. McTigue et al. (48) have shown that the enhanced stability due to a single LNA residue in a DNA duplex can be predicted from a nearest-neighbor model.
Hybridization of oligonucleotides to RNA is important for applications such as antisense therapeutics (4,8,21,46,52-54), diagnostics (32,33,42,55), profiling gene expression with microarrays (18-20,56), identifying bands on Northern blots (57,58) and probing RNA structure (1,3,15,59-61). Oligonucleotides with 2′-O-alkyl modifications can be particularly useful for these applications because they are easily synthesized (39,43), chemically stable and bind relatively tightly to RNA (39-42). However, for many applications it is desirable to modulate the binding affinity. For example, sequence-independent duplex stabilities would benefit applications that involve multiplex detection, such as microarrays.
Here, we show that introduction of LNA into 2′-O-methyl RNA oligonucleotides can increase the stabilities of 2′-O-methyl RNA/RNA hybrid duplexes and that the enhancements in stability can usually be predicted with a simple model.
High-performance liquid chromatography (HPLC) was performed on a Hewlett Packard series 1100 HPLC with a reverse-phase Supelco RP-18 column (4.6 × 250 mm). Mass spectra were obtained on an LC-MS Hewlett Packard series 1100 MSD with API-ES detector or on an AMD 604/402. Thin-layer chromatography (TLC) was carried out on Merck 60 F254 TLC plates with 1-propanol/aqueous ammonia/water = 55:35:10 (v/v/v).
Oligoribonucleotides were synthesized on an Applied Biosystems DNA/RNA synthesizer using β-cyanoethyl phosphoramidite chemistry (62). For synthesis of standard RNA oligonucleotides, the commercially available phosphoramidites with 2′-O-tert-butyldimethylsilyl groups were used (Glen Research). For synthesis of 2′-O-methyl RNA oligonucleotides, the 3′-O-phosphoramidites of 2′-O-methyl nucleotides were used (Glen Research and Proligo). The 3′-O-phosphoramidites of LNA nucleotides were synthesized according to the published procedures with some minor modifications (44,47,63). The details of deprotection and purification of oligoribonucleotides were described previously (64).
Oligonucleotides were melted in buffer containing 100 mM NaCl, 20 mM sodium cacodylate and 0.5 mM Na2EDTA, pH 7.0. The relatively low NaCl concentration kept melting temperatures in a reasonable range even when there were multiple LNA substitutions. Oligonucleotide single-strand concentrations were calculated from absorbances above 80°C, and single-strand extinction coefficients were approximated by a nearest-neighbor model (65,66). It was assumed that 2′-O-methyl RNA and RNA strands with identical sequences have identical extinction coefficients. Absorbance versus temperature melting curves were measured at 260 nm with a heating rate of 1°C/min from 0 to 90°C on a Beckman DU 640 spectrophotometer with a water-cooled thermoprogrammer. Melting curves were analyzed and thermodynamic parameters were calculated from a two-state model with the program MeltWin 3.5 (67). For almost all sequences, the ΔH° derived from Tm⁻¹ versus ln(CT/4) plots is within 15% of that derived from averaging the fits to individual melting curves, as expected if the two-state model is reasonable.
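For a non-self-complementary duplex, the two-state model underlying the Tm⁻¹ versus ln(CT/4) analysis gives Tm⁻¹ = (R/ΔH°) ln(CT/4) + ΔS°/ΔH°. A quick sketch with hypothetical ΔH° and ΔS° values (not from the paper's tables):

```python
import math

R = 1.987  # gas constant, cal K^-1 mol^-1

def tm_celsius(dH_kcal, dS_cal, ct_molar):
    """Two-state Tm for a non-self-complementary duplex:
    Tm^-1 = (R/dH) * ln(CT/4) + dS/dH, with dH converted to cal/mol."""
    dH = dH_kcal * 1000.0
    tm_inv = (R / dH) * math.log(ct_molar / 4.0) + dS_cal / dH
    return 1.0 / tm_inv - 273.15

# hypothetical duplex: dH = -60 kcal/mol, dS = -160 cal/(K mol), CT = 100 uM
print(round(tm_celsius(-60.0, -160.0, 1e-4), 1))  # ~58.2 C
```

Fitting Tm⁻¹ against ln(CT/4) over a concentration series yields ΔH° from the slope and ΔS° from the intercept.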
Free energy parameters for predicting stabilities of 2′-O-methyl RNA/RNA and 2′-O-methyl RNA-LNA/RNA duplexes with the Individual Nearest-Neighbor Hydrogen Bonding (INN-HB) model (64) were obtained by multiple linear regression with the program Analyse-it v.1.71 (Analyse-It Software, Ltd, Leeds, England; www.analyse-it.com), which expands Microsoft Excel. Analyse-it was also used to obtain parameters for the enhancement of stabilities of 2′-O-methyl RNA/RNA duplexes by substitution of LNA nucleotides internally and/or at the 3′ end when the LNAs are separated by at least one 2′-O-methyl nucleotide. Results from Tm⁻¹ versus ln(CT/4) plots were used as the data for the calculations. Figures 2 and 3 show typical data from optical melting curves, and Table 1 lists the thermodynamic parameters for the helix-to-coil transition with either no or one LNA nucleotide in the primarily 2′-O-methyl strand of a hybrid with a Watson-Crick complementary RNA strand.
Single LNA substitutions at the 5′ end of heptamer duplexes have little effect on stability
The effects of single LNA substitutions at the 5′ end of the 2′-O-methyl strand were studied in duplexes of the form,
where superscript M denotes a 2′-O-methyl sugar, N is A, C, G or U with a 2′-O-methyl or LNA sugar, r denotes ribose sugars, and Q is the Watson-Crick complement to N. As summarized in Table 1, 5′ terminal LNA substitutions make duplex stability more favorable by 0.3-0.6 kcal/mol at 37°C, with an average enhancement of 0.45 kcal/mol. Thus, 5′ terminal LNA substitutions increase the binding constant for duplex formation by ~2-fold at 37°C.
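The fold change in binding constant implied by a stability increment follows from K ∝ exp(−ΔG°/RT), so a gain of ΔΔG° multiplies K by exp(ΔΔG°/RT). A quick check of the ~2-fold figure quoted above (the function name is ours):

```python
import math

R = 1.987e-3  # gas constant, kcal K^-1 mol^-1
T = 310.15    # 37 C in kelvin

def fold_change(ddG_kcal):
    """Fold increase in the duplex binding constant for a stability gain
    of ddG_kcal (kcal/mol) at 37 C."""
    return math.exp(ddG_kcal / (R * T))

print(round(fold_change(0.45), 1))  # ~2.1, the average 5' terminal enhancement
print(round(fold_change(1.4), 1))   # ~9.7, matching the ~10-fold internal case
```

The same conversion reproduces the roughly 10-fold increase later attributed to internal LNA substitutions.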
The effects of single LNA substitutions at the 3′ ends of heptamer duplexes are idiosyncratic
The effects of single LNA substitutions at the 3′ end of the 2′-O-methyl strand were studied in duplexes of the same form (Table 1). If N is A, C or G, then LNA substitutions have similar effects. On average, an LNA substitution makes duplex stability more favorable by 1.2 kcal/mol at 37°C. In the two sequences with a 3′ terminal LNA U on the 2′-O-methyl strand, duplex stability is, however, affected little, averaging a destabilization of 0.08 kcal/mol at 37°C. In both cases, the terminal U is preceded by a GC pair, but both orientations of the GC pair give similar destabilization upon LNA substitution at the 3′ terminal U.
Single LNA substitutions in the interior of A^M C^M U^M A^M C^M C^M A^M enhance the stability of the duplex formed with its complementary RNA by ~1.4 kcal/mol
The effect of interior position on the free energy increment for a single LNA substitution for a 2′-O-methyl RNA was studied for the duplex 5′A^M C^M U^M A^M C^M C^M A^M/3′r(UGAUGGU). As summarized in Table 1, a single interior LNA substitution makes duplex stability more favorable by 1.2-1.7 kcal/mol at 37°C, with an average of 1.4 kcal/mol. This corresponds to roughly a 10-fold increase in binding constant. Thus, interior and 3′ terminal LNA substitutions usually improve binding more than 5′ terminal LNA substitutions. As summarized in Table 1, for 13 of 16 sequences the LNA substitution makes duplex stability more favorable by 1.0-1.5 kcal/mol at 37°C, with an average enhancement of 1.3 kcal/mol. The enhancement for the other three sequences averages 2.1 kcal/mol at 37°C.
The dependence on the 5′ nearest-neighbor nucleotide of the effects of substituting U^L for U^M was studied in duplexes in which the substituted U is preceded by A^M or U^M, respectively. In both cases, the LNA substitution enhances duplex stability by 1.14 kcal/mol at 37°C. Thus, for seven duplexes, the enhanced stability from an LNA substitution is relatively independent of the nearest-neighbor nucleotide 5′ to the LNA. The one exception is the nearest neighbor 5′G^M U^L/3′r(CA). Interestingly, this nearest-neighbor combination is also destabilized by LNA substitution at a 3′ terminal U (Table 1). Evidently, an LNA substitution in the middle of a 2′-O-methyl strand usually affects heteroduplex stability with an RNA strand by about the same amount as an LNA substitution at a 3′ terminus.
The effects of LNA substitutions are approximately additive when LNA nucleotides are spaced by at least one 2′-O-methyl nucleotide

Table 2 contains thermodynamic parameters measured for duplexes having more than one LNA substitution, and Table 3 compares the stabilities at 37°C with those predicted from four simple models. The first model, labeled 'additivity', predicts the ΔG°37 for duplex formation in the 5′ACUACCA/3′UGAUGGU series by adding the free energy increments measured for single LNA substitutions in the same context to the ΔG°37 for duplex formation in the absence of LNA nucleotides. The second model predicts the ΔG°37 (kcal/mol) for duplex formation with the following equation, as deduced from fitting the data in Tables 1 and 2:

ΔG°37 = ΔG°37(2′-O-MeRNA/RNA) + n5′tL ΔΔG°37(5′tL) + niAL/UL ΔΔG°37(iAL/UL) + niGL/CL ΔΔG°37(iGL/CL) + n3′tU ΔΔG°37(3′tU) + n3′tAL/CL/GL ΔΔG°37(3′tAL/CL/GL)   (Equation 1)

Here, ΔG°37(2′-O-MeRNA/RNA) is the free energy change at 37°C for duplex formation in the absence of any LNA nucleotides; n5′tL is the number of 5′ terminal LNAs; niAL/UL and niGL/CL are the number of internal LNAs in AU and GC pairs, respectively; and n3′tU and n3′tAL/CL/GL are the number of 3′ terminal LNAs that are U or not U, respectively; each ΔΔG°37 term is the fitted free energy increment for that class of substitution. Melting temperatures were obtained from Tm⁻¹ versus ln(CT/4) plots, where Tm⁻¹ is the inverse melting temperature in kelvin, R is the gas constant (1.987 cal K⁻¹ mol⁻¹), CT is the total oligonucleotide strand concentration, and both strands have the same concentration.

(Table 1. Thermodynamic parameters of duplex formation between RNA and 2′-O-methyl oligoribonucleotides with and without a single LNA substitution.)

Both methods that use experimental data for ΔG°37(2′-O-MeRNA/RNA) provide reasonable predictions that are within 1 kcal/mol of the measured value (Table 3). Two other methods that use nearest-neighbor models to approximate ΔG°37(2′-O-MeRNA/RNA) provide somewhat less accurate, but still reasonable, predictions as described below.
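The bookkeeping of the additivity model can be sketched as follows. The per-substitution increments below are placeholders taken from the averages quoted in the text; the actual fitted coefficients of Equation 1 are in the original paper:

```python
# Placeholder increments (kcal/mol at 37 C) from the averages quoted in the text,
# NOT the fitted Equation 1 coefficients.
INCREMENTS = {
    "5t": -0.45,       # 5' terminal LNA
    "i_AU": -1.3,      # internal LNA in an AU pair (placeholder)
    "i_GC": -1.3,      # internal LNA in a GC pair (placeholder)
    "3t_U": +0.08,     # 3' terminal LNA U (slightly destabilizing)
    "3t_other": -1.2,  # 3' terminal LNA A, C or G
}

def predict_dG37(dG37_2ome_rna, counts):
    """Add per-class LNA increments to the LNA-free duplex free energy.
    counts maps an increment class to the number of such substitutions."""
    return dG37_2ome_rna + sum(INCREMENTS[k] * n for k, n in counts.items())

# duplex with one internal LNA in an AU pair plus one 3' terminal non-U LNA
print(round(predict_dG37(-8.0, {"i_AU": 1, "3t_other": 1}), 2))  # -10.5
```

This holds only when the LNAs are separated by at least one 2′-O-methyl nucleotide, as the text emphasizes.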
The duplex with the worst prediction, 5′G^M U^L U^M C^L G^M G^L/3′CAAGCC, has a 5′G^M U^L/3′CA nearest neighbor, consistent with this motif being unusually unstable by ~1.2 kcal/mol. Thus, it is likely that the ΔG°37 of Equation 1 should be made less favorable by 1.2 kcal/mol for every internal 5′G^M U^L/3′CA nearest neighbor in a duplex. Evidently, the effects of multiple LNA substitutions are approximately additive when the LNAs are spaced by at least 1 nt.
The data may also be fit to a nearest-neighbor model containing 30 of the LNA enhancement parameters associated with duplexes of RNA strands bound to 2′-O-methyl RNA/LNA chimeras. These parameters are listed in Supplementary Material. The number of occurrences for each nearest neighbor is limited, however, so the values are only roughly determined.
Predictions for RNA/RNA duplexes at 1 M NaCl can be used to approximate stabilities of 2′-O-methyl RNA/RNA duplexes at 0.1 M NaCl
The stabilities of RNA/RNA duplexes at 37°C and 1 M NaCl are predicted well by the Individual Nearest-Neighbor Hydrogen Bonding (INN-HB) model (64). In this model, the stability of an RNA/RNA duplex is approximated by:

ΔG°37(duplex) = ΔG°init + Σj nj ΔG°j(NN) + mterm-AU ΔG°term-AU + ΔG°sym   (Equation 2)

Here, ΔG°init is the free energy change for initiating a helix; each ΔG°j(NN) is the free energy increment of the jth type of nearest neighbor (see Table 4) with nj occurrences in the sequence; mterm-AU is the number of terminal AU pairs; ΔG°term-AU is the free energy increment per terminal AU pair; and ΔG°sym is 0.43 kcal/mol at 37°C for self-complementary duplexes and 0 for non-self-complementary duplexes.
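The INN-HB sum described above is easy to mechanize. The nearest-neighbor values below are placeholders for illustration only; the real parameters are in Table 4 and ref. (64):

```python
# HYPOTHETICAL nearest-neighbor increments (kcal/mol at 37 C), placeholders only.
NN = {"AU/UA": -1.1, "UA/AU": -1.3, "GC/CG": -3.4, "CG/GC": -2.4}
DG_INIT = 4.1       # helix initiation (placeholder)
DG_TERM_AU = 0.45   # per terminal AU pair (placeholder)
DG_SYM = 0.43       # self-complementary correction at 37 C (from the text)

def inn_hb(nn_counts, n_term_au, self_complementary):
    """INN-HB estimate: initiation + sum of nearest-neighbor increments
    + terminal-AU penalties + symmetry correction."""
    dG = DG_INIT + n_term_au * DG_TERM_AU
    dG += sum(NN[nn] * n for nn, n in nn_counts.items())
    if self_complementary:
        dG += DG_SYM
    return dG

# a duplex with two AU/UA and three GC/CG stacks and one terminal AU pair
print(round(inn_hb({"AU/UA": 2, "GC/CG": 3}, 1, False), 2))  # -7.85
```

With the fitted parameters in place, the same loop reproduces the Table 4 predictions.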
Similar sequence-dependent parameters may also be applicable to 2′-O-methyl RNA/RNA heteroduplexes because they are expected to have A-form conformations similar to those of RNA/RNA homoduplexes (68). This was tested by comparing the predicted stabilities of RNA/RNA duplexes in 1 M NaCl at 37°C with those measured for 2′-O-methyl RNA/RNA duplexes in 0.1 M NaCl at 37°C. The predicted thermodynamics are listed in parentheses in Tables 1 and 2. On average at 37°C, the RNA/RNA duplexes in 1 M NaCl are 0.12 ± 0.01 kcal/mol per phosphate pair more stable than the 2′-O-methyl RNA/RNA duplexes in 0.1 M NaCl. Presumably, much of this difference is due to a sequence-independent effect of salt concentration, which would primarily affect the ΔS° for duplex formation (22,69). Thus, a reasonable approximation for the first term on the right-hand side of Equation 1 is:

ΔG°37(2′-O-MeRNA/RNA, 0.1 M NaCl) ≈ ΔG°37(RNA/RNA, 1 M NaCl) + 0.12 nphos − ΔG°sym   (Equation 3)

where nphos is the number of phosphate pairs.
Note that ΔG°sym from the RNA/RNA calculation is subtracted because a 2′-O-methyl RNA/RNA duplex cannot be self-complementary, since the backbones differ. For the duplexes studied here, the number of phosphate pairs is one less than the number of base pairs. The effects of LNA substitutions are likely not very dependent on salt concentration. Thus, it is probable that in 1 M NaCl, or in the presence of Mg²⁺ (70), ΔG°37(2′-O-MeRNA/RNA) can be approximated by ΔG°37(RNA/RNA, 1 M NaCl). Table 3 compares measured values for duplexes with more than one LNA to predictions from combining Equations 1-3. The measured ΔG°37 values average −10.5 kcal/mol, and the root-mean-square difference between measured and predicted ΔG°37 values is 0.6 kcal/mol, with the largest difference being 1.7 kcal/mol. Again, the sequence with the largest difference contains a 5′G^M U^L/3′CA nearest neighbor, so the prediction would be improved if Equation 1 were corrected for the apparent instability of this motif.
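The salt adjustment described above is simple arithmetic: shift the RNA/RNA (1 M NaCl) prediction by +0.12 kcal/mol per phosphate pair and drop the symmetry term. A sketch (function name and example inputs are ours):

```python
def dG37_2ome_from_rna(dG37_rna_1M, n_base_pairs, dG_sym=0.0):
    """Approximate the 2'-O-methyl RNA/RNA free energy at 0.1 M NaCl from the
    RNA/RNA prediction at 1 M NaCl: +0.12 kcal/mol per phosphate pair
    (one less than the number of base pairs), minus any symmetry term."""
    n_phosphate_pairs = n_base_pairs - 1
    return dG37_rna_1M + 0.12 * n_phosphate_pairs - dG_sym

# a 7 bp duplex predicted at -9.0 kcal/mol for RNA/RNA in 1 M NaCl
print(round(dG37_2ome_from_rna(-9.0, 7), 2))  # -8.28
```

The result then serves as the ΔG°37(2′-O-MeRNA/RNA) input to the LNA-enhancement equation.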
The results for 2′-O-methyl RNA/RNA duplexes provide preliminary nearest-neighbor free energy increments for predicting stabilities of such duplexes
The comparison of predicted RNA/RNA stabilities with those measured for 2′-O-methyl RNA/RNA duplexes suggests that the INN-HB model will also be applicable to 2′-O-methyl RNA/RNA duplexes (71). The results in Tables 1 and 2 were therefore used to derive preliminary nearest-neighbor parameters for 2′-O-methyl RNA/RNA duplexes (Table 4). Three nearest neighbors are represented only once or twice in the database, and these parameters are in parentheses. The parameters for 2′-O-methyl RNA/RNA and RNA/RNA duplexes are similar, especially if the RNA/RNA Watson-Crick nearest-neighbor parameters are each made less favorable by 0.12 kcal/mol, which largely accounts for the difference in salt concentration as suggested above. Evidently, the first term on the right-hand side of Equation 1 can also be approximated by applying the INN-HB model with the 2′-O-methyl RNA/RNA nearest-neighbor parameters in Table 4 (Equation 4). Table 3 compares predictions from combining Equations 1 and 4 with measured values for duplexes with more than one LNA. The root-mean-square difference between measured and predicted ΔG°37 values is 0.6 kcal/mol, with the largest difference being the 1.7 kcal/mol associated with the duplex containing a 5′G^M U^L/3′CA nearest neighbor. Undoubtedly, this model can be expanded and refined by more measurements, but it appears sufficient to aid sequence design for many applications.
Complete LNA substitution is no more stabilizing than substitution at every other nucleotide starting at the second nucleotide from the 5′ end
The effect of complete LNA substitution for a 2′-O-methyl RNA backbone was studied for the sequences 5′A^L C^L U^L A^L C^L C^L A^L/3′r(UGAUGGU) and 5′G^L C^L U^L A^L C^L U^L G^L/3′r(CGAUGAC). As summarized in Table 2, the stabilities of these duplexes at 37°C are within experimental error of those measured for 5′A^M C^L U^M A^L C^M C^L A^M/3′r(UGAUGGU) and 5′G^M C^L U^M A^L C^M U^L G^M/3′r(CGAUGAC), respectively. Evidently, the most effective use of LNA nucleotides is to space them at every other nucleotide, with the first LNA placed at the second nucleotide from the 5′ end.
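The spacing rule above is mechanical enough to encode. A sketch that assigns sugar types along a strand (the function and labels are ours, purely illustrative):

```python
def alternate_lna(seq):
    """Place an LNA sugar ('L') at every other position starting from the
    second nucleotide from the 5' end; other positions stay 2'-O-methyl ('M')."""
    return [(nt, "L" if i % 2 == 1 else "M") for i, nt in enumerate(seq)]

# reproduces the 5'A^M C^L U^M A^L C^M C^L A^M pattern for ACUACCA
print(alternate_lna("ACUACCA"))
```

For the heptamer ACUACCA this yields LNAs at positions 2, 4 and 6 from the 5′ end, matching the maximally effective chimera described in the text.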
Internal mismatches make duplex formation less favorable

Table 5 contains thermodynamic parameters measured for the formation of duplexes containing single mismatches and the difference in stabilities relative to completely Watson-Crick complementary duplexes (Tables 1 and 2). All internal mismatches make duplex formation less favorable by at least 2 kcal/mol at 37°C, corresponding to at least a 25-fold less favorable equilibrium constant for duplex formation. In general, terminal mismatches destabilize much less than internal mismatches. In fact, when the 3′ terminal U^L of 5′A^M C^M U^M A^M C^M C^M U^L makes a GU pair, the duplex is stabilized by 0.14 kcal/mol at 37°C relative to a terminal AU pair.
For four cases, the effect of a mismatch with an LNA nucleotide was compared with that for the equivalent 2′-O-methyl nucleotide. In each case, the mismatch penalty for the LNA was less than that for 2′-O-methyl RNA. However, for an A^M-G mismatch flanked by LNAs in the context 5′A^L C^M U^L A^M C^L C^M A^L/3′r(UGAGGGU), the LNAs enhanced the mismatch penalty by ~1 kcal/mol relative to a completely 2′-O-methyl RNA strand. Thus, oligonucleotides containing LNA may discriminate best against mismatches flanked by LNAs.
Oligonucleotide hybridization to RNA has many applications, ranging from quantifying gene expression (18-20,56) to designing therapeutics (4,8,21,46,52-54). LNA nucleotides have characteristics useful for these purposes. For example, LNA usually stabilizes duplexes (4,44,48,51) and is more resistant than RNA and DNA to nuclease digestion (4,6,51). The results presented here provide insights that are useful for designing 2′-O-methyl RNA/LNA chimeric oligonucleotides for various purposes. Some trends may be general for RNA A-form helices and thus may also be relevant to other chimeras with nucleotides that favor A-form conformations. The results suggest several principles for the design of 2′-O-methyl RNA/LNA chimeras for hybridization to RNA.
The database in Tables 1 and 2 is too small to […]

The magnitude and sequence dependence of the stabilization due to LNAs are surprising

Ribose, and therefore probably 2′-O-methyl ribose, sugars in single strands are typically found in roughly equal fractions of C2′-endo and C3′-endo conformations. If the methylene bridge of an LNA only locks the sugar into the C3′-endo conformation, then the expected stabilization due to preorganization would be ΔΔG° = −RT ln 2, which is −0.4 kcal/mol at 37 °C (310.15 K). The stabilization observed for a 5′-terminal LNA is roughly −0.4 kcal/mol, but the average stabilizations for internal LNAs and for 3′-terminal A^L, C^L and G^L are more favorable, at −1.3 and −1.2 kcal/mol, respectively. Moreover, if stabilization were only due to preorganization of an LNA sugar, then the effect would not saturate when alternate sugars are LNA. Evidently, the LNA substitution also affects the 5′ neighboring base pair in a way that enhances the stabilization beyond that expected from preorganization of a single sugar. Interestingly, NMR structures of DNA/LNA chimeras bound to RNA show that only the DNA sugar 3′ of the LNA is driven to a C3′-endo conformation for the sequence d(5′CTGAT^LATGC)/3′GACUAUACG, but all non-terminal DNA sugars are C3′-endo when all three Ts are LNAs (76). The free energy increments at 37 °C for LNA substitutions in DNA/DNA duplexes ranged from +0.83 to −1.90 kcal/mol with an average of −0.55 kcal/mol. This compares with a range from +0.18 to −2.17 kcal/mol and an average of −1.32 kcal/mol for the single internal LNA substitutions in Table 1. The comparison suggests that single LNA substitutions are on average more stabilizing to 2′-O-methyl RNA/RNA duplexes than to DNA/DNA duplexes. This may reflect the expectation that LNA substitutions do not have a large effect on the conformations of 2′-O-methyl RNA/RNA duplexes, but alter the conformations of DNA/DNA duplexes.
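The −RT ln 2 preorganization estimate above can be checked directly; collapsing two roughly equally populated sugar conformers to one contributes a factor of 2 to the equilibrium constant:

```python
import math

R = 1.9872e-3  # gas constant, kcal/(mol*K)
T = 310.15     # 37 C in kelvin

# Two accessible sugar conformers (C2'-endo, C3'-endo) collapse to one:
ddg_preorg = -R * T * math.log(2)  # kcal/mol
```

`ddg_preorg` evaluates to about −0.43 kcal/mol, the ~−0.4 kcal/mol quoted in the text, which is why the −1.3 kcal/mol average internal LNA increment cannot be explained by sugar preorganization alone.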
LNA substitutions should be useful for probing RNA with short 2′-O-methyl RNA oligonucleotides

RNA structure can be probed with short oligonucleotides on microarrays (3). To optimize such methods, it is necessary to have tight binding that is sequence independent and that discriminates against mismatches. It appears that LNA nucleotides can be used to achieve this. For example, free energy increments for 2′-O-methyl RNA/RNA nearest neighbors range from −0.7 to −3.5 kcal/mol, corresponding to 5′A^M U^M/3′UA and 5′G^M C^M/3′CG, respectively (Table 4). The average increment of −1.3 kcal/mol for internal and 3′-terminal LNAs can help compensate for the less favorable stability of AU relative to GC pairs. The stability enhancement from LNA can also allow the use of shorter oligonucleotides.
The potential disadvantage of LNA substitutions in 2′-O-methyl RNA oligonucleotides is that discrimination against mismatches containing an LNA may be less than with a complete 2′-O-methyl RNA backbone. This was clearly true for three of the four cases where such direct comparisons were made. Nevertheless, internal mismatches with LNA nucleotides are considerably destabilizing, averaging a penalty of 4.1 kcal/mol at 37 °C (Table 5), which translates to almost a 1000-fold weaker binding due to a single mismatch. When LNAs flanked an A^M-G mismatch, the mismatch penalty at 37 °C was 4.4 kcal/mol compared with 3.3 kcal/mol in the absence of LNAs. Such an effect may reflect enhanced rigidity due to LNA, which thereby prevents a mismatch from adopting a favorable conformation. Thus, it may be advantageous to use LNAs to flank nucleotides likely to give small mismatch penalties.

Draft versus finished sequence data for DNA and protein diagnostic signature development

Sequencing pathogen genomes is costly, demanding careful allocation of limited sequencing resources. We built a computational Sequencing Analysis Pipeline (SAP) to guide decisions regarding the amount of genomic sequencing necessary to develop high-quality diagnostic DNA and protein signatures. SAP uses simulations to estimate the number of target genomes and close phylogenetic relatives (near neighbors or NNs) to sequence. We use SAP to assess whether draft data are sufficient or finished sequencing is required, using Marburg and variola virus sequences. Simulations indicate that intermediate- to high-quality draft with error rates of 10⁻³-10⁻⁵ (~8× coverage) of target organisms is suitable for DNA signature prediction. Low-quality draft with error rates of ~1% (3× to 6× coverage) of target isolates is inadequate for DNA signature prediction, although low-quality draft of NNs is sufficient, as long as the target genomes are of high quality.
For protein signature prediction, sequencing errors in target genomes substantially reduce the detection of amino acid sequence conservation, even if the draft is of high quality. In summary, high-quality draft of target and low-quality draft of NNs appears to be a cost-effective investment for DNA signature prediction, but may lead to underestimation of predicted protein signatures.

Draft sequencing requires that the order of base pairs in cloned fragments of a genome be determined, usually at least four times (4× depth of coverage) at each position, for a minimum degree of draft accuracy. This information is assembled into contigs, or fragments of the genome that cannot be joined further due to lack of sequence information across gaps between the contigs. To generate high-quality draft, usually ~8× coverage is optimal (1). Finished sequence, without gaps or ambiguous base calls, usually requires 8× to 10× coverage, along with additional analyses, often manual, to orient the contigs relative to one another and to close the gaps between them in a process called finishing. In fact, it has been stated that 'the defining distinction of draft sequencing is the avoidance of significant human intervention' (1), although there are computational tools that may also be capable of automated finishing in some circumstances (2).
While some tabulate the cost differential between high-quality draft and finished sequences to be 3- to 4-fold, and the speed differential to be >10-fold (1), others state that the cost differential is a more modest 1.3- to 1.5-fold (3). In either case, draft sequencing is cheaper and faster. Experts have debated whether finished sequencing is always necessary, considering the higher costs (1,3,4).
Thus, here we set out to determine whether draft sequence data are adequate for the computational prediction of DNA and protein diagnostic signatures. By a 'signature' we mean a short region of sequence that is sufficient to uniquely identify an organism down to the species level, without false negatives due to strain variation or false positives due to cross-reaction with close phylogenetic relatives. In addition, for DNA signatures, we require that the signature be suitable for a TaqMan reaction (e.g. composed of two primers and a probe with the desired Tm values). Limited funds and facilities in which to sequence biothreat pathogens mean that decision makers must choose wisely which and how many organisms to sequence. Money and time saved as a result of draft rather than finished sequencing enable more target organisms, more isolates of the target and more near neighbors (NNs) of the target to be sequenced. However, if draft data do not facilitate the generation of high-quality signatures for detection, the tradeoff of quantity over quality will not be worth it.
We used the Sequencing Analysis Pipeline (SAP) (5,6) to compare the value of finished sequence, real draft sequence and simulated draft sequence of different qualities for the computational prediction of DNA and protein signatures for pathogen detection/diagnostics. Marburg and variola viruses were used as model organisms for these analyses, due to the availability of multiple genomes for these organisms. We hope that variola may serve as a guide for making predictions about bacteria, in which the genomes are substantially larger, and thus the cost of sequencing is much higher than for viruses. Variola was selected as the best available surrogate for bacteria at the time we began these analyses because:
i. it is double-stranded DNA;
ii. it has a relatively low mutation rate, more like bacteria than like the RNA or shorter DNA viruses that have higher mutation rates and thus higher levels of variation;
iii. it is very long for a virus, albeit shorter than a bacterial genome;
iv. we have access to many genomes, which were sequenced by our collaborators at the US Centers for Disease Control and Prevention in Atlanta, GA;
v. there are finished genomes available, so we can compare actual finished data with simulated draft data.

Only recently have a fairly large number of Bacillus anthracis genomes become available to us. However, since only some of these are finished, currently we cannot compare finished with draft results for this bacterial genome.
The sequencing analysis pipeline uses the DNA and protein signature pipelines
The draft SAP simulations are nearly identical to those using finished genomes, described previously (5,6). The SAP (Figure 1) performs stochastic (Monte Carlo) simulations and includes our DNA and Protein Signature Pipelines as components, which are summarized briefly below. It is necessary to describe what the signature pipelines do before the SAP can be clearly described, so the signature pipelines are discussed first. As a step within the DNA and Protein Signature Pipelines, DNA sequence alignments of multiple draft genomes are required. For this we use the WGASA software, also summarized below. Once each of these components of the SAP has been presented, the SAP itself will be described.
The DNA Signature Prediction Pipeline, described in detail elsewhere (7-10), finds sequence regions that are conserved among target genomes by creating a consensus based on a multiple sequence alignment. WGASA is the software used in the analyses here to create an alignment and will be discussed below. Next, the DNA Signature Pipeline identifies regions that are unique in the target sequence consensus relative to all other non-target bacterial and viral sequences that we have in a >1 Gb database, which is frequently updated from the NCBI GenBank sequence database (11) and other sources (e.g. our collaborators at the CDC, USDA and other public sources, such as TIGR, the Sanger Institute and the Joint Genome Institute). From the conserved, unique regions, signatures are selected based on the requirements of a particular technology, in this case TaqMan PCR. These signature candidates may then proceed to further in silico screening (BLAST analyses to look for undesired inexact matches) before undergoing laboratory screening. For an SAP run, a pool of target genomes and a pool of NN genomes are first collected. Then many random subsamples of target and NN genomes are selected from the pools, and each subsample is run through either the DNA or the protein signature pipeline, which identifies regions conserved among the target genomes and unique relative to non-target genomes. Uniqueness is evaluated by comparison to a large sequence database of all currently available bacterial and viral complete genomes (or the non-redundant protein database), excluding from that database those NNs in the NN pool that are not in the random subsample. Thus, each run of the SAP requires many runs of the DNA or protein signature pipelines with different random samples, generating a range of outcomes that are plotted on range plots.
Protein signature prediction and SAP methods have previously been described in detail (6) . The following briefly describes the procedure. First, target genomes are aligned using WGASA. A set of gene (start, end) pairs for both the plus and minus strands relative to the reference genome is required. This implies that coding frames for the translation of nucleic acid codons into amino acids for each protein of the target organism's genome have been correctly determined. From the aligned genomes, nucleotide codons are translated into amino acid sequence based on the gene locations, and conserved strings of six or more amino acids among all the target genomes are recorded. These conserved fragments are then compared with the NCBI GenBank non-redundant (nr) database of amino acid sequences, unveiling peptides that are unique to the target species. For our computations, we require that if a peptide signature is longer than six amino acids, then every sub-string of length six amino acids is also conserved and unique. There may be many conserved and unique peptide signatures on the same and on different proteins. The resulting conserved, unique peptides that are at least six amino acids long from open reading frames are considered to be protein signature candidates.
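A minimal sketch of the conserved-and-unique peptide search described above. It is simplified: it scans raw amino acid sequences rather than a WGASA alignment, and a small list of sequences stands in for the GenBank nr database. Because candidates are built from length-6 windows, the requirement that every 6-residue sub-string of a longer signature be conserved and unique holds automatically.

```python
def conserved_kmers(seqs, k=6):
    """Return the k-mers present in every target protein sequence."""
    common = None
    for s in seqs:
        kmers = {s[i:i + k] for i in range(len(s) - k + 1)}
        common = kmers if common is None else common & kmers
    return common or set()

def signature_candidates(target_seqs, background_seqs, k=6):
    """Conserved k-mers that never occur in the background (non-target)
    sequences; the background here stands in for the nr database."""
    conserved = conserved_kmers(target_seqs, k)
    background = set()
    for s in background_seqs:
        background |= {s[i:i + k] for i in range(len(s) - k + 1)}
    return conserved - background
```

For example, with toy targets `["MKVLAAGHT", "AKVLAAGHQ"]` and background `["ZZVLAAGHZZ"]`, the shared 6-mer VLAAGH is eliminated for lack of uniqueness and only KVLAAG survives as a candidate.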
Signature peptides may be used as targets for antibody or ligand binding and may be developed for use in detection, therapeutics or vaccines (12, 13) . Since the signature regions are highly conserved within a species, it is likely that they are functionally important to the organism's survival or reproduction. Those signatures that land on or near protein active sites may be developed into therapeutics, since antibody or ligand binding may interfere with protein function. Signature regions may even be considered as vaccine targets, since these unique peptides may evoke a specific response in the host (14, 15) .
For draft genomes, WGASA, or Whole Genome Analysis through Scalable Algorithms, is used to align multiple sequences. This is the only available tool that enables multiple sequence alignment of draft genomes and that is capable of aligning large or many genomes. WGASA requires at least one finished reference genome and the others may be draft.
Only recently has it become possible to use the DNA Signature Pipeline to predict signature candidates for draft genomes. This capability is due to the invention of software for multiple sequence alignment of draft genomes with at least one completed full genome. WGASA was developed by David Hysom, Chuck Baldwin and Scott Kohn in the Computations directorate at Lawrence Livermore National Laboratory. They designed the software in close communication with members of our bioinformatics team, and it is tailored for our needs of generating diagnostic and forensic pathogen signatures.
WGASA can efficiently align large (e.g. bacterial) genomes. In addition, the developers have created a parallel version that runs in minutes, allowing the SAP simulations, involving thousands of calls to WGASA, to complete in a feasible time frame. In addition to the SAP analyses, this tool has enabled us to revisit signature predictions for several important organisms, such as the food-borne pathogen Listeria, that were previously problematic because some of the sequences were available only in draft format.
The tool requires one or more complete, finished genomes and any number of draft sequences. It is based on suffix-tree algorithms (16). It requires that anchors, identical sequence fragments of user-specified length, be found in each of the genomes to be aligned. Thus, there must be some level of sequence conservation among the genomes in order to discover anchors of sufficient length (e.g. 35-60 bp) that are present in all the genomes. The regions between the anchors are then aligned using a tool such as clustalw or HMMer. The algorithm functions most efficiently if anchors are frequent and dispersed across the genomes to provide even coverage. If substitutions, deletions, insertions or gaps in sequence information (e.g. between contigs) result in an anchor's absence in one or more of the genomes, then those regions must be aligned using clustalw, which is slower and more memory intensive for large amounts of sequence data. Like all anchor-based alignment algorithms, WGASA depends on a high degree of co-linearity across all input genomes.
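A toy version of anchor selection, under the assumption that anchors are k-mers occurring exactly once in every genome, with the co-linearity requirement enforced greedily. The real WGASA implementation is suffix-tree based and far more efficient; this only illustrates the idea.

```python
from collections import Counter

def unique_kmers(genome, k):
    """Map each k-mer that occurs exactly once in `genome` to its position."""
    counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
    return {km: genome.find(km) for km, n in counts.items() if n == 1}

def colinear_anchors(genomes, k=8):
    """k-mers unique in every genome, greedily filtered so the surviving
    anchors appear in the same order in all genomes (co-linearity)."""
    maps = [unique_kmers(g, k) for g in genomes]
    shared = set.intersection(*(set(m) for m in maps))
    ordered = sorted(shared, key=lambda km: maps[0][km])
    anchors, last = [], [-1] * len(maps)
    for km in ordered:
        pos = [m[km] for m in maps]
        if all(p > q for p, q in zip(pos, last)):  # advances in every genome
            anchors.append(km)
            last = pos
    return anchors
```

The regions between successive anchors would then be handed to a gap aligner such as clustalw, as described above.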
The SAP for DNA signature analyses operates as follows. First, all available complete genomes of the target were gathered into a pool, with the total genome count called T. A second pool was created of all available NN complete genomes, with the total count of sequences called N. Next, we selected 10 random samples of size t targets and n NNs, for all t ranging from 1 to T and all n ranging from 1 to minimum(10, N). We ran the DNA Signature Prediction Pipeline for each sample, with signature prediction based on conservation among the t target strains and uniqueness relative to a >1 Gb database minus those NNs in the NN pool that were not chosen in that sample. Thus, for each sample, signature candidates were predicted as though we had only t target and n NN sequences, as well as the rest of the less-closely related organisms in our database that are not considered NNs. In addition to the number of TaqMan signature candidates, the fraction of the genome that is conserved among the t target sequences was also calculated. Based on the combined results of the many signature pipeline runs using random target samples of size t and n, we assessed how much sequence data, that is, which values of t and n, were required to approximate the number of signature candidates c that were predicted when the full data set (all target and NN sequences, t = T and n = N) was analyzed with the signature pipeline. Using the full data set will yield the fewest signatures, because lack of conservation or uniqueness will winnow away all unsuitable candidates.
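The sampling scheme just described can be sketched as follows, with `signature_pipeline` standing in for a full DNA Signature Pipeline run (names and structure are illustrative, not the actual SAP code):

```python
import random

def sap_dna(targets, neighbors, signature_pipeline, samples=10):
    """Monte Carlo skeleton of the SAP: for each sample size (t, n), draw
    `samples` random subsets and record the range of signature counts."""
    ranges = {}
    T, N = len(targets), len(neighbors)
    for t in range(1, T + 1):
        for n in range(1, min(10, N) + 1):
            counts = []
            for _ in range(samples):
                t_sub = random.sample(targets, t)
                n_sub = random.sample(neighbors, n)
                # conservation is judged on t_sub; uniqueness against the
                # full database minus the NNs absent from n_sub
                counts.append(signature_pipeline(t_sub, n_sub))
            ranges[(t, n)] = (min(counts), max(counts))
    return ranges
```

Each `(t, n)` entry corresponds to one horizontal line on the range plots described below, whose median and quantiles summarize the spread across the random samples.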
Thus, the SAP performs Monte Carlo sampling from the target and NN genomes, runs each sample through the signature pipeline and summarizes the results of the hundreds of signature pipeline runs in a single plot. On our 24-CPU Sun server, up to seven signature pipeline simulations may be run in parallel, each requiring ~15-22 min for viral genomes. All of the SAP analyses of dozens of bacteria and viruses to date have used a total run time of 6.26 years (operating in parallel), with an average pipeline run time of 0.522 h and a process time span of 2.32 years.
The span of predictions generated by different random samples of genomes is illustrated using range plots (Figures 2-8 ). Along the y-axis, whole numbers represent the number of target strains t and the incremental values between the integers represent the number n of NN genomes. Only Figures 3, 5 and 6 have the incremental n values, because for the other plots of target sequence conservation only, the number of NNs was not relevant, and for the protein analyses NN comparisons were not made (described below). Outcomes of the number of signature candidates or the fraction of the target genome that is conserved are plotted along the x-axis as horizontal lines spanning the range (of predicted numbers of signatures or fraction conserved) for the s random samples of size (t,n) with the median and quantiles of the range indicated by colored, short vertical lines. If a random sample of t target strains and n NN strains were sequenced, there would be a 90% chance that the number of signature candidates for that sample would be less than or equal to the 90% quantile mark. The expected outcome is a reduction in the number of signature candidates or the fraction of the genome that is conserved as the number of target and NN sequences used in the simulations increases, due to a reduction in conservation from additional targets and a reduction in uniqueness from additional NNs.
The SAP analyses for proteins proceed much like that for DNA signatures. Random samples of size t target sequences are generated, where t ranges from 1 to T, the total number of target sequences in the pool. Either finished data, actual draft data, or draft data simulated as described below are aligned using WGASA. The protein signature prediction pipeline is run on each random sample, and the range, median, 75th and 90th quantiles of the number of protein signature candidates for the samples of a given target size t is plotted in range plots as described above.
Our DNA SAP analyses examined the effects of both the number of target and the number of NN sequences, but our protein SAP analyses investigated the effects of only the number of target sequences. This is because composing the lists of NN proteins for random, temporary exclusion from the protein nr database (to estimate the value of that NN sequence data) would be difficult to automate for rapid, high-throughput computations. Thus, we compared the target proteins with all the proteins in nr, regardless of their phylogenetic relationship to the target. This was comparable with DNA SAP results using all available NN data.

[Figure caption: To discriminate samples in which zero NNs were used, the range is drawn as a horizontal gray line; when n > 0, the range is drawn as a black line. The best estimate of the true value is the quality measure determined using the entire target and NN pools, and is represented by a vertical black line. This best estimate plus a constant c = 20 is at the location of the vertical dashed line and was selected to indicate a reasonable distance from the true answer. The 75% quantile for each range is shown with a black, vertical tick mark.]
We had sequence data for four strains of Marburg virus, both the actual draft and the finished versions of those same isolates, provided for these analyses by a colleague working at Lawrence Livermore National Laboratory. The draft sequence was of ~3× to 6× coverage, which enabled us to compare SAP results using the same strains in finished form. The identities of these sequences are provided in the Appendix. For the draft Marburg analyses, we selected one finished strain, the reference strain from GenBank (gi|13489275|ref|NC_001608.2|
Marburg virus, complete genome), as the WGASA reference genome, and then used random sub-samples from the four draft genomes. Marburg was the only organism for which we could obtain a sufficient number of draft genomes for the SAP Monte Carlo simulations. A total of 814 simulations for DNA signatures, i.e. individual runs of the DNA signature pipeline, and 48 simulations for protein signatures were performed using Marburg finished and draft data, requiring an average of 15 min per simulation.
We used finished sequence data generously provided by collaborators at the US CDC for 28 variola major genomes and 22 NN genomes from the Orthopox family. The sequence identities are provided in the Appendix. Since we did not have real draft data available, we developed a program to simulate draft sequence from finished sequence, based on guidance from two colleagues who have been involved in sequencing efforts and the finishing process in the Biology and Biotechnology Research Program at Lawrence Livermore National Laboratory. In outline, the draft simulator program randomly cuts a genome into contigs of a size randomly selected from an exponential distribution. Stochastic simulation also determines whether there are gaps or overlaps between contigs, as well as the size of the gap or overlap. Sequencing errors are also simulated.
The following paragraphs describe the draft simulation process in greater detail. First, the 5′ end of the sequence is simulated as missing or present according to a random (Bernoulli) trial based on the probability of there being a gap in the sequence data. If simulations randomly determine that the first part of the sequence is missing, then the size of the missing segment is selected randomly from a uniform distribution ranging from the minimum gap size to the maximum gap size. The length of the first contig is selected randomly from an exponential distribution with a non-zero minimum contig size and a maximum contig size that is a fraction of the mean genome length for the species. The mean of this exponential distribution is also specified as a fraction of the mean genome length.
Next, a random Bernoulli trial again determines whether there is a gap or overlap between the first and second contigs, and the size of the gap or overlap is chosen from the appropriate uniform distribution (range for gap size = 1-2000 bases, range for overlap size = 20-40 bases). The size of the contig is selected from the exponential distribution as described above. Additional contigs are simulated in a similar manner.
Within each contig, sequencing errors are simulated based on the size of the contig and on whether the base position is at an end (first or last 100 bases) or in the middle of the contig. For long, double-stranded DNA viruses (e.g. variola) and bacteria, the sequencing error rates are larger at the beginning and end of a contig than in the middle, and small contigs are more likely to contain sequencing errors than are large contigs. In contrast, due to differences in generating the products for Sanger sequencing that are employed for smaller RNA and DNA viruses, there are often more sequencing errors in the middle of contigs for such smaller viral draft genomes. Although we did not specifically simulate draft for RNA and short DNA viruses, our simulator should work with minor modifications to a few parameters. Thus, there are four parameters that must be specified for simulating sequencing errors: (i) the size cutoff for small versus large contigs, (ii) the probability of errors in the middle portion of small versus large contigs, (iii) the length of the contig ends where sequencing is either less accurate (bacteria and long double-stranded DNA viruses) or more accurate (small viruses, RNA viruses) and (iv) the probability of sequencing errors at the contig ends. If there is a sequencing error at a particular base, we assumed that the base is randomly changed to one of the other three bases with equal probability. Although additional features could be added to the draft simulation tool, the stochastic features that we have incorporated capture the main features of draft sequence and produce data that are suitable for SAP analyses.
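The contig-cutting and error steps above can be sketched in a few lines. This is a simplified stand-in for the actual draft simulator: it uses a single uniform per-base error rate rather than separate end/middle rates, and all parameter names and defaults are illustrative.

```python
import random

def simulate_draft(genome, p_gap=0.95, gap_range=(1, 2000),
                   overlap_range=(20, 40), min_contig=2000,
                   mean_frac=0.05, max_frac=0.5, p_err=1e-3):
    """Toy draft simulator: cut a finished genome into contigs with
    exponentially distributed lengths, separate them by random gaps or
    overlaps, and add uniform per-base substitution errors."""
    bases = "ACGT"
    contigs, pos, L = [], 0, len(genome)
    if random.random() < p_gap:          # the 5' end itself may be missing
        pos = random.randint(*gap_range)
    while pos < L:
        size = min(max(int(random.expovariate(1 / (mean_frac * L))), min_contig),
                   int(max_frac * L))
        contig = list(genome[pos:pos + size])
        for i, b in enumerate(contig):   # per-base sequencing errors
            if random.random() < p_err:
                contig[i] = random.choice([x for x in bases if x != b])
        contigs.append("".join(contig))
        if random.random() < p_gap:      # gap before the next contig
            pos += size + random.randint(*gap_range)
        else:                            # next contig overlaps this one
            pos += size - random.randint(*overlap_range)
    return contigs
```

Because the gap positions, contig lengths and error sites are redrawn on every call, repeated runs on the same finished genome yield different simulated drafts, exactly the property the SAP experiments below rely on.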
We performed six sets of analyses using simulated variola draft. Three sets of simulated variola draft runs of the SAP used the following parameters:

probability of a gap between contigs = 0.95;
probability of overlap between contigs = 0.05;
minimum gap size if there is a gap (uniform distribution) = 1 bp;
maximum gap size = 2000 bp;
minimum overlap if there is overlap (uniform distribution) = 20 bp;
maximum overlap = 40 bp;
minimum contig size (exponential distribution) = 2000 bp;
maximum contig size = 0.5 × (mean genome length) bp;
mean contig size = 0.05 × (mean genome length) bp;
cutoff size for small versus large contigs = 10 000 bp;
probability of sequence errors inside large contigs = 0.01;
probability of sequence errors inside small contigs = 0.05.

We will refer to the above set of simulations as those with a high probability of sequencing errors, or low-quality draft. The other three simulated variola draft runs used all the same parameters as above, except that the sequencing error rates were dramatically lower, more in line with the error rates of 10⁻⁵/base that the US Centers for Disease Control and Prevention (CDC) has indicated for their draft variola genomes:

probability of sequence errors inside large contigs = 10⁻⁵;
probability of sequence errors inside small contigs = 10⁻⁴;
probability of sequence errors in the contig ends = 10⁻³.

These runs were referred to as low error rate, or high-quality draft. Finally, we performed SAP runs using high error rate (low-quality) simulated draft of the NN sequences and intermediate-quality simulated draft of target genomes, using the following probabilities of sequencing errors:

probability of sequence errors inside large contigs = 10⁻³;
probability of sequence errors inside small contigs = 10⁻³;
probability of sequence errors in the contig ends = 10⁻³.
The intermediate quality simulated draft is consistent with error rates for draft sequencing cited in the literature (1,3) .
For the parameter values specified above, three SAP experiments were simulated. In the first, only the target sequences were simulated into draft, and the NN sequences remained as finished sequences. In the second, the NN sequences were converted to simulated draft and the target sequences remained as finished. In the third, both target and NN sequences were simulated into draft. In the second and third cases, all the NNs were run through the draft simulator each time they were chosen, so that the draft sequences (i.e. location and extent of gaps and sequence errors) differ for the same genome among samples. In the first and third cases, the target sequences must be aligned, and WGASA requires that one of the sequences be a finished genome for reference. Thus, for each random sample from the pool of target genomes, one genome was randomly selected to be the finished reference genome, and so was left as finished sequence, and the other genomes in the sample were replaced with simulated draft sequence (by running through the draft simulator) before alignment. As with NNs, target draft sequences differ for the same genome among samples due to the randomness of the draft simulation each time it is run. In addition, the target genome that is chosen to be the finished reference genome differs between samples, and the other target genomes in the sample simulation are replaced with simulated draft versions of the actual finished sequences. These sequences were then aligned using WGASA and the SAP process was run as described above. A total of 1101 stochastic simulations per 'experiment' were performed, requiring ~18 min per simulation. Each simulation involved randomly selecting the subset of target and NN sequences to be included, simulating the draft data based on the finished genomes, aligning the target sample and, finally, running the DNA Signature Pipeline.
There were four combinations examined: (i) finished variola and finished NN, (ii) draft variola and finished NN, (iii) finished variola and draft NN and (iv) draft variola and draft NN, with each of the draft runs repeated for both low and high sequencing error rates. The combination (iv) was also run with intermediate quality simulated draft variola and low-quality simulated draft NNs. In total, there were eight computational experiments for the finished and simulated draft variola data.
We used the following function to estimate viral sequencing costs, based on discussions with our laboratory colleagues involved in sequencing and finishing. This is only a rough estimate, and the actual cost of sequencing any given organism may differ substantially from this rule-of-thumb calculation. Equation 1 assumes that the cost of sequencing viruses does not decline for second and subsequent isolates. While this may be a false assumption in cases where isolates are similar to one another, in cases where the new sequences are divergent, such as isolates from different outbreaks or viruses with rapid mutation rates, the cost is especially unlikely to decline. In addition, the $0.40/bp figure for draft at 6× to 8× coverage could range from $0.30 to $0.50/bp using shotgun sequencing, but may be as low as $0.10-$0.20/bp if primer walking works well (i.e. known primer sites are found in new isolates). Finishing could cost 1-3 times as much again as draft, so we used a factor of 2 (draft $0.40/bp; finished $0.40 + $0.80 = $1.20/bp) in Equation 1 as a reasonable estimate. With rapidly evolving sequencing technologies and costs, these figures are only rough guides that may quickly become outdated.
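The rule-of-thumb can be expressed as a short function. This is a sketch based on the per-base rates quoted above ($0.40/bp draft, $1.20/bp finished), not the paper's Equation 1 verbatim, and the Marburg genome length used here (~19.1 kb) is an approximation:

```python
def sequencing_cost(genome_length_bp, n_finished, n_draft,
                    draft_rate=0.40, finish_surcharge=0.80):
    """Rule-of-thumb viral sequencing cost in dollars, assuming no
    per-isolate discount: draft at ~$0.40/bp, with finishing adding
    roughly twice the draft cost ($0.40 + $0.80 = $1.20/bp)."""
    finished_rate = draft_rate + finish_surcharge
    return genome_length_bp * (n_finished * finished_rate + n_draft * draft_rate)

MARBURG_BP = 19_100  # approximate Marburg virus genome length (assumption)
two_finished = sequencing_cost(MARBURG_BP, n_finished=2, n_draft=0)
one_plus_three = sequencing_cost(MARBURG_BP, n_finished=1, n_draft=3)
# Both options come to roughly $45K under these rates.
```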
According to estimates using Equation 1, it may be substantially less expensive, on the order of 3-fold, to generate draft rather than finished sequence data for an organism like Marburg virus. For example, for $45K one could sequence either two finished genomes or one finished and three draft genomes. However, draft sequencing of this low quality (3× to 6× coverage) for Marburg causes a dramatic decline in the ability to computationally eliminate regions of poor conservation, and thus to exclude poor signature regions (Figure 2). This occurs because gaps in the draft data of some of the sequences mask sequence variation among strains. Using the best available data, all six finished genomes, there is 75.2% sequence conservation; the deficiencies of draft data give a false impression of 92.6% sequence conservation (Table 1). Each additional finished genome reduces the conserved fraction by ~5%, compared with a reduction of only 2% per genome for the draft data.
The overestimation of conservation using draft Marburg data also results in overestimation of the number of signature candidates (Figure 3). Samples of four draft targets plus one finished reference yield 43 signature candidates, and a smaller sample of only two draft targets and one finished reference generates upwards of 80. These results differ from those using finished genomes, where the lack of sequence conservation is more evident and there are zero TaqMan signatures conserved among all strains; most combinations of four finished genomes are sufficient to eliminate non-conserved signatures (Figure 3A). Although the prediction of 0 signature candidates shared among all finished strains may seem to argue against TaqMan methods, this information in fact provides important guidance for developing TaqMan signatures with degenerate bases, or a set of signatures that will, in combination, pick up all sequenced strains. Other analyses indicate that there are TaqMan signatures conserved among five of the six strains, so two signatures would form a minimal set capable of detecting both the one divergent strain and the other five.
Estimated sequencing costs of draft variola and draft NNs indicate that draft may require only one-quarter to one-half the cost of finished sequencing. Simulations of high-quality draft data indicate that it is as good as finished data for diagnostic signature prediction. The conservation range plots (Figure 4A and B) are virtually identical for finished and high-quality draft, and indicate that ~98% of the genome is conserved among sequenced isolates. For intermediate-quality draft (Figure 4C) the conservation range plot is also similar to that for finished sequence, showing that ~97% of the genome appears to be conserved. The range plots for the number of TaqMan signature candidates are very similar for finished sequence data, high- or intermediate-quality draft target, and high- or low-quality draft NNs (Figure 5A-D).
In contrast to the results using high-quality simulated draft or actual Marburg draft, simulations of low-quality variola draft target show that sequence conservation may be underestimated relative to results with finished sequence data, owing to sequencing errors (Table 1 and Figure 4D). With low-quality draft target, it appears that only 58% of the genome is conserved among isolates.
Low-quality (high error rate) draft NN data, however, yield results very similar to those with high-quality draft or finished NN data, as long as the target sequence information is of intermediate to high quality (Figure 5C-D and Figure 6): at least four NN sequences are necessary to ensure that signature regions are unique, whether the NN data are low- or high-quality draft or finished. That is, our simulations indicate that low-quality draft NN data are adequate for predicting DNA signatures, as long as there are good-quality target sequence data. This is because errors in the NN sequences occur at random locations that differ in each NN sequence; as long as at least one of the NNs has enough correct sequence to eliminate each non-unique target region, the unique regions of the target can be determined.
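The reasoning above, that random errors in independent NN drafts rarely all hit the same region, can be sketched with a toy uniqueness check. The function name and 18 bp match length follow the description in the text, but this is an illustrative simplification, not the pipeline's actual uniqueness algorithm:

```python
def region_is_unique(target_region, nn_sequences, k=18):
    """A target region fails uniqueness if ANY near-neighbor (NN)
    sequence contains a perfect match of at least k contiguous bases.
    With several independent draft NNs, sequencing errors would have to
    hit the same region in every NN for a shared region to be missed."""
    for nn in nn_sequences:
        for i in range(len(target_region) - k + 1):
            if target_region[i:i + k] in nn:
                return False  # shared with a near neighbor: not unique
    return True

shared = "ACGTACGTACGTACGTAC"                  # an 18 bp region also in the NNs
nn_good = "TTTT" + shared + "GGGG"             # error-free draft retains the match
nn_errored = "TTTT" + "ACGTACGTACGAACGTAC" + "GGGG"  # one simulated error breaks it
missed = region_is_unique(shared, [nn_errored])            # the lone bad NN misses it
caught = not region_is_unique(shared, [nn_errored, nn_good])  # a second NN catches it
```

With only one errored NN the shared region is wrongly called unique; adding a second, independently errored (here, error-free at this locus) NN restores the correct answer, which is why several low-quality NN drafts suffice.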
The results illustrated in the figures are reinforced by the data in Table 1. This table shows the fraction of the target genome that is conserved and conserved+unique, the number of conserved+unique regions at least 18 contiguous base pairs long, and the number of base pairs in the largest of these regions, since these are the sections long enough for one or possibly more primers to be located. The number of these regions is similar for finished data and for draft with a low error rate. Low-quality target draft (with a high rate of sequencing errors), however, gives the false impression that there are fewer and shorter regions that are conserved and suitable as signature regions than is actually the case.
There is an artifact in some of our results that is a consequence of the order in which we calculate conservation and then uniqueness, although it does not affect the signatures that are predicted. First, a conservation gestalt is generated from the sequence alignment, in which non-conserved bases are replaced by a dot ('.'). Then uniqueness is calculated based upon perfect matches of at least 18 bp between the conservation gestalt and a large sequence database of non-target sequences. Non-conserved bases in the conservation gestalt may break a region up into conserved fragments of <18 bases, and as a result these short fragments are not tested for uniqueness. Consequently, if there is a low level of conservation, we may overestimate the fraction of the genome that is unique. For example, in Table 1 the conserved+unique fraction is 4% with finished variola target data, but is overestimated at 58% with low-quality draft. (The percent of the target genome that is conserved also varies slightly among the Table 1 runs using finished target sequences, because different genomes were randomly selected to be the reference strain in each multiple sequence alignment.) This artifact does not, however, affect TaqMan signature prediction, since the regions suitable for primers and probes must have at least 18 contiguous, conserved bases, all of which are tested for uniqueness; there is thus no overestimation of uniqueness in conserved fragments that are at least 18 bp long, and no overestimate of uniqueness in the predicted signatures.
We are working to eliminate this issue in future versions of the software.
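The gestalt construction and its fragmenting side effect can be sketched as follows. This is a minimal illustration of the described procedure, assuming ungapped aligned sequences of equal length; function names are hypothetical:

```python
def conservation_gestalt(aligned_seqs):
    """Replace every column that is not identical across all aligned
    sequences with '.', as described for the conservation gestalt."""
    return "".join(col[0] if len(set(col)) == 1 else "."
                   for col in zip(*aligned_seqs))

def candidate_fragments(gestalt, min_len=18):
    """Conserved runs of at least min_len bases; only these are long
    enough to host a primer and so get tested for uniqueness."""
    return [frag for frag in gestalt.split(".") if len(frag) >= min_len]

aligned = ["ACGTACGTACGTACGTACGTAC",
           "ACGTACGTACGTACGTACGTAC",
           "ACGTACGTACTTACGTACGTAC"]  # one mismatched column
gestalt = conservation_gestalt(aligned)
# The single non-conserved base splits the run into two fragments of
# 10 and 11 bp; neither reaches 18 bp, so neither is tested for
# uniqueness -- the source of the overestimation described above.
```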
Protein results show a large disparity between finished and draft data. There are 113 protein signature candidates for finished Marburg data compared with only two for Marburg draft (Figure 7). For variola, using all available target data, 97, 14, 6 and 0 protein signatures are predicted using finished, low-error, intermediate-error and high-error draft target data, respectively (Figure 8). Thus, sequencing errors substantially reduce the detection of amino acid sequence conservation, even when they occur at the low rate of 10^-4 to 10^-5 across most of the genome. The pattern of how additional sequences reduce the number of protein signature candidates also differs for draft compared with finished sequence data. With finished data, there is a large range in the number of peptide signature candidates predicted with 17 or fewer variola genomes, and this range narrows around the lower bound with >17 genomes. With 16 genomes, the 75% quantile mark approaches the final predicted number of 97 signatures (Figure 8A). This pattern indicates that there is a set of 97 peptides highly conserved among all currently sequenced variolas, which is unlikely to be eroded even as more sequence data are obtained. In other words, additional sequence data are probably not needed at this time to computationally predict good peptide signature targets, and as few as 16 finished target sequences would most likely have been adequate to generate this same list of ~100 peptide signatures.
Draft data, in contrast, whether of low or high quality, mask the above pattern (Figure 8B and C): the range and 75th quantile of the number of peptide signatures gradually decline with each additional target sequence (rather than dropping sharply as with the finished data), suggesting that additional target sequences would continue to erode the number of peptide signatures. This is because sequencing errors fall at random, in different locations in each of the draft target genomes, and obscure the truly conserved peptides. One might falsely infer from peptide SAP results based on the draft data that additional sequencing (beyond the 28 variola major genomes used here) would be useful in generating peptide signature candidates. In actuality, however, SAP analyses using the finished sequence data indicate that there are already ample sequence data for peptide signature prediction.
The failure of draft sequencing for Marburg at 3× to 6× coverage, or of simulated variola draft with a high error rate, to facilitate the prediction of detection signatures highlights a need for finished viral sequences, or at least for high-quality draft such as 8× coverage. Otherwise, either a large number of signature candidates will fail in screening because they are incorrectly designated as conserved among strains (as observed with the Marburg results), or too few regions will be classified as conserved (as observed with variola) and thus not be considered for signatures.
The variola simulations with intermediate- to high-quality draft (that is, a low error rate, approximating what one might observe with 8× coverage) target and/or NN genomes deliver virtually the same results as finished genomes. Considering that it costs approximately three times as much to generate finished sequence as it does draft, our analyses indicate that investing in more high-quality draft target genomes is better than investing in fewer finished genomes. For our analyses, only one target strain must be finished; the remaining target sequences and all the NNs may be provided as draft.
Our results indicate that NN sequencing may be of low coverage, and thus of low quality, without serious detriment to signature prediction, as long as there are at least four NN draft genome sequences.
If high-quality draft sequence is used and there appears to be too little sequence conservation among target strains, one might relax the specification of 100% conservation among strains for diagnostic signature prediction. Calculations indicate that it is often possible to generate signatures if one allows a base to be considered 'conserved' when it is present in only a fraction of the genomes (e.g. 75%), rather than the standard requirement of 100% conservation used with finished sequence data. We have used this 'ratio-to-win' option to generate signature candidates for some highly divergent RNA viruses (for which we have finished sequence), although usually our preference is to include degenerate bases, especially when there are only a few bases with heterogeneity among strains in a given signature candidate. A ratio-to-win approach may be particularly important for generating protein signature candidates, since draft target data severely compromise the ability to detect conserved strings of amino acids.
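The ratio-to-win relaxation can be sketched as a variant of the gestalt calculation. This is an illustrative reading of the option described above (a column counts as conserved when its majority base reaches the chosen fraction), not the pipeline's actual implementation:

```python
from collections import Counter

def ratio_to_win_gestalt(aligned_seqs, ratio=0.75):
    """Relaxed conservation gestalt: a column is 'conserved' if its most
    common base is present in at least `ratio` of the genomes, instead
    of requiring 100% agreement across all strains."""
    out = []
    for col in zip(*aligned_seqs):
        base, count = Counter(col).most_common(1)[0]
        out.append(base if count / len(col) >= ratio else ".")
    return "".join(out)

aligned = ["ACGT", "ACGT", "ACGA", "TCGT"]
strict = ratio_to_win_gestalt(aligned, ratio=1.0)    # only fully conserved columns survive
relaxed = ratio_to_win_gestalt(aligned, ratio=0.75)  # 3-of-4 majority columns survive too
```

Under the strict rule only the two unanimous columns remain; at a 75% threshold all four columns are retained, which is how the relaxation rescues signature regions in divergent strain sets.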
In summary, intermediate- to high-quality draft sequencing of target genomes, combined with low-quality draft sequencing of close phylogenetic relatives, is sufficient for the prediction of DNA diagnostic signatures. Prediction of peptide/protein signature candidates, in contrast, requires finished sequencing to avoid substantial underestimation of conserved peptide regions.

An ontology for immune epitopes: application to the design of a broad scope database of immune reactivities

BACKGROUND: Epitopes can be defined as the molecular structures bound by specific receptors, which are recognized during immune responses. The Immune Epitope Database and Analysis Resource (IEDB) project will catalog and organize information regarding antibody and T cell epitopes from infectious pathogens, experimental antigens and self-antigens, with a priority on NIAID Category A-C pathogens () and emerging/re-emerging infectious diseases. Both intrinsic structural and phylogenetic features, as well as information relating to the interactions of the epitopes with the host's immune system, will be catalogued.

DESCRIPTION: To effectively represent and communicate the information related to immune epitopes, a formal ontology was developed. The semantics of the epitope domain and related concepts were captured as a hierarchy of classes, which represent the general and specialized relationships between the various concepts. A complete listing of classes and their properties can be found at .

CONCLUSION: The IEDB's ontology is the first ontology specifically designed to capture both intrinsic chemical and biochemical information relating to immune epitopes and information relating to the interaction of these structures with molecules derived from the host immune system.
We anticipate that the development of this type of ontology and associated databases will facilitate rigorous description of data related to immune epitopes, and might ultimately lead to completely new methods for describing and modeling immune responses.

An epitope can be defined as the molecular structure recognized by the products of immune responses. According to this definition, epitopes are the specific molecular entities engaged in binding to antibody molecules or specific T cell receptors. An extended definition also includes the specific molecules binding in the peptide binding sites of MHC receptors. We have previously described [1] the general design of the Immune Epitope Database and Analysis Resource (IEDB), a broad program recently initiated by the National Institute of Allergy and Infectious Diseases (NIAID). The overall goal of the IEDB is to catalog and organize a large body of information regarding antibody and T cell epitopes from infectious pathogens and other sources [2]. Priority will be placed on NIAID Category A-C pathogens (http://www2.niaid.nih.gov/Biodefense/bandc_priority.htm) and emerging/re-emerging infectious diseases. Epitopes of human and non-human primates, rodents, and other species for which detailed information is available will be included. It is envisioned that this new effort will catalyze the development of new methods to predict and model immune responses, will aid in the discovery and development of new vaccines and diagnostics, and will assist in basic immunological investigations.
The IEDB will catalog structural and phylogenetic information about epitopes, information about their capacity to bind to specific receptors (i.e. MHC, TCR, BCR, Antibodies), as well as the type of immune response observed following engagement of the receptors (RFP-NIH-NIAID-DAIT-03/31: http://www.niaid.nih.gov/contract/archive/ rfp0331.pdf).
In broad terms, the database will contain two general categories of data and information associated with immune epitopes: intrinsic and extrinsic (context-dependent) data. Intrinsic features of an epitope are those characteristics that can be unequivocally defined and are specified within the epitope sequence/structure itself. Examples of intrinsic features are the epitope's sequence, structural features, and binding interactions with other immune system molecules. To describe an immune response associated with a specific epitope, context information also needs to be taken into account. Contextual information includes, for example, the species of the host, the route and dose of immunization, the health status and genetic makeup of the host, and the presence of adjuvants. In this respect, the IEDB project transcends the strict boundaries of database development and reaches into a systems biology application, attempting for the first time to integrate structural information about epitopes with comprehensive details describing their complex interaction with the immune system of the host, be it an infected organism or a vaccine recipient [1] [2] [3] .
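The intrinsic/extrinsic split described above can be illustrated with a minimal data-model sketch. All class and field names here are hypothetical, chosen for illustration; they do not reflect the IEDB's actual schema, and SIINFEKL is used only as a well-known example peptide:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IntrinsicFeatures:
    """Features fixed by the epitope itself (hypothetical field names)."""
    sequence: str
    structure_ref: Optional[str] = None          # e.g. a PDB identifier
    binding_partners: List[str] = field(default_factory=list)

@dataclass
class EpitopeContext:
    """Extrinsic, context-dependent data for one observed response."""
    host_species: str
    immunization_route: str
    dose: str
    health_status: Optional[str] = None
    adjuvant: Optional[str] = None

@dataclass
class EpitopeRecord:
    """One epitope: a single intrinsic description, many contexts."""
    intrinsic: IntrinsicFeatures
    contexts: List[EpitopeContext] = field(default_factory=list)

record = EpitopeRecord(
    intrinsic=IntrinsicFeatures(sequence="SIINFEKL"),
    contexts=[EpitopeContext(host_species="Mus musculus",
                             immunization_route="subcutaneous",
                             dose="50 ug", adjuvant="CFA")],
)
```

The one-to-many relationship between a fixed intrinsic description and its accumulating experimental contexts is the structural point: the same molecular entity can be observed under many hosts, routes, and adjuvants.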
For these reasons, it was apparent at the outset of the project that it was crucial to develop a rigorous conceptual framework to represent the knowledge related to the epitopes. Such a framework was key to sharing information and ideas among developers, scientists, and potential users, and to allowing the design of an effective logical structure of the database itself. Accordingly, we decided to develop a formal ontology. Over the years, the term "ontology" has been defined and utilized in many ways by the knowledge engineering community [4] . We will adopt the definition of "ontology" as "the explicit formal specifications of the terms in a domain and the relationships among them" [5] . According to Noy and McGuinness [6] , "ontology defines a common vocabulary for researchers who need to share information in a domain and helps separate domain knowledge from operational knowledge". Thus, availability of a formal ontology is relevant in designing a database, in cataloging the information, in communicating the database structure to researchers, developers and users, and in integrating multiple database schema designs and applications.
Several existing databases catalog epitope-related data, and we gratefully acknowledge that we have been able to use these previous experiences in the design and implementation of the IEDB. MHCPEP [7], SYFPEITHI [8], FIMM [9], HLA Ligand Database [10], HIV Immunology Database [11], JenPep [12], AntiJen [13], and MHCBN [14] are all publicly available epitope-related databases. In general, these databases provide information relating to epitopes, but do not catalog in-depth information relating to their interactions with the host's immune system. It should also be noted that none of these databases has published a formal ontology; all of them rely on informal or implicit ontologies. We have taken these ontologies into account as much as possible, inferring their structure through informal communications with the database developers or perusal of the databases' websites.
The ontology developed for IEDB and described herein complements two explicit ontologies that are presently available: the IMGT-Ontology and the Gene Ontology (GO). The IMGT-Ontology [15] was created for the international ImMunoGeneTics Database (IMGT), which is an integrated database specializing in antigen receptors (immunoglobulin and T Cell receptors) and MHC molecules of all vertebrate species. This is, to the best of our knowledge, the first ontology in the domain of immunogenetics and immunoinformatics. The GO project [16] provides structured, controlled vocabularies that cover several domains of molecular and cellular biology. GO provides an excellent framework for genes, gene products, and their sequences, but it does not address the specific epitope substructure of the gene products. The IMGT provides an excellent ontological framework for the immune receptors but lacks information relating to the epitopes themselves. Therefore it was necessary to expand the available ontologies and to create an ontology specifically designed to represent the information of immune receptor interaction with immune epitopes. Wherever possible, the IEDB ontology conforms to standard vocabularies for capturing values for certain fields. For capturing disease names, IEDB uses the International Classification for Diseases (ICD-10) [17]. The NCBI Taxonomy database nomenclature [18, 19] is used to capture species and strain names, and HLA Allele names are consistent with the HLA nomenclature reports [20] .
The IEDB is being developed as a web-accessible database using Oracle 10g and Enterprise Java (J2EE). Industry standard software design has been followed and it is expected that IEDB will be available for public users by the end of 2005.
Protégé (http://protege.stanford.edu) was used to design and document the IEDB ontology. Protégé is a free, open-source ontology editor and knowledge-base framework written in Java. It provides an environment for creating ontologies and the terms used in those ontologies. Protégé supports class, slot, and instance creation, allowing users to specify relationships between appropriate entities. Two features that the IEDB ontology effort used extensively were Protégé's support for creating ontology terms and for viewing term hierarchies and definitions. Its support for a central repository of ontologies, along with browsing support, is key in reviewing and reusing ontologies.