Acta Crystallographica Section D: Structural Biology
research papers
Validation of electron-microscopy maps using solution small-angle X-ray scattering
a Department of Chemistry and Interdisciplinary Nanoscience Center (iNANO), Aarhus University, Gustav Wieds Vej 14, 8000 Aarhus, Denmark * Correspondence e-mail: [email protected]
The determination of the atomic resolution structure of biomacromolecules is essential for understanding the details of their function. Traditionally, such structure determination has been performed with crystallographic or nuclear magnetic resonance (NMR) methods, but during the last decade cryogenic transmission electron microscopy (cryo-TEM) has become an equally important tool. As the blotting and flash-freezing of the samples can induce conformational changes, external validation tools are required to ensure that the vitrified samples are representative of the solution. Although many validation tools have already been developed, most of them rely on fully resolved atomic models, which prevents early screening of the cryo-TEM maps. Here, a novel and automated method for performing such a validation utilizing small-angle X-ray scattering measurements, publicly available through the new software package AUSAXS, is introduced and implemented. The method has been tested on both simulated and experimental data, where it was shown to work remarkably well as a validation tool. The method provides a dummy atomic model derived from the EM map which best represents the solution structure.
Keywords: electron microscopy; small-angle X-ray scattering; electron-microscopy validation; structure determination.
Small-angle X-ray scattering (SAXS) is an alternative, low-resolution technique for structural analysis. While it is similar in principle to X-ray crystallography in being based on the interference of scattered X-rays, the requirement for crystallization is evaded by simultaneously measuring the scattering pattern of many molecules in solution. The result is a single one-dimensional, orientationally averaged intensity curve that depends on both the shape and the size of the sample. One of the primary advantages of SAXS is that macromolecules and complexes can be measured in their native state in solution, without any special sample preparation. This feature is exactly what makes the technique so useful for validation.
The next section will present and detail the method itself, including brief discussions of all of the major design decisions. This is followed by a section detailing how the method has been tested with both simulated and experimental data, along with tables of all test results.
In TEM, the electrons interact with the electric field generated by the individual atoms of the sample molecule. Since these fields are continuous, the surface of the molecule is not well defined in an EM map. When visualizing such a map with, for example, PyMOL, one must instead pick some threshold cutoff value, which is then used to define the surface. A 3D TEM map thus represents the Coulomb charge-density distribution, represented on a grid with a resolution dependent on the experimental setup.
A helpful way of visualizing EM maps: the left panel shows a typical visualization, while the right panel shows how the maps can also be interpreted as a stack of 2D contour plots.
2.1. Model generation
The first task is to construct a set of dummy-atom models for an EM map. The simplest way of constructing such a model is by imposing a threshold cutoff value, similar to the alpha level in PyMOL, below which the density is assumed to be noise and is therefore discarded. The intrinsic grid of the map is then used to place dummy atoms, each weighted either by its density or by some constant, after which a hydration layer is simulated. The next step is to vary this threshold value to generate an entire series of dummy models of varying sizes. Note that the models for nearby threshold values are expected to be very similar.
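As a concrete illustration, the threshold-scan model generation described above can be sketched as follows. This is a minimal sketch, not the AUSAXS implementation: the function name `dummy_models`, the toy Gaussian-blob map standing in for a real EM map, and the omission of the hydration layer are all assumptions made for brevity.

```python
import numpy as np

def dummy_models(density, thresholds, voxel_size=1.0):
    """Generate one dummy-atom model per threshold cutoff.

    Every voxel with a density above the cutoff becomes a dummy atom placed
    at the voxel centre; the hydration layer is omitted for brevity.
    """
    models = []
    for t in sorted(thresholds, reverse=True):            # scan from high to low
        mask = density > t
        coords = (np.argwhere(mask) + 0.5) * voxel_size   # voxel centres (Å)
        weights = density[mask]   # "dynamic" weights; use 1.0 everywhere for unity weights
        models.append((t, coords, weights))
    return models

# toy stand-in for an EM map: a smooth 3D Gaussian blob on a 16^3 grid
x = np.linspace(-1.0, 1.0, 16)
blob = np.exp(-(x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2) / 0.2)
series = dummy_models(blob, thresholds=[0.8, 0.5, 0.2])
```

Lowering the threshold only adds voxels, so consecutive models in the series are nearly identical.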
2.2. Model selection
Although there are already plenty of programs that can calculate these expected scattering curves for the models (CRYSOL, FoXS, …), we decided to use our own implementation. There are two primary reasons for this choice.
The hydration layer can be modelled in several ways, for example by adding layers of uniformly distributed electron density around the surface (Svergun et al., 1995; Grudinin et al., 2017) or with explicit molecular-dynamics simulations (Knight & Hub, 2015). Since performing actual simulations is too slow for our purposes, we believe the best alternative is to model the hydration molecules as randomly distributed dummy solvent atoms close to the protein surface.

By exploiting the structure of the Debye equation, a major performance improvement can be achieved when calculating the total histograms and expected scattering curves of similar structures. The idea is that by splitting the EM map into an onion-like structure with regions of similar density values, it becomes possible to reuse previous scattering calculations when scanning the threshold value. More specifically, the threshold value is scanned from its highest value to its lowest value while saving the self-correlation histogram of each 'onion shell'. The self-correlations from the inner shells can then be reused directly when evaluating the scattering for a threshold value outside their region. Thus, instead of being an O(N²) process in the number of atoms N, evaluating the scattering from similar structures is improved to an O(nN) process, where n is the number of additional scatterers. With the threshold parameter being nearly continuous, and by scanning from high to low so as to create a series of similar models, n is small compared with N. Implementing optimizations such as this in existing libraries would be a major undertaking, and is impossible for closed-source programs. Developing a new library that natively supports these partial histograms was therefore the easiest solution. We will return to this performance discussion later.
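The onion-shell bookkeeping can be sketched in a few lines. This is a simplified illustration, not the AUSAXS code: the helper names (`pair_dists`, `scan_histograms`, `debye`) and the brute-force distance calculation are assumptions, and atomic form factors are replaced by unit weights. The key point is that each new shell only costs its own self-term plus a cross-term against the atoms already present; inner-shell self-terms are never recomputed.

```python
import numpy as np

def pair_dists(a, b=None):
    """All pairwise distances within `a` (unique pairs), or between `a` and `b`."""
    if b is None:
        diff = a[:, None, :] - a[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))
        iu = np.triu_indices(len(a), k=1)
        return d[iu]
    diff = a[:, None, :] - b[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).ravel()

def hist(d, bins):
    h, _ = np.histogram(d, bins=bins)
    return h

def scan_histograms(shells, bins):
    """Accumulate distance histograms while scanning the threshold high -> low.

    Adding a shell of n atoms to N existing atoms costs only the shell's
    self-term plus an O(n*N) cross-term; inner-shell self-terms are reused.
    """
    total = np.zeros(len(bins) - 1, dtype=int)
    seen = np.empty((0, 3))
    partials = []
    for shell in shells:
        total = total + hist(pair_dists(shell), bins)            # new self-term
        if len(seen):
            total = total + hist(pair_dists(shell, seen), bins)  # cross-term
        seen = np.vstack([seen, shell])
        partials.append(total.copy())
    return partials

def debye(histogram, centres, q):
    """Debye equation from a distance histogram: I(q) = sum_r H(r) sin(qr)/(qr)."""
    qr = q[:, None] * centres[None, :]
    return (histogram[None, :] * np.sinc(qr / np.pi)).sum(axis=1)

rng = np.random.default_rng(1)
shells = [rng.normal(size=(6, 3)), rng.normal(size=(4, 3)), rng.normal(size=(5, 3))]
bins = np.linspace(0.0, 20.0, 101)
partials = scan_histograms(shells, bins)
centres = 0.5 * (bins[:-1] + bins[1:])
i_q = debye(partials[-1], centres, np.array([0.01, 0.1, 0.5]))
```

The final entry of `partials` equals the brute-force histogram over all atoms, but each intermediate threshold came almost for free.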
For highly ordered structures, such as the lattice structure of the maps, it turned out that using the binned distance approximation typically used in conjunction with the Debye equation resulted in significant inaccuracies. This is because in such highly ordered structures some distances are much likelier than others, yet the binning does not account for this and shifts them to the centre of the closest bin. With almost every single distance being shifted by a small amount, the error propagates into a significant uncertainty in the final scattering profile. To solve this issue, we introduced weighted bins into the approximation, where the centre of the bin is determined based on its contents, calculated as the centre of mass of the bin. This neatly solves the issue, while still providing the significant performance benefit of the binning approximation. Note that using weighted bins is usually not necessary when evaluating the scattering of a typical protein, only when dealing with highly ordered structures, as we are here.
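A minimal sketch of the weighted-bin idea follows; the function name and the toy lattice distances are assumptions for illustration, not the AUSAXS implementation.

```python
import numpy as np

def binned_centres(d, edges, weighted=True):
    """Representative distance for each bin of a distance histogram.

    Standard binning represents every bin by its geometric centre, shifting
    each lattice distance slightly; weighted bins instead use the centre of
    mass of the distances that actually fell into the bin.
    """
    counts, _ = np.histogram(d, bins=edges)
    if not weighted:
        return counts, 0.5 * (edges[:-1] + edges[1:])
    sums, _ = np.histogram(d, bins=edges, weights=d)
    return counts, np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

# distances from a perfect 1D lattice: every distance is a multiple of 1.0
d = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 3.0])
edges = np.arange(0.0, 3.05, 0.3)   # bin centres do NOT coincide with 1.0 and 2.0
counts, centres = binned_centres(d, edges, weighted=True)
```

With geometric centres, the distances 1.0 and 2.0 would be reported as 1.05 and 1.95; the weighted centres recover them exactly.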
The method optimizes four parameters in total, where the first is the threshold value itself. As explained previously, for efficiency reasons this parameter is scanned using a fixed step size, starting from its highest value and moving towards its lowest, thus generating a number of equidistant dummy models. For each of these models, three additional parameters are optimized: two for the simple linear fit to the scattering data, I_exp = aI_model + b, and a third for fitting the scattering contrast of the hydration layer. Although adding the hydration layer generally provides a dramatic improvement to χ², it also comes with a major drawback: the scattering contrast parameter is strongly correlated with the threshold value. This is only to be expected, as they both control the effective size of the model: the former by enhancing the scattering contribution from the dummy water surface atoms and the latter by directly varying the size. The strong correlation between these parameters naturally leads to large uncertainties in them, although this is not a concern, as the former is an arbitrary scaling constant and the latter is only approximate. What is more problematic is the discrete nature of the data stored in the maps, with a small but finite difference between the density values of neighbouring voxels. When the threshold value crosses such a boundary, a number of new dummy atoms proportional to the current surface area are added to the model, while the total number of dummy atoms is of course proportional to the volume. Thus, for small volumes the scattering contribution of the newly added dummy atoms is significant, leading to a high variance in this region of the χ² landscape. Typically, the extreme low-volume region is not of interest for the fit itself, meaning that only limited variance is observed in the relevant area of the landscape. The problem is further mitigated by using a moving average as an estimate of the actual χ².
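The linear part of the fit, I_exp = aI_model + b, has a closed-form weighted least-squares solution. A minimal sketch (the function name and the toy data are assumptions; the hydration-contrast parameter is not included):

```python
import numpy as np

def fit_scale_offset(i_model, i_exp, sigma):
    """Weighted linear fit I_exp ~ a*I_model + b, minimizing chi^2."""
    w = 1.0 / sigma ** 2
    A = np.stack([i_model, np.ones_like(i_model)], axis=1)
    # weighted normal equations: (A^T W A) x = A^T W y
    a, b = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * i_exp))
    chi2 = float(np.sum(w * (i_exp - (a * i_model + b)) ** 2))
    return a, b, chi2

# toy check: data generated from an exact linear relation should give chi^2 ~ 0
i_model = np.exp(-20.0 * np.linspace(0.01, 0.5, 50) ** 2)
i_exp = 2.0 * i_model + 0.5
sigma = np.full_like(i_model, 0.05)
a, b, chi2 = fit_scale_offset(i_model, i_exp, sigma)
```

Because this sub-fit is analytic, it costs essentially nothing per dummy model compared with the histogram calculation itself.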
Since the threshold value is directly related to the size of the dummy model, there is in principle a one-to-one mapping from the threshold value to the total mass of the model. With this mapping, the threshold axis can be replaced with a mass axis, which may be useful in real applications, especially in cases with multiple minima in the χ² landscape. Since dummy models are generated for all identified minima, the user can subsequently select the one that they are interested in based on the mass. It should be mentioned that this mass axis comes with a significant uncertainty and may be unsuitable for absolute comparisons.
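The threshold-to-mass mapping can be illustrated with a simple sketch. The conversion below assumes the retained voxels are filled with material at a typical protein density of about 1.35 g/cm³; this constant and the function name are assumptions for illustration, and the absolute scale carries the significant uncertainty noted above.

```python
import numpy as np

AVOGADRO = 6.022e23

def threshold_to_mass(density, thresholds, voxel_size, rho=1.35):
    """Approximate model mass (kDa) for each threshold cutoff.

    Assumes retained voxels are filled with material of density `rho` in
    g/cm^3 (a typical protein value); `voxel_size` is in Å.  The absolute
    scale is only approximate.
    """
    masses = []
    for t in thresholds:
        n = int(np.count_nonzero(density > t))
        vol_cm3 = n * voxel_size ** 3 * 1e-24             # Å^3 -> cm^3
        masses.append(rho * vol_cm3 * AVOGADRO / 1000.0)  # g/mol = Da, so /1000 -> kDa
    return masses

grid = np.zeros((4, 4, 4))
grid[1, 1, 1] = 1.0                                       # one strong voxel
grid[2, 2, 2] = 0.3                                       # one weak voxel
masses = threshold_to_mass(grid, thresholds=[0.5, 0.2], voxel_size=10.0)
```

Lowering the threshold monotonically increases the retained volume, so the mapping from threshold to mass is indeed one-to-one.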
The ideal test would be to simulate both SAXS scattering curves and EM maps from a complete atomic structure, while also varying the resolution of the map. While the former is doable, the latter is a nontrivial problem that currently only has approximate solutions. This immediately makes this approach unusable, since one cannot determine whether a bad fit is due to issues with the map simulation or due to the method itself. We have thus focused on simulating SAXS data for our EM tests.
To better emulate experimental data, each point of the scattering curve should have an error associated with it. By comparison to a series of measured SAXS data sets, we have empirically found the errors to be reasonably well described by the equation
After the errors have been calculated using this equation, Gaussian noise with this magnitude is imposed on the simulated data.
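A sketch of this noise-imposition step is shown below. The empirical error equation itself is not reproduced here, so a hypothetical placeholder error model is used in its place; only the Gaussian-noise step reflects the procedure described in the text.

```python
import numpy as np

def simulate_noisy_saxs(q, intensity, error_model, seed=0):
    """Impose Gaussian noise of magnitude sigma(q, I) on a simulated SAXS curve."""
    rng = np.random.default_rng(seed)
    sigma = error_model(q, intensity)
    return intensity + rng.normal(0.0, sigma), sigma

# hypothetical placeholder for the empirical error equation from the text:
# errors scale with intensity and grow towards higher q
toy_errors = lambda q, i: 0.01 * i * (1.0 + 5.0 * q)

q = np.linspace(0.01, 0.5, 100)
i_sim = np.exp(-30.0 * q ** 2)        # toy Guinier-like simulated curve
i_noisy, sigma = simulate_noisy_saxs(q, i_sim, toy_errors)
```

The returned sigmas are kept alongside the noisy curve, since the subsequent χ² fit weights each point by its error.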
3.1. Examples using experimental EM maps and simulated SAXS data
As part of the standard validation suites required before deposition of an EM map, a high-resolution atomic structure model is built and refined to fit the map itself. Since this fitted structure should be a good representation of the map, it can be used for testing, i.e. we can use the high-resolution model to generate a simulated SAXS data set for the test. We would then expect the agreement to be good, but not necessarily perfect. The tests will also serve as guidelines for the kind of results and agreements that one can expect from the method in general.
A random selection of maps covering a wide range of resolutions was downloaded for this test. SAXS measurements were then simulated for each as described above, and subsequently fitted by the scattering from the map itself as per the method described in the present paper, using unity weights.
The results of applying the method to a series of maps of varying resolution, where the SAXS data were simulated using the high-resolution models, are shown below. The expected mass of each atomic structure is as reported by the RCSB PDB (Berman et al., 2000).

| Map | Res. (Å) | Expected mass (kDa) | Fitted mass (kDa) | χ² |
| --- | --- | --- | --- | --- |
| | 1.27 | 498 | 459 | 1.13 |
| | 1.78 | 271 | 370 | 1.71 |
| | 2 | 660 | 877 | 3.14 |
| | 2.3 | 1404 | 1816 | 1.97 |
| | 2.46 | 245 | 136 | 3.21 |
| | 2.95 | 127 | 120 | 5.08 |
| | 3.06 | 431 | 412 | 1.70 |
| | 3.6 | 662 | 664 | 1.86 |
| | 4.6 | 661 | 793 | 4.23 |
| | 6.6 | 663 | 592 | 1.90 |

| Map | Res. (Å) | Expected mass (kDa) | Fitted mass (kDa) | χ² |
| --- | --- | --- | --- | --- |
| | 1.88 | 121 | 131 | 6.50 |
| | 2.89 | 124 | 122 | 6.03 |
| | 2.94 | 478 | 531 | 6.11 |
| | 3.5 | 138 | 68 | 17.6 |
| | 3.7 | 146 | 114 | 6.56 |
| | 4.5 | 663 | 844 | 6.81 |

Several of the maps deserve individual comment. One map is very porous, as if made of thousands of individual lumps; when the dummy structure is generated to calculate the scattering curve, this directly translates into a porous dummy structure, which is a poor match for the solid fitted atomic structure. For another, the high-resolution structure used to simulate the SAXS data has a high degree of internal structure that is not reflected in the dummy structure from the map; together with some disagreement near the surface, this leads to an increased χ². In a third case, the majority of the map matches the atomic structure extremely well, but a small domain at the tip of the molecule is unaccounted for in the atomic structure; this discrepancy is likely to explain the increased χ². Another porous map is a much worse match to its atomic structure, which is likely to be due to its smaller size and lower resolution. In yet another case there is some disagreement between the map and the atomic structure near the flexible random coils of the protein, as well as some minor internal disagreements; both contribute to the larger than expected χ². Finally, one protein is a tetramer which is open at one end, with a lower density in this region due to the disorder.
Thus, when applying a threshold cutoff these disordered parts are completely left out. We will return to this map in the next section.

Results for fully experimental data are summarized below. The α2M data are from Harwood et al. (2021); the two TFE maps are from Sah-Teli et al. (2019).

| Sample | Expected mass (kDa) | Map | Res. (Å) | Fitted mass (kDa) | χ² | ⟨χ²⟩ |
| --- | --- | --- | --- | --- | --- | --- |
| | 68 | | 3.5 | 63 | 10.0 | 10.1 |
| | 126 | | 2.95 | 201 | 10.2 | 10.2 |
| | | | 2.89 | 385 | 9.4 | 9.4 |
| | 193 | | 3.2 | 312 | 2.5 | 2.5 |
| | 244 | EcTFE | 24 | 434 | 20.5 | 20.5 |
| | 501 | anEcTFE | 23 | 903 | 2.5 | 2.5 |
| A2M | 720 | | 4.5 | 473 | 22.8 | 24.1 |
| | | | 6.6 | 288 | 11.2 | 18.0 |
| | | Harwood | 24 | 1398 | 283 | 284 |
| A2M | 780 | | 4.6 | 1223 | 27.8 | 28.4 |
| | | | 3.6 | 878 | 26.8 | 27.3 |

Results for native α2M using a stained EM map and experimental SAXS data from Harwood et al. (2021): the top panel shows the fit and the associated residuals; the inset shows the optimized dummy structure in transparent grey, with the expected atomic structure in orange. Both qualitative and quantitative comparison suggests that this is a poor fit. The bottom left panel shows the χ² landscape as a function of the mass, with vertical red lines indicating local minima. The right panel is an enlarged view of the area near the interpolated absolute minimum (blue dot).

The analyses performed here show that, before making comparisons with the method, one should always be aware of the quality of the map and ensure that the conditions used for SAXS are identical to those used for EM.

3.2. Examples using fully experimental data

Results for one of the fully experimental data sets: the top panel shows the fit and the associated residuals. The small inset shows the optimized dummy structure in transparent grey, with the atomic structure deposited alongside the SAXS data in orange. Qualitatively comparing these suggests that this is a good fit. The bottom left panel shows the χ² landscape as a function of the mass, with the blue dots indicating local minima.
The right panel is an enlarged view of the area near the interpolated absolute minimum (blue dot), which illustrates why an averaged χ² estimate is used.

As we have previously mentioned, most EM map depositions also include a high-resolution atomic structure representative of the map. Although this structure is not used in our method, it is still relevant to compare against it visually, since it typically gives a good fit to the SAXS data. This visual comparison can be seen in Supplementary Figs. S1–S3, where the maps and structures have all been manually aligned, both in space and in threshold cutoff level, to give the best visual agreement. The maps for SASDEL9 and SASDEM9 could not be aligned since their resolutions were too low. These visualizations will be a great aid for the following discussion.

Several of the maps are somewhat porous and have similar χ² values. The EcTFE map is from negative-stain EM and is of low resolution. It does not appear to be a good match to the structure; in fact, the agreement is so poor that it could not even be manually aligned, which explains why it is not presented along with the other structures in the visualization figures. Although the second map, anEcTFE, is also of low resolution, it is in better agreement with the corresponding SAXS data. This is likely to be because it is both larger and more spherical, reducing the resolution necessary to represent it accurately.

As already mentioned, the α2M map is of a tetramer with lower density at one end due to disorder. This means that when applying a threshold cutoff value, most of this area will be removed, explaining the low fitted mass. This can also be seen visually as the parts of the structure reaching out of the map. The second map suffers from the same density issue, but results in a smaller χ².
The high-resolution structure is a good match to the map, except for the two additional internally bound trypsins that are not present in the map, one of which can be seen at the top of the leftmost panel. The map also appears to be missing some internal structure. Again, the second map represents a slightly different conformation.

The resulting scattering profiles from using the presented method with both experimental SAXS data and matching EM maps are shown in the figures. The small insets show the optimized dummy structures in transparent grey, with the atomic structures deposited alongside the SAXS data in orange. Qualitatively comparing both the scattering profiles and the optimized dummy structures suggests that these are all good fits, thus successfully validating the EM maps.

The kind of analysis that we have performed here is exactly the intended application of the presented method. The inputs are an experimental SAXS data set and an EM map of the same molecule or complex. The program then determines the agreement between the two by using a scattering curve calculated from the map. When the agreement is good, the map has successfully been validated. When the agreement is poor, further examination of the map for spurious effects, and of the fitted scattering profile, is warranted.

3.3. Alternate weighting

We mentioned earlier that two weighting modes are supported: using the densities from the map itself as the scattering weights of the dummy atoms (dynamic weights), or using a single weight of unity for everything (unity weights). Through the tests performed here, we found that using unity weights is the best option, since it results in more realistic mass estimates and dummy structures. This is somewhat counterintuitive, as one would think that using all of the information contained in the map would result in more accurate calculations. The following arguments explain why this is not the case. First, a Coulomb charge-density map is not the same as an excess electron-density map as probed by SAXS.
Furthermore, the averaging and normalization procedures involved in the processing of EM maps may reduce the similarity to excess electron-density maps even further.

3.4. Benchmarking

Benchmarking of the different fitting programs, including both the single-threaded and multi-threaded implementations of our method. The relevant benchmark for this paper is the average execution time for evaluating multiple similar structures around a given size. The error bars are too small to be seen in the figure; the data can also be found in tabulated form.

3.5. Comparison with other methods

Although the methodology is similar to ours, there are some crucial differences. Firstly, we do not have to construct approximate coarse-grained representations; instead, we use the intrinsic grid of the map itself to represent it accurately. Owing to our highly efficient scattering calculator, we also do not have to downsample the map as heavily, thus preserving the structural information necessary for accurately estimating the scattering profile. Together, these factors allow us to compare the entire q-range used in a typical SAXS data set.

Although the method has proven to be quite useful, there are some general points that should be considered. The first is a caveat related to the different interactions of electrons and X-ray photons with matter. Since electron microscopy is based on the interaction of electrons with matter, the technique samples the Coulomb charge density of the molecule. In contrast, small-angle X-ray scattering is based on photon scattering, and thus samples the electron density. Although the two are somewhat similar, there are important differences. One such difference is that electrons interact with the charge of the nuclei, whereas a typical X-ray photon does not.
One way of appreciating this difference is by comparing their scattering lengths: for ionized oxygen, the electron scattering length can be negative, indicating phase shifts in the scattering process, whereas photon scattering lengths are strictly positive. In our approach, this difference is ignored.

Here, we have presented a method for the validation of EM maps. Although we have developed our own efficient implementation of the method, it is also possible to replicate some of the included procedures with existing program suites, as we have previously discussed. However, none of these existing options can easily and consistently replicate our method, and most are too impractical to be real alternatives. Also, although it is possible to perform such a validation using these tools, the community does not appear to be aware of this. Implementing all of these procedures in a single, easy-to-use program, as we have done here, therefore serves to make the method more accessible to the community as a whole. The program is open source and freely available for academic use from its GitHub page https://github.com/AUSAXS/AUSAXS, including a graphical user interface. Comments and contributions to the implementation are welcomed there. We have also made a short user guide available in the supporting information; more detailed instructions can be found online.

Supporting information providing additional details and resources: https://doi.org/10.1107/S2059798324005497/wan5003sup1.pdf

Acknowledgements

We would like to thank Dr Rajaram Venkatesan for providing the EcTFE and anEcTFE maps. Fruitful discussions with Professor Gregers Rom Andersen, Dr Thomas Boesen and Dr Andreas Bøggild are also acknowledged.

Funding information

This work was supported by grant 1026-00209B from the Independent Research Fund Denmark.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.