Outline with boxes or circles
When adding annotations to an image, scientists should consider the following steps.
Annotations help to orient the audience but may also obstruct parts of the image. Authors must find the right balance between too few and too many annotations. (1) Example with no annotations. Readers cannot determine what is shown. (2) Example with a few annotations to orient readers to key structures. (3) Example with many annotations, which obstruct parts of the image. The long legend below the figure is confusing. (4) Example showing a solution for situations in which many annotations are needed to explain the image. An annotated version is placed next to an unannotated version of the image for comparison. The legend below the image helps readers to interpret the image without having to refer to the figure legend. Note the different space requirements. Electron microscope images show mouse pancreatic beta-islet cells.
Cells and their structures are almost all transparent. Every dye, stain, and fluorescent label should therefore be clearly explained to the audience. Labels should be colorblind safe. Large labels that stand out against the background are easy to read. Authors can make figures easier to interpret by placing the color label close to the structure; color labels should only be placed in the figure legend when this is not possible. Example images were created based on problems observed by reviewers. Microscope images show D. melanogaster egg chambers stained with the DNA dye DAPI (4′,6-diamidino-2-phenylindole) and a probe for a specific mRNA species [18]. All images have the same scale.
(1) The annotations displayed in the first image are inaccessible to colorblind individuals, as shown by the visibility test below. This example was created based on problems observed by reviewers. (2, 3) Two colorblind safe alternative annotations, in color (2) and in grayscale (3). The bottom row shows a test rendering for deuteranopia colorblindness. Note that double-encoding with different hues and different shapes (e.g., different letters, arrow shapes, or dashed/nondashed lines) allows all audiences to interpret the annotations. Electron microscope images show mouse pancreatic beta-islet cells. All images have the same scale.
Each figure and its legend should be self-explanatory, allowing readers to quickly assess a paper or understand complex studies that combine different methodologies or model systems. To date, there are no guidelines for figure legends for images, and the scope and length of legends vary across journals and disciplines. Some journals require legends to include details on the object, size, methodology, or sample size, while other journals mandate a minimalist approach in which information is not repeated in subsequent figure legends.
Our data suggest that important information needed to interpret images was regularly missing from the figure or figure legend. This includes the species and tissue type or object shown in the figure; clear explanations of all labels, annotations, and colors; and markings or legend entries denoting insets. Presenting this information on the figure itself is more efficient for the reader; however, any details that are not marked on the figure should be explained in the legend.
While not reporting species and tissue information in every figure legend may be less of an issue for papers that examine a single species and tissue, this is a major problem when a study includes many species and tissues, which may be presented in different panels of the same figure. Additionally, the scientific community is increasingly developing automated data mining tools, such as the Source Data tool, to collect and synthesize information from figures and other parts of scientific papers. Unlike humans, these tools cannot piece together information scattered throughout the paper to determine what might be shown in a particular figure panel. Even for human readers, this process wastes time. Therefore, we recommend that authors present information in a clear and accessible manner, even if some information may be repeated for studies with simple designs.
A flood of images is published every day in scientific journals and the number is continuously increasing. Of these, around 4% likely contain intentionally or accidentally duplicated images [3]. Our data show that, in addition, most papers show images that are not fully interpretable due to issues with scale markings, annotation, and/or color. This affects scientists’ ability to interpret, critique, and build upon the work of others. Images are also increasingly submitted to image archives to make image data widely accessible and permit future reanalyses. A substantial fraction of images that are neither human nor machine-readable lowers the potential impact of such archives. Based on our data examining common problems with published images, we provide a few simple recommendations, with examples illustrating good practices. We hope that these recommendations will help authors to make their published images legible and interpretable.
Limitations: While most results were consistent across the 3 subfields of biology, findings may not be generalizable to other fields. Our sample included the top 15 journals that publish original research for each field. Almost all journals were indexed in PubMed. Results may not be generalizable to journals that are unindexed, have low impact factors, or are not published in English. Data abstraction was performed manually due to the complexity of the assessments. Error rates were 5% for plant sciences, 4% for physiology, and 3% for cell biology. Our assessments focused on factors that affect readability of image-based figures in scientific publications. Future studies may include assessments of raw images and meta-data to examine factors that affect reproducibility, such as contrast settings, background filtering, and processing history.
The role of journals in improving the quality of reporting and accessibility of image-based figures should not be overlooked. There are several actions that journals might consider.
The role of scientists in the community is multifaceted. As authors, scientists should familiarize themselves with guidelines and recommendations, such as those provided above. As reviewers, scientists should ask authors to improve erroneous or uninformative image-based figures. As instructors, scientists should ensure that bioimaging and image data handling are taught during undergraduate or graduate courses, and support existing initiatives such as NEUBIAS (Network of EUropean BioImage AnalystS) [31] that aim to increase training opportunities in bioimage analysis.
Scientists are also innovators. As such, they should support emerging image data archives, which may expand to automatically source images from published figures. Repositories for other types of data are already widespread; however, the idea of image repositories has only recently gained traction [32]. Existing image databases, which are mainly used for raw image data and meta-data, include the Allen Brain Atlas, the Image Data Resource [33], and the emerging BioImage Archive [32]. Springer Nature encourages authors to submit imaging data to the Image Data Resource [33]. While scientists have called for common quality standards for archived images and meta-data [32], such standards have not been defined, implemented, or taught. Examining standard practices for reporting images in scientific publications, as outlined here, is one strategy for establishing common quality standards.
In the future, it is possible that each image published electronically in a journal or submitted to an image data repository will follow good practice guidelines and will be accompanied by expanded “meta-data” or “alt-text/attribute” files. Alt-text is already published in HTML to provide context if an image cannot be accessed (e.g., by blind readers). Similarly, images in online articles and deposited in archives could contain essential information in a standardized format. This information could include the main objective of the figure; specimen information, ideally with an RRID [34]; specimen manipulation (dissection, staining, RRIDs for dyes and antibodies used); the imaging method, including essential items from the meta-files of the microscope software; information about image processing and adjustments; information about scale, annotations, insets, and colors shown; and confirmation that the images are truly representative.
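As a purely hypothetical sketch of such a standardized record (no schema of this kind has yet been agreed upon, and every field name below is invented for illustration), the essential information could be serialized as structured meta-data:

```python
import json

# Hypothetical meta-data record for one published image panel. All field
# names are invented for this example; no standard schema exists yet, and
# the RRID entries are placeholders rather than real identifiers.
image_metadata = {
    "objective": "Show islet ultrastructure in wild-type vs. mutant mice",
    "specimen": {
        "species": "Mus musculus",
        "tissue": "pancreatic islet",
        "specimen_rrid": "(RRID placeholder)",
    },
    "manipulation": {
        "preparation": "dissection, chemical fixation",
        "staining": "DAPI",
        "dye_rrid": "(RRID placeholder)",
    },
    "imaging": {
        "method": "transmission electron microscopy",
        "scale_bar_um": 2.0,
    },
    "processing": "cropped; linear contrast adjustment only",
    "annotations": {"arrowheads": "mitochondria", "colors": "grayscale"},
    "representative": True,
}

# Serialize to a machine-readable string, as an archive or alt-text
# attribute file might store it.
record = json.dumps(image_metadata, indent=2)
```

A record like this would let both human readers and automated mining tools recover the specimen, manipulation, and processing history without searching the rest of the paper.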
Our meta-research study of standard practices for presenting images in 3 fields highlights current shortcomings in publications. PubMed indexes approximately 800,000 new papers per year, or 2,200 papers per day (https://www.nlm.nih.gov/bsd/index_stats_comp.html). Twenty-three percent [1], or approximately 500 papers per day, contain images. Our survey data suggest that most of these papers will have deficiencies in image presentation, which may affect legibility and interpretability. These observations lead to targeted recommendations for improving the quality of published images. Our recommendations are available as a slide set via the OSF and can be used in teaching best practices to avoid misleading or uninformative image-based figures. Our analysis underscores the need for standardized image publishing guidelines. Adherence to such guidelines will allow the scientific community to unlock the full potential of image collections in the life sciences for current and future generations of researchers.
We examined original research articles that were published in April 2018 in the top 15 journals that publish original research for each of 3 categories (physiology, plant science, cell biology). Journals for each category were ranked according to 2016 impact factors listed for the specified categories in Journal Citation Reports. Journals that only publish review articles or that did not publish an April issue were excluded. We followed all relevant aspects of the PRISMA guidelines [35]. Items that only apply to meta-analyses or are not relevant to literature surveys were not followed. Ethical approval was not required.
Articles were identified through a PubMed search, as all journals were PubMed indexed. Electronic search results were verified by comparison with the list of articles published in April issues on the journal website. The electronic search used the following terms:
Physiology: ("Journal of pineal research"[Journal] AND 3[Issue] AND 64[Volume]) OR ("Acta physiologica (Oxford, England)"[Journal] AND 222[Volume] AND 4[Issue]) OR ("The Journal of physiology"[Journal] AND 596[Volume] AND (7[Issue] OR 8[Issue])) OR (("American journal of physiology. Lung cellular and molecular physiology"[Journal] OR "American journal of physiology. Endocrinology and metabolism"[Journal] OR "American journal of physiology. Renal physiology"[Journal] OR "American journal of physiology. Cell physiology"[Journal] OR "American journal of physiology. Gastrointestinal and liver physiology"[Journal]) AND 314[Volume] AND 4[Issue]) OR ("American journal of physiology. Heart and circulatory physiology"[Journal] AND 314[Volume] AND 4[Issue]) OR ("The Journal of general physiology"[Journal] AND 150[Volume] AND 4[Issue]) OR ("Journal of cellular physiology"[Journal] AND 233[Volume] AND 4[Issue]) OR ("Journal of biological rhythms"[Journal] AND 33[Volume] AND 2[Issue]) OR ("Journal of applied physiology (Bethesda, Md.: 1985)"[Journal] AND 124[Volume] AND 4[Issue]) OR ("Frontiers in physiology"[Journal] AND ("2018/04/01"[Date - Publication] : "2018/04/30"[Date - Publication])) OR ("The international journal of behavioral nutrition and physical activity"[Journal] AND ("2018/04/01"[Date - Publication] : "2018/04/30"[Date - Publication])).
Plant science: ("Nature plants"[Journal] AND 4[Issue] AND 4[Volume]) OR ("Molecular plant"[Journal] AND 4[Issue] AND 11[Volume]) OR ("The Plant cell"[Journal] AND 4[Issue] AND 30[Volume]) OR ("Plant biotechnology journal"[Journal] AND 4[Issue] AND 16[Volume]) OR ("The New phytologist"[Journal] AND (1[Issue] OR 2[Issue]) AND 218[Volume]) OR ("Plant physiology"[Journal] AND 4[Issue] AND 176[Volume]) OR ("Plant, cell & environment"[Journal] AND 4[Issue] AND 41[Volume]) OR ("The Plant journal: for cell and molecular biology"[Journal] AND (1[Issue] OR 2[Issue]) AND 94[Volume]) OR ("Journal of experimental botany"[Journal] AND (8[Issue] OR 9[Issue] OR 10[Issue]) AND 69[Volume]) OR ("Plant & cell physiology"[Journal] AND 4[Issue] AND 59[Volume]) OR ("Molecular plant pathology"[Journal] AND 4[Issue] AND 19[Volume]) OR ("Environmental and experimental botany"[Journal] AND 148[Volume]) OR ("Molecular plant-microbe interactions: MPMI"[Journal] AND 4[Issue] AND 31[Volume]) OR ("Frontiers in plant science"[Journal] AND ("2018/04/01"[Date - Publication] : "2018/04/30"[Date - Publication])) OR ("The Journal of ecology"[Journal] AND ("2018/04/01"[Date - Publication] : "2018/04/30"[Date - Publication])).
Cell biology: ("Cell"[Journal] AND (2[Issue] OR 3[Issue]) AND 173[Volume]) OR ("Nature medicine"[Journal] AND 24[Volume] AND 4[Issue]) OR ("Cancer cell"[Journal] AND 33[Volume] AND 4[Issue]) OR ("Cell stem cell"[Journal] AND 22[Volume] AND 4[Issue]) OR ("Nature cell biology"[Journal] AND 20[Volume] AND 4[Issue]) OR ("Cell metabolism"[Journal] AND 27[Volume] AND 4[Issue]) OR ("Science translational medicine"[Journal] AND 10[Volume] AND (435[Issue] OR 436[Issue] OR 437[Issue] OR 438[Issue])) OR ("Cell research"[Journal] AND 28[Volume] AND 4[Issue]) OR ("Molecular cell"[Journal] AND 70[Volume] AND (1[Issue] OR 2[Issue])) OR ("Nature structural & molecular biology"[Journal] AND 25[Volume] AND 4[Issue]) OR ("The EMBO journal"[Journal] AND 37[Volume] AND (7[Issue] OR 8[Issue])) OR ("Genes & development"[Journal] AND 32[Volume] AND 7-8[Issue]) OR ("Developmental cell"[Journal] AND 45[Volume] AND (1[Issue] OR 2[Issue])) OR ("Current biology: CB"[Journal] AND 28[Volume] AND (7[Issue] OR 8[Issue])) OR ("Plant cell"[Journal] AND 30[Volume] AND 4[Issue]).
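The per-journal clauses in these searches all follow the same pattern. As an illustration only (this helper is not part of the study's actual workflow; the journal names and numbers are whatever the caller supplies), the clauses can be generated programmatically:

```python
def pubmed_issue_query(journal, volume, issues):
    """Build a PubMed clause restricting a journal to one volume and issue(s)."""
    if len(issues) == 1:
        issue_part = "{}[Issue]".format(issues[0])
    else:
        # Multiple issues are OR'd together inside parentheses.
        issue_part = "(" + " OR ".join("{}[Issue]".format(i) for i in issues) + ")"
    return '("{}"[Journal] AND {}[Volume] AND {})'.format(journal, volume, issue_part)

def combined_query(specs):
    """OR together per-journal clauses, as in the field-level searches above."""
    return " OR ".join(pubmed_issue_query(j, v, i) for j, v, i in specs)
```

For example, `pubmed_issue_query("The Journal of general physiology", 150, [4])` reproduces the corresponding clause in the physiology search string above.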
Screening for each article was performed by 2 independent reviewers (Physiology: TLW, SS, EMW, VI, KW, MO; Plant science: TLW, SJB; Cell biology: EW, SS) using Rayyan software (RRID:SCR_017584), and disagreements were resolved by consensus. A list of articles was uploaded into Rayyan. Reviewers independently examined each article and marked whether the article was included or excluded, along with the reason for exclusion. Both reviewers screened all articles published in each journal between April 1 and April 30, 2018, to identify full-length, original research articles (S1–S3 Tables, S1 Fig) published in the print issue of the journal. Articles for online journals that do not publish print issues were included if the publication date was between April 1 and April 30, 2018. Articles were excluded if they were not original research articles, or if an accepted version of the paper was posted as an “in press” or “early release” publication but the final version did not appear in the print version of the April issue. Articles were included if they contained at least one eligible image, such as a photograph, an image created using a microscope or electron microscope, or an image created using a clinical imaging technology such as ultrasound or MRI. Blot images were excluded, as many of the criteria in our abstraction protocol cannot easily be applied to blots. Computer-generated images, graphs, and data figures were also excluded. Papers that did not contain any eligible images were excluded.
All abstractors completed a training set of 25 articles before abstracting data. Data abstraction for each article was performed by 2 independent reviewers (Physiology: AA, AV; Plant science: MO, TLA, SA, KW, MAG, IF; Cell biology: IF, AA, AV, KW, MAG). When disagreements could not be resolved by consensus between the 2 reviewers, ratings were assigned after a group review of the paper. Eligible manuscripts were reviewed in detail to evaluate the following questions according to a predefined protocol (available at https://doi.org/10.17605/OSF.IO/B5296) [14]. Supplemental files were not examined, as supplemental images may not be held to the same peer review standards as those in the manuscript.
The following items were abstracted:
Questions 7 and 8 were assessed using Color Oracle [36] (RRID:SCR_018400) to simulate the effects of deuteranopia.
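Color Oracle is an interactive tool; the same kind of check can also be scripted. A commonly used approximation is the Viénot, Brettel, and Mollon (1999) deuteranopia matrix. The sketch below is a simplified illustration, not the tool's actual algorithm, and assumes the input values are already linear RGB (a full pipeline would first undo the sRGB gamma):

```python
import numpy as np

# Viénot, Brettel & Mollon (1999) deuteranopia simulation matrix for
# linear RGB values in [0, 1]. Real images should be converted from
# sRGB to linear RGB before applying this transform (omitted here).
DEUTERANOPIA = np.array([
    [0.625, 0.375, 0.0],
    [0.700, 0.300, 0.0],
    [0.000, 0.300, 0.7],
])

def simulate_deuteranopia(rgb):
    """Apply the deuteranopia transform to an (..., 3) array of linear RGB."""
    return np.clip(np.asarray(rgb) @ DEUTERANOPIA.T, 0.0, 1.0)

# Pure red and pure green both collapse toward yellowish hues, which is
# why red/green double labels fail this kind of visibility test.
red = simulate_deuteranopia([1.0, 0.0, 0.0])
green = simulate_deuteranopia([0.0, 1.0, 0.0])
```

Rendering annotated figures through a transform like this (or through Color Oracle itself) before submission reveals whether hue alone is carrying information.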
Ten percent of articles in each field were randomly selected for verification abstraction, to ensure that abstractors in different fields were following similar procedures. Data were abstracted by a single abstractor (TLW). The question on species and tissue was excluded from verification abstraction for articles in cell biology and plant sciences, as the verification abstractor lacked the field-specific expertise needed to assess this question. Results from the verification abstractor were compared with consensus results from the 2 independent abstractors for each paper, and discrepancies were resolved through discussion. Error rates were calculated as the percentage of responses for which the abstractors’ response was incorrect. Error rates were 5% for plant sciences, 4% for physiology, and 3% for cell biology.
Data are presented as n (%). Summary statistics were calculated and charts were prepared in a Python-based Jupyter Notebook (Jupyter-client, RRID:SCR_018413 [37]; Python version 3.6.9, RRID:SCR_008394; NumPy 1.18.5 [38]; Matplotlib 3.2.2 [39]), then assembled into figures with vector graphics software. Example images were previously published or generously donated by the manuscript authors, as indicated in the figure legends. Image acquisition was described previously (D. melanogaster images [18]; mouse pancreatic beta-islet cells: A. Müller, personal communication; Orobates pabsti [19]). Images were cropped, labeled, and color-adjusted with FIJI [15] (RRID:SCR_002285) and assembled with vector graphics software. Colorblind and grayscale renderings of images were created using Color Oracle [36] (RRID:SCR_018400). All poor and clear images presented here are “mock examples” prepared based on practices observed during data abstraction.
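As a sketch of this kind of workflow (the counts used here are the per-field screening totals that appear elsewhere in this document; the chart styling and file name are invented for the example), the n (%) summaries and a simple bar chart can be produced with the same Python/Matplotlib stack:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

def n_pct(n, total):
    """Format a count as 'n (percent%)', the style used in the tables."""
    return "{} ({:.0f}%)".format(n, 100 * n / total)

# Included papers / screened papers per field (screening totals quoted
# in this document; percentages are recomputed here).
fields = {
    "physiology": (172, 431),
    "plant science": (257, 502),
    "cell biology": (159, 409),
}

labels = list(fields)
pcts = [100 * n / total for n, total in fields.values()]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(labels, pcts)
ax.set_ylabel("Included papers (% of screened)")
fig.tight_layout()
fig.savefig("image_prevalence.png", dpi=150)  # hypothetical output file
```

The resulting PNG could then be combined with other panels in vector graphics software, as described above.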
This flow chart illustrates the number of included and excluded journals or articles, along with reasons for exclusion, at each stage of the study.
Values are n, or n (% of all articles). Screening was performed to exclude articles that were not full-length original research articles (e.g., reviews, editorials, perspectives, commentaries, letters to the editor, short communications, etc.), were not published in April 2018, or did not include eligible images. AJP, American Journal of Physiology.
Values are n, or n (% of all articles). Screening was performed to exclude articles that were not full-length original research articles (e.g., reviews, editorials, perspectives, commentaries, letters to the editor, short communications, etc.), were not published in April 2018, or did not include eligible images. *This journal was also included on the cell biology list (Table S3). **No articles from the Journal of Ecology were screened as the journal did not publish an April 2018 issue.
Values are n, or n (% of all articles). Screening was performed to exclude articles that were not full-length original research articles (e.g., reviews, editorials, perspectives, commentaries, letters to the editor, short communications, etc.), were not published in April 2018, or did not include eligible images. *This journal was also included on the plant science list (Table S2).
Values are percent of papers.
We thank the eLife Community Ambassadors program for facilitating this work, and Andreas Müller and John A. Nyakatura for generously sharing example images. Falk Hillmann and Thierry Soldati provided the amoeba strains used for imaging. Some of the early career researchers who participated in this research would like to thank their principal investigators and mentors for supporting their efforts to improve science.
GFP: green fluorescent protein
LUT: lookup table
OSF: Open Science Framework
RRID: research resource identifier
TLW was funded by American Heart Association grant 16GRNT30950002 (https://www.heart.org/en/professional/institute/grants) and a Robert W. Fulk Career Development Award (Mayo Clinic Division of Nephrology & Hypertension; https://www.mayoclinic.org/departments-centers/nephrology-hypertension/sections/overview/ovc-20464571). LHH was supported by The Hormel Foundation and National Institutes of Health grant CA187035 (https://www.nih.gov). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
28 Oct 2020
Dear Dr Weissgerber,
Thank you for submitting your manuscript entitled "Creating Clear and Informative Image-based Figures for Scientific Publications" for consideration as a Meta-Research Article by PLOS Biology.
Your manuscript has now been evaluated by the PLOS Biology editorial staff as well as by an academic editor with relevant expertise and I am writing to let you know that we would like to send your submission out for external peer review.
However, before we can send your manuscript to reviewers, we need you to complete your submission by providing the metadata that is required for full assessment. To this end, please login to Editorial Manager where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire.
Please re-submit your manuscript within two working days, i.e. by Oct 30 2020 11:59PM.
Login to Editorial Manager here: https://www.editorialmanager.com/pbiology
Once your full submission is complete, your paper will undergo a series of checks in preparation for peer review, after which it will be sent out for review.
Given the disruptions resulting from the ongoing COVID-19 pandemic, please expect some delays in the editorial process. We apologise in advance for any inconvenience caused and will do our best to minimize impact as far as possible.
Feel free to email us at plosbiology@plos.org if you have any queries relating to your submission.
Kind regards,
Roland G Roberts, PhD,
Senior Editor
PLOS Biology
Thank you very much for submitting your manuscript "Creating Clear and Informative Image-based Figures for Scientific Publications" for consideration as a Meta-Research Article at PLOS Biology. Your manuscript has been evaluated by the PLOS Biology editors, an Academic Editor with relevant expertise, and by five independent reviewers. I must apologise for the excessive number of reviewers; we usually aim for three or four, but an administrative oversight led to us recruiting an extra one. I hope that you nevertheless find all the comments useful.
You'll see that the reviewers are broadly positive about your study, but each raises a number of concerns and makes suggestions for improvement. In light of the reviews (below), we are pleased to offer you the opportunity to address the points raised by the reviewers in a revised version that we anticipate should not take you very long. We will then assess your revised manuscript and your response to the reviewers' comments and we may consult the reviewers again.
We expect to receive your revised manuscript within 1 month.
Please email us (plosbiology@plos.org) if you have any questions or concerns, or would like to request an extension. At this stage, your manuscript remains formally under active consideration at our journal; please notify us by email if you do not intend to submit a revision so that we may end consideration of the manuscript at PLOS Biology.
**IMPORTANT - SUBMITTING YOUR REVISION**
Your revisions should address the specific points made by each reviewer. Please submit the following files along with your revised manuscript:
1. A 'Response to Reviewers' file - this should detail your responses to the editorial requests, present a point-by-point response to all of the reviewers' comments, and indicate the changes made to the manuscript.
*NOTE: In your point by point response to the reviewers, please provide the full context of each review. Do not selectively quote paragraphs or sentences to reply to. The entire set of reviewer comments should be present in full and each specific point should be responded to individually.
You should also cite any additional relevant literature that has been published since the original submission and mention any additional citations in your response.
2. In addition to a clean copy of the manuscript, please also upload a 'track-changes' version of your manuscript that specifies the edits made. This should be uploaded as a "Related" file type.
*Resubmission Checklist*
When you are ready to resubmit your revised manuscript, please refer to this resubmission checklist: https://plos.io/Biology_Checklist
To submit a revised version of your manuscript, please go to https://www.editorialmanager.com/pbiology/ and log in as an Author. Click the link labelled 'Submissions Needing Revision' where you will find your submission record.
Please make sure to read the following important policies and guidelines while preparing your revision:
*Published Peer Review*
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. Please see here for more details:
https://blogs.plos.org/plos/2019/05/plos-journals-now-open-for-published-peer-review/
*PLOS Data Policy*
Please note that as a condition of publication PLOS' data policy ( http://journals.plos.org/plosbiology/s/data-availability ) requires that you make available all data used to draw the conclusions arrived at in your manuscript. If you have not already done so, you must include any data used in your manuscript either in appropriate repositories, within the body of the manuscript, or as supporting information (N.B. this includes any numerical values that were used to generate graphs, histograms etc.). For an example see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5
*Blot and Gel Data Policy*
We require the original, uncropped and minimally adjusted images supporting all blot and gel results reported in an article's figures or Supporting Information files. We will require these files before a manuscript can be accepted so please prepare them now, if you have not already uploaded them. Please carefully read our guidelines for how to prepare and upload this data: https://journals.plos.org/plosbiology/s/figures#loc-blot-and-gel-reporting-requirements
*Protocols deposition*
To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosbiology/s/submission-guidelines#loc-materials-and-methods
Thank you again for your submission to our journal. We hope that our editorial process has been constructive thus far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.
Roli Roberts
Senior Editor,
rroberts@plos.org
*****************************************************
REVIEWERS' COMMENTS:
Reviewer #1:
[identifies herself as Elisabeth Bik]
In this paper, the authors screened hundreds of papers from three different scientific fields (physiology, cell biology, and plant sciences) and selected 580 papers that included photographic images. They analyzed the papers containing photographic images for the presence of scale bars, inset annotation, clear labeling, colorblindness-friendly color scheme, adequate description of the specimen etc. The majority of the papers failed one of these criteria. Examples of good and bad image labeling are given throughout the manuscript.
The paper is a welcome addition to the field of meta-science (science about science papers), and provides clear guidelines about what constitutes good labeling and color use in photographic images in biomedical papers. The search strategy is clearly described and reproducible, and the paper was easy to read and understand. Also, kudos to the authors for including an image featuring Darth Vader.
I have some minor comments.
General comments:
It would be nice if the Abstract included the total number of papers (580) screened for this study - that number is somewhat hard to find. It is included in Figure S1 (flow chart) and the discussion, but it would be good to include it in the abstract and the first paragraph of the Results (see below).
The term "Microphotograph" might benefit from a definition. It appears the authors mean a photo taken of a specimen under a microscope (e.g., of cells or tissues), but I am not sure. Is a "Photograph" then defined as a photo of something visible to the eye, such as a plant or a petri dish? One could call all the image types mentioned in Figure 1A "photographs", so maybe consider using the term "macrophotograph" for a photo that is not a microphotograph.
Are the examples shown in Figure 4-6 from the papers that were screened for this paper? Or were they taken from public sources (as indicated for some photos) and then manipulated digitally to either remove or add a scale bar (see fig 4)? It would be nice to clearly define that in the Methods (or maybe I missed that).
Specific comments
Page 1, Affiliations of the authors: Typo: "Uterecht"
Introduction. At the end of the Introduction, and the end of "Using a science of science approach...." on Page 4, there are several references to specific figures. I would personally not expect these in the Results, but rather in the Introduction, so maybe consider removing part of that last paragraph of "Using a science...." to the beginning of the Results?
Results. Page 4. It would be more clear to start the Results section by mentioning how many papers (580) were screened.
Results. Page 4. "More than half of the papers in the sample contained images (plant science: 68%, cell biology: 72%, physiology: 55%)." - These numbers do not seem to match the data provided in Supplemental Tables 1-3. Maybe I am misunderstanding something, but Supplemental Tables 1-3 mention 39.9, 51.2, and 38.9% of papers, which are much lower numbers.
Physiology: 431 screened - 172 included (39.9%)
Plant science: 502 screened - 257 included (51.2%)
Cell Biology: 409 screened - 159 included (38.9%)
On page 6, "Approximately half of the papers (47-58%) also failed or partially failed to adequately explain insets. " appears to refer to Figure 1C, right panel, but the figure number/panel is not mentioned. Maybe add that?
Page 11, under 3 "Use Color wisely in images", "Images showing ielectron micrographs" should perhaps read "Images showing electron microphotographs"
Page 13, Maybe write "Deuteranopia, the most common form of colorblindness..." to remind the reader of what the term means (used a lot in the following paragraph)
Discussion. Page 22: "intentionally or accidentally manipulated images" - should be "intentionally or accidentally duplicated images"
Page 22: What is meant by "Error rates" here? The numbers listed here do not appear to match anything else in the paper. Maybe a reference or reminder needs to be included here.
Discussion. Page 22: "Actions journals can take to make image-based figures more transparent and easier to interpret". An important item not listed here, but that I personally think is very important, is to add particular requirements about e.g. the use of colorblind-safe colors and inclusion of scale bars to the journal's guidelines for figure preparation/guidelines for authors. Many of these requirements could be added to the guidelines that many journals already have online. It is much easier to state these requirements up front instead of trying to fix problems during the manuscript reviewing stage.
Page 23. "of which 500 are estimated to contain images" - do the authors mean photographic images? What is this number based on?
Figure 1B and Figure 1C layout could be more similar to each other
Figure 1C - right hand panel not described in Results, and not clear how it differs from what is shown in the left panel
In Figure 4, Square = 1cm, should this be 1cm2?
Figure 4 refers to 1-3 and 4-6 but there are no numbers in the figure itself.
Figure 4 typo: "Micropcope"
Figure 12: In top right, I did not think the color annotation was that clear; I liked the solution used in the top left, although that is not colorblind safe - could something similar be used in the top right? The line to the mRNA appears to land in an area that has both colors, which was not very clear. Maybe moving it a bit to the left, so that it lands in a clearly green area, would help.
Methods. Page 25, under "Screening" what is meant by "using Rayyan software"? I was not familiar with that tool.
Supplemental materials. The Plant Cell articles were included twice in Tables S2 and S3, which was potentially confusing, since now the totals of Tables S1-S3 cannot be summed. I would recommend leaving them out of the Cell Biology table (S3), with a little note under the table, so that there are no duplicate values across the tables.
Table S1-S3: maybe include percentages in the top row, e.g., n=409 n=159 (38.9%)
Page 29, under Table S2, should be "This journal was also included on the cell biology list (Table S3)." instead of "(Table S2)".
Reviewer #2:
In general, I find this paper to be excellent and to be potentially a very valuable resource to the community. I appreciate the large amount of work their initial quantitative findings must have required, and the thoroughness of the recommendations they have put together.
My largest critique (the only one I feel would be NECESSARY to address before publication) is that, in general, the authors prescribe certain things readers should do when authoring their own papers, but are inconsistent in whether they tell readers how to do them (or point them to an educational resource). This is not universal - they do, for example, point the reader to resources for simulating colorblindness in the text around Figures 7 and 8, but not how to do the inversions or greyscale testing in Figure 6, how to generate labels a la Figures 10 and 11, etc. Obviously it would be outside the scope of this paper to teach readers to do every task in every POSSIBLE software it could be done in, but the authors could select one or two commonly used tools (such as FIJI or Photoshop/Illustrator, though for maximum utility my vote would be for something free to use) and provide guidance in those. This could be done along the way, and/or as part of a section at the beginning describing commonly used tools for figure creation (and pointing to resources for learning common tasks in each). In that vein, it would also be nice for the authors to more fully credit the tools used to make their own figures: they describe which Python libraries were used to create their bar graphs, but do not cite the relevant publications for those libraries or for the Jupyter project itself (which, according to the OSF project, is how those figures were created), nor do they describe which software tool(s) they used to create the rest of the figures (they mention the QuickFigures tool at one point, though it is not clear whether that is what was used in this work).
An additional few smaller critiques-
1) The degree to which the authors obey their own rules for best practices vary; many of the images in the paper lack scale bars, for example, or have illegible bars (figure 6). I understand in most cases that is not the point being illustrated in that particular figure, and would not see it as a blocker for publication, but it would be nice to see them used more consistently, especially in the "good" images.
2) The text in the table in Figure 10 is VERY small; it might be better to move it below rather than beside the figure so it can more easily be enlarged. The text in other figures (such as 9 and 11) is also borderline tiny.
3) I personally find the broken-up bar graph in Figure 1B a bit hard to read, especially as the bars for "Some scale bar dimensions" and "All/some magnification in legend" overlap; breaking it into multiple bar plots a la 1A lacks the "nice" effect of seeing how things add to 100%, but might be clearer.
Reviewer #3:
The manuscript starts with a quantification of image usage in publications, followed by a quantification of correct/incorrect image reporting (usage of scale information, insets, etc.). The analysis of the published papers helped the authors discover problems and come up with the suggestions presented in the following, core part of the manuscript. Here the authors give clear suggestions for the relevant steps of image representation and figure preparation. Each step is visualized by comparing wrong and right/improved approaches, such that readers can immediately compare the differences for themselves. The manuscript ends with a final discussion that includes action points suggested to journals and the scientific community. The manuscript is very clearly written and gives the reader clear recommendations on how to improve image display.
Novelty and significance
While the single steps addressed (scale bar, color scheme, annotations) are not novel, the way of presenting them, with the comparisons in figures and the focus on "colorblind safe" images, is. The discussion in the context of modern (online) publishing and the connection to online image repositories is timely.
The manuscript gives the reader a very clear "workflow" of what to do in different cases (e.g. 2 color image vs. 3 color image, or EM image vs. color photo) in order to avoid pitfalls. With this I expect it to be of great use, especially (but not only) for early career scientists.
Points of criticism:
I would have wished for a discussion of the flexibility of the rules and the potential for "miscounting" in the quantification of Fig 1. E.g., in this manuscript too the scale bar is missing in most figures and would accordingly have been counted as "Partial scale information" in Figure 1. (The reason why the scale bar is missing is given in the text of the manuscript.)
Also, I would have wished for a discussion of whether or not it is important to include details in the figure legend, especially about tissue specification. Under section 7 (prepare figure legends) it is written that some journals require details while others do not - which clearly shows differing opinions on this topic. Figure 2B, "Are species/tissue/object clearly described in the legend?", to me reflects differing opinions on this topic rather than clear errors in image representation.
Minor comments:
- Fig 1: Include in the supplementary material examples of images classified as e.g. "insets inaccurately marked, some marked", etc., if this is possible given the copyright of the already published figures.
- Fig 3A, subcellular scale image is saturated
- Fig 3B. Solution (cell image): inset marking is not fully transparent
- Fig 4: Ruler as scale bar - Square: 1cm; square not visible in this magnification
- Fig. 5: "Darth Vader being attached" - kids playing Star Wars?
- Section 5. Design the figure: "either from top to bottom and/or from right to left" should presumably read as "left to right"
- Fig 6 scale bar not visible in the print as it is for now
- Fig 8 Split the color channel: blue described as "least visible" in Fig. 6, but used anyway
- Same in Fig. 12 (red), described as "least visible" in Fig. 6, but used anyway
Reviewer #4:
[identifies herself as Perrine Paul-Gilloteaux]
This paper proposes a systematic review of figures in the literature of biology-related fields, following some of the PRISMA guidelines, to assess the quality of these published figures. The criteria assessed are the accessibility of figures for colorblind scientists, the presence of some minimal information (as defined by the authors) in the legend, the clarity of annotations or insets as judged by the authors, and the presence and clarity of the scale bar. The minimal information (in addition to the scale bar) that should be reported in the legend, as defined by the authors, is the species (or cell line) observed and an explanation of the colors shown. Statistics on the binary fulfilment of these criteria are reported for the selected sample of publications.
The main message reported is that a majority of figures manually inspected by the authors did not fulfil all these criteria.
In addition, the authors provide some examples of DOs and DON'Ts for these points and guidelines for designing good-quality figures according to these criteria.
While the study is certainly a considerable amount of work, and may point out that editors and reviewers did not do their job (PLOS Biology was not assessed) (reporting a scale bar is at least known and required by editors to be present in all figures), I question the choice of the criteria assessed. In particular, the authors state that these criteria serve reproducibility; I do not understand how badly presented insets reduce reproducibility, as the authors state. They may hurt readability, or send a bad message about the rigour of the study, but even this would need to be supported, since in the study the figures that did not fulfil these criteria did not need them to be understood by the reader. More important guidelines, such as the ones required by journal publishing guidelines (contrast settings, background filtering, processing history), would matter more, as violating them can lead to wrong and false messages. The choice of these particular criteria should have been defended with some data or an example of how they prevent reproducibility.
Then, showing, with the permission of editors/authors, some examples of badly assessed figures would have been useful: in particular, I am doubtful about the invisible annotation blending into the background color and how it could escape notice; the DON'T examples would serve the message better if taken from real published papers. Real examples, from real papers, of figures assessed as not fulfilling some of the criteria would serve the paper's message better. Or, even more ambitiously, the authors of this meta-analysis could add some reporting on the subjective loss of information and understanding in these papers.
For example, even if it admittedly does not undermine the main message of the paper, a scale bar is not reported in most of the figures of this paper itself (it would have been expected at least for the example of different image scales in Figure 3), while at the same time the species is reported for all figures even though it adds nothing to the main message, which is not biologically related.
Also, in the reporting of the method, I could not work out how the error rate mentioned was defined: discrepancy in the binary answers of reviewers on each criterion? Are the scripts used to compute the statistics provided? I could not find them at the link provided by the authors.
In addition, one of the main conclusions is that these recommendations could help in designing the minimal information required when depositing data, but the repositories mentioned (IDR, Cell Atlas) actually store the raw data, not the figures, so the criteria and factors assessed are not applicable. Could the authors comment on or clarify this point?
In conclusion, while the topic is timely in the era of the reproducibility crisis, the authors are sending messages that should be in the hands of the editors while editing the final proofs of papers, particularly given the limited number and impact of the criteria assessed. The two parts of the paper - an assessment of the state of figures published in April 2018 against the criteria defined by the authors, followed by related guidelines and recommendations - are coherent together, but the angle taken is too narrow, in particular when stating reproducibility of papers as a main mission. It may be of relevance for teaching courses, but I am not sure about its categorization as a research paper as it stands. The meta-analysis could be of further interest if the message were supported more strongly, by showing how failing these criteria harms reproducibility and interpretation of the data, as I am not convinced the criteria chosen are the most important ones.
Reviewer #5:
[identifies himself as Simon F. Nørrelykke]
* Summary of the research and my overall impression
** 1. summarise what the ms claims to report
This manuscript details the results from a group of researchers across the globe who got together to document the state of image-based figures in scientific publications. The results obtained show that there is ample room for improvement and the authors proceed by giving figure-creation recommendations that, if followed by authors and journals, should greatly increase the quality of published figures.
Fraudulent image manipulation and how to acquire images are not the focus of this manuscript. Microscopy images (transmitted light, fluorescence, and electron), as well as photographs, are the focus; medical images (MRI, ultrasound, etc.) were allowed but rare in the three fields studied.
All papers published during April 2018 in 15 journals per field (45 journals in total) in the three fields of plant science, cell biology, and physiology were manually examined and scored along several dimensions according to a shared protocol, available online and discussed in the manuscript.
580 papers were examined by "eLife Community Ambassadors from around the world" working together.
Only 2--16% of these papers met all the criteria set for good practices.
Detailed recommendations are given for the preparation of figures with microscopy images. These include discussions of scale bars, insets, colors/colorblindness, label, annotations, legends etc.
Though figures should ideally be designed to reach a wide audience, incl. scientists in other fields, they are typically only interpretable by a very narrow one, if at all.
The advice given on selecting the relevant magnification, how and where to include scale bars, and usage of color should all be common sense, but apparently is not (behold the results of the investigation reported in this manuscript). It is thus valuable, even if not novel or thought-provoking, and should be mandatory reading for every student preparing their first manuscript - and perhaps for a majority of PIs, reviewers, and editors alike.
** 2. give overview of the strengths and weaknesses of the ms
- Well written manuscript that reads well (except, perhaps, for the results section)
- The results section is very dry. Six paragraphs list a large number of percentages. This is data but almost not information. An actuary may disagree. Figures contain slightly more data, in a more digestible (graphical) format.
- Data-acquisition: The number of journals assessed and the approach taken (two reviewers per paper and a clear protocol) is scientific and convincing
- The recommendations are clear and well illustrated
- Though most/all of the points are not new to anyone used to working with images (colorblindness, contrast, scale bars, etc.), it is useful to see them all collected and commented on in one place - also, every few years it is useful to remind the community that these things are still (or increasingly? we don't know) an issue.
- Being literal about PLOS criteria:
+ Originality :: this is, as far as I know, the first paper reporting solidly on image-based figure quality
+ Importance to researchers in its field :: Important enough that it should be mandatory reading for any figure-creating scientist
+ Interest to scientists outside the field :: The findings and recommendations cover three fields and easily generalise to other fields
+ Rigorous methodology and substantial evidence for its conclusions :: Yes! Details given elsewhere in report.
** 3. recommended course of action
Publish after revision.
Highlight with editorial mention and Twitter activity.
This paper may do more for science than many a pure research manuscript.
* Specific areas that could be improved
** Major issues
- Major, somewhat, because pointing to conceptual issues
+ p. 6 "We evaluated only images in which the authors could have adjusted the image colors (i.e. fluorescence microscopy)"
+ Unless I misunderstand, it is perfectly possible to adjust the colors in any image, so this limitation to fluorescent microscopy images seems to not be justified by the argument given.
+ Example: In an RGB image, e.g. a photo of a flower, the user can set a different color for each of the three channels. This is easily done in, e.g. Imagej/Fiji using the channel tool
* https://imagej.net/docs/guide/146-28.html#toc-Subsection-28.5
* https://imagej.net/docs/guide/146-28.html#sub:Channels ...[Z]
+ Fix: redo research or reformulate sentence to simply state which images you comment on.
+ Or, did you perhaps mean "e.g." and not "i.e."?
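The reviewer's point is easy to demonstrate: the displayed hue of any single image channel is a free rendering choice, not a property of the captured data. A minimal numpy sketch (array values invented for illustration):

```python
import numpy as np

# A grayscale channel as captured by a camera (values made up for illustration)
channel = np.array([[0, 128, 255]], dtype=np.uint8)

# Map the intensities onto an arbitrary display color -- here magenta --
# showing that the displayed hue of ANY channel can be remapped at will
magenta = np.array([1.0, 0.0, 1.0])  # per-channel RGB weights
rgb = (channel[..., None] * magenta).astype(np.uint8)
# rgb[0, 2] is now [255, 0, 255]: full intensity rendered as magenta
```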
- Major, but fixable, because pointing to conceptual issues
+ p. 12: "Digital microscope setups capture each channel's intensities in greyscale values."
+ Nope: Some do, some don't.
+ Fluorescent microscopes equipped with filter cubes and very light sensitive CCDs (CMOSs) tend to, as do confocals
+ Slide scanners (also microscopes) are usually equipped with RGB cameras.
+ Suggested fix: delete sentence after understanding why it is wrong
- Suggestion for how to lead by example and in the interest of reproducibility
+ Share the data in an interoperable manner (FAIR principles)
+ Share the Python notebooks used for statistical analysis
+ Share the scripts used to create figures (unless assembled by hand)
+ Do this in GitHub, Zenodo, or the journal website
** Minor issues
- p3: EMBO's Source Data tool (RRID:SCR_015018)
+ Is this supposed to be a link or reference?
- p6: "Color Oracle ( https://colororacle.org/ , RRID:SCR_018400)."
+ What is RRID? Not explained until p. 23.
- p. 5, Figure 1
+ Please give n in subpanel B, similar to A and C, or Fig 2 A, B, C.
+ Or state that numbers are the same as in A
- p. 11, Figure 4
+ This figure would be more powerful if the problems were 1-1 mirrored by solutions
+ Only two of the five problem images are solved
+ The ruler shown in the bottom right corner is too small to illustrate the point otherwise made: Zooming in, in the pdf, does not give clearly resolved 1 cm squares, perhaps due to JPEG artifacts.
+ Alternatively, rename from "problem" and "solution" to something not evoking expectations of solutions to the problems, e.g. by removing those two words.
- p. 12, Figure 5, top row
+ This is a very unlikely example of a scientific image
+ Resist temptation of including photos of family members ;-)
+ If you cannot find a natural, scientific, example, perhaps this is not an actual problem?
- p. 12, Figure 5, third and fourth row
+ Recommendations: the splitting should be in addition to, not instead of, adjusting for colorblindness in a merged image
+ Yes, you refer to Fig 8, but here is a natural place to mention it
- p. 13, Figure 6
+ This figure ought to be redundant, to the extent that the reader knows that higher contrast has higher contrast
+ If, however, the authors saw many examples of dark colors on dark background during their scans of papers, this could still seem a justified figure
+ "Free tools, such as Color Oracle (RRID:SCR_018400)"
+ Also available, for images, in the very popular open source software Fiji under "Image > Color > Simulate Color Blindness"
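For readers without Fiji at hand, deuteranopia can also be approximated in a few lines of Python. The matrix below is one widely circulated linear approximation (coefficients vary between tools, and it is not necessarily the transform Fiji or Color Oracle uses):

```python
import numpy as np

# One widely circulated linear approximation of deuteranopia;
# treat the coefficients as illustrative, not definitive
DEUTERANOPIA = np.array([
    [0.625, 0.375, 0.0],
    [0.700, 0.300, 0.0],
    [0.000, 0.300, 0.7],
])

def simulate_deuteranopia(rgb):
    """Transform an (..., 3) float RGB array with values in [0, 1]."""
    return np.clip(rgb @ DEUTERANOPIA.T, 0.0, 1.0)

# Pure red and pure green both lose their red/green distinction
red = simulate_deuteranopia(np.array([1.0, 0.0, 0.0]))
green = simulate_deuteranopia(np.array([0.0, 1.0, 0.0]))
```

Running the simulation over both versions of a figure is a cheap way to check whether annotations survive the color collapse that Figures 7 and 8 illustrate.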
- p. 15, Figure 8
+ You show possible solutions but do not say what you recommend.
+ Please, do that and argue for the choice!
+ "QuickFigures (RRID:SCR019082)"
+ Does this software support reproducibility (creates scripts that can generate entire figure)?
+ Please comment in manuscript
- p. 17, Figure 10
+ Text in right half of figure is too small to comfortably read
- p. 21 Figure 13
+ Add title to third column
+ "increase training opportunities in bioimaging"
+ Should, likely, read "increase training opportunities in bioimage analysis"
- p. 35, Figure S1
+ Please create higher quality figure that better supports zooming in
- Suggestion
+ Cite first author's recent paper in F1000R-NEUBIAS on same topic
30 Jan 2021
Submitted filename: Response_to_reviewers_R1_20200126.docx
26 Feb 2021
Dear Tracey,
I've obtained advice from two of the previous reviewers, and on behalf of my colleagues and the Academic Editor, Jason Swedlow, I'm pleased to say that we can in principle offer to publish your Meta-Research Article "Creating Clear and Informative Image-based Figures for Scientific Publications" in PLOS Biology, provided you address any remaining formatting and reporting issues. These will be detailed in an email that will follow this letter and that you will usually receive within 2-3 business days, during which time no action is required from you. Please note that we will not be able to formally accept your manuscript and schedule it for publication until you have made the required changes.
Please take a minute to log into Editorial Manager at http://www.editorialmanager.com/pbiology/ , click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process.
PRESS: We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with gro.solp@sserpygoloib . If you have not yet opted out of the early version process, we ask that you notify us immediately of any press plans so that we may do so on your behalf.
We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work. For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/ .
Thank you again for supporting Open Access publishing. We look forward to publishing your paper in PLOS Biology.
Best wishes,
Roland G Roberts, PhD
Senior Editor
_______________
[identifies herself as Elisabeth M Bik]
I thank the authors for addressing all of the comments raised by the reviewers. I look forward to seeing this paper published.
[identifies herself as Beth Cimini]
The authors have satisfied my concerns and I can happily recommend this work for publication.
If you are unsure whether to include pictures, images, charts, and other non-textual elements in your research paper, I would suggest that you do. Non-textual elements such as images and charts help your proposed theories gain wider acceptance.
An image or chart will make your research paper more attractive, interesting, explanatory, and understandable for the audience. In addition, citing an image or chart helps you describe your research and its parts with far more precision than simple, long paragraphs.
There are plenty of reasons why you should cite images in your research paper. However, most scholars and academicians avoid it altogether, losing the opportunity to make their research papers more interesting and garner higher readership.
Additionally, there are many misconceptions around the use or citation of images in research papers. For example, it is widely believed that using pictures or any graphics in a research paper renders it unprofessional or non-academic. In reality, no rules or regulations prohibit citing images or other graphic elements in research papers.
You will find it much easier once you know the appropriate way to cite images or non-textual elements in your research paper. But it's important to keep in mind the rules for using different non-textual elements. You can easily upgrade your academic and research writing skills by leveraging the various guides in our repository.
In this guide, you will find clear explanations and guidelines that will teach you how to identify appropriate images and other non-textual elements and cite them in your research paper. So, cut the clutter; let’s start.
Although it’s not mandatory to cite images in a research paper, however, if you choose to include them, it will help showcase your deep understanding of the research topic. It can even represent the clarity you carry for your research topic and help the audience navigate your paper easily.
There are several reasons why you must cite images in your research paper like:
While writing your research paper, certain topics will be comparatively more complex than others. In such a scenario where you find out that words are not providing the necessary explanation, you can always switch to illustrating the process using images. For example, you can write paragraphs describing climate change and its associated factors and/or cite a single illustration to describe the complete process with its embedded factors.
To create an impeccable research paper, you need to include evidence and examples supporting your argument for the research topic. Rather than always explaining the supporting evidence and examples through words, it will be better to depict them through images. For example, to demonstrate climate change's effects on a region, you can always showcase and cite the “before and after” images.
If your research topic requires segregation into various sub-topics and further, you can easily group and classify them in the form of a classification tree or a chart. Providing such massive information in the format of a classification tree will save you a lot of words and present the information in a more straightforward and understandable form to your audience.
Including images in your research paper, theses, and dissertations will help you garner the audience's greater attention. If you add or cite images in the paper, it will provide a better understanding and clarification of the topics covered in your research. Additionally, it will make your research paper visually attractive.
Using and citing images in a research paper, as already explained, can make it easier to understand and more structured in appearance. For this, you can use photos, drawings, charts, graphs, infographics, etc. There are no mandatory regulations for using or citing images in a research paper, but there are some recommendations depending on the journal style.
Before including any images in your research paper, you need to ensure that it fits the research topic and syncs with your writing style. As already mentioned, there are no strict regulations around the usage of images. However, you should make sure that it satisfies certain parameters like:
You can cite images in your research paper either at the end, in between the topics, or in a separate section for all the non-textual elements used in the paper. You can choose to insert images in between texts, but you need to provide the in-text citations for every image that has been used.
Additionally, you need to attach the name, description and image number so that your research paper stays structured. Moreover, you must cite or add the copyright details of the image if you borrow images from other platforms to avoid any copyright infringement.
You can earn an advantage by providing better and simple explanations through graphs and charts rather than wordy descriptions. There are several reasons why you must cite or include graphs and charts in your research paper:
With graphs and charts, you can answer several of your readers' questions before they even ask. Charts and graphs let you present an immense amount of information in a brief yet attractive manner, keeping readers interested in your research topic.
Providing these non-textual elements in your research paper increases its readability. Moreover, the graphs and charts will drive the reader’s attention compared to text-heavy paragraphs.
You can easily use the graphs or charts of some previously done research in your chosen domain, provided that you cite them appropriately, or else you can create your graphs through different tools like Canva, Excel, or MS PowerPoint. Additionally, you must provide supporting statements for the graphs and charts so that readers can understand the meaning of these illustrations easily.
As with pictures or images, you can choose one of three possible placements in your research paper: after the text, on a separate page right after the corresponding paragraph, or inside the paragraph itself.
Once you have decided the type of images you will be using in your paper, understand the rules of various journals for the fair usage of these elements. Using pictures or graphs as per these rules will help your reader navigate and understand your research paper easily. If you borrow or cite previously used pictures or images, you need to follow the correct procedure for that citation.
Usage or citation of pictures or graphs is not prohibited in any academic writing style; the styles differ from each other only in their formats.
Most scientific, society-related, and media-based research topics are presented in the APA style. It is usually followed by museums, exhibitions, galleries, libraries, etc. If you write your research paper in APA style and cite already-used images or graphics, you need to provide complete information about the source.
In APA style, the list of the information that you must provide while citing an element is as follows:
If you want to cite some images from the internet, try providing its source link rather than the name or webpage.
Johanson, M. (Photographer). (2017, September). Vienna, Austria. Rescued bird. National Gallery.
MLA style is again one of the most preferred styles worldwide for research paper publication. You can easily use or cite images in this style, provided no rights of the image owner are violated. Additionally, the format and the information required for citation or usage are brief yet precise.
In the MLA style, the following are the details that a used image or graph must carry:
Auteur, Henry. “Abandoned gardens, Potawatomi, Ontario.” Historical Museum, Reproduction no. QW-YUJ78-1503141, 1989, www.flickr.com/pictures/item/609168336/
It is easy to cite images in your research paper, and adding different forms of non-textual elements is worthwhile. The rules for using or citing images differ across writing styles; following them ensures your paper doesn't commit copyright infringement or violate the owner's rights.
No matter which writing style you choose, make sure that you provide all the details in the appropriate format. Once you understand the format for usage and citation, feel free to use as many images as needed to make your research paper intriguing and interesting.
Published on March 25, 2021 by Jack Caulfield . Revised on June 28, 2022.
To cite an image, you need an in-text citation and a corresponding reference entry. The reference entry should list the image's creator, its title, the year of publication, and details of the container in which it was found.
The format varies depending on where you accessed the image and which citation style you’re using: APA , MLA , or Chicago .
In an APA Style reference entry for an image found on a website , write the image title in italics, followed by a description of its format in square brackets. Include the name of the site and the URL. The APA in-text citation just includes the photographer’s name and the year.
| APA format | Author last name, Initials. (Year). *Image title* [Format]. Site Name. URL |
|---|---|
| **APA reference entry** | Reis, L. (2021). *Northern cardinal female at Lake Meyer Park IA 653A2079* [Photograph]. Flickr. https://flic.kr/p/2kNpoXB |
| **APA in-text citation** | (Reis, 2021) |
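To make the pattern concrete, here is a minimal sketch that assembles an APA-style reference entry for an image found on a website. The helper function and its field names are hypothetical, invented for illustration; they are not part of any citation library, and italics are omitted because plain strings cannot carry them.

```python
# Illustrative sketch only: assembling an APA reference entry for an
# image found on a website. The function name and parameters are
# hypothetical, not an official API. In the finished document, the
# image title would be set in italics.
def apa_image_reference(last, initials, year, title, fmt, site, url):
    # APA pattern: Author, A. (Year). Title [Format]. Site Name. URL
    return f"{last}, {initials} ({year}). {title} [{fmt}]. {site}. {url}"

entry = apa_image_reference(
    "Reis", "L.", 2021,
    "Northern cardinal female at Lake Meyer Park IA 653A2079",
    "Photograph", "Flickr", "https://flic.kr/p/2kNpoXB",
)
in_text = "(Reis, 2021)"  # APA in-text citation: author and year
```

The in-text citation deliberately carries only the photographer's name and year, matching the table above.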
The information included after the title and format varies for images from other containers (e.g. books , articles ).
When you include the image itself in your text, you’ll also have to format it as a figure and include appropriate copyright/permissions information .
For an artwork viewed at a museum, gallery, or other physical archive, include information about the institution and location. If there’s a page on the institution’s website for the specific work, its URL can also be included.
| APA format | Author last name, Initials. (Year). *Image title* [Format]. Institution Name, Location. URL |
|---|---|
| **APA reference entry** | Kahlo, F. (1940). *Self-portrait with cropped hair* [Painting]. Museum of Modern Art, New York City, NY, United States. https://www.moma.org/collection/works/78333 |
| **APA in-text citation** | (Kahlo, 1940) |
In an MLA Works Cited entry for an image found online , the title of the image appears in quotation marks, the name of the site in italics. Include the full publication date if available, not just the year.
The MLA in-text citation normally just consists of the author’s last name.
| MLA format | Author last name, First name. “Image Title.” *Site Name*, Day Month Year, URL. |
|---|---|
| **MLA Works Cited entry** | Reis, Larry. “Northern Cardinal Female at Lake Meyer Park IA 653A2079.” *Flickr*, 22 Mar. 2021, https://flic.kr/p/2kNpoXB. |
| **MLA in-text citation** | (Reis) |
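The MLA pattern can be sketched the same way. As before, the helper is hypothetical and purely illustrative; the site name would be italicized in the finished Works Cited list, which plain strings cannot express.

```python
# Illustrative sketch of the MLA Works Cited pattern for an online
# image. The function is hypothetical, not a real citation API;
# straight quotes stand in for the curly quotes used in print.
def mla_image_entry(last, first, title, site, date, url):
    # MLA pattern: Author Last, First. "Title." Site, Day Month Year, URL.
    return f'{last}, {first}. "{title}." {site}, {date}, {url}.'

entry = mla_image_entry(
    "Reis", "Larry",
    "Northern Cardinal Female at Lake Meyer Park IA 653A2079",
    "Flickr", "22 Mar. 2021", "https://flic.kr/p/2kNpoXB",
)
in_text = "(Reis)"  # MLA in-text citation: author's last name only
```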
The information included after the title and format differs for images contained within other source types, such as books and articles .
If you include the image itself as a figure, make sure to format it correctly .
A citation for an image viewed in a museum (or other physical archive, e.g. a gallery) includes the name and location of the institution instead of website information.
| MLA format | Author last name, First name. “Image Title.” Year, Institution Name, City. |
|---|---|
| **MLA Works Cited entry** | Kahlo, Frida. “Self-Portrait with Cropped Hair.” 1940, Museum of Modern Art, New York. |
| **MLA in-text citation** | (Kahlo) |
In Chicago style , images may simply be referred to in the text, without the need for a citation or bibliography entry.
If you have to include a full Chicago style image citation , however, list the title in italics, add relevant information about the image format, and add a URL at the end of the bibliography entry for images consulted online.
| Chicago format | Author last name, First name. *Image Title*. Month Day, Year. Format. Website Name. URL. |
|---|---|
| **Chicago bibliography entry** | Reis, Larry. *Northern Cardinal Female at Lake Meyer Park IA 653A2079*. March 22, 2021. Photograph. Flickr. https://flic.kr/p/2kNpoXB. |
| **Chicago footnotes** | 1. Larry Reis, *Northern Cardinal Female at Lake Meyer Park IA 653A2079*, March 22, 2021, photograph, Flickr, https://flic.kr/p/2kNpoXB. 2. Reis, *Northern Cardinal Female*. |
Chicago also offers an alternative author-date citation style . Examples of image citations in this style can be found here .
For an image viewed in a museum, gallery, or other physical archive, you can again just refer to it in the text without a formal citation. If a citation is required, list the institution and the city it is located in at the end of the bibliography entry.
| Chicago format | Author last name, First name. *Image Title*. Year. Format. Institution Name, City. |
|---|---|
| **Chicago bibliography entry** | Kahlo, Frida. *Self-Portrait with Cropped Hair*. 1940. Oil on canvas, 40 x 27.9 cm. Museum of Modern Art, New York. |
| **Chicago footnotes** | 1. Frida Kahlo, *Self-Portrait with Cropped Hair*, 1940, oil on canvas, 40 x 27.9 cm, Museum of Modern Art, New York. 2. Kahlo, *Self-Portrait with Cropped Hair*. |
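Chicago's distinction between the full first note and the shortened note used for repeat citations can be sketched as follows. The helper is hypothetical, and italics for the title are again omitted in the plain strings.

```python
# Hypothetical helper contrasting Chicago's full footnote with the
# shortened note used on subsequent citations of the same work.
# Not a real citation library; the title would be italicized in print.
def chicago_notes(first, last, title, details):
    full = f"{first} {last}, {title}, {details}."   # first citation
    short = f"{last}, {title}."                     # later citations
    return full, short

full, short = chicago_notes(
    "Frida", "Kahlo", "Self-Portrait with Cropped Hair",
    "1940, oil on canvas, 40 x 27.9 cm, Museum of Modern Art, New York",
)
```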
The main elements included in image citations across APA , MLA , and Chicago style are the name of the image’s creator, the image title, the year (or more precise date) of publication, and details of the container in which the image was found (e.g. a museum, book , website ).
In APA and Chicago style, it’s standard to also include a description of the image’s format (e.g. “Photograph” or “Oil on canvas”). This sort of information may be included in MLA too, but is not mandatory.
Untitled sources (e.g. some images ) are usually cited using a short descriptive text in place of the title. In APA Style , this description appears in brackets: [Chair of stained oak]. In MLA and Chicago styles, no brackets are used: Chair of stained oak.
For social media posts, which are usually untitled, quote the initial words of the post in place of the title: the first 160 characters in Chicago , or the first 20 words in APA . E.g. Biden, J. [@JoeBiden]. “The American Rescue Plan means a $7,000 check for a single mom of four. It means more support to safely.”
MLA recommends quoting the full post for something short like a tweet, and just describing the post if it’s longer.
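The two truncation rules quoted above (the first 160 characters in Chicago, the first 20 words in APA) can be expressed directly. The sample post below is invented for illustration, not a real quotation.

```python
# Sketch of the title-truncation rules for untitled social media
# posts: Chicago keeps the first 160 characters, APA the first
# 20 words. The sample post is hypothetical.
def chicago_title(post: str) -> str:
    return post[:160]

def apa_title(post: str) -> str:
    return " ".join(post.split()[:20])

post = ("An untitled post used here purely as a hypothetical example, "
        "written long enough that both truncation rules actually have "
        "an effect when applied to it in practice.")
```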
In APA , MLA , and Chicago style citations for sources that don’t list a specific author (e.g. many websites ), you can usually list the organization responsible for the source as the author.
If the organization is the same as the website or publisher, you shouldn’t repeat it twice in your reference:
If there’s no appropriate organization to list as author, you will usually have to begin the citation and reference entry with the title of the source instead.
Check if your university or course guidelines specify which citation style to use. If the choice is left up to you, consider which style is most commonly used in your field.
Other more specialized styles exist for certain fields, such as Bluebook and OSCOLA for law.
The most important thing is to choose one style and use it consistently throughout your text.
Caulfield, J. (2022, June 28). How to Cite an Image | Photographs, Figures, Diagrams. Scribbr. Retrieved June 7, 2024, from https://www.scribbr.com/citing-sources/cite-an-image/
Unprecedented photorealism, deep level of language understanding.
Google Research, Brain Team
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
More from the Imagen family:
Visualization of Imagen. Imagen uses a large frozen T5-XXL encoder to encode the input text into embeddings. A conditional diffusion model maps the text embedding into a 64×64 image. Imagen further utilizes text-conditional super-resolution diffusion models to upsample the image 64×64→256×256 and 256×256→1024×1024.
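The three-stage pipeline described above can be sketched with placeholder functions. The stubs below are illustrative stand-ins, not the actual Imagen models: they only show how the frozen text encoder's output conditions a base 64×64 diffusion model and two super-resolution stages.

```python
import numpy as np

# Illustrative stand-ins for Imagen's stages (NOT the real models):
# a frozen text encoder, a base 64x64 text-to-image diffusion model,
# and two text-conditional super-resolution stages.
def encode_text(prompt):
    # Stand-in for the frozen T5-XXL encoder (embedding dim 4096).
    return np.zeros((len(prompt.split()), 4096))

def base_diffusion(text_emb):
    # Stand-in: text embedding -> 64x64 RGB image.
    return np.zeros((64, 64, 3))

def super_res(image, text_emb, size):
    # Stand-in: upsample to size x size, conditioned on the text.
    return np.zeros((size, size, 3))

emb = encode_text("A brain riding a rocketship heading towards the moon")
img64 = base_diffusion(emb)               # 64x64
img256 = super_res(img64, emb, 256)       # 64x64 -> 256x256
img1024 = super_res(img256, emb, 1024)    # 256x256 -> 1024x1024
```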
Deep textual understanding → photorealistic generation

Imagen research highlights
| Model | COCO FID ↓ |
|---|---|
| **Trained on COCO** | |
| AttnGAN (Xu et al., 2017) | 35.49 |
| DM-GAN (Zhu et al., 2019) | 32.64 |
| DF-GAN (Tao et al., 2020) | 21.42 |
| DM-GAN + CL (Ye et al., 2021) | 20.79 |
| XMC-GAN (Zhang et al., 2021) | 9.33 |
| LAFITE (Zhou et al., 2021) | 8.12 |
| Make-A-Scene (Gafni et al., 2022) | 7.55 |
| **Not trained on COCO** | |
| DALL-E (Ramesh et al., 2021) | 17.89 |
| GLIDE (Nichol et al., 2021) | 12.24 |
| DALL-E 2 (Ramesh et al., 2022) | 10.39 |
| Imagen (Our Work) | 7.27 |
#1 in COCO FID · #1 in DrawBench
[Interactive demo: selectable prompt fragments ("wearing a cowboy hat and red shirt" or "wearing sunglasses and black leather jacket"; "playing a guitar", "riding a bike", or "skateboarding"; "in a garden.", "on a beach.", or "on top of a mountain.") combine into a full text-to-image prompt.]
Diffusion models have seen wide success in image generation [ 1 , 2 , 3 , 4 ]. Autoregressive models [ 5 ], GANs [ 6 , 7 ], and VQ-VAE Transformer-based methods [ 8 , 9 ] have all made remarkable progress in text-to-image research. More recently, diffusion models have been explored for text-to-image generation [ 10 , 11 ], including the concurrent work of DALL-E 2 [ 12 ]. DALL-E 2 uses a diffusion prior on CLIP latents and cascaded diffusion models to generate high-resolution 1024×1024 images. We believe Imagen is much simpler: it does not need to learn a latent prior, yet achieves better results in both MS-COCO FID and side-by-side human evaluation on DrawBench. GLIDE [ 10 ] also uses cascaded diffusion models for text-to-image generation, but Imagen uses larger pretrained frozen language models, which we found to be instrumental to both image fidelity and image-text alignment. XMC-GAN [ 7 ] also uses BERT as a text encoder, but we scale to much larger text encoders and demonstrate the effectiveness thereof. The use of cascaded diffusion models is popular throughout the literature [ 13 , 14 ] and has been used with success to generate high-resolution images [ 2 , 3 ]. Finally, Imagen is part of a series of text-to-image work at Google Research, including its sibling model Parti .
There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.
Finally, while there has been extensive work auditing image-to-text and image labeling models for forms of social bias, there has been comparatively less work on social bias evaluation methods for text-to-image models. A conceptual vocabulary around potential harms of text-to-image models and established metrics of evaluation are an essential component of establishing responsible model release practices. While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time. Imagen may drop modes of the data distribution, which can further compound the social consequences of dataset bias. Imagen exhibits serious limitations when generating images depicting people. Our human evaluations found Imagen obtains significantly higher preference rates when evaluated on images that do not portray people, indicating a degradation in image fidelity when people are depicted. Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.
Chitwan Saharia * , William Chan * , Saurabh Saxena † , Lala Li † , Jay Whang † , Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho † , David Fleet † , Mohammad Norouzi *
* Equal contribution. † Core contribution.
We give thanks to Ben Poole for reviewing our manuscript, early discussions, and providing many helpful comments and suggestions throughout the project. Special thanks to Kathy Meier-Hellstern, Austin Tarango, and Sarah Laszlo for helping us incorporate important responsible AI practices around this project. We appreciate valuable feedback and support from Elizabeth Adkison, Zoubin Ghahramani, Jeff Dean, Yonghui Wu, and Eli Collins. We are grateful to Tom Small for designing the Imagen watermark. We thank Jason Baldridge, Han Zhang, and Kevin Murphy for initial discussions and feedback. We acknowledge hard work and support from Fred Alcober, Hibaq Ali, Marian Croak, Aaron Donsbach, Tulsee Doshi, Toju Duke, Douglas Eck, Jason Freidenfelds, Brian Gabriel, Molly FitzMorris, David Ha, Philip Parham, Laura Pearce, Evan Rapoport, Lauren Skelly, Johnny Soraker, Negar Rostamzadeh, Vijay Vasudevan, Tris Warkentin, Jeremy Weinstein, and Hugh Williams for giving us advice along the project and assisting us with the publication process. We thank Victor Gomes and Erica Moreira for their consistent and critical help with TPU resource allocation. We also give thanks to Shekoofeh Azizi, Harris Chan, Chris A. Lee, and Nick Ma for volunteering a considerable amount of their time for testing out DrawBench. We thank Aditya Ramesh, Prafulla Dhariwal, and Alex Nichol for allowing us to use DALL-E 2 samples and providing us with GLIDE samples. We are thankful to Matthew Johnson and Roy Frostig for starting the JAX project and to the whole JAX team for building such a fantastic system for high-performance machine learning research. Special thanks to Durk Kingma, Jascha Sohl-Dickstein, Lucas Theis and the Toronto Brain team for helpful discussions and spending time Imagening!
If 2023 was the year the world discovered generative AI (gen AI) , 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year , with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.
This article is a collaborative effort by Alex Singla , Alexander Sukharevsky , Lareina Yee , and Michael Chui , with Bryce Hall , representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.
Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.
Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; this year, however, more than two-thirds of respondents in nearly every region say their organizations are using AI. (Organizations based in Central and South America are the exception, with 58 percent of respondents there reporting AI adoption.) Looking by industry, the biggest increase in adoption can be found in professional services, which here includes organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training.
Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).
Most respondents now report that their organizations—and they as individuals—are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development—two functions in which previous research determined that gen AI adoption could generate the most value (see “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023)—as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.
Gen AI also is weaving its way into respondents’ personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen AI tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.
The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.
Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations—in line with what we found last year —as well as meaningful revenue increases from AI use in marketing and sales.
As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.
Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).
Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks and are not increasing efforts to mitigate them.
In fact, inaccuracy— which can affect use cases across the gen AI value chain , ranging from customer journeys and summarization to coding and creative content—is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.
Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.
Our previous research has found that there are several elements of governance that can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place (see “Implementing generative AI with speed and safety,” McKinsey Quarterly, March 13, 2024). For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.
The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions: takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch (see “Technology’s generational moment with generative AI: A CIO and CTO guide,” McKinsey, July 11, 2023). Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs—though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents’ business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.
Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.
Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.
To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.
What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.
Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions—that is, to “ shift left .” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.
In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.
The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
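The GDP-based weighting described above can be sketched in a few lines. The numbers and nation labels below are invented for illustration; they are not the survey's actual weights, and real weighting would divide each nation's GDP share across its respondents.

```python
# Minimal sketch of weighting survey responses by each respondent's
# nation's share of global GDP. All figures here are hypothetical.
def weighted_share(responses, gdp_share):
    # responses: list of (nation, used_ai) pairs, one per respondent
    total = sum(gdp_share[nation] for nation, _ in responses)
    hits = sum(gdp_share[nation] for nation, used in responses if used)
    return hits / total

gdp_share = {"A": 0.5, "B": 0.3, "C": 0.2}            # hypothetical shares
responses = [("A", True), ("B", False), ("C", True)]  # hypothetical answers
```

An unweighted tally of the same answers would give 2/3, while the weighted share reflects the larger economies' responses more strongly.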
Alex Singla and Alexander Sukharevsky are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey’s Chicago and London offices, respectively; Lareina Yee is a senior partner in the Bay Area office, where Michael Chui , a McKinsey Global Institute partner, is a partner; and Bryce Hall is an associate partner in the Washington, DC, office.
They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.
This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.
More than 100 reference examples and their corresponding in-text citations are presented in the seventh edition Publication Manual . Examples of the most common works that writers cite are provided on this page; additional examples are available in the Publication Manual .
To find the reference example you need, first select a category (e.g., periodicals), then choose the appropriate type of work (e.g., journal article) and follow the relevant example.
When selecting a category, use the webpages and websites category only when a work does not fit better within another category. For example, a report from a government website would use the reports category, whereas a page on a government website that is not a report or other work would use the webpages and websites category.
Also note that print and electronic references are largely the same. For example, to cite both print books and ebooks, use the books and reference works category and then choose the appropriate type of work (i.e., book) and follow the relevant example (e.g., whole authored book).
Examples on these pages illustrate the details of reference formats. We make every attempt to show examples that are in keeping with APA Style’s guiding principles of inclusivity and bias-free language. These examples are presented out of context only to demonstrate formatting issues (e.g., which elements to italicize, where punctuation is needed, placement of parentheses). References, including these examples, are not inherently endorsements for the ideas or content of the works themselves. An author may cite a work to support a statement or an idea, to critique that work, or for many other reasons. For more examples, see our sample papers .
Reference examples are covered in Chapter 10 of both seventh edition APA Style manuals: the Publication Manual and the Concise Guide.
Textual works are covered in Sections 10.1–10.8 of the Publication Manual . The most common categories and examples are presented here. For the reviews of other works category, see Section 10.7.
Data sets are covered in Section 10.9 of the Publication Manual . For the software and tests categories, see Sections 10.10 and 10.11.
Audiovisual media are covered in Sections 10.12–10.14 of the Publication Manual . The most common examples are presented together here. In the manual, these examples and more are separated into categories for audiovisual, audio, and visual media.
Online media are covered in Sections 10.15 and 10.16 of the Publication Manual . Please note that blog posts are part of the periodicals category.
Creating standards, guidelines, processes, and workflows for content marketing is not the sexiest job.
But setting standards is the only way to know if you can improve anything (with AI or anything else).
Here’s the good news: All that non-sexy work frees time and resources (human and tech) you can apply to bring your brand’s strategies and plans to life.
But in many organizations, content still isn’t treated as a coordinated business function. That’s one of the big takeaways from our latest research, B2B Content Marketing Benchmarks, Budgets, and Trends: Outlook for 2024, conducted with MarketingProfs and sponsored by Brightspot .
A few symptoms of that reality showed up in the research:
I’ll walk you through the findings and share some advice from CMI Chief Strategy Advisor Robert Rose and other industry voices to shed light on what it all means for B2B marketers. There’s a lot to work through, so feel free to use the table of contents to navigate to the sections that most interest you.
Note: These numbers come from a July 2023 survey of marketers around the globe. We received 1,080 responses. This article focuses on answers from the 894 B2B respondents.
AI: 3 out of 4 B2B marketers use generative tools.
Of course, we asked respondents how they use generative AI in content and marketing. As it turns out, most experiment with it: 72% of respondents say they use generative AI tools.
But a lack of standards can get in the way.
“Generative AI is the new, disruptive capability entering the realm of content marketing in 2024,” Robert says. “It’s just another way to make our content process more efficient and effective. But it can’t do either until you establish a standard to define its value. Until then, it’s just yet another technology that may or may not make you better at what you do.”
So, how do content marketers use the tools today? About half (51%) use generative AI to brainstorm new topics. Many use the tools to research headlines and keywords (45%) and write drafts (45%). Fewer say they use AI to outline assignments (23%), proofread (20%), generate graphics (11%), and create audio (5%) and video (5%).
Some marketers say they use AI for tasks such as generating email headlines and copy, extracting social media posts from long-form content, and condensing long-form copy into short form.
Only 28% say they don’t use generative AI tools.
Among those who use generative AI tools, 91% use free tools (e.g., ChatGPT). Thirty-eight percent use tools embedded in their content creation/management systems, and 27% pay for tools such as Writer and Jasper.
Asked if their organizations have guidelines for using generative AI tools, 31% say yes, 61% say no, and 8% are unsure.
We asked Ann Handley, chief content officer of MarketingProfs, for her perspective. “It feels crazy … 61% have no guidelines? But is it actually shocking and crazy? No. It is not. Most of us are just getting going with generative AI. That means there is a clear and rich opportunity to lead from where you sit,” she says.
“Ignite the conversation internally. Press upon your colleagues and your leadership that this isn’t a technology opportunity. It’s also a people and operational challenge in need of thoughtful and intelligent response. You can be the AI leader your organization needs,” Ann says.
While a lack of guidelines may deter some B2B marketers from using generative AI tools, other reasons include accuracy concerns (36%), lack of training (27%), and lack of understanding (27%). Twenty-two percent cite copyright concerns, and 19% have corporate mandates not to use them.
We also wondered how AI’s integration in search engines shifts content marketers’ SEO strategy. Here’s what we found:
Over one-fourth (28%) say they’re not doing any of those things, while 26% say they’re unsure.
AI may heighten the need to rethink your SEO strategy. But it’s not the only reason to do so, as Orbit Media Studios co-founder and chief marketing officer Andy Crestodina points out: “Featured snippets and people-also-ask boxes have chipped away at click-through rates for years,” he says. “AI will make that even worse … but only for information-intent queries. Searchers who want quick answers really don’t want to visit websites.
“Focus your SEO efforts on those big questions with big answers – and on the commercial intent queries,” Andy continues. “Those phrases still have ‘visit website intent’ … and will for years to come.”
Many B2B marketers surveyed predict AI will dominate the discussions of content marketing trends in 2024. As one respondent says: “AI will continue to be the shiny thing through 2024 until marketers realize the dedication required to develop prompts, go through the iterative process, and fact-check output . AI can help you sharpen your skills, but it isn’t a replacement solution for B2B marketing.”
Generative AI isn’t the only issue affecting content marketing these days. We also asked marketers about how they organize their teams.
Among larger companies (100-plus employees), half say content requests go through a centralized content team. Others say each department/brand produces its own content (23%) or that departments/brands/products share responsibility (21%).
Seventy percent say their organizations integrate content strategy into the overall marketing sales/communication/strategy, and 2% say it’s integrated into another strategy. Eleven percent say content is a stand-alone strategy for content used for marketing, and 6% say it’s a stand-alone strategy for all content produced by the company. Only 9% say they don’t have a content strategy. The remaining 2% say other or are unsure.
Twenty-eight percent of B2B marketers say team members resigned in the last year, 20% say team members were laid off, and about half (49%) say they had new team members acclimating to their ways of working.
While team members come and go, leadership’s understanding of the content team’s work appears to hold steady. Over half (54%) strongly agree, and 30% somewhat agree, that the leader to whom their content team reports understands the work they do. Only 11% disagree. The remaining 5% neither agree nor disagree.
And remote work seems well-tolerated: Only 20% say collaboration was challenging due to remote or hybrid work.
We asked B2B marketers about both content creation and non-creation challenges.
Most marketers (57%) cite creating the right content for their audience as a challenge. This is a change from previous years, when “creating enough content” was the most frequently cited challenge.
One respondent points out why understanding what audiences want is more important than ever: “As the internet gets noisier and AI makes it incredibly easy to create listicles and content that copy each other, there will be a need for companies to stand out. At the same time, as … millennials and Gen Z [grow in the workforce], we’ll begin to see B2B become more entertaining and less boring. We were never only competing with other B2B content. We’ve always been competing for attention.”
Other content creation challenges include creating it consistently (54%) and differentiating it (54%). Close to half (45%) cite optimizing for search and creating quality content (44%). About a third (34%) cite creating enough content to keep up with internal demand, 30% say creating enough content to keep up with external demand, and 30% say creating content that requires technical skills.
The most frequently cited non-creation challenge, by far, is a lack of resources (58%), followed by aligning content with the buyer’s journey (48%) and aligning content efforts across sales and marketing (45%). Forty-one percent say they have issues with workflow/content approval, and 39% say they have difficulty accessing subject matter experts. Thirty-four percent say it is difficult to keep up with new technologies/tools (e.g., AI). Only 25% cite a lack of strategy as a challenge, 19% say keeping up with privacy rules, and 15% point to tech integration issues.
We asked content marketers about the types of content they produce, their distribution channels, and paid content promotion. We also asked which formats and channels produce the best results.
As in the previous year, the three most popular content types/formats are short articles/posts (94%, up from 89% last year), videos (84%, up from 75% last year), and case studies/customer stories (78%, up from 67% last year). Almost three-quarters (71%) use long articles, 60% produce visual content, and 59% craft thought leadership e-books or white papers. Less than half of marketers use brochures (49%), product or technical data sheets (45%), research reports (36%), interactive content (33%), audio (29%), and livestreaming (25%).
Which formats are most effective? Fifty-three percent say case studies/customer stories and videos deliver some of their best results. Almost as many (51%) name thought leadership e-books or white papers, 47% short articles, and 43% research reports.
Regarding the channels used to distribute content, 90% use social media platforms (organic), followed by blogs (79%), email newsletters (73%), email (66%), in-person events (56%), and webinars (56%).
Channels used by the minority of those surveyed include:
Which channels perform the best? Most marketers in the survey point to in-person events (56%) and webinars (51%) as producing better results. Email (44%), organic social media platforms (44%), blogs (40%) and email newsletters (39%) round out the list.
When marketers pay to promote content, which channels do they invest in? Eighty-six percent use paid content distribution channels.
Of those, 78% use social media advertising/promoted posts, 65% use sponsorships, 64% use search engine marketing (SEM)/pay-per-click, and 59% use digital display advertising. Far fewer invest in native advertising (35%), partner emails (29%), and print display ads (21%).
SEM/pay-per-click produces good results, according to 62% of those surveyed. Half of those who use paid channels say social media advertising/promoted posts produce good results, followed by sponsorships (49%), partner emails (36%), and digital display advertising (34%).
When asked which organic social media platforms deliver the best value for their organization, B2B marketers picked LinkedIn by far (84%). Only 29% cite Facebook as a top performer, 22% say YouTube, and 21% say Instagram. Twitter and TikTok see 8% and 3%, respectively.
So it makes sense that 72% say they increased their use of LinkedIn over the last 12 months, while only 32% boosted their YouTube presence, 31% increased Instagram use, 22% grew their Facebook presence, and 10% increased X and TikTok use.
Which platforms are marketers giving up? Did you guess X? You’re right – 32% of marketers say they decreased their X use last year. Twenty percent decreased their use of Facebook, with 10% decreasing on Instagram, 9% pulling back on YouTube, and only 2% decreasing their use of LinkedIn.
Interestingly, we saw a significant rise in B2B marketers who use TikTok: 19% say they use the platform – more than double from last year.
To explore how teams manage content, we asked marketers about their technology use and investments and the challenges they face when scaling their content.
When asked which technologies they use to manage content, marketers point to:
But having technology doesn’t mean it’s the right technology (or that its capabilities are used). So, we asked if they felt their organization had the right technology to manage content across the organization.
Only 31% say yes. Thirty percent say they have the technology but aren’t using its potential, and 29% say they haven’t acquired the right technology. Ten percent are unsure.
Even so, investment in content management technology seems likely in 2024: 45% say their organization is likely to invest in new technology, whereas 32% say their organization is unlikely to do so. Twenty-three percent say their organization is neither likely nor unlikely to invest.
We introduced a new question this year to understand what challenges B2B marketers face while scaling content production.
Almost half (48%) say it’s “not enough content repurposing.” Lack of communication across organizational silos is a problem for 40%. Thirty-one percent say they have no structured content production process, and 29% say they lack an editorial calendar with clear deadlines. Ten percent say scaling is not a current focus.
Among the other hurdles – difficulty locating digital content assets (16%), technology issues (15%), translation/localization issues (12%), and no style guide (11%).
For those struggling with content repurposing, content standardization is critical. “Content reuse is the only way to deliver content at scale. There’s just no other way,” says Regina Lynn Preciado, senior director of content strategy solutions at Content Rules Inc.
“Even if you’re not trying to provide the most personalized experience ever or dominate the metaverse with your omnichannel presence, you absolutely must reuse content if you are going to deliver content effectively,” she says.
“How to achieve content reuse? You’ve probably heard that you need to move to modular, structured content. However, just chunking your content into smaller components doesn’t go far enough. For content to flow together seamlessly wherever you reuse it, you’ve got to standardize your content. That’s the personalization paradox right there. To personalize, you must standardize.
“Once you have your content standards in place and everyone is creating content in alignment with those standards, there is no limit to what you can do with the content,” Regina explains.
Why do content marketers – who are skilled communicators – struggle with cross-silo communication? Standards and alignment come into play.
“I think in the rush to all the things, we run out of time to address scalable processes that will fix those painful silos, including taking time to align on goals, roles and responsibilities, workflows, and measurement,” says Ali Orlando Wert, senior director of content strategy at Appfire. “It takes time, but the payoffs are worth it. You have to learn how to crawl before you can walk – and walk before you can run.”
Almost half (46%) of B2B marketers agree their organization measures content performance effectively. Thirty-six percent disagree, and 15% neither agree nor disagree. Only 3% say they don’t measure content performance.
The five most frequently used metrics to assess content performance are conversions (73%), email engagement (71%), website traffic (71%), website engagement (69%), and social media analytics (65%).
About half (52%) mention the quality of leads, 45% say they rely on search rankings, 41% use quantity of leads, 32% track email subscribers, and 29% track the cost to acquire a lead, subscriber, or customer.
The most common challenge B2B marketers have while measuring content performance is integrating/correlating data across multiple platforms (84%), followed by extracting insights from data (77%), tying performance data to goals (76%), organizational goal setting (70%), and lack of training (66%).
Regarding goals, 84% of B2B marketers say content marketing helped create brand awareness in the last 12 months. Seventy-six percent say it helped generate demand/leads; 63% say it helped nurture subscribers/audiences/leads, and 58% say it helped generate sales/revenue (up from 42% the previous year).
To separate top performers from the pack, we asked the B2B marketers to assess the success of their content marketing approach.
Twenty-eight percent rate the success of their organization’s content marketing approach as extremely or very successful. Another 57% report moderate success and 15% feel minimally or not at all successful.
The most popular factor for successful marketers is knowing their audience (79%).
This makes sense, considering that “creating the right content for our audience” is the top challenge. The logic? Top-performing content marketers prioritize knowing their audiences to create the right content for those audiences.
Top performers also set goals that align with their organization’s objectives (68%), effectively measure and demonstrate content performance (61%), and show thought leadership (60%). Collaboration with other teams (55%) and a documented strategy (53%) also help top performers reach high levels of content marketing success.
We looked at several other dimensions to identify how top performers differ from their peers. Of note, top performers:
Little difference exists between top performers and their less successful peers when it comes to the adoption of generative AI tools and related guidelines. It will be interesting to see if and how that changes next year.
To explore budget plans for 2024, we asked respondents if they have knowledge of their organization’s budget/budgeting process for content marketing. Then, we asked follow-up questions to the 55% who say they do have budget knowledge.
Here’s what they say about the total marketing budget (excluding salaries):
Next, we asked about their 2024 content marketing budget. Forty-five percent think their content marketing budget will increase compared with 2023, whereas 42% think it will stay the same. Only 6% think it will decrease.
We also asked where respondents plan to increase their spending.
Sixty-nine percent of B2B marketers say they would increase their investment in video, followed by thought leadership content (53%), in-person events (47%), paid advertising (43%), online community building (33%), webinars (33%), audio content (25%), digital events (21%), and hybrid events (11%).
The increased investment in video isn’t surprising. The focus on thought leadership content might surprise, but it shouldn’t, says Stephanie Losee, director of executive and ABM content at Autodesk.
“As measurement becomes more sophisticated, companies are finding they’re better able to quantify the return from upper-funnel activities like thought leadership content,” she says. “At the same time, companies recognize the impact of shifting their status from vendor to true partner with their customers’ businesses.
“Autodesk recently launched its first global, longitudinal State of Design & Make report (registration required), and we’re finding that its insights are of such value to our customers that it’s enabling conversations we’ve never been able to have before. These conversations are worth gold to both sides, and I would imagine other B2B companies are finding the same thing,” Stephanie says.
We asked an open-ended question about marketers’ top three content-related priorities for 2024. The responses indicate marketers place an emphasis on thought leadership and becoming a trusted resource.
Other frequently mentioned priorities include:
In another open-ended question, we asked B2B marketers, “What content marketing trends do you predict for 2024?” You probably guessed the most popular trend: AI.
Here are some of the marketers’ comments about how AI will affect content marketing next year:
Other trends include:
Among the related comments:
What does this year’s research suggest B2B content marketers do to move forward?
I asked CMI’s Robert Rose for some insights. He says the steps are clear: Develop standards, guidelines, and playbooks for how to operate – just like every other function in business does.
“Imagine if everyone in your organization had a different idea of how to define ‘revenue’ or ‘profit margin,’” Robert says. “Imagine if each salesperson had their own version of your company’s customer agreements and tried to figure out how to write them for every new deal. The legal team would be apoplectic. You’d start to hear from sales how they were frustrated that they couldn’t figure out how to make the ‘right agreement,’ or how to create agreements ‘consistently,’ or that there was a complete ‘lack of resources’ for creating agreements.”
Just remember: Standards can change along with your team, audiences, and business priorities. “Setting standards doesn’t mean casting policies and templates in stone,” Robert says. “Standards only exist so that we can always question the standard and make sure that there’s improvement available to use in setting new standards.”
He offers these five steps to take to solidify your content marketing strategy and execution:
For their 14th annual content marketing survey, CMI and MarketingProfs surveyed 1,080 recipients around the globe (representing a range of industries, functional areas, and company sizes) in July 2023. The online survey was emailed to a sample of marketers using lists from CMI and MarketingProfs.
This article presents the findings from the 894 respondents, mostly from North America, who indicated their organization is primarily B2B and that they are either content marketers or work in marketing, communications, or other roles involving content.
Thanks to the survey participants, who made this research possible, and to everyone who helps disseminate these findings throughout the content marketing industry.
Cover image by Joseph Kalinowski/Content Marketing Institute
Content Marketing Institute (CMI) exists to do one thing: advance the practice of content marketing through online education and in-person and digital events. We create and curate content experiences that teach marketers and creators from enterprise brands, small businesses, and agencies how to attract and retain customers through compelling, multichannel storytelling. Global brands turn to CMI for strategic consultation, training, and research. Organizations from around the world send teams to Content Marketing World, the largest content marketing-focused event, the Marketing Analytics & Data Science (MADS) conference, and CMI virtual events, including ContentTECH Summit. Our community of 215,000+ content marketers shares camaraderie and conversation. CMI is organized by Informa Connect. To learn more, visit www.contentmarketinginstitute.com .
MarketingProfs is your quickest path to B2B marketing mastery.
More than 600,000 marketing professionals worldwide rely on MarketingProfs for B2B marketing training and education backed by data science, psychology, and real-world experience. Access free B2B marketing publications, virtual conferences, podcasts, daily newsletters (and more), and check out the MarketingProfs B2B Forum, the flagship in-person event for B2B marketing training and education, at MarketingProfs.com.
Brightspot, the content management system to boost your business.
Why Brightspot? Align your technology approach and content strategy with Brightspot, the leading Content Management System for delivering exceptional digital experiences. Brightspot helps global organizations meet the business needs of today and scale to capitalize on the opportunities of tomorrow. Our Enterprise CMS and world-class team solves your unique business challenges at scale. Fast, flexible, and fully customizable, Brightspot perfectly harmonizes your technology approach with your content strategy and grows with you as your business evolves. Our customer-obsessed teams walk with you every step of the way with an unwavering commitment to your long-term success. To learn more, visit www.brightspot.com .
Scientific figures and images are an integral part of academic publishing. Several journal websites present thumbnails of figures alongside the abstract for all their publications. Consequently, figures and images start making an impression from the moment readers begin their preliminary search. Several studies and scientific discourses have confirmed that scientific figures and images play a critical role in improving manuscript quality. Rather than working through a tedious, verbose account, readers often prefer looking at figures and images.
High-quality scientific figures and pictures convey data and information in a cohesive and reader-friendly manner. They help present complex relationships, patterns, and trends clearly and concisely. Therefore, it is paramount that authors publish figures that readers can interpret clearly and quickly. Images that are of poor quality, low resolution, or inconsistent in style can diminish the reader’s experience.
Rule #1: Ascertain the message you wish to convey.
If you do not have a clear understanding of the purpose of a figure, it is highly unlikely that your audience will understand it either. Therefore, before you settle on a figure or image type, think carefully about the underlying message. Identify the core idea you wish to present and how you can best express it. This information can then guide you to an appropriate format, design, image, or chart type.
You may have to display a scientific figure on different media; the two most common forms are print articles and electronic media. Image resolution and size are the two attributes to consider when assessing an image’s suitability for online and print readability. Resolution is the number of pixels in a defined area, usually measured per inch. Authors should carefully check the journal’s guidelines for image resolution prior to submission. The resolution of an image viewed on a monitor is expressed in pixels per inch (ppi), whereas dots per inch (dpi) describes the resolution of a printed image and refers to dots of ink in printing.
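The arithmetic behind these resolution checks is simple: printed size equals pixel dimensions divided by output resolution. A minimal sketch (the 300 dpi default below is a common journal requirement, not a universal rule):

```python
def print_size_inches(width_px, height_px, dpi):
    """Physical print dimensions of an image at a given output resolution."""
    return width_px / dpi, height_px / dpi

def meets_resolution(width_px, target_width_in, min_dpi=300):
    """Check whether an image has enough pixels to print at the target width."""
    return width_px / target_width_in >= min_dpi

# A 3000 x 2400 px image printed at 300 dpi measures 10 x 8 inches.
print(print_size_inches(3000, 2400, 300))          # (10.0, 8.0)
# The same image at a 12-inch width yields only 250 dpi.
print(meets_resolution(3000, target_width_in=12))  # False
```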
Another important element that defines image quality is color. Computer monitors, digital cameras, and video screens usually use the RGB (red, green, and blue) color mode in various combinations to create all the colors we see in an image. Printed images, on the other hand, are produced using the CMYK (cyan, magenta, yellow, and black) color mode. As a best practice, journals suggest converting digital images to CMYK mode to get a truer preview of how the image will appear in print publications.
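For intuition about why on-screen and printed colors differ, the simplest profile-free RGB-to-CMYK conversion looks like the sketch below. Real prepress conversion uses ICC color profiles (as image editors do), so treat this only as an illustration, not a production method:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0.0-1.0) conversion, ignoring ICC profiles."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black: all ink from the K channel
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)        # black component
    c = (1 - r_ - k) / (1 - k)     # remaining color from cyan,
    m = (1 - g_ - k) / (1 - k)     # magenta,
    y = (1 - b_ - k) / (1 - k)     # and yellow inks
    return c, m, y, k

# Pure screen red maps to full magenta + yellow ink, no cyan or black.
print(rgb_to_cmyk(255, 0, 0))  # (0.0, 1.0, 1.0, 0.0)
```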
Plan your scientific figures from the start rather than treating them as an afterthought. Master the equipment, instrument, and/or software you intend to use for capturing high-quality images, and take formal training if required. While acquiring images, make a note of capture adjustments such as brightness or contrast; this will ensure consistency in your image acquisition process. Furthermore, ensure that you save these images or figures at high resolution and in the correct format.
Journals recommend various file formats for figures and images. The most commonly recommended format for saving scientific pictures is TIFF (Tagged Image File Format), as it is lossless (the number and color of pixels are preserved despite multiple saves or alterations) and does not degrade. JPEG (Joint Photographic Experts Group) can be used for photographic images such as autoradiographs or micrographs, as its compression allows submission of much higher-resolution images for a given file size. However, because JPEG compression is lossy, minimize the number of times an altered version is saved in order to prevent degradation of quality. PNG (Portable Network Graphics) is also lossless and often yields smaller files for line art, charts, and screenshots.
Whichever format you choose for your final scientific figures, always keep the original files as a backup. It is also advisable to save files in the native format of the image acquisition software, since these files may contain metadata about instrument settings. As a best practice, keep them handy in case you receive questions from reviewers or editors during peer review.
Prior to submission, authors generally use image-editing tools and software to adjust their images into publication-quality material. A word of caution here: the final scientific figures must be an accurate representation of the original data and conform to ethical standards. Inappropriate manipulation of images can lead to manuscript rejection and mistrust of the research’s credibility. For instance, if you are presenting a group of images demonstrating cellular fluorescence (control vs. several different treatments) in a single figure, you must capture them all using the same instrument and equipment settings. In addition, any adjustments must not eliminate or obscure critical information. Furthermore, if you make gamma-value adjustments or use pseudo-colors to highlight certain aspects, disclose this in the manuscript.
Resizing is an essential step in creating an image that fits journal recommendations. Making an image smaller (i.e., decreasing the number of pixels) is acceptable because the software can combine multiple existing pixels into a single pixel. However, when there is an attempt to increase the number of pixels, the software must invent additional pixels by interpolation, which may result in misinterpretation of the data.
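The safe direction, downsampling, only combines data that already exist. A minimal sketch using a 2x box filter, which averages each 2x2 block of a grayscale image (real editors offer more sophisticated filters, but the principle is the same):

```python
def downsample_2x(pixels):
    """Halve a grayscale image (list of equal-length rows) by averaging 2x2 blocks."""
    out = []
    for i in range(0, len(pixels) - 1, 2):
        row = []
        for j in range(0, len(pixels[0]) - 1, 2):
            block_sum = (pixels[i][j] + pixels[i][j + 1]
                         + pixels[i + 1][j] + pixels[i + 1][j + 1])
            row.append(block_sum / 4)  # each output pixel summarizes 4 inputs
        out.append(row)
    return out

image = [[10, 30, 50, 70],
         [20, 40, 60, 80],
         [10, 10, 90, 90],
         [10, 10, 90, 90]]
print(downsample_2x(image))  # [[25.0, 65.0], [10.0, 90.0]]
```

Going the other way (enlarging) has no analogous formula over real data: every new pixel must be guessed from its neighbors, which is why upsampling scientific images risks misrepresenting the measurement.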
Let us know how these tips assisted you in creating scientific figures and pictures in the comments section below!