
Research Methods in Library and Information Science

Submitted: 28 October 2016 Reviewed: 23 March 2017 Published: 28 June 2017

DOI: 10.5772/intechopen.68749


From the Edited Volume

Qualitative versus Quantitative Research

Edited by Sonyel Oflazoglu


Library and information science (LIS) is a very broad discipline, which uses a wide range of constantly evolving research strategies and techniques. The aim of this chapter is to provide an updated view of research issues in library and information science. A stratified random sample of 440 articles published in five prominent journals was analyzed and classified to identify (i) research approach, (ii) research methodology, and (iii) method of data analysis. For each variable, a coding scheme was developed, and the articles were coded accordingly. A total of 78% of the articles reported empirical research; the remaining 22% were classified as non‐empirical research papers. The five most popular topics were “information retrieval,” “information behaviour,” “information literacy,” “library services,” and “organization and management.” An overwhelming majority of the empirical research articles employed a quantitative approach. Although the survey emerged as the most frequently used research strategy, there is evidence that the number and variety of research methodologies have increased. There is also evidence that qualitative approaches are gaining increasing importance and have a role to play in LIS, while mixed methods have not yet gained enough recognition in LIS research.

  • library and information science
  • research methods
  • research strategies
  • data analysis techniques
  • research articles

Author Information

Aspasia Togia*

  • Department of Library Science & Information Systems, Technological Educational Institute (TEI) of Thessaloniki, Greece

Afrodite Malliari

  • DataScouting, Thessaloniki, Greece

*Address all correspondence to: [email protected]

1. Introduction

Library and information science (LIS), as its name indicates, is a merging of librarianship and information science that took place in the 1960s [ 1 , 2 ]. LIS is a field of both professional practice and scientific inquiry. As a field of practice, it includes the profession of librarianship as well as a number of other information professions, all of which assume the interplay of the following:

information content,

the people who interact with the content, and

the technology used to facilitate the creation, communication, storage, or transformation of the content [ 3 ].

The disciplinary foundation of LIS, which began in the 1920s, aimed at providing a theoretical foundation for the library profession. LIS has evolved in close relationship with other fields of research, especially computer science, communication studies, and cognitive sciences [ 4 ].

The connection of LIS with professional practice, on the one hand, and other research fields, on the other, has influenced its research orientation and the development of methodological tools and theoretical perspectives [ 5 ]. Research problems are diverse, depending on the research direction, local trends, etc. Most of them relate to professional practice, although there are theoretical research statements as well. LIS research strives to address important information issues, such as those of “ information retrieval, information quality and authenticity, policy for access and preservation, the health and security applications of data mining ” (p. 3) [ 6 ]. The research is multidisciplinary in nature, and it has been heavily influenced by research designs developed in the social, behavioral, and management sciences and to a lesser extent by the theoretical inquiry adopted in the humanities [ 7 ]. Methods used in information retrieval research have been adapted from computer science. The emergence of evidence‐based librarianship in the late 1990s brought a positivist approach to LIS research, since it incorporated many of the research designs and methods used in clinical medicine [ 7 , 8 ]. In addition, LIS has developed its own methodological approaches, a prominent example of which is bibliometrics. Bibliometrics, which can be defined as “ the use of mathematical and statistical methods to study documents and patterns of publication ” (p. 38) [ 9 ], is a native research methodology, which has been extensively used outside the field, especially in science studies [ 10 ].

Library and information science research has often been criticized as being fragmentary, narrowly focused, and oriented to practical problems [ 11 ]. Many authors have noticed limited use of theory in published research and have advocated greater use of theory as a conceptual basis in LIS research [ 4 , 11 – 14 ]. Feehan et al. [ 13 ] claimed that LIS literature has not evolved enough to support a rigid body of its own theoretical basis. Jarvelin and Vakkari [ 15 ] argued that LIS theories are usually vague and conceptually unclear, and that research in LIS has been dominated by a paradigm which “ has made little use of such traditional scientific approaches as foundations and conceptual analysis, or of scientific explanation and theory formulation ” (p. 415). This lack of theoretical contributions may be associated with the fact that LIS emanated from professional practice and is therefore closely linked to practical problems such as the processing and organization of library materials, documentation, and information retrieval [ 15 , 16 ].

In this chapter, after briefly discussing the role of theory in LIS research, we provide an updated view of research issues in the field that will help scholars and students stay informed about topics related to research strategies and methods. To accomplish this, we describe and analyze patterns of LIS research activity as reflected in prominent library journals. The analysis of the articles highlights trends and recurring themes in LIS research regarding the use of multiple methods, the adoption of qualitative approaches, and the employment of advanced techniques for data analysis and interpretation [ 17 ].

2. The role of theory in LIS research

The presence of theory is an indication of research eminence and respectability [ 18 ], as well as a feature of a discipline’s maturity [ 19 , 20 ]. Theory has been defined in many ways. “ Any of the following have been used as the meaning of theory: a law, a hypothesis, group of hypotheses, proposition, supposition, explanation, model, assumption, conjecture, construct, edifice, structure, opinion, speculation, belief, principle, rule, point of view, generalization, scheme, or idea ” (p. 309) [ 21 ]. A theory can be described as “ a set of interrelated concepts, definitions, and propositions that explains or predicts events or situations by specifying relations among variables ” [ 22 ]. According to Babbie [ 23 ], a theory is “ a systematic explanation for the observed facts and laws that related to a particular aspect of life ” (p. 49). It is “ a multiple‐level component of the research process, comprising a range of generalizations that move beyond a descriptive level to a more explanatory level ” [ 24 ] (p. 319). The role of theory in social sciences is, among other things, to explain and predict behavior, be usable in practical applications, and guide research [ 25 ]. According to Smiraglia [ 26 ], theory does not exist in a vacuum but in a system that explains the domains of human actions, the phenomena found in these domains, and the ways in which they are affected. He maintains that theory is developed by systematically observing phenomena, either in the positivist empirical research paradigm or in the qualitative hermeneutic paradigm. Theory is used to formulate hypotheses in quantitative research and to confirm observations in qualitative research.

Glazier and Grover [ 24 ] proposed a model for theory‐building in LIS called “circuits of theory.” The model includes a taxonomy of theory, developed earlier by the authors [ 11 ], and the critical social and psychological factors that influence research. The purpose of the taxonomy was to demonstrate the relationships among the concepts of research, theory, paradigms, and phenomena. Phenomena are described as “ events experienced in the empirical world ” (p. 230) [ 11 ]. Researchers assign symbols (digital or iconic representations, usually words or pictures) to phenomena, and meaning to symbols, and then they conceptualize the relationships among phenomena and formulate hypotheses and research questions. “ In the taxonomy, empirical research begins with the formation of research questions to be answered about the concepts or hypotheses for testing the concepts within a narrow set of predetermined parameters ” (p. 323) [ 24 ]. Various levels of theories, with implications for research in library and information science, are described. The first theory level, called substantive theory, is defined as “ a set of propositions which furnish an explanation for an applied area of inquiry ” (p. 233) [ 11 ]. In fact, it may not be viewed as a theory but rather be considered as a research hypothesis that has been tested or even a research finding [ 16 ]. The next level of theory, called formal theory, is defined as “ a set of propositions which furnish an explanation for a formal or conceptual area of inquiry, that is, a discipline ” (p. 234) [ 11 ]. Substantive and formal theories together are usually considered as “middle range” theory in the social sciences. Their difference lies in the ability to structure generalizations and the potential for explanation and prediction. The final level, grand theory, is “ a set of theories or generalizations that transcend the borders of disciplines to explain relationships among phenomena ” (p. 321) [ 24 ].
According to the authors, most research generates substantive level theory, or, alternatively, researchers borrow theory from the appropriate discipline, apply it to the problem under investigation, and reconstruct the theory at the substantive level. Next in the hierarchy of theoretical categories is the paradigm, which is described as “ a framework of basic assumptions with which perceptions are evaluated and relationships are delineated and applied to a discipline or profession ” (p. 234) [ 11 ]. Finally, the most significant theoretical category is the world view, which is defined as “ an individual’s accepted knowledge, including values and assumptions, which provide a ‘filter’ for perception of all phenomena ” (p. 235) [ 11 ]. All the previous categories contribute to shaping the individual’s worldview. In the revised model, which places more emphasis on the impact of social environment on the research process, research and theory building are surrounded by a system of three basic contextual modules: the self, society, and knowledge, both discovered and undiscovered. The interactions and dialectical relationships of these three modules affect the research process and create a dynamic environment that fosters theory creation and development. The authors argue that their model will help researchers build theories that enable generalizations beyond the conclusions drawn from empirical data [ 24 ].

In an effort to propose a framework for a unified theory of librarianship, McGrath [ 27 ] reviewed research articles in the areas of publishing, acquisitions, classification and knowledge organization, storage, preservation and collection management, library collections, and circulation. In his study, he included articles that employed explanatory and predictive statistical methods to explore relationships between variables within and between the above subfields of LIS. For each paper reviewed, he identified the dependent variable, significant independent variables, and the units of analysis. The review displayed explanatory studies “ in nearly every level, with the possible exception of classification, while studies in circulation and use of the library were clearly dominant. A recapitulation showed that a variable at one level may be a unit of analysis at another, a property of explanatory research crucial to the development of theory, which has been either ignored or unrecognized in LIS literature ” (p. 368) [ 27 ]. The author concluded that “explanatory and predictive relationships do exist and that they can be useful in constructing a comprehensive unified theory of librarianship” (p. 368) [ 27 ].

Recent LIS literature provides several analyses of theory development and use in the field. In a longitudinal analysis of the information needs and uses literature, Julien and Duggan [ 28 ] investigated, among other things, to what extent LIS literature was grounded in theory. Articles “ based on a coherent and explicit framework of assumptions, definitions, and propositions that, taken together, have some explanatory power ” (p. 294) were classified as theoretical articles. Results showed that only 18.3% of the research studies identified in the sample of articles examined were theoretically grounded.

Pettigrew and McKechnie [ 29 ] analyzed 1160 journal articles published between 1993 and 1998 to determine the level of theory use in information science research. In the absence of a singular definition of theory that would cover all the different uses of the term in the sample of articles, they operationalized “theory” according to authors’ use of the term. They found that 34.1% of the articles incorporated theory, with the largest percentage of theories drawn from the social sciences. Information science itself was the second most important source of theories. The authors argued that this significant increase in theory use in comparison to earlier studies could be explained by the research‐oriented journals they selected for examination, the time period sampled, and the broad way in which they defined “theory.” With regard to this last point, that is, their approach of identifying theories only if the author(s) describe them as such in the article, Pettigrew and McKechnie [ 29 ] observed significant differences in how information science researchers perceive theory:

Although it is possible that conceptual differences regarding the nature of theory may be due to the different disciplinary backgrounds of researchers in IS, other themes emerged from our data that suggest a general confusion exists about theory even within subfields. Numerous examples came to light during our analysis in which an author would simultaneously refer to something as a theory and a method, or as a theory and a model, or as a theory and a reported finding. In other words, it seems as though authors, themselves, are sometimes unsure about what constitutes theory. Questions even arose regarding whether the author to whom a theory was credited would him or herself consider his or her work as theory (p. 68).

Kim and Jeong [ 16 ] examined the state and characteristics of theoretical research in LIS journals between 1984 and 2003. They focused on the “theory incident,” which is described as “an event in which the author contributes to the development or the use of theory in his/her paper.” Their study adopted Glazier and Grover’s [ 24 ] model of “circuits of theory.” Substantive level theory was operationalized to a tested hypothesis or an observed relationship, while both formal and grand level theories were identified when they were named as “theory,” “model,” or “law” by authors other than those who had developed them. Results demonstrated that the application of theory was present in 41.4% of the articles examined, marking a significant increase in the proportion of theoretical articles as compared to previous studies. Moreover, it was evident that both theory development and theory use had increased over the years. Information seeking and use, and information retrieval, were identified as the subfields with the most significant contribution to the development of the theoretical framework.

In a more in‐depth analysis of theory use, Kumasi et al. [ 30 ] qualitatively analyzed the extent to which theory is meaningfully used in scholarly literature. For this purpose, they developed a theory talk coding scheme, which included six analytical categories, describing how theory is discussed in a study. The intensity of theory talk in the articles was described across a continuum from minimal (e.g., theory is discussed in literature review and not mentioned later) through moderate (e.g., multiple theories are introduced but without discussing their relevance to the study) to major (e.g., theory is employed throughout the study). Their findings seem to support the opinion that “ LIS discipline has been focused on the application of specific theoretical frameworks rather than the generation of new theories ” (p. 179) [ 30 ]. Another point the authors made was about the multiple terms used in the articles to describe theory. Words such as “framework,” “model,” or “theory” were used interchangeably by scholars.

It is evident from the above discussion that the treatment of theory in LIS research covers a spectrum of intensity, from marginal mentions to theory revising, expanding, or building. Recent analyses of the published scholarship indicate that the field has not been very successful in contributing to existing theory or producing new theory. In spite of this, one may still assert that LIS research employs theory, and, in fact, there are many theories that have been used or generated by LIS scholars. However, “ calls for additional and novel theory development work in LIS continue, particularly for theories that might help to address the research practice gap ” (p. 12) [ 31 ].

3. Research strategies in LIS

3.1. Surveys of research methods

LIS is a very broad discipline, which uses a wide range of constantly evolving research strategies and techniques [ 32 ]. Various classification schemes have been developed to analyze methods employed in LIS research (e.g., [ 13 , 15 , 17 , 33 – 35 , 38 ]). Back in 1996, in the “research record” column of the Journal of Education for Library and Information Science, Kim [ 36 ] synthesized previous categories and definitions and introduced a list of research strategies, including data collection and analysis methods. The listing included four general research strategies: (i) theoretical/philosophical inquiry (development of conceptual models or frameworks), (ii) bibliographic research (descriptive studies of books and their properties as well as bibliographies of various kinds), (iii) R&D (development of storage and retrieval systems, software, interface, etc.), and (iv) action research, which aims at solving problems and bringing about change in organizations. Strategies are then divided into quantitative‐driven and qualitative‐driven. The first category includes descriptive studies, predictive/explanatory studies, bibliometric studies, content analysis, and operations research studies. The qualitative‐driven strategies are the following: case study, biographical method, historical method, grounded theory, ethnography, phenomenology, symbolic interactionism/semiotics, sociolinguistics/discourse analysis/ethnographic semantics/ethnography of communication, and hermeneutics/interpretive interactionism (p. 378–380) [ 36 ].

Systematic studies of research methods in LIS started in the 1980s and several reviews of the literature have been conducted over the past years to analyze the topics, methodologies, and quality of research. One of the earliest studies was done by Peritz [ 37 ] who carried out a bibliometric analysis of the articles published in 39 core LIS journals between 1950 and 1975. She examined the methodologies used, the type of library or organization investigated, the type of activity investigated, and the institutional affiliation of the authors. The most important findings were a clear orientation toward library and information service activities, a widespread use of the survey methodology, a considerable increase of research articles after 1960, and a significant increase in theoretical studies after 1965.

Nour [ 38 ] followed up on Peritz’s [ 37 ] work and studied research articles published in 41 selected journals during the year 1980. She found that survey and theoretical/analytic methodologies were the most popular, followed by bibliometrics. Comparing these findings to those made by Peritz [ 37 ], Nour [ 38 ] found that the amount of research continued to increase, but the proportion of research articles to all articles had been decreasing since 1975.

Feehan et al. [ 13 ] described how LIS research published during 1984 was distributed over various topics and what methods had been used to study these topics. Their analysis revealed a predominance of survey and historical methods and a notable percentage of articles using more than one research method. Following a different approach, Enger et al. (1989) focused on the statistical methods used by LIS researchers in articles published during 1985 [ 39 ]. They found that only one in three articles reported any use of statistics. Of those, 21% used descriptive statistics and 11% inferential statistics. In addition, the authors found that researchers from disciplines other than LIS made the highest use of statistics and LIS faculty showed the highest use of inferential statistics.

An influential work, against which later studies have been compared, is that of Jarvelin and Vakkari [ 15 ] who studied LIS articles published in 1985 in order to determine how research was distributed over various subjects, what approaches had been taken by the authors, and what research strategies had been used. The authors replicated their study later to include older research published between 1965 and 1985 [ 40 ]. The main finding of these studies was that the trends and characteristics of LIS research remained more or less the same over the aforementioned period of 20 years. The most common topics were information service activities and information storage and retrieval. Empirical research strategies were predominant, and of them, the most frequent was the survey. Kumpulainen [ 41 ], in an effort to provide a continuum with Jarvelin and Vakkari’s [ 15 ] study, analyzed 632 articles sampled from 30 core LIS journals with respect to various characteristics, including topics, aspect of activity, research method, data selection method, and data analysis techniques. She used the same classification scheme, and she selected the journals based on a slightly modified version of Jarvelin and Vakkari’s [ 15 ] list. Library services and information storage and retrieval emerged again as the most common subjects approached by the authors and survey was the most frequently used method.

More recent studies of this nature include those conducted by Koufogiannakis et al. [ 42 ], Hildreth and Aytac [ 43 ], Hider and Pymm [ 32 ], and Chu [ 17 ]. Koufogiannakis et al. [ 42 ] examined research articles published in 2001 and they found that the majority of them were questionnaire‐based descriptive studies. Comparative, bibliometrics, content analysis, and program evaluation studies were also popular. Information storage and retrieval emerged as the predominant subject area, followed by library collections and management. Hildreth and Aytac [ 43 ] presented a review of the 2003–2005 published library research with special focus on methodology issues and the quality of published articles of both practitioners and academic scholars. They found that most research was descriptive and the most frequent method for data collection was the questionnaire, followed by content analysis and interviews. With regard to data analysis, more researchers used quantitative methods and considerably fewer used qualitative‐only methods, whereas 61 out of 206 studies included some kind of qualitative analysis, raising the total percentage of qualitative methods to nearly 50%. With regard to the quality of published research, the authors argued that “ the majority of the reports are detailed, comprehensive, and well‐organized ” (p. 254) [ 43 ]. Still, they noticed that the majority of reports did not mention the critical issues of research validity and reliability and neither did they indicate study limitations or future research recommendations. Hider and Pymm [ 32 ] described a content analysis of LIS literature “ which aimed to identify the most common strategies and techniques employed by LIS researchers carrying out high‐profile empirical research ” (p. 109). Their results suggested that while researchers employed a wide variety of strategies, they mostly used surveys and experiments.
They also observed that although quantitative research accounted for more than 50% of the articles, there was an increase in the use of more sophisticated qualitative methods. Chu [ 17 ] analyzed the research articles published between 2001 and 2010 in three major journals and reported the following most frequent research methods: theoretical approach (e.g., conceptual analysis), content analysis, questionnaire, interview, experiment, and bibliometrics. Her study showed an increase in both the number and variety of research methods but lack of growth in the use of qualitative research or in the adoption of multiple research methods.

In summary, the literature shows a continued interest in the analysis of published LIS research. Approaches include focusing on particular publication years, geographic areas, journal titles, aspects of LIS, and specific characteristics, such as subjects, authorship, and research methods. Despite the abundance of content analyses of LIS literature, the findings are not easily comparable due to differences in the number and titles of journals examined, in the types of the papers selected for analysis, in the periods covered, and in classification schemes developed by the authors to categorize article topics and research strategies. Despite the differences, some findings are consistent among all studies:

Information seeking, information retrieval, and library and information service activities are among the most common subjects studied,

Descriptive research methodologies based on surveys and questionnaires predominate,

Over the years, there has been a considerable increase in the array of research approaches used to explore library issues, and

Data analysis is usually limited to descriptive statistics, including frequencies, means, and standard deviations.

3.2. Data collection and analysis

Articles published between 2011 and 2016 were obtained from the following journals: Library and Information Science Research, College & Research Libraries, Journal of Documentation, Information Processing & Management, and Journal of Academic Librarianship ( Table 1 ). These five titles were selected as data sources because they have the highest 5‐year impact factor of the journals classified in Ulrich’s Serials Directory under the “Library and Information Sciences” subject heading. From the journals selected, only full‐length articles were collected. Editorials, book reviews, letters, interviews, commentaries, and news items were excluded from the analysis. This selection process yielded 1643 articles. A stratified random sample of 440 articles was chosen for in‐depth analysis ( Table 2 ). For the purpose of this study, five strata, corresponding to the five journals, were used. The sample size was determined using a 4% margin of error and a 95% confidence level.
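The chapter does not state the exact formula used, but the reported figures are consistent with the standard finite‐population sample‐size calculation (Cochran's formula with a finite population correction). As an illustrative sketch, with N = 1643 articles, a 4% margin of error, and z = 1.96 for 95% confidence, the formula yields the reported sample of 440:

```python
import math

def sample_size(N, e=0.04, z=1.96, p=0.5):
    """Minimum sample size for a finite population of N items, given
    margin of error e, z-score for the confidence level, and assumed
    population proportion p (0.5 is the most conservative choice)."""
    n0 = z**2 * p * (1 - p)                       # infinite-population term
    return math.ceil(N * n0 / (e**2 * (N - 1) + n0))

print(sample_size(1643))  # → 440
```

The 440 sampled articles would then typically be allocated across the five journal strata in proportion to each journal's share of the 1643 articles.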

Table 1.

Profile of the journals.

Table 2.

Journal titles.

Each article was classified as either research or theoretical. Articles that employed specific research methodology and presented specific findings of original studies performed by the author(s) were considered research articles. The kind of study may vary (e.g., it could be an experiment, a survey, etc.), but in all cases, raw data had been collected and analyzed, and conclusions were drawn from the results of that analysis. Articles reporting research in system design or evaluation in the information systems field were also regarded as research articles. On the other hand, works that reviewed theories, theoretical concepts, or principles, discussed topics of interest to researchers and professionals, or described research methodologies were regarded as theoretical articles [ 44 ] and were classified in the non‐empirical‐research category. This category also included literature reviews and articles describing a project, a situation, a process, etc.

Each article was classified into a topical category according to its main subject. The articles classified as research were then further explored and analyzed to identify (i) research approach, (ii) research methodology, and (iii) method of data analysis. For each variable, a coding scheme was developed, and the articles were coded accordingly. The final list of the analysis codes was extracted inductively from the data itself, using as reference the taxonomies utilized in previous studies [ 15 , 32 , 43 , 45 ]. Research approaches “ are plans and procedures for research ” (p. 3) [ 46 ]. Research approaches can generally be grouped as qualitative, quantitative, and mixed methods studies. Quantitative studies aim at the systematic empirical investigation of quantitative properties or phenomena and their relationships. Qualitative research can be broadly defined as “ any kind of research that produces findings not arrived at by means of statistical procedures or other means of quantification ” (p. 17) [ 47 ]. It is a way to gain insights through discovering meanings and explaining phenomena based on the attributes of the data. In mixed methods research, quantitative and qualitative approaches are combined within or across the stages of the research process. It was beyond the scope of this study to identify in which stages of a study—data collection, data analysis, and data interpretation—the mixing was applied or to reveal the types of mixing. Therefore, studies using both quantitative and qualitative methods, irrespective of whether they described how the methods were integrated, were coded as mixed methods studies.
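The coding rule for the research approach variable can be summarized in a short sketch; the function name and category labels are illustrative, not taken from the chapter:

```python
def code_research_approach(uses_quantitative, uses_qualitative):
    """Code an empirical article's research approach. Per the rule
    described above, any study employing both quantitative and
    qualitative methods is coded as mixed methods, regardless of
    whether (or how) the article says the methods were integrated."""
    if uses_quantitative and uses_qualitative:
        return "mixed methods"
    if uses_quantitative:
        return "quantitative"
    if uses_qualitative:
        return "qualitative"
    return "not applicable"  # non-empirical articles are not coded

print(code_research_approach(True, True))  # → mixed methods
```

For example, a questionnaire survey analyzed with both statistics and thematic coding of open‐ended answers would be coded as mixed methods under this rule.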

Research methodologies, or strategies of inquiry, are types of research models “ that provide specific direction for procedures in a research design ” (p. 11) [ 46 ] and inform the decisions concerning data collection and analysis. A coding schema of research methodologies was developed by the authors based on the analysis of all research articles included in the sample. The methodology classification included 12 categories ( Table 3 ). Each article was classified into one category for the variable research methodology. If more than one research strategy was mentioned (e.g., experiment and survey), the article was classified according to the main strategy.

Table 3.

Coding schema for research methodologies.

Methods of data analysis refer to the techniques used by the researchers to explore the original data and answer their research problems or questions. Data analysis in quantitative research involves statistical analysis and the interpretation of figures and numbers. In qualitative studies, on the other hand, data analysis involves identifying common patterns within the data and interpreting their meanings. The array of data analysis methods included the following categories:

Descriptive statistics,

Inferential statistics,

Qualitative data analysis,

Experimental evaluation, and

Other methods.

Descriptive statistics are used to describe the basic features of the data in a study. Inferential statistics investigate questions, models, and hypotheses. Mathematical analysis refers to mathematical functions, used mainly in bibliometric studies to answer research questions associated with citation data. Qualitative data analysis is the range of processes and procedures used to explore qualitative data, from coding and descriptive analysis to the identification of patterns and themes and the testing of emergent findings and hypotheses. It was used in this study as an overarching term encompassing various types of analysis, such as thematic analysis, discourse analysis, or grounded theory analysis. The class experimental evaluation was used for system and software analysis and design studies that assess a newly developed algorithm, tool, or method by performing experiments on selected datasets. In these cases, “experiments” differ from the experimental designs of the social sciences. Methods that did not fall into one of these categories (e.g., mathematical analysis, visualization, or benchmarking) were classified as other methods . If both descriptive and inferential statistics were used in an article, only the inferential statistics were recorded. In mixed methods studies, each method was recorded in the order in which it was reported in the article.

Ten percent of the articles were randomly selected and used to establish inter‐rater reliability and provide basic validation of the coding scheme. Cohen’s kappa was calculated for each coded variable. The average Cohen’s kappa value was κ = 0.60, p < 0.001 (the highest was 0.63 and the lowest 0.59), which indicates substantial agreement [ 48 ]. Coding disparities across raters were discussed, and the final codes were determined by consensus.
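Cohen’s kappa compares the raters’ observed agreement with the agreement expected by chance from each rater’s marginal category frequencies. A minimal sketch of the computation, with invented methodology codes for ten articles:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters coding the same items."""
    n = len(rater1)
    # Proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented codes assigned by two raters to ten articles
r1 = ["survey", "survey", "case", "biblio", "survey",
      "case", "survey", "biblio", "case", "survey"]
r2 = ["survey", "case", "case", "biblio", "survey",
      "case", "survey", "survey", "case", "survey"]
print(round(cohens_kappa(r1, r2), 2))  # 0.67
```

Here the raters agree on 8 of 10 items (0.80), but because chance agreement is 0.39, kappa corrects the figure down to about 0.67.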

3.3. Results

3.3.1. Topic

Table 4 presents the distribution of articles over the various topics, for each of which a detailed description is provided. The five most popular topics in the total sample of 440 articles were “information retrieval,” “information behavior,” “information literacy,” “library services,” and “organization and management.” These areas cover over 60% of all topics studied in the papers. The least‐studied topics (covered in fewer than eight papers each) fall into the categories of “information and knowledge management,” “library information systems,” “LIS theory,” and “infometrics.”

Table 4.

Article topics.

Figure 1 shows how the top five topics are distributed across journals. As expected, the topic “information retrieval” has a higher publication frequency in Information Processing & Management, a journal focusing on system design and issues related to the tools and techniques used in the storage and retrieval of information. “Information literacy,” “information behavior,” “library services,” and “organization and management” appear to be distributed almost proportionately in College & Research Libraries. “Information literacy” appears to be a preferred topic in the Journal of Academic Librarianship, while “information behavior” is more popular in the Journal of Documentation and Library & Information Science Research.


Figure 1.

Distribution of topics across journals.

3.3.2. Research approach and methodology

Of all articles examined, 343 (78% of the sample) reported empirical research. The remaining 22% (N = 97) were classified as non‐empirical research papers. Research articles were coded as quantitative, qualitative, or mixed methods studies. An overwhelming majority (70%) of the empirical research articles employed a quantitative research approach. Qualitative and mixed methods research was reported in 21.6 and 8.5% of the articles, respectively ( Figure 2 ).


Figure 2.

Research approach.

Table 5 presents the distribution of research approaches over the five most popular topics. The quantitative approach clearly prevails in all topics, especially in information retrieval research. However, qualitative designs seem to be gaining acceptance in all topics except information retrieval, while in information behavior research, quantitative and qualitative approaches are almost evenly distributed. Mixed methods were fairly frequent in information literacy and information behavior studies and less popular in the other topics.

Table 5.

Topics across research approach.

The most frequently used research strategy was survey, accounting for almost 37% of all research articles, followed by system and software analysis and design, a strategy used in this study specifically for research in information systems [ 15 ]. This result is influenced by the fact that Information Processing & Management addresses issues at the intersection of LIS and computer science, and the majority of its articles present the development of new tools, algorithms, methods, and systems, and their experimental evaluation. The third‐ and fourth‐ranking strategies were content analysis and bibliometrics. Case study, experiment, and secondary data analysis were represented by 15 articles each, while the remaining techniques were underrepresented, with considerably fewer articles ( Table 6 ).

Table 6.

Research methodologies.

3.3.3. Methods of data analysis

Table 7 displays the frequencies for each type of data analysis.

Table 7.

Method of data analysis.

Almost half of the empirical research papers examined reported some use of statistics. Descriptive statistics, such as frequencies, means, or standard deviations, were used more frequently than inferential statistics, such as ANOVA, regression, or factor analysis. Nearly one‐third of the articles employed some type of qualitative data analysis, either as the only method or, in mixed methods studies, in combination with quantitative techniques.

3.4. Discussions and conclusions

The patterns of LIS research activity as reflected in the articles published between 2011 and 2016 in five well‐established, peer‐reviewed journals were described and analyzed. LIS literature addresses many and diverse topics. Information retrieval, information behavior, and library services continue to attract the interest of researchers, as they are core areas of library science. Information retrieval was rated as one of the most popular areas of interest in research articles published between 1965 and 1985 [ 40 ]. According to Dimitroff [ 49 ], information retrieval was the second most popular topic in the articles published in the Bulletin of the Medical Library Association, while Cano [ 50 ] argued that LIS research produced in Spain from 1977 to 1994 was mostly centered on information retrieval and library and information services. In addition, Koufogiannakis et al. [ 42 ] found that information access and retrieval was the domain with the most research, and in Hildreth and Aytac’s [ 43 ] study, most articles dealt with issues related to users (needs, behavior, information seeking, etc.), services, and collections. The present study provides evidence that the amount of research on information literacy is increasing, presumably due to the growing importance of information literacy instruction in libraries. In recent years, librarians have assumed a growing educational role, engaging ever more actively in teaching and learning processes, a trend that is reflected in the research output.

With regard to research methodologies, the present study seems to confirm the well‐documented predominance of the survey in LIS research. According to Dimitroff [ 49 ], the percentage of survey research methods reported in various studies varied between 20.3 and 41.5%. Powell [ 51 ], in a review of the research methods appearing in LIS literature, pointed out that the survey had consistently been the most common type of study in both dissertations and journal articles. Survey was reported as the most widely used research design by Jarvelin and Vakkari [ 40 ], Crawford [ 52 ], Hildreth and Aytac [ 43 ], and Hider and Pymm [ 32 ]. The majority of articles examined by Koufogiannakis et al. [ 42 ] were descriptive studies using questionnaires/surveys. In addition, survey methods represented the largest proportion of methods used in the information behavior articles analyzed by Julien et al. [ 53 ]. There is no doubt that the survey has been used more than any other method in LIS research. As Jarvelin and Vakkari [ 15 ] put it, “it appears that the field is so survey‐oriented that almost all problems are seen through a survey viewpoint” (p. 416). Much of the survey’s popularity can be ascribed to its being a well‐known, well‐understood, easily conducted, and inexpensive method whose results are easy to analyze [ 41 , 42 ]. However, our findings suggest that while the survey ranks high, a variety of other methods have also been used in the research articles. Content analysis emerged as the third‐most frequent strategy, a finding similar to those of previous studies [ 17 , 32 ]. Although content analysis was not regarded by LIS researchers as a favored research method until recently, its popularity seems to be growing [ 17 ].

Quantitative approaches, which dominate, tend to rely on frequency counts, percentages, and descriptive statistics to describe the basic features of the data in a study. Fewer studies used advanced statistical techniques, such as t‐tests, correlation, and regression, while there were some examples of more sophisticated methods, such as factor analysis, ANOVA, MANOVA, and structural equation modeling. Researchers engaging in quantitative research designs should consider the use of inferential statistics, which enable generalization from the sample being studied to the population of interest and, if used appropriately, are very useful for hypothesis testing. In addition, multivariate statistics are suitable for examining relationships among variables, revealing patterns, and understanding complex phenomena.

The findings also suggest that qualitative approaches are gaining importance and have a role to play in LIS studies. These results are comparable to the findings of Hider and Pymm [ 32 ], who observed significant increases in qualitative research strategies in contemporary LIS literature. Descriptions of qualitative analysis varied widely, reflecting diverse perspectives, analysis methods, and levels of depth. Commonly used terms in the articles included coding, content analysis, thematic analysis, thematic analytical approach, theme, and pattern identification. One could argue that the efforts made to encourage and promote qualitative methods in LIS research [ 54 , 55 ] have made some impact. However, qualitative research methods still do not seem to be adequately utilized by library researchers and practitioners, despite their potential to offer far more illuminating ways to study library‐related issues [ 56 ]. LIS research has much to gain from the interpretive paradigm underpinning qualitative methods. This paradigm assumes that social reality is

the product of processes by which social actors together negotiate the meanings for actions and situations; it is a complex of socially constructed meanings. Human experience involves a process of interpretation rather than sensory, material apprehension of the external physical world and human behavior depends on how individuals interpret the conditions in which they find themselves. Social reality is not some ‘thing’ that may be interpreted in different ways, it is those interpretations (p. 96) [ 57 ].

As stated in the introduction of this chapter, library and information science focuses on the interaction between individuals and information. In every area of LIS research, the interplay of factors that lead to and influence this interaction is increasingly complex. Qualitative research searches for “all aspects of that complexity on the grounds that they are essential to understanding the behavior of which they are a part” (p. 241) [ 59 ]. Qualitative research designs can offer a more in‐depth analysis of library users, their needs, attitudes, and behaviors.

The use of mixed methods designs was found to be rather rare. While Hildreth and Aytac [ 43 ] found higher percentages of studies using combined methods of data analysis, our results are analogous to those shown by Fidel [ 60 ]. In fact, as in her study, only a few of the analyzed articles referred to mixed methods research by name, a finding indicating that “the concept has not yet gained recognition in LIS research” (p. 268). Mixed methods research has become an established approach in the social sciences, as it minimizes the weaknesses of quantitative and qualitative research used alone and allows researchers to investigate phenomena more completely [ 58 ].

In conclusion, there is evidence that LIS researchers employ a large number and wide variety of research methodologies. Each research approach, strategy, and method has its advantages and limitations. If the aim of a study is to confirm hypotheses about phenomena or to measure and analyze causal relationships between variables, then quantitative methods might be used. If the research seeks to explore, understand, and explain phenomena, then qualitative methods might be used. Researchers can consider the full range of possibilities and make their selection based on the philosophical assumptions they bring to the study, the research problem being addressed, their personal experiences, and the intended audience for the study [ 46 ].

Taking into consideration the increasing use of qualitative methods in LIS studies, an in‐depth analysis of papers using qualitative methods would be interesting. A future study presenting and analyzing the different research strategies and types of analysis used in qualitative research could help LIS practitioners understand the benefits of qualitative analysis.

Mixed methods LIS research papers could be analyzed in future studies to identify in which stages of a study (data collection, data analysis, or data interpretation) the mixing was applied and to reveal the types of mixing.

As for the quantitative research methods, which predominate in LIS research, it would be interesting to identify systematic relationships among more than two variables, such as authors’ affiliation, topic, and research strategy, and to create homogeneous groups using multivariate data analysis techniques.

  • 1. Buckland MK, Liu ZM. History of information science. Annual Review of Information Science and Technology. 1995; 30 :385-416
  • 2. Rayward WB. The history and historiography of information science: Some reflections. Information Processing & Management. 1996; 32 (1):3-17
  • 3. Wildemuth BM. Applications of Social Research Methods to Questions in Information and Library Science. Westport, CT: Libraries Unlimited; 2009
  • 4. Hjørland B. Theory and metatheory of information science: A new interpretation. Journal of Documentation. 1998; 54 (5):606-621. DOI: http://doi.org/10.1108/EUM0000000007183
  • 5. Åström F. Heterogeneity and homogeneity in library and information science research. Information Research [Internet]. 2007 [cited 23 April 2017]; 12 (4): poster colisp01 [3 p.]. Available from: http://www.informationr.net/ir/12-4/colis/colisp01.html
  • 6. Dillon A. Keynote address: Library and information science as a research domain: Problems and prospects. Information Research [Internet]. 2007 [cited 23 April 2017]; 12 (4): paper colis03 [6 p.]. Available from: http://www.informationr.net/ir/12-4/colis/colis03.html
  • 7. Eldredge JD. Evidence‐based librarianship: An overview. Bulletin of the Medical Library Association. 2000; 88 (4):289-302
  • 8. Bradley J, Marshall JG. Using scientific evidence to improve information practice. Health Libraries Review. 1995; 12 (3):147-157
  • 9. Bibliometrics. In: International Encyclopedia of Information and Library Science. 2nd ed. London, UK: Routledge; 2003. p. 38
  • 10. Åström F. Library and Information Science in context: The development of scientific fields, and their relations to professional contexts. In: Rayward WB, editor. Aware and Responsible: Papers of the Nordic‐International Colloquium on Social and Cultural Awareness and Responsibility in Library, Information and Documentation Studies (SCARLID). Oxford, UK: Scarecrow Press; 2004. pp. 1-27
  • 11. Grover R, Glazier J. A conceptual framework for theory building in library and information science. Library and Information Science Research. 1986; 8 (3):227-242
  • 12. Boyce BR, Kraft DH. Principles and theories in information science. In: Williams ME, editor. Annual Review of Information Science and Technology. Medford, NJ: Knowledge Industry Publications; 1985. pp. 153-178
  • 13. Feehan PE, Gragg WL, Havener WM, Kester DD. Library and information science research: An analysis of the 1984 journal literature. Library and Information Science Research. 1987; 9 (3):173-185
  • 14. Spink A. Information science: A third feedback framework. Journal of the American Society for Information Science. 1997; 48 (8):728-740
  • 15. Jarvelin K, Vakkari P. Content analysis of research articles in Library and Information Science. Library and Information Science Research. 1990; 12 (4):395-421
  • 16. Kim SJ, Jeong DY. An analysis of the development and use of theory in library and information science research articles. Library and Information Science Research. 2006; 28 (4):548-562. DOI: http://doi.org/10.1016/j.lisr.2006.03.018
  • 17. Chu H. Research methods in library and information science: A content analysis. Library & Information Science Research. 2015; 37 (1):36-41. DOI: http://doi.org/10.1016/j.lisr.2014.09.003
  • 18. Van Maanen J. Different strokes: Qualitative research in the administrative science quarterly from 1956 to 1996. In: Van Maanen J, editor. Qualitative Studies of Organizations. Thousand Oaks, CA: SAGE; 1998. pp. ix‐xxxii
  • 19. Brookes BC. The foundations of information science Part I. Philosophical aspects. Journal of Information Science. 1980; 2 (3/4):125-133
  • 20. Hauser L. A conceptual analysis of information science. Library and Information Science Research. 1988; 10 (1):3-35
  • 21. McGrath WE. Current theory in Library and Information Science. Introduction. Library Trends. 2002; 50 (3):309-316
  • 22. Theory and why it is important - Social and behavioral theories - e-Source Book - OBSSR e-Source [Internet]. Esourceresearch.org. 2017 [cited 23 April 2017]. Available from: http://www.esourceresearch.org/eSourceBook/SocialandBehavioralTheories/3TheoryandWhyItisImportant/tabid/727/Default.aspx
  • 23. Babbie E. The practice of social research. 7th ed. Belmont, CA: Wadsworth; 1995
  • 24. Glazier JD, Grover R. A multidisciplinary framework for theory building. Library Trends. 2002; 50 (3):317-329
  • 25. Glaser B, Strauss AL. The discovery of grounded theory: Strategies for qualitative research. New Brunswick: Aldine Transaction; 1999
  • 26. Smiraglia RP. The progress of theory in knowledge organization. Library Trends. 2002; 50 :330-349
  • 27. McGrath WE. Explanation and prediction: Building a unified theory of librarianship, concept and review. Library Trends. 2002; 50 (3):350-370
  • 28. Julien H, Duggan LJ. A longitudinal analysis of the information needs and uses literature. Library & Information Science Research. 2000; 22 (3):291-309. DOI: http://doi.org/10.1016/S0740‐8188(99)00057‐2
  • 29. Pettigrew KE, McKechnie LEF. The use of theory in information science research. Journal of the American Society for Information Science and Technology. 2001; 52 (1):62-73. DOI: http://doi.org/10.1002/1532‐2890(2000)52:1<62::AID‐ASI1061>3.0.CO;2‐J
  • 30. Kumasi KD, Charbonneau DH, Walster D. Theory talk in the library science scholarly literature: An exploratory analysis. Library & Information Science Research. 2013; 35 (3):175-180. DOI: http://doi.org/10.1016/j.lisr.2013.02.004
  • 31. Rawson C, Hughes‐Hassell S. Research by Design: The promise of design‐based research for school library research. School Libraries Worldwide. 2015; 21 (2):11-25
  • 32. Hider P, Pymm B. Empirical research methods reported in high‐profile LIS journal literature. Library & Information Science Research. 2008; 30 (2):108-114. DOI: http://doi.org/10.1016/j.lisr.2007.11.007
  • 33. Bernhard P. In search of research methods used in information science. Canadian Journal of Information and Library Science. 1993; 18 (3):1-35
  • 34. Blake VLP. Since Shaughnessy. Collection Management. 1994; 19 (1‐2):1-42. DOI: http://doi.org/10.1300/J105v19n01_01
  • 35. Schlachter GA. Abstracts of library science dissertations. Library Science Annual. 1989; 1 :1988-1996
  • 36. Kim MT. Research record. Journal of Education for Library and Information Science. 1996; 37 (4):376-383
  • 37. Peritz BC. The methods of library science research: Some results from a bibliometric survey. Library Research. 1980; 2 (3):251-268
  • 38. Nour MM. A quantitative analysis of the research articles published in core library journals of 1980. Library and Information Science Research. 1985; 7 (3):261-273
  • 39. Enger KB, Quirk G, Stewart JA. Statistical methods used by authors of library and information science journal articles. Library and Information Science Research. 1989; 11 (1):37-46
  • 40. Jarvelin K, Vakkari P. The evolution of library and information science 1965-1985: A content analysis of journal articles. Information Processing and Management. 1993; 29 (1):129-144
  • 41. Kumpulainen S. Library and information science research in 1975: Content analysis of the journal articles. Libri. 1991; 41 (1):59-76
  • 42. Koufogiannakis D, Slater L, Crumley E. A content analysis of librarianship research. Journal of Information Science. 2004; 30 (3):227-239. DOI: http://doi.org/10.1177/0165551504044668
  • 43. Hildreth CR, Aytac S. Recent library practitioner research: A methodological analysis and critique on JSTOR. Journal of Education for Library and Information Science. 2007; 48 (3):236-258
  • 44. Gonzales‐Teruel A, Abad‐Garcia MF. Information needs and uses: An analysis of the literature published in Spain, 1990‐2004. Library and Information Science Research. 2007; 29 (1):30-46
  • 45. Luo L, Mckinney M. JAL in the past decade: A comprehensive analysis of academic library research. The Journal of Academic Librarianship. 2015; 41 :123-129. DOI: http://doi.org/10.1016/j.acalib.2015.01.003
  • 46. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 3rd ed. Thousand Oaks, CA: SAGE; 2009
  • 47. Strauss A, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: SAGE Publications; 1990
  • 48. Neuendorf KA. The Content Analysis Guidebook. 2nd ed. Thousand Oaks, CA: SAGE Publications; 2016
  • 49. Dimitroff A. Research for special libraries: A quantitative analysis of the literature. Special Libraries. 1995; 86 (4):256-264
  • 50. Cano V. Bibliometric overview of library and information science research in Spain. Journal of the American Society for Information Science. 1999; 50 (8):675-680. DOI: http://doi.org/10.1002/(SICI)1097‐4571(1999)50:8<675::AID‐ASI5>3.0.CO;2‐B
  • 51. Powell RR. Recent trends in research: A methodological essay. Library & Information Science Research. 1999; 21 (1):91-119. DOI: http://doi.org/10.1016/S0740‐8188(99)80007‐3
  • 52. Crawford GA. The research literature of academic librarianship: A comparison of college & Research Libraries and Journal of Academic Librarianship. College & Research Libraries. 1999; 60 (3):224-230. DOI: http://doi.org/10.5860/crl.60.3.224
  • 53. Julien H, Pecoskie JJL, Reed K. Trends in information behavior research, 1999-2008: A content analysis. Library & Information Science Research. 2011; 33 (1):19-24. DOI: http://doi.org/10.1016/j.lisr.2010.07.014
  • 54. Fidel R. Qualitative methods in information retrieval research. Library and Information Science Research. 1993; 15 (3):219-247
  • 55. Hernon P, Schwartz C. Reflections (editorial). Library and Information Science Research. 2003; 25 (1):1-2. DOI: http://doi.org/10.1016/S0740‐8188(02)00162‐7
  • 56. Priestner A. Going native: Embracing ethnographic research methods in libraries. Revy. 2015; 38 (4):16-17
  • 57. Blaikie N. Approaches to social enquiry. Cambridge: Polity; 1993
  • 58. Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004; 33 (7):14-26
  • 59. Westbrook L. Qualitative research methods: A review of major stages, data analysis techniques, and quality controls. Library & Information Science Research. 1994; 16 (3):241-254
  • 60. Fidel R. Are we there yet?: Mixed methods research in library and information science. Library and Information Science Research. 2008; 30 (4):265-272. DOI: http://doi.org/10.1016/j.lisr.2008.04.001

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  • Other Causes of Sampling Error 188 211
  • Nonsampling Error 189 212
  • Summary 189 212
  • References 189 212
  • 7—Experimental Research 191 214
  • Causality 191 214
  • The Conditions for Causality 192 215
  • Bases for Inferring Causal Relationships 193 216
  • Controlling the Variables 194 217
  • Random Assignment 195 218
  • Internal Validity 196 219
  • Threats to Internal Validity 196 219
  • External Validity 198 221
  • Threats to External Validity 198 221
  • Experimental Designs 199 222
  • True Experimental Designs 200 223
  • True Experiments and Correlational Studies 202 225
  • Quasi-Experimental Designs 205 228
  • Ex Post Facto Designs 207 230
  • Internet-Based Experiments 207 230
  • Summary 208 231
  • References 208 231
  • 8—Analysis of Quantitative Data 211 234
  • Statistical Analysis 211 234
  • Data Mining 212 235
  • Log Analysis 212 235
  • Data Science 215 238
  • Machine Learning and Artificial Intelligence 215 238
  • Bibliometrics 217 240
  • Role of Statistics 221 244
  • Cautions in Using Statistics 221 244
  • Steps Involved in Statistical Analysis 222 245
  • The Establishment of Categories 222 245
  • Coding the Data 223 246
  • Analyzing the Data: Descriptive Statistics 227 250
  • Analyzing the Data: Inferential Statistics 233 256
  • Parametric Statistics 235 258
  • Nonparametric Statistics 240 263
  • Selecting the Appropriate Statistical Test 241 264
  • Cautions in Testing the Hypothesis 243 266
  • Statistical Analysis Software 244 267
  • Visualization and Display of Quantitative Data 246 269
  • Summary 250 273
  • References 251 274
  • 9—Principles of Qualitative Methods 259 282
  • Introduction to Qualitative Methods 259 282
  • Strengths of a Qualitative Approach 261 284
  • Role of the Researcher 262 285
  • The Underlying Assumptions of Naturalistic Work 263 286
  • Ethical Concerns 264 287
  • Informed Consent 265 288
  • Deception 268 291
  • Confidentiality and Anonymity 269 292
  • Data-Gathering Techniques 270 293
  • Research Design 271 294
  • Establishing Goals 272 295
  • Developing the Conceptual Framework 273 296
  • Developing Research Questions 274 297
  • Research Questions for Focus Group and Individual Interviews in the Public Library Context 274 297
  • Research Questions for Mixed-Methods Study with Focus Group and Individual Interviews in the Academic Library Context 275 298
  • Research Questions for Focus Group and Individual Interviews in a High School Context 275 298
  • Research Questions for a Mixed-Methods Grant Project Using Transcript Analysis, Individual Interviews, and Design Sessions in the Consortial Live Chat Virtual Reference Context 276 299
  • Research Questions for a Mixed-Methods Study Using a Questionnaire and Individual Interviews Investigating Chat Virtual Reference in the Time of COVID-19 276 299
  • Research Design in Online Environments 277 300
  • New Modes for Online Data Collection 278 301
  • Summary 280 303
  • References 281 304
  • 10—Analysis of Qualitative Data 287 310
  • Data Analysis Tools and Methods 287 310
  • Stages in Data Analysis 289 312
  • Preparing and Processing Data for Analysis 289 312
  • Computer-Assisted Qualitative Data Analysis Software (CAQDAS) 290 313
  • Deciding Whether to Use Qualitative Software 295 318
  • Strategies for Data Analysis 299 322



Research methods in library and information science: A content analysis

Heting Chu

Related Papers

Advances in Library and Information Science

Judith Mavodza

The library and information science (LIS) profession is influenced by multidisciplinary research strategies and techniques (research methods) that are themselves evolving. They represent established ways of approaching research questions (e.g., qualitative vs. quantitative methods). This chapter reviews the methods of research as expressed in the literature, demonstrating how, where, and if they are interconnected. Chu concludes that popularly used approaches include the theoretical approach, experiment, content analysis, bibliometrics, the questionnaire, and the interview. It appears that most empirical research articles in Chu's analysis employed a quantitative approach. Although the survey emerged as the most frequently used research strategy, there is evidence that the number and variety of research methods and methodologies have been increasing. There is also evidence that qualitative approaches are gaining importance and have a role to play in LIS, while mixed methods…


Aspasia Togia , Afrodite Malliari

Library and information science (LIS) is a very broad discipline, which uses a wide range of constantly evolving research strategies and techniques. The aim of this chapter is to provide an updated view of research issues in library and information science. A stratified random sample of 440 articles published in five prominent journals was analyzed and classified to identify (i) research approach, (ii) research methodology, and (iii) method of data analysis. For each variable, a coding scheme was developed, and the articles were coded accordingly. A total of 78% of the articles reported empirical research; the remaining 22% were classified as non-empirical research papers. The five most popular topics were "information retrieval," "information behaviour," "information literacy," "library services," and "organization and management." An overwhelming majority of the empirical research articles employed a quantitative approach. Although the survey emerged as the most frequently used research strategy, there is evidence that the number and variety of research methodologies have increased. There is also evidence that qualitative approaches are gaining importance and have a role to play in LIS, while mixed methods have not yet gained enough recognition in LIS research.
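The sampling-and-coding workflow described in the abstract above can be sketched in a few lines of Python. This is only an illustration: the journal names, strata sizes, and coding categories below are invented placeholders, not the study's actual data or scheme.

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical population: article IDs grouped by journal (the strata).
# Journal names and sizes are invented for illustration.
population = {f"Journal {j}": [f"J{j}-{i:03d}" for i in range(200)]
              for j in range(1, 6)}

TOTAL_SAMPLE = 440
pop_size = sum(len(articles) for articles in population.values())

# Proportional stratified random sampling: each journal contributes
# articles in proportion to its share of the population.
sample = []
for journal, articles in population.items():
    k = round(TOTAL_SAMPLE * len(articles) / pop_size)
    sample.extend(random.sample(articles, k))

# Coding step: assign each sampled article a code on one variable
# (research approach); the categories here are illustrative stand-ins.
APPROACHES = ["quantitative", "qualitative", "mixed methods", "non-empirical"]
coded = {article: random.choice(APPROACHES) for article in sample}

# Tally the distribution of codes across the sample.
tally = Counter(coded.values())
for approach, count in tally.most_common():
    print(f"{approach}: {count} ({100 * count / len(sample):.1f}%)")
```

In a real content analysis the coding step would of course be done by human coders applying a documented scheme (with intercoder reliability checks), not by `random.choice`; the sketch only shows how the sample sizes and tallies fit together.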

Aspasia Togia

The aim of the present study is to investigate the general trends of LIS research, using as source material the articles published in Library and Information Science Research over a five-year period (2005-2010). Library & Information Science Research was chosen because it is a cross-disciplinary, refereed journal that focuses on the research process in library and information science, covers a wide range of topics within the field, reports research findings, and provides work of interest to both academics and practitioners. The authors review the findings from an examination of research articles published in the journal, with emphasis on articles that used quantitative and/or qualitative research methods as an integral part of the authors' work. The paper examines the major topics and problems addressed by LIS researchers, the research approaches, and the types of quantitative and qualitative research methods used in articles published during this period, in an effort to understand t…

Veronica Gauchi Risso

Purpose – This paper aims to explore research methods used in Library and Information Science (LIS) during the past four decades. The goal is to compile an annotated bibliography of seminal works of the discipline used in different countries and social contexts. Design/methodology/approach – When comparing areas and types of research, different publication patterns are taken into account. Data indicators and the types of studies carried out on scientific activity contribute very little when evaluating the real potential to respond to identified problems. Therefore, among other things, LIS needs new methodological developments, which should combine qualitative and quantitative approaches and allow a better understanding of the nature and characteristics of science in different countries. Findings – The conclusion is that LIS emerged strictly linked to descriptive methodologies, channeled to meet the challenges of professional practice through empirical strategies of a professional nature; this manifests the preponderance of a professional paradigm, which turns out to be an indicator of poor development of the scientific discipline. Research limitations/implications – This, undoubtedly, reflects the reality of Anglo-Saxon countries, reproduced in most of the recognized journals of the field; this issue, plus the instruments chosen for data collection, certainly slants the results. Originality/value – The development of taxonomies in the discipline cannot be set apart from those accepted by the rest of the scientific community, at least if LIS wishes to be integrated and recognized as a scientific discipline.

Abubakar Abdulkareem

All research is built on certain underlying philosophical assumptions about what constitutes valid research and which research method is appropriate for the development of knowledge in a given study. The use of a research paradigm and theory provides specific direction for procedures in research design. This paper discusses the importance of understanding the use of research paradigms and theory in relation to the concepts of ontology, epistemology, and methodology in both quantitative and qualitative approaches. The paper further discusses the connection between research paradigm and theory in library and information science (LIS) research, in both quantitative and qualitative research. The author presents steps using deductive and inductive approaches and provides a guide that LIS researchers need to consider before designing a research proposal.

Helen Clarke

Rajani Mishra

Research is the means to explore already existing information in a discipline. It is a continuous and exhaustive study that leads to the development of newer theory and methodology, or the redefinition of existing theory in the light of new facts. This rediscovery depends upon the methods and tools of research used. The present study is a step in this direction: it tries to find out the various research methods used by researchers in library and information science. Keywords: research methods, library services, survey, analytical research. Cite this article: Rajani Mishra. The Data Analysis Tools Used in the Articles of Annals of Library and Information Studies: An Analysis. Journal of Advancements in Library Sciences. 2017; 4(2): 46–49p.

Library Hi Tech

Denise Koufogiannakis

International Journal of …

Chinwe Anunobi

Sidi Masuga



Princeton University Library


Library Research Methods


(Adapted from Thomas Mann, Library Research Models)

Keyword searches. Search relevant keywords in catalogs, indexes, search engines, and full-text resources. Useful both to narrow a search to the specific subject heading and to find sources not captured under a relevant subject heading. To search a database effectively, start with a Keyword search, find relevant records, and then find relevant Subject Headings. In search engines, include many keywords to narrow the search and carefully evaluate what you find.

Subject searches. Subject Headings (sometimes called Descriptors) are specific terms or phrases used consistently by online or print indexes to describe what a book or journal article is about. This is true of the library's Catalog as well as many other library databases.

Look for recent, scholarly books and articles. Within catalogs and databases, sort by the most recent date and look for books from scholarly presses and articles from scholarly journals. The more recent the source, the more up-to-date the references and citations.

Citation searches in scholarly sources. Track down references, footnotes, endnotes, citations, etc. within relevant readings. Search for specific books or journals in the library's Catalog. This technique helps you become part of the scholarly conversation on a particular topic.

Searches through published bibliographies (including sets of footnotes in relevant subject documents). Published bibliographies on particular subjects (Shakespeare, alcoholism, etc.) often list sources missed through other kinds of searches. BIBLIOGRAPHY is a subject heading in the Catalog, so a Guided Search with BIBLIOGRAPHY as a Subject and your topic as a keyword will help you find these.

Searches through people sources (whether by verbal contact, e-mail, etc.). People are often more willing to help than you might think. The people to start with are often professors with relevant knowledge or librarians.

Systematic browsing, especially of full-text sources arranged in predictable subject groupings. Libraries organize books by subject, with similar books shelved together. Browsing the stacks is a good way to find similar books; however, in large libraries, some books are not in the main stacks (e.g., they might be checked out or in ReCAP), so use the catalog as well.

The advantages of trying all these research methods are that:

Each of these ways of searching is applicable in any subject area

None of them is confined exclusively to English-language sources

Each has both strengths and weaknesses, advantages and disadvantages

The weaknesses within any one method are balanced by the strengths of the others

The strength of each is precisely that it is capable of turning up information or knowledge records that cannot be found efficiently—or often even at all—by any of the others


Evaluating Sources

From Wayne C. Booth et al., The Craft of Research, 4th ed., pp. 76-79

5.4 EVALUATING SOURCES FOR RELEVANCE AND RELIABILITY

When you start looking for sources, you'll find more than you can use, so you must quickly evaluate their usefulness; use two criteria: relevance and reliability.

5.4.1 Evaluating Sources for Relevance

If your source is a book, do this:

  • Skim its index for your key words, then skim the pages on which those words occur.
  • Skim the first and last paragraphs in chapters that use a lot of your key words.
  • Skim prologues, introductions, summary chapters, and so on.
  • Skim the last chapter, especially the first and last two or three pages.
  • If the source is a collection of articles, skim the editor’s introduction.
  • Check the bibliography for titles relevant to your topic.

If your source is an article, do this:

  • Read the abstract, if it has one.
  • Skim the introduction and conclusion, or if they are not marked by headings, skim the first six or seven paragraphs and the last four or five.
  • Skim for section headings, and read the first and last paragraphs of those sections.

If your source is online, do this:

  • If it looks like a printed article, follow the steps for a journal article.
  • Skim sections labeled “introduction,” “overview,” “summary,” or the like. If there are none, look for a link labeled “About the Site” or something similar.
  • If the site has a link labeled “Site Map” or “Index,” check it for your key words and skim the referenced pages.
  • If the site has a “search” resource, type in your key words.

This kind of speedy reading can guide your own writing and revision. If you do not structure your report so your readers can skim it quickly and see the outlines of your argument, your report has a problem, an issue we discuss in chapters 12 and 14.

5.4.2 Evaluating Sources for Reliability

You can't judge a source until you read it, but there are signs of its reliability:

1. Is the source published or posted online by a reputable press? Most university presses are reliable, especially if you recognize the name of the university. Some commercial presses are reliable in some fields, such as Norton in literature, Ablex in sciences, or West in law. Be skeptical of a commercial book that makes sensational claims, even if its author has a PhD after his name. Be especially careful about sources on hotly contested social issues such as stem-cell research, gun control, and global warming. Many books and articles are published by individuals or organizations driven by ideology. Libraries often include them for the sake of coverage, but don’t assume they are reliable.

2. Was the book or article peer-reviewed? Most reputable presses and journals ask experts to review a book or article before it is published; it is called “peer review.” Many essay collections, however, are reviewed only by the named editor(s). Few commercial magazines use peer review. If a publication hasn’t been peer-reviewed, be suspicious.

3. Is the author a reputable scholar? This is hard to answer if you are new to a field. Most publications cite an author’s academic credentials; you can find more with a search engine. Most established scholars are reliable, but be cautious if the topic is a contested social issue such as gun control or abortion. Even reputable scholars can have axes to grind, especially if their research is financially supported by a special interest group. Go online to check out anyone an author thanks for support, including foundations that supported her work.

4. If the source is available only online, is it sponsored by a reputable organization? A Web site is only as reliable as its sponsor. You can usually trust one sponsored and maintained by a reputable organization. But if the site has not been updated recently, it may have been abandoned and is no longer endorsed by its sponsor. Some sites supported by individuals are reliable; most are not. Do a Web search for the name of the sponsor to find out more about it.

5. Is the source current? You must use up-to-date sources, but what counts as current depends on the field. In computer science, a journal article can be out-of-date in months; in the social sciences, ten years pushes the limit. Publications have a longer life in the humanities: in philosophy, primary sources are current for centuries, secondary ones for decades. In general, a source that sets out a major position or theory that other researchers accept will stay current longer than those that respond to or develop it. Assume that most textbooks are not current (except, of course, this one).

If you don’t know how to gauge currency in your field, look at the dates of articles in the works cited of a new book or article: you can cite works as old as the older ones in that list (but perhaps not as old as the oldest). Try to find a standard edition of primary works such as novels, plays, letters, and so on (it is usually not the most recent). Be sure that you consult the most recent edition of a secondary or tertiary source (researchers often change their views, even rejecting ones they espoused in earlier editions).

6. If the source is a book, does it have notes and a bibliography? If not, be suspicious, because you have no way to follow up on anything the source claims.

7. If the source is a Web site, does it include bibliographical data? You cannot know how to judge the reliability of a site that does not indicate who sponsors and maintains it, who wrote what’s posted there, and when it was posted or last updated.

8. If the source is a Web site, does it approach its topic judiciously? Your readers are unlikely to trust a site that engages in heated advocacy, attacks those who disagree, makes wild claims, uses abusive language, or makes errors of spelling, punctuation, and grammar.

The following criteria are particularly important for advanced students:

9. If the source is a book, has it been well reviewed? Many fields have indexes to published reviews that tell you how others evaluate a source.

10. Has the source been frequently cited by others? You can roughly estimate how influential a source is by how often others cite it. To determine that, consult a citation index.
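The citation-counting idea behind point 10 can be illustrated with a toy tally (this is not a real citation index; all the work names and reference lists below are invented): count in how many reference lists a work appears across a small corpus.

```python
from collections import Counter

# Hypothetical reference lists drawn from four papers in a small corpus;
# all work names are invented.
reference_lists = [
    ["Smith 2010", "Jones 2015", "Lee 2018"],
    ["Smith 2010", "Lee 2018"],
    ["Smith 2010", "Patel 2020"],
    ["Jones 2015", "Smith 2010"],
]

# Crude influence estimate: in how many reference lists each work appears.
citations = Counter(ref for refs in reference_lists for ref in refs)

for work, count in citations.most_common(3):
    print(f"{work}: cited in {count} of {len(reference_lists)} papers")
# "Smith 2010" appears in every list, so it ranks first.
```

Real citation indexes (e.g., Web of Science or Google Scholar's "cited by" counts) do essentially this at scale, with the added hard problem of matching variant citation strings to the same work.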

  • Last Updated: Oct 17, 2023 3:09 PM
  • URL: https://libguides.princeton.edu/philosophy





Research Methods in Library and Information Science (Library and Information Science Text Series) 6th Edition


An essential resource for LIS master's and doctoral students, new LIS faculty, and academic librarians, this book provides expert guidance and practical examples based on current research about quantitative and qualitative research methods and design. Conducting research and successfully publishing the findings is a goal of many professionals and students in library and information science (LIS), and using the best methodology maximizes the likelihood of a successful outcome.

This outstanding book broadly covers the principles, data collection techniques, and analyses of quantitative and qualitative methods, as well as the advantages and limitations of each method for research design. It addresses the scientific method, sampling techniques, validity, reliability, and ethical concerns, along with additional topics such as experimental research design, ethnographic methods, and usability testing. The book presents comprehensive information in a logical, easy-to-follow format, covering topics such as research strategies for library and information science doctoral students; planning for research; defining the problem, forming a theory, and testing the theory; the scientific method of inquiry and data collection techniques; survey research methods and questionnaires; analyzing quantitative data; interview-based research; writing research proposals; and even time management skills.

LIS students and professionals can consult the text for instruction on conducting research using this array of tools, as well as for guidance in critically reading and evaluating research publications, proposals, and reports. The explanations and current research examples supplied by discipline experts offer advice and strategies for completing research projects, dissertations, and theses, as well as for writing grants, overcoming writer's block, collaborating with colleagues, and working with outside consultants. The answer to nearly any question posed by novice researchers is provided in this book.

  • ISBN-10 1440834784
  • ISBN-13 978-1440834783
  • Edition 6th
  • Publisher Libraries Unlimited
  • Publication date November 21, 2016
  • Part of series Library and Information Science Text
  • Language English
  • Print length 496 pages


About the Author

Lynn Silipigni Connaway , PhD, is a senior research scientist at OCLC Research where she leads user studies and the digital visitors and residents project. Marie L. Radford , PhD, MSLIS, is professor in the Department of Information and Library Studies and director of the PhD Program at the Rutgers School of Communication and Information, New Brunswick, NJ.

Product details

  • ASIN: 1440834784
  • Publisher: Libraries Unlimited; 6th edition (November 21, 2016)
  • Language: English
  • Paperback: 496 pages
  • ISBN-10: 1440834784
  • ISBN-13: 978-1440834783
  • Item Weight: 2.01 pounds
  • #253 in Reference (Books)
  • #257 in General Library & Information Sciences
  • #352 in Research Reference Books

Lynn Silipigni Connaway

Lynn Silipigni Connaway, Ph.D., worked in libraries, was on the faculty of library and information science programs, and was the Vice-President of Research and Library Services at an electronic book provider. She currently is a research scientist at OCLC. She has been studying bibliographic issues related to how people look for and get their information and mining library bibliographic and use data to facilitate decision making.



Towards Automated Analysis of Research Methods in Library and Information Science


Handling Editor: Ludo Waltman


Ziqi Zhang , Winnie Tam , Andrew Cox; Towards automated analysis of research methods in library and information science. Quantitative Science Studies 2021; 2 (2): 698–732. doi: https://doi.org/10.1162/qss_a_00123


Previous studies of research methods in Library and Information Science (LIS) lack consensus in how to define or classify research methods, and there have been no studies on automated recognition of research methods in the scientific literature of this field. This work begins to fill these gaps by studying how the scope of “research methods” in LIS has evolved, and the challenges in automatically identifying the usage of research methods in LIS literature. We collected 2,599 research articles from three LIS journals. Using a combination of content analysis and text mining methods, a sample of this collection is coded into 29 different concepts of research methods and is then used to test a rule-based automated method for identifying research methods reported in the scientific literature. We show that the LIS field is characterized by the use of an increasingly diverse range of methods, many of which originate outside the conventional boundaries of LIS. This implies increasing complexity in research methodology and suggests the need for a new approach towards classifying LIS research methods to capture the complex structure and relationships between different aspects of methods. Our automated method is the first of its kind in LIS, and sets an important reference for future research.

1. INTRODUCTION
Research methods are one of the defining intellectual characteristics of an academic discipline (Whitley, 2000). Paradigmatic fields use a settled range of methods; softer disciplines are marked by greater variation, more interdisciplinary borrowing, and novelty. In trying to understand our own field of Library and Information Science (LIS) better, a grasp of the changing pattern of methods can tell us much about the character and directions of the subject. LIS employs an increasingly diverse range of research methods as the discipline becomes ever more entwined with other subjects, such as health informatics (e.g., Lustria, Kazmer et al., 2010) and computer science (e.g., Chen, Liu, & Ho, 2013). Motivated by the wish to understand these patterns, a number of studies have investigated the usage and evolution of research methods in LIS. Many of these (Bernhard, 1993; Blake, 1994; Chu, 2015; Järvelin & Vakkari, 1990) aim to develop a classification scheme of commonly used research methods in LIS, whereas others (Hider & Pymm, 2008; VanScoy & Fontana, 2016) focus on comparing the usage of certain methods (e.g., qualitative vs. quantitative) or on recent trends in their usage (Fidel, 2008; Grankikov, Hong et al., 2020).

However, we identify several gaps in the literature on research methods in LIS. First, there is an increasing need for an updated view of how the scope of “research methods” in LIS has evolved. On the one hand, as the literature review will show, despite continuous interest in this research area there remains a lack of consensus on the terminology and the classification of research methods (Ferran-Ferrer, Guallar et al., 2017; Risso, 2016). Some (Hider & Pymm, 2008; Järvelin & Vakkari, 1990) classify methods from different angles that form a hierarchy, while others (Chu, 2015; Park, 2004) define a flat structure of methods. In reporting their methods, scholars also take different approaches: some define their work in terms of data collection methods, others through modes of analysis. This “lack of consensus” is therefore difficult to resolve; it reflects the fact that LIS is not a paradigmatic discipline in which there is agreement on how knowledge is built. Rather, the field sustains a number of incommensurable viewpoints about the definition of method.

On the other hand, as our results will show, the growth of artificial intelligence (AI) and Big Data research in the last decade has led to a significant increase in data-driven research published in LIS that reaches into these fast-growing disciplines. As a result, the conventional scope and definitions of LIS research methods struggle to accommodate this new work. For example, many of the articles published around AI and Big Data topics are difficult to fit into the categories of methods defined in Chu (2015).

The implication of this situation is that it becomes extremely challenging for researchers (particularly those new to LIS) to develop and maintain an informed view of the research methods used in the field. Second, there is an increasing need for automated methods that can help the analysis of research methods in LIS, as the numbers of publications and research methods both increase rapidly. However, we find no work in this direction in LIS to date. Although such work has been attempted in other disciplines, such as Computer Science (Augenstein, Das et al., 2017) and Biomedicine (Hirohata, Okazaki et al., 2008), there is nothing comparable in LIS. Studies in those fields have focused on automatically identifying the use of research methods and their parameters (e.g., data collected, experiment settings) from scientific literature, and have proved to be an important means for the effective archiving and timely summarizing of research. The need for providing structured access to the content of scientific literature is also articulated in Knoth and Herrmannova (2014)’s concept of “semantometrics.” We see a pressing need for similar research in LIS. However, due to the complexity of defining and agreeing on a classification of LIS research methods, we anticipate that the task of automated analysis will face many challenges. Therefore, a first step in this direction is to gain an in-depth understanding of these technical challenges.

We therefore address two research questions:

1. How has the scope of “research methods” in LIS evolved, compared to previous definitions of this subject?

2. To what extent can we automatically identify the usage of research methods in LIS literature, and what are the challenges?

We review existing definitions and the scope of “research methods” in LIS, and discuss their limitations in the context of the increasingly multidisciplinary nature and diversification of research methods used in this domain. Following on from this, we propose an updated classification of LIS research methods based on an analysis of the past 10 years’ publications from three primary journals in this field. Although this does not resolve many of the existing limitations in the definition and classification of LIS research methods, it reflects significant changes that deviate from previous findings and highlights issues to be addressed in future research in this direction. Second, we conduct the first study of automated methods for identifying research methods from LIS literature. To achieve this, we develop a data set of scientific publications human-labeled according to our new classification scheme, and a text mining method that automatically recognizes these labels. Our experiments reveal that, compared to other disciplines where automated classification of this kind is well established, the task in LIS is extremely challenging, and a significant amount of work remains to be done and coordinated by different parties to improve the performance of the automated method. We discuss these challenges and potential ways to address them to inform future research in this direction.

The remainder of this paper is structured as follows. We discuss related work in the next section, followed by a description of our method. We then present and discuss our results and the limitations of this study, with concluding remarks in the final section.

2. RELATED WORK
We discuss related work in two areas. First, we review studies of research methods in LIS. We do not cover research in similar directions within other disciplines, as research methods can differ significantly across different subject fields. Second, we discuss studies of automated methods for information extraction (IE) from scholarly data. We will review work conducted in other disciplines, particularly from Computer Science and Biomedicine, because significant progress has been made in these subject fields and we expect to learn from and generalize methods developed in these areas to LIS.

2.1. Studies of Research Methods in LIS

Chu (2015) surveyed pre-2013 studies of research methods in LIS and these have been summarized in Table 1 . To avoid repetition, we only present an overview of this survey and refer readers to her work for details. Järvelin and Vakkari (1990) conducted the first study on this topic and proposed a framework that contains “research strategies” (e.g., historical research, survey, qualitative strategy, evaluation, case or action research, and experiment) and “data collection methods” (e.g., questionnaire, interview, observation, thinking aloud, content analysis, and historical source analysis). This framework was widely adopted and revised in later studies. For example, Kumpulainen (1991) showed that 51% of studies belonged to “empirical research” where “interview and questionnaire” (combined) was the most popular data collection method, and 48% were nonempirical research and contained no identifiable methods of data collection. Bernhard (1993) defined 13 research methods in a flat structure. Some of these have a connection to the five research strategies by Järvelin and Vakkari (1990) (e.g., “experimental research” to “empirical research”), and others would have been categorized as “data collection methods” by Järvelin and Vakkari (e.g., “content analysis,” “bibliometrics,” and “historical research”). Other studies that proposed flat structures of method classification include Blake (1994) , who introduced a classification of 13 research methods largely resembling those in Bernhard (1993) , and Park (2004) , who identified 17 research methods when comparing research methods curricula in Korean and U.S. universities. The author identified new methods such as “focus group,” and “field study,” possibly indicating the changing scene in LIS. Hider and Pymm (2008) conducted an analysis that categorized articles from 20 LIS journals into the classification scheme defined by Järvelin and Vakkari (1990) . 
They showed that “survey” remained the predominant research strategy but there had been a notable increase in “experiment.” Fidel (2008) examined the use of “mixed methods” in LIS. She proposed a definition of “mixed methods” and distinguished it from other concepts that are often mislabeled as “mixed methods” in this field. Overall, only a very small percentage of LIS literature (5%) used “mixed methods” defined in this way. She also highlighted that in LIS, researchers often do not use the term mixed methods to describe their work.

Table 1. A summary of the literature on studies of research methods in LIS

Drawing conclusions from the literature, Chu (2015) highlighted several patterns in the studies of research methods in LIS. First, researchers in LIS are increasingly using more sophisticated methods and techniques instead of the commonly used survey or historical method of the past; methods such as experiments and modeling were on the rise. Second, there has been an increase in the use of qualitative approaches compared with the past, such as in the field of Information Retrieval. Building on this, Chu (2015) conducted a study of 1,162 research articles published from 2001 to 2010 in three major LIS journals, the largest collection spanning the longest time period among studies up to that point. She proposed a classification of 17 methods that largely echo those suggested before, with new additions such as “research journal/diary” and “webometrics” (e.g., link analysis, altmetrics). The study also showed that “content analysis,” “experiment,” and “theoretical approach” overtook “survey” and “historical method” to secure the dominant position among popular research methods used in LIS.

Since Chu (2015), a number of studies have been conducted on research methods in LIS, generally using a similar approach: research articles from major LIS journals are sampled and manually coded into a classification scheme, typically based on those proposed earlier. We summarize a number of these studies below. VanScoy and Fontana (2016) focused on reference and information service (RIS) literature, a subfield of LIS. Over 1,300 journal articles were first separated into research articles (i.e., empirical studies) and those that were not research. Research articles were then coded into 13 research methods that can be broadly divided into “qualitative,” “quantitative,” and “mixed” methods. Again, these are similar to the previous literature, but add new categories such as “narrative analysis” and “phenomenology.” The authors showed that most RIS research was quantitative, with “descriptive methods” based on survey questionnaires being the most common. Ferran-Ferrer et al. (2017) studied a collection of Spanish LIS journal articles and showed that 68% were empirical research. They developed a classification scheme that defines nine “research methods” and 13 “techniques”; categories that differ from previous studies include “log analysis” and “text interpretation,” although the exact difference between these concepts was not clearly explained. Togia and Malliari (2017) coded 440 LIS journal articles into a classification of 12 “research methods” similar to that in Chu (2015). However, in contrast to Chu, they showed that “survey” remained in the dominant position. Grankikov et al. (2020) studied the use of “mixed methods” in LIS literature. In contrast to Fidel (2008), they concluded that the use of “mixed methods” in LIS has been on the rise.

In addition to work within LIS, there has been broader work in the social sciences to produce typologies of methodology (e.g., Luff, Byatt, & Martin, 2015). This update to an earlier seminal work by Durrant (2004) introduces a rather comprehensive typology of methodology, differentiating research design, data collection, data quality, and data analysis, among other categories. While offering a detailed treatment of the gamut of social science methods, it does not represent the full range of methods in use in LIS, which draws on approaches beyond the social sciences. Thus, while contributing to the development of our own taxonomy, this work could serve only as one useful input.

In summary, the literature shows a continued interest in the study of research methods in LIS over the last two decades. However, there remains significant inconsistency in the interpretation of the terminologies used to describe research methods, and in the different categorizations of research methods. This “lack of consensus” was discussed in Risso (2016) and VanScoy and Fontana (2016). Risso (2016) highlighted that, first, studies of LIS research methods take different perspectives that can reflect research subareas within the field, the delimitation of the object of study, or different ways of considering and approaching it. Second, a severe problem is the lack of category definitions in the different research method taxonomies proposed in the literature; as a result, some categories are difficult to distinguish from each other. VanScoy and Fontana (2016) pointed out that existing methodology categorizations in LIS are difficult to use, due to “conflation of research design, data collection, and data analysis methods,” “ill-defined categories,” and “extremely broad ‘other’ categories.” For example, whereas Chu (2015) proposed a classification primarily based on data collection techniques, methods such as “bibliometrics” and “webometrics” are arguably not for data collection, and were classified as “techniques” or “methods” in Ferran-Ferrer et al. (2017). Conversely, “survey,” “interview,” and “observation” are mixed with “content analysis” and “experiment” and all considered “techniques” by Ferran-Ferrer et al. (2017). In terms of the disagreement on the use of hierarchy, many authors have adopted a simple flat structure (e.g., Bernhard, 1993; Chu, 2015; Hider & Pymm, 2008; Park, 2004), whereas some introduced simple but inconsistent hierarchies (e.g., “research strategies” vs. “data collection methods” in Järvelin and Vakkari (1990), and “qualitative” vs. “quantitative” in VanScoy and Fontana (2016)). While intuitively we might argue that a sensible approach is to split methods primarily into data collection and analysis methods, the examples above suggest that this view does not command consensus.

We argue that this issue reflects the ambiguity and complexity of the research methods used in LIS. The same data can be analyzed in different ways that reflect different conceptual stances. Adding to this is the lack of consistency among authors in reporting their methods: some define their work in terms of data collection methods, others through modes of analysis. For this reason, we argue that it is intrinsically difficult, if not impossible, to fully address these issues with a single, universally agreed definition and classification of LIS research methods. Nevertheless, it remains imperative for researchers to gain an updated view of the evolution and diversification of research methods in this field, and to appreciate the different viewpoints from which they can be structured.

2.2. Automated Information Extraction from Scholarly Data

IE is the task of automatically extracting structured information from unstructured or semistructured documents. There has been increasing research on IE from scientific literature (or “scholarly data”) in recent decades, due to the rapid growth of the literature and the pressing need to effectively index, retrieve, and analyze such data (Nasar, Jaffry, & Malik, 2018). Nasar et al. (2018) reviewed recent studies in this area and classified them into two groups: those that extract metadata about an article, and those that extract key insights from its content. Research in this area has been conducted predominantly in the computer science, medical, and biology domains. We present an overview of these studies below.

Metadata extraction may target “descriptive” metadata that are often used for discovery and indexing, such as title, author, keywords, and references; “structural” metadata that describe how an article is organized, such as section structures; and “administrative” metadata for resource management, such as file type and size. A significant number of studies in this area focus on extracting information from citations (Alam, Kumar et al., 2017), or on header-level metadata extraction from articles (Wang & Chai, 2018). The first targets information in individual bibliographic entries, such as author names (first name, last name, initial), the title of the article, journal name, and publisher. The second targets information usually found on the title page of an article, such as title, authors, affiliations, emails, publication venue, keywords, and abstract. Thanks to the continuous interest in the computer science, medical, and biology domains, several gold-standard data sets have been curated over the years to benchmark IE methods developed for such tasks. For example, the CORA data set (Seymore, McCallum, & Rosenfeld, 1999) was developed from a collection of computer science research articles, and consists of both a set for header metadata extraction (935 records) and a set for citation extraction (500 records). The FLUX-CiM data set (Cortez, da Silva et al., 2007) is a data set for citation extraction, containing over 2,000 bibliography entries for computer science and health science. The UMASS data set consists of bibliographic information from 5,000 research papers in four major domains: physics, mathematics, computer science, and quantitative biology.

According to Nasar et al. (2018) , key-insights extraction refers to the extraction of information within an article’s text content. The types of such information vary significantly. They are often ad hoc and there is no consensus on what should be extracted. However, typically, this can include mentions of objectives, hypothesis, method, related work, gaps in research, result, experiment, evaluation criteria, conclusion, limitations of the study, and future work. Augenstein et al. (2017) and QasemiZadeh and Schumann (2016) proposed more fine-grained information units for extraction, such as task (e.g., “machine learning,” “data mining”), process (i.e., solutions of a problem, such as algorithms, methods and tools), materials (i.e., resources studied in a paper or used to solve the problem, such as “data set,” “corpora”), technology, system, tool, language resources (specific to computational linguistics), model, and data item metadata. The sources of such information are generally considered to be either sentence- or phrase-level, where the first aims to identify sentences that may convey the information either explicitly or implicitly, and the second aims to identify phrases or words that explicitly describe the information (e.g., “CNN model” in “The paper proposes a novel CNN model that works effectively for text classification”).

Studies of key-insight extraction are also limited to computer science and medical domains. Due to the lack of consensus over the task definition, which is discussed above, different data sets have been created focusing on different tasks. Hirohata et al. (2008) created a data set of 51,000 abstracts of published biomedical research articles, and classified individual sentences into objective, method, result, conclusion, and none. Teufel and Moens (2002) coded 80 computational linguistics research articles into different textual zones that describe, for example, background, objective, method, and related work. Liakata, Saha et al. (2012) developed a corpus of 256 full biochemistry/chemistry articles which are coded at sentence-level for 11 categories, such as hypothesis, motivation, goal, and method. Dayrell, Candido et al. (2012) created a data set containing abstracts from Physical Sciences and Engineering and Life and Health Sciences (LH). Sentences were classified into categories such as background, method, and purpose. Ronzano and Saggion (2015) coded 40 articles of the computer imaging domain and classified sentences into similar categories. Gupta and Manning (2011) pioneered the study of phrase-level key-insight extraction. They created a data set of 474 abstracts of computational linguistics research papers, and annotated phrases that describe three general levels of concepts: “focus,” which describes an article’s main contribution; “technique,” which mentions a method or a tool used in an article; and “domain,” which explains the application domain of a paper, such as speech recognition. Augenstein et al. (2017) created a data set of computational linguistics research articles that focus on phrase-level insights. Phrases indicating a concept of task, process, and material are annotated within 500 article abstracts. QasemiZadeh and Schumann (2016) annotated “terms” in 300 abstracts of computational linguistics papers. 
The categories of these terms are more fine-grained, but some are generic, such as spatial regions, temporal entities, and numbers. Tateisi, Ohta et al. (2016) annotated a corpus of 400 computer science paper abstracts for relations, such as “apply-to” (e.g., a method applied to achieve a certain purpose) and “compare” (e.g., a method compared to a baseline).

In terms of techniques, the state of the art has mostly used either rule-based methods or machine learning. With rule-based methods, rules are coded into programs to capture recurring patterns in the data. For example, words such as “results,” “experiments,” and “evaluation” are often used to signal results in a research article, and phrases such as “we use” and “our method” are often used to describe methods (Hanyurwimfura, Bo et al., 2012; Houngbo & Mercer, 2012). With machine learning methods, a human-annotated data set containing a large number of examples is first created, and is subsequently used to “train” and “evaluate” machine learning algorithms (Hirohata et al., 2008; Ronzano & Saggion, 2015). Such algorithms consume low-level features, usually designed by domain experts (e.g., words, word sequences (n-grams), part of speech, word shape (capitalized, lower case, etc.), and word position), to discover patterns that help capture the type of information to be extracted.
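
To make the rule-based approach concrete, the following Python sketch labels sentences by matching cue-phrase patterns. The cue phrases here are illustrative choices of our own, not the actual rule sets of the cited systems, which are far larger.

```python
import re

# Illustrative cue phrases only; real systems use much larger rule sets.
METHOD_CUES = [r"\bwe use[d]?\b", r"\bour (?:method|approach)\b", r"\bwas conducted\b"]
RESULT_CUES = [r"\bresults?\b", r"\bexperiments?\b", r"\bevaluation\b"]

def label_sentence(sentence):
    """Assign a coarse rhetorical label by cue-phrase matching.

    Method cues are checked first, so a sentence containing both
    kinds of cue is labeled "method".
    """
    text = sentence.lower()
    if any(re.search(p, text) for p in METHOD_CUES):
        return "method"
    if any(re.search(p, text) for p in RESULT_CUES):
        return "result"
    return "other"
```

Such rules are cheap to write and transparent, but, as discussed below, they presuppose that methods are mentioned explicitly, which often fails in LIS texts.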

In summary, although there have been a plethora of studies on IE from scientific literature, these have been limited to a handful of disciplines, and none has studied the problem in LIS. Existing methods are not directly applicable to our problem for a number of reasons. First, previous work that extracts “research methods” only aims to identify the sentence or phrase that mentions a method (i.e., sentence- or phrase-level extraction), not to recognize the actual method used. This is different, because the same research method may be referred to in different ways (e.g., “questionnaire” and “survey” may indicate the same method). Previous work also expects the research methods to be explicitly mentioned, which is not always true in LIS. Studies that use, for example, “content analysis,” “ethnography,” or “webometrics” may not even use these terms to explain their methods: instead of stating “a content analysis approach is used,” many papers may only state “we analyzed and coded the transcripts….” For these reasons, a different approach needs to be taken, and a deeper understanding of these challenges, and of the extent to which they can be addressed, will add significant value for future research in this area.

3. METHODOLOGY
We describe our method in four parts. First, we explain our approach to data collection. Second, we describe an exploratory study of the data set, with the goal of developing a preliminary view of the possible research methods mentioned in our data set. Third, guided by the literature and informed by the exploratory analysis, we propose an updated research method classification scheme. Instead of attempting to address the intrinsically difficult problem of defining a classification hierarchy, our proposed scheme will adopt a flat structure. Our focus will be the change in the scope of research methods (e.g., where previous classification schemes need a revision). Finally, we describe how we develop the first automated method for the identification of research methods used in LIS studies.

3.1. Data Collection

Our data collection is subject to the following criteria. First, we select scientific publications from popular journals that are representative of LIS. Second, we use data that are machine readable, such as those in an XML format that preserves all the structural information of an article, rather than PDFs. This is because we want to process the text content of each article, and OCR from PDFs is known to introduce noise into the converted text (Nasar et al., 2018). Finally, we select data from the same or similar sources reported in the previous literature so that our findings can be directly compared to earlier studies. This may allow us to discover trends in LIS research methods.

Thus, building on Chu (2015), we selected research articles published between January 1, 2008 and December 31, 2018 in the Journal of Documentation (JDoc), Journal of the American Society for Information Science & Technology (JASIS&T; now Journal of the Association for Information Science and Technology), and Library & Information Science Research (LISR). These are among the core journals in LIS and were also used in Chu (2015), thus allowing a direct comparison against earlier findings. We used the CrossRef API to fetch the XML copies of these articles, and kept only articles that describe empirical research, identified by the category label assigned to each article by its journal. However, we noticed a significant degree of inter- and intrajournal inconsistency in how articles are labeled: each journal used between 14 and 19 categories, there appear to be repetitions within each journal's categories, and there is a lack of consensus on how the journals categorize their articles. We show details of this in the results section. For JDoc, we included 381 articles labeled as “research article” or “case study” (out of 508 articles published in this period). For JASIS&T, we included 1,837 “research articles” (out of 2,150). For LISR, we included 382 “research articles” and “full length articles (FLA).” This created a data set of 2,599 research articles, more than twice the number in Chu (2015).
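
The retrieval step can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it uses the public CrossRef REST API (`https://api.crossref.org/works/{doi}`), whose records carry a `link` array with full-text URLs when the publisher deposits them; the helper names are our own.

```python
import json
import urllib.request

CROSSREF_WORKS = "https://api.crossref.org/works/"

def xml_link_from_message(message):
    """Return a full-text XML URL from a CrossRef 'message' record, if any."""
    for link in message.get("link", []):
        if "xml" in link.get("content-type", ""):
            return link.get("URL")
    return None

def fetch_xml_link(doi):
    """Look up one DOI via the CrossRef REST API and return its XML link."""
    with urllib.request.urlopen(CROSSREF_WORKS + doi) as resp:
        record = json.load(resp)
    return xml_link_from_message(record["message"])
```

Note that the XML link only points at the full text; actually downloading it typically requires publisher entitlements (e.g., an institutional subscription).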

The XML versions of research articles allow programmatic access to the structured content of the articles, such as the title, authors, abstract, sections of main text, subsections, and paragraphs. We extract this structured content from each article for automated analysis later. However, it is worth noting that different publishers have adopted different XML templates to encode their data, which created obstacles during data processing.
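
As a sketch of what this programmatic access looks like, the snippet below parses a deliberately simplified, made-up schema; real publisher XML uses different element names, attributes, and namespaces per template, which is exactly the obstacle noted above.

```python
import xml.etree.ElementTree as ET

# Invented minimal schema for illustration; publisher DTDs differ.
SAMPLE = """<article>
  <title>Example study</title>
  <abstract>We survey method usage.</abstract>
  <body>
    <section><heading>Introduction</heading><p>Background text.</p></section>
    <section><heading>Methodology</heading><p>We coded 100 articles.</p></section>
  </body>
</article>"""

def extract_structure(xml_text):
    """Pull title, abstract, and (heading, text) pairs for each section."""
    root = ET.fromstring(xml_text)
    sections = [
        (sec.findtext("heading", default=""),
         " ".join(p.text or "" for p in sec.findall("p")))
        for sec in root.iter("section")
    ]
    return {"title": root.findtext("title", default=""),
            "abstract": root.findtext("abstract", default=""),
            "sections": sections}
```

In practice one such extraction routine is needed per publisher template, with the output normalized to a common structure like the dictionary above.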

3.2. Exploratory Analysis

To support our development of the classification scheme, we begin by undertaking an exploratory analysis of our data set to gain a preliminary understanding of the scope of methods potentially in use. For this, we use a combination of clustering and terminology extraction methods. VOSviewer ( Van Eck & Waltman, 2010 ), a bibliometric software tool, is used to identify keywords from the publication data sets and their co-occurrence network within the three journals. Our approach consisted of three steps detailed below.

First, for each article, we extract the text content that most likely contains descriptions of its methodology (the “methodology text”). For this, we combine the text content of the title, keywords, and abstract, together with the methodology section (if available) of each article. To extract the methodology section, we use a rule-based method that automatically identifies the section describing the research methods (the “methodology section”): we extract all level 1 sections in an article together with their section titles, then match a list of keywords against these titles. If a section title contains any one of these keywords, we consider that section to be the methodology section. The keywords are “methodology, development, method, procedure, design, study description, data analysis/study, the model.” Although these keywords are frequently seen in methodology section titles, we do not expect them to identify all variations of such titles, nor can we expect every article to have a methodology section. However, we did not need to fully recover them, as long as we had a sufficiently large sample to inform our development of the classification scheme later on. This method identified methodology sections in 290 (out of 381), 1,283 (out of 1,837), and 346 (out of 383) of the JDoc, JASIS&T, and LISR articles, respectively. Still, there remains significant variation in how researchers name their methodology section; we show this in the results section. When the methodology section cannot be identified by our method, we use only the title, keywords, and abstract of the article. We apply this process to each article in each journal, creating three corpora.
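
The section-matching rule can be sketched directly. The keyword list is the one given in the text; the matching itself (lowercased substring search over level 1 section titles) is our reading of the rule, not the authors' exact implementation.

```python
# Keywords from the rule described in the text ("data analysis/study" is
# represented here by "data analysis" alone, an assumption on our part).
SECTION_KEYWORDS = ["methodology", "development", "method", "procedure",
                    "design", "study description", "data analysis", "the model"]

def find_methodology_section(section_titles):
    """Return the index of the first section title containing a keyword, else None."""
    for i, title in enumerate(section_titles):
        lowered = title.lower()
        if any(keyword in lowered for keyword in SECTION_KEYWORDS):
            return i
    return None
```

Because the match is a substring test, "methodology" is redundant with "method", but keeping both mirrors the list as reported.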

Second, we import each corpus into VOSviewer (version 1.6.14) and use its text-mining function to extract important terms and create clusters based on term co-occurrences. VOSviewer uses natural language processing algorithms to identify terms, involving steps such as copyright statement removal, sentence detection, part-of-speech tagging, noun phrase identification, and noun phrase unification. The extracted noun phrases are then treated as term candidates. Next, the number of articles in which a term occurs is counted (i.e., its document frequency, or DF). Binary counting is chosen to avoid the analysis being skewed by terms that are very frequent within single articles. We then select the top 60% relevant terms ranked by document frequency, and exclude those with a DF of less than 10. These terms are used to support the development of the classification scheme.
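
A simplified stand-in for this selection step (binary document frequency, a DF floor, and a top-share cutoff) might look like the following; VOSviewer's own relevance scoring is more involved, so this only mirrors the counting and thresholding logic.

```python
from collections import Counter

def select_terms(doc_terms, min_df=10, top_share=0.6):
    """Rank terms by binary document frequency, drop rare ones, keep the top share.

    doc_terms: one iterable of candidate terms per document; set() makes the
    count binary, so a term is counted at most once per document.
    """
    df = Counter(term for terms in doc_terms for term in set(terms))
    frequent = [(t, c) for t, c in df.most_common() if c >= min_df]
    cutoff = max(1, int(len(frequent) * top_share)) if frequent else 0
    return [t for t, _ in frequent[:cutoff]]
```

The defaults match the thresholds reported in the text (DF of at least 10, top 60%).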

To facilitate our coders in their task, the terms are further clustered into groups using the clustering function in VOSviewer. Briefly, the algorithm starts by creating a keyword network based on co-occurrence frequencies within the title, abstract, keyword list, and methodology section. It then uses a variant of the modularity function of Newman and Girvan (2004) and Newman (2004) to cluster the nodes in the network. Details of this algorithm can be found in Van Eck and Waltman (2014). We expect terms related to the same or similar research methods to form distinct clusters. Thus, by creating these clusters, we seek to gain some insight into the methods they may represent.

The term lists and their cluster memberships for the three journals are presented to the coders, who are asked to manually inspect them and consider them in their development of the classification scheme below.

3.3. Classification Scheme

Our development of the classification of research methods is based on a deductive approach informed by the previous literature and our exploratory analysis. A sample of around 110 articles (the “shared sample”) was randomly selected from each of the three journals to be coded by three domain experts. To define “research methods,” we asked all coders to create a flat classification of methods, primarily following the flat scheme proposed by Chu (2015). They could identify multiple methods for an article, and when this was the case, they were asked to identify the “main” (i.e., “first” as in Chu) method and other “secondary” methods (i.e., second, third, etc. in Chu). While Chu (2015) focused on data collection methods, we asked coders to consider both modes of analysis and data collection methods as valid candidates, as in Kim (1996). We did not ask coders to explicitly separate analysis from data collection because (as reflected in our literature review) there is disagreement in how different methods are classified from these angles.

Coders were asked to reuse the methods in Chu’s classification where possible. They were also asked to refer to the term lists extracted earlier, looking for terms that may support existing theory or indicate new methods not present in Chu’s classification. When no codes from Chu’s model could be used, they were asked to discuss and create appropriate new codes, informed in particular by the term lists. Once the codes were finalized, the coders split the remaining data equally for coding. An inter-annotator agreement (kappa statistic) of 86.7 was obtained on the shared sample when considering only the main method identified.
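For readers unfamiliar with the agreement statistic, the following is a minimal sketch of the pairwise (Cohen's) kappa on main-method labels. This is illustrative only: with three coders the study may well have used a multi-rater variant such as Fleiss' kappa, and the reported 86.7 is presumably a percentage.

```python
# Sketch of Cohen's kappa between two coders' main-method labels.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Observed agreement corrected for chance agreement. Assumes the two
    label lists are aligned by article and not in perfect trivial agreement
    (which would make the denominator zero)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)
```

For example, two coders agreeing on 3 of 4 articles with the label distributions below yield a kappa of 0.5, noticeably lower than the raw 75% agreement.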

One issue at the beginning of the coding process was the notable duplication and overlap among the methods reported in the existing literature, as well as those proposed by the coders. Using Chu’s scheme as an example, ethnography often involves participant observation, whereas bibliometrics may use methods such as link analysis (as part of webometrics). Another issue was the confusion of “topic” and “method.” For example, an article could clearly discuss a bibliometrics study, yet it was debatable whether it used a “bibliometrics” method. To resolve these issues, coders were asked to follow two principles. The first was to distinguish the goal of an article from the means implemented to achieve it. The second was to treat the main method as the one that generally takes up the larger part of the text. Examples will be provided later in the results section.

During the coding process, coders were also asked to document the keywords that they found to be indicative of each research method. For example, “content analysis” and “inter-coder/rater reliability” are often seen in articles that use the “content analysis” method, whereas “survey,” “Likert,” “sampling,” and “response rate” are often seen in articles that use “questionnaire.” Note, however, that it is not possible to create an exhaustive vocabulary for all research methods. Many keywords can be ambiguous, and some research methods may have only a very limited set of keywords. Nevertheless, these keywords form an important resource for the automated methods proposed below. Our proposed method classification contains 29 methods. These, together with their associated keywords, are shown and discussed later in the results section.

3.4. Information Extraction of Research Methods

In this section, our goal is to develop automated IE methods that can determine the type of research method(s) used by a research article. As discussed before, this differs from the large number of studies on key-insights extraction already conducted in other disciplines. First, previous studies aim to classify text segments (e.g., sentences, phrases) within a research article into broad categories including “methods,” without identifying what the methods are. As we have argued, these are two different tasks. Second, compared to the types of key insights targeted for extraction, our study tackles a significantly larger number of fine-grained categories (29 research methods). This implies that our task is much more challenging and that previous methods will not be directly transferable.

As our study is the first to tackle this task in LIS, we opt for a rule-based method for two reasons. First, compared to machine learning methods, rule-based methods have been found to offer better interpretability and flexibility when requirements are unclear (Chiticariu, Li, & Reiss, 2013). This is particularly important for studies in new domains. Second, despite increasing interest in machine learning-based methods, Nasar et al. (2018) showed that they do not have a clear advantage over rule-based methods. In addition, we focus on a rather narrow target: identifying the single main method used. Note that this does not imply an assumption that each article uses only one method; it is rather a built-in limitation of our IE method. The reasons, as we shall discuss in more detail later, are twofold. On the one hand, almost every article mentions multiple methods, but it is extremely difficult to determine automatically which are actually used to conduct the research and which are not. On the other hand, as per Chu (2015), articles that report using multiple methods remain a small fraction (e.g., 23% for JDoc, 13% for JASIS&T, and 18% for LISR in 2009–2010). Under these conditions, it is extremely easy for automated methods to make false positive extractions of multiple methods. Therefore, our aim here is to explore the feasibility and understand the challenges of achieving our goal, rather than to maximize the potential performance of the automated methods.

We used a smaller sample of 30 coded articles to develop the rule-based method, reserving the remaining 300 for evaluation later on. Generally, our method searches for the keywords (explained above) associated with each research method within restricted sections of an article. The method receiving the highest frequency is considered the main research method used in that study. As discussed previously, many of these keywords can be ambiguous, but we hypothesize that by restricting our search to specific contexts, such as the abstract or the methodology section, there will be a higher chance of recovering true positives. Figure 1 shows the overall workflow of our method, which is explained in detail below.

Overview of the IE method for research method extraction.


3.4.1. Text content extraction

In this step, we aim to extract the text content from the parts of an article that are most likely to mention the research methods used. We focus on three parts: the title of an article, its abstract, and the methodology section, if available. Titles and abstracts can be directly extracted from our data set following the XML structures. For methodology sections, we use the same method introduced before for identifying them.

3.4.2. Keywords/keyphrase matching

In this step, we aim to look up the keywords/keyphrases (referred to uniformly as “keywords” below) associated with each research method within the text elements identified above. For each research method and each associated keyword, we count its frequency within each of the identified text elements. Note that the inflectional forms of these keywords (e.g., plural forms) are also searched. We then sum the frequencies of all matched keywords for each research method within each text element to obtain a score for that research method within that text element. We denote this as freq(m, text_i), where m denotes one of the research methods and text_i denotes the text extracted from part i of the article, with i ∈ {title, abstract, methodsection}.
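The scoring function freq(m, text_i) can be sketched as follows. The method vocabulary here is a tiny illustrative subset of Table 3, and inflection handling is approximated with an optional "s"/"es" suffix; the actual vocabulary and matching rules in the study may differ.

```python
# Sketch of the keyword-frequency score freq(m, text_i) for one text element.
import re

METHOD_VOCAB = {  # assumed subset of the keyword table
    "questionnaire": ["survey", "likert", "sampling", "response rate"],
    "content analysis": ["content analysis", "intercoder reliability"],
}

def freq(method, text):
    """Sum of keyword match counts for `method` within one text element
    (title, abstract, or methodology section), tolerating simple plurals."""
    total = 0
    lowered = text.lower()
    for kw in METHOD_VOCAB[method]:
        pattern = r"\b" + re.escape(kw) + r"(?:e?s)?\b"
        total += len(re.findall(pattern, lowered))
    return total
```

So a sentence mentioning “surveys,” “response rate,” and “sampling” would contribute a score of 3 toward “questionnaire” for that text element.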

3.4.3. Match selection

In this step, we aim to determine the main research method used in an article based on the matches found above. Given the set of matched research methods for a particular type of text element, that is, a set {freq(m_1, text_i), freq(m_2, text_i), …, freq(m_k, text_i)} where i is fixed, we simply choose the method with the highest frequency. As an example, if “content analysis” and “interview” have frequencies of 5 and 3, respectively, in the abstract of an article, we select “content analysis” as the method detected from the abstract of that paper. Next, we select the research method based on the following priority: title > abstract > methodology section. In other words, if a research method is found in the title, abstract, and methodology section of an article, we choose only the one found in the title. Following the example above, if “content analysis” is the most frequent method based on the abstract of an article and “questionnaire” is the one selected for its methodology section, we choose “content analysis” as the research method used by the study. If none of the research methods are found in any of the three text elements, we consider the article to be “theoretical.” If multiple methods tie under this procedure, the one appearing earliest in the text is chosen as the main method.
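The selection logic reduces to a short priority loop. The sketch below is an assumption of how the step could be implemented; the earliest-mention tie-break is omitted for brevity (Python's `max` simply keeps the first maximal entry it encounters).

```python
# Sketch of match selection with the priority title > abstract > methodsection.
def select_main_method(scores):
    """scores: dict mapping element name ('title', 'abstract', 'methodsection')
    to a dict {method: frequency}. Returns the main method, or 'theoretical'
    when no method keyword matched in any element."""
    for element in ("title", "abstract", "methodsection"):
        elem_scores = {m: f for m, f in scores.get(element, {}).items() if f > 0}
        if elem_scores:
            # Highest-frequency method wins within the highest-priority element.
            return max(elem_scores, key=elem_scores.get)
    return "theoretical"
```

Following the running example, an empty title, an abstract scoring {“content analysis”: 5, “interview”: 3}, and a methodology section scoring {“questionnaire”: 7} yield “content analysis,” because the abstract outranks the methodology section.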

3.4.4. Evaluation

Given a particular type of research method in the data set, the number of research articles that reported using that method is the “total actual positives,” and the number predicted by the IE method is the “total predicted positives.” The intersection of the two is the “true positives.” Because the problem is cast as a classification task, and in line with work in this direction in other disciplines, we weight Precision and Recall equally in computing F1. We also compute the “micro” average of Precision, Recall, and F1 over the entire data set across all research methods, where the “true positives,” “total predicted positives,” and “total actual positives” are simply the sums of the corresponding values for each research method in the data set.
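The micro-averaged evaluation can be sketched directly from these definitions. This is an illustrative implementation under the assumption that gold and predicted main-method labels are aligned per article; the `methods` argument controls which labels count as positives.

```python
# Sketch of micro-averaged Precision, Recall, and F1 over all research methods.
def micro_prf(gold, predicted, methods):
    """gold, predicted: lists of main-method labels, aligned by article.
    Per-method true/predicted/actual positives are summed before computing
    the metrics, as described in the evaluation setup."""
    tp = pred_pos = actual_pos = 0
    for m in methods:
        tp += sum(g == m and p == m for g, p in zip(gold, predicted))
        pred_pos += sum(p == m for p in predicted)
        actual_pos += sum(g == m for g in gold)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / actual_pos if actual_pos else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Note that when every article receives exactly one prediction and every label is included in `methods`, micro Precision and Recall coincide; they diverge when some labels (e.g., “theoretical”) are excluded from the positive set.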

4.1. Data Collection

As mentioned previously, we notice a significant degree of inter- and intrajournal inconsistency in how different journals categorize their articles. We show the details in Table 2 .

Different categorizations of published articles by the three different journals

First, there is a lack of definitions for these categorization labels from official sources, and many of the labels are not self-explanatory. For example, it is unclear why fine-grained JASIS&T labels such as “advances in information science” and “AIS review” deserve to be separate categories, or what “technical paper” and “secondary article” entail in JDoc. For LISR, which uses mostly acronym codes to label its articles, we were unable to find a definition of these codes.

Second, different journals have used a different set of labels to categorize their articles. While the three journals appear to include some types that are the same, some of these are named in different ways (e.g., “opinion paper” in JASIS&T and “viewpoint” in JDoc). More noticeable is the lack of consensus in their categorization labels. For example, only JASIS&T has “brief communication,” only JDoc has “secondary article,” and only LISR has “non-article.”

A more troubling issue is the intrajournal inconsistency. Each journal has used a large set of labels, many of which appear to be redundant. For example, in JASIS&T, “opinion paper,” “opinion,” and “opinion piece” seem to refer to the same type, and “depth review” and “AIS review” seem to be part of “review.” In JDoc, “general review” and “book review” seem to be part of “review,” and “article” seems too broad a category. In LISR, it is unclear why “e-review” is needed in addition to “review-article.” Also, note that many categories contain only a handful of articles, an indication that those labels may no longer be used, or were even created in error.

4.2. Exploratory Analysis

Figures 2–4 visualize the clusters of methodology-related keywords found in the articles from each of the three journals. All three journals show a clear pattern of three large, well-separated clusters. For LISR, the three clusters emerge as follows: one (green) centers on “interview,” with keywords such as “interviewee,” “theme,” and “transcript”; one (red) centers on “questionnaire,” with keywords such as “survey,” “respondent,” and “scale”; and one (blue) contains miscellaneous keywords, many of which correlate weakly with studies of scientific literature (e.g., “author,” “discipline,” and “article”) or bibliometrics generally.

Cluster of terms extracted from the LISR corpus (top 454 terms ranked by frequency extracted from the entire corpus of 382 articles). Size of font indicates frequency of the keyword.


Cluster of terms extracted from the JDoc corpus (top 451 terms ranked by frequency extracted from the entire corpus of 381 articles). Size of font indicates frequency of the keyword.


Cluster of terms extracted from the JASIS&T corpus (top 2,027 terms ranked by frequency extracted from the entire corpus of 1,837 articles). Font size indicates frequency of the keyword.


For JDoc, the two clusters around “interview” (green) and “questionnaire” (blue) are clearly visible. In contrast to LISR, the third cluster (red) features keywords that are often indicative of statistical methods, algorithms, and use of experiments. Overall, the split of the clusters seems to indicate the separation of methods that are typically qualitative (green and blue) and quantitative (red).

The clusters from JASIS&T differ more from those of LISR and JDoc and also have clearer boundaries. One cluster (red) appears to represent methods based on “interview” and “survey”; one (green) features keywords indicative of bibliometrics studies; and one (blue) contains keywords often seen in studies using statistical methods, experiments, or algorithms. Comparing the three journals, we see a similar methodological focus in LISR and JDoc, but quite different patterns in JASIS&T. The latter appears to be more open to quantitative and data science research.

4.3. Classification Scheme

Table 3 displays our proposed method classification scheme, together with references to previous work where appropriate, and keywords that were indicative of the methods. Notice that some of the keywords are selected based on the clusters derived from the exploratory studies. Also, the keywords are by no means a comprehensive representation of the methods, but only serve as a starting point for this type of study. In the following we define some of the methods in detail and explain their connection to the literature.

The proposed research method classification scheme

Our study was able to reuse most of the codes from Chu (2015). We split Chu’s “ethnography/field study” into two categories: “ethnography/field study,” which refers to traditional ethnographic research (e.g., using participant observation in real-world settings), and “digital ethnography,” which refers to the use of ethnographic methods in the digital world, including work following Kozinets’ (2010) suggestions for “netnography” as an influential branch of this work.

The major change we introduced concerns the “experiment” category. Chu (2015) argued for a renewed perspective on “experiment,” in the sense that it refers to a broad range of studies where “new procedures (e.g., key-phrase extraction), algorithms (e.g., search result ranking), or systems (e.g., digital libraries)” are created and subsequently evaluated. This differs from the classic “experimental design” as per Campbell and Stanley (1966). However, we argue that this is an overgeneralization, as Chu showed that more than half of the articles from JASIS&T used this method. Such a broad category is less useful, as it hides the complex multidisciplinary nature of LIS. Therefore, in our classification, we use “experiment” to refer to the classic “experimental design” method and introduce a more fine-grained list of methods that would have been classified as “experiment” by Chu. These include “agent based modeling/simulation,” “classification,” “clustering,” “information extraction,” “IR related indexing/ranking/query methods,” and “topic modeling,” all of which focus on developing procedures or algorithms (rather than the simple application of such techniques for a different purpose) that are often subject to systematic evaluation; and “comparative evaluation,” which focuses on following scientific experimental protocols to systematically compare and evaluate a set of methods.

Further, we added methods that do not necessarily overlap with Chu’s classification. For example, “annotation” refers to studies that involve users annotating or coding certain content, with the coding frame or the coded content being the primary output of the study. “Document analysis” refers to studies that analyze a collection of documents (e.g., government policy papers) or media items (e.g., audio or video data) to discover patterns and insights. “Mixed methods” is added, as studies such as Grankikov et al. (2020) revealed an upward trend in the usage of this research method in LIS. Note that in this context, “mixed methods” follows Fidel’s (2008) definition, which refers to research that combines data collection in a particular sequence for some reason, rather than any research that happens to involve multiple forms of data. “Statistical methods” has a narrow scope encompassing studies of correlation between variables or hypothesis testing, as well as those that propose metrics to quantify certain problems. This excludes metrics specifically targeting the bibliometrics domain (e.g., the h-index), as the level of complexity and the extent of effort devoted to that area justify it being an independent umbrella term encompassing various statistical metrics. Statistical methods also exclude generic comparison based on descriptive statistics, which is very common (and thus would be overgeneralizing) in quantitative research; likewise, the majority of computational methods for classification, clustering, or regression are statistical in a more general sense. Finally, “user task based studies” refers to systematic methods that involve human users undertaking certain tasks following certain (often different) processes, with the goal of comparing their behaviors or evaluating the processes.

Revisiting the issue of duplication and overlap often seen in the scope of LIS research methods discussed before, we use examples to illustrate how our classification should be used to avoid this issue. In Table 4, articles by Zuccala, van Someren, and van Bellen (2014), Wallace, Gingras, and Duhon (2008), Denning, Soledad, and Ng (2015), and Solomon and Björk (2012) all study bibliometrics problems, but their main research methods are classified differently under our scheme. Zuccala et al. (2014) focus on developing a classifier to automatically categorize sentences in reviews by their scholarly credibility and writing style. The article studies a problem of a bibliometrics nature and uses human coders to annotate training data. However, its ultimate goal is to develop and evaluate a classifier, which is the focus of the majority of the text. Therefore, the main research method is considered to be “classification,” “annotation” may be considered a secondary research method, and “bibliometrics” is more appropriate as the topic of the study. Wallace et al. (2008) follow a similar pattern, where the content is dominated by technical details of how the “network analysis” method is constructed and applied to bibliometrics problems. Denning et al. (2015) describe a tool whose core method is formulating a statistical indicator, which the authors propose to measure book readability; thus its main method qualifies under “statistical methods.” Solomon and Björk (2012) use descriptive statistics to compare open access journals. By definition, we do not classify such an approach as “statistical methods.” However, it can be argued that the authors used certain metrics to quantify a specific bibliometrics problem; therefore, we label its main method as “bibliometrics.” As for our own article, we arguably consider both “content analysis” and “classification” as our main methods, and “annotation” as a secondary method because it serves the purposes of content analysis and of creating training data for classification. “Bibliometrics” is more appropriate as the topic rather than the method we use, because our work adapts generic methods to bibliometric problems.

Example articles and how their main research method will be coded under our scheme

Figure 5 compares the distribution of different research methods found in the samples of the three journals. We notice several patterns. First, compared to JDoc and LISR, work published in JASIS&T has a clear emphasis on a wider range of computational methods. This is consistent with findings from Chu (2015). Second, JASIS&T also has a substantial focus on bibliometrics research, which lacks representation in JDoc and LISR. Third, for JDoc and LISR, questionnaire and interview remain the dominant research methods. These findings resonate with those from our exploratory analysis. Fourth, for all three journals, a noticeable fraction of published work (between 10% and 18%) is of a theoretical nature, where no data collection or analysis methods are documented. Finally, we could not identify studies using “webometrics” as a method, although many may qualify under it as a topic; such studies often use other methods (e.g., content analysis of web collections, annotation of web content) to study a webometrics problem.

Distribution of research methods found in the samples of the three journals. The y-axis indicates percentages represented by a method within a specific journal collection.


4.4. Information Extraction of Research Methods

We evaluate our IE method using 300 articles from the coded sample data (disjoint from the smaller set used for developing the method), and present the Precision, Recall, and F1 scores below. As mentioned before, we only evaluate the main method extracted by the IE process, using Eqs. 1–3. We then show the common errors made by our method.

4.4.1. Overview of Precision, Recall, and F1

Table 5 shows the Precision, Recall, and F1 of our IE method obtained on the annotated samples from the three journals. Overall, the results show that the task is very challenging, as our method obtains rather poor results on most of the research methods. Across the different journals, and considering the size of the sample, our method performs consistently on “interview,” “questionnaire,” and “bibliometrics.” Given the nature of our method (i.e., keyword lookup), this suggests that terminology related to these research methods may be used more often in nonambiguous contexts. Our IE method achieves a microaverage F1 of 0.783 on JDoc, 0.811 on LISR, and 0.61 on JASIS&T. State-of-the-art methods on key-insights extraction generally achieve an F1 of between 0.03 (Lin, Ng et al., 2010) and 0.53 (Kovačević, Konjović et al., 2012) on tasks related to “research methods” at either sentence or phrase level. Note that these figures should not be compared directly, because the task we deal with is different: we aim to identify specific methods, whereas the previous studies only aim to determine whether a specific piece of text describes a research method or not.

Precision (P), Recall (R), and F1 on the three journals. “–” indicates that no articles are classified under that method by the coders, and that our method does not predict that method for any articles. For the absolute number of instances of each method, see Figure 5.

4.4.2. Impact of the article abstract

We conducted further analysis to investigate the quality of abstracts and its impact on our IE method. This includes three types of analysis. To begin with, we disabled the “methodology section” extraction component of our method and retested it on the same data set, excluding articles whose methods can only be identified from the methodology section. The results are shown in Table 6. On average, we obtained a noticeable improvement on the JDoc data set, but not on LISR or JASIS&T. Among the three journals, JDoc is the only one that enforces a structured abstract. Arguably, this ensures consistency and quality in the writing of abstracts, from which our IE method may have benefited.

Precision (P), Recall (R), and F1 on the three journals when the text from the methodology section (if available) is ignored. “–” indicates that no articles are classified under that method by the coders, and that our method does not predict that method for any articles. Bold indicates better results and underline worse results compared to Table 5. For the absolute number of instances of each method, see Figure 5.

To verify this, we conducted the second type of analysis. We asked coders to revisit the articles they coded and identify the percentage of articles whose main method they were unable to identify confidently without going to the full text. This provides an alternative, more direct view of the quality of abstracts from the three journals, without bias from the IE method. The figures are 5%, 6%, and 12% for JASIS&T, JDoc, and LISR, respectively. This shows that, to a human reader, both JDoc and JASIS&T abstracts are more explicit than LISR abstracts when it comes to explaining their methods, which may indicate better quality abstracts. To some extent, this is consistent with the pattern observed in the previous analysis. However, the quality of JASIS&T abstracts does not translate into better performance of our IE method when focusing only on the abstracts. This could be partially attributed to the wider diversity of methods in JASIS&T articles (Figure 5), as well as the implicitness in the description of many of those methods that deviate from LISR and JDoc. For example, none of the articles using “comparative evaluation” used the keywords shown in Table 3; instead, they used generic words that, if included, could have significantly increased false positives (e.g., “compare” and “evaluate” are typically used but are nondiscriminative for identifying studies that solely focus on comparative evaluations). Similarly, only one article using “user based task studies” used our proposed keywords. We return to this issue in later sections.

Our third type of analysis involves studying the association between the length of an abstract and its quality, and subsequently (and potentially) its impact on our IE method. We notice that the three journals have different requirements for abstract length: 150 words for LISR, 250 for JDoc, and 200 for JASIS&T. We do not hypothesize a correlation between an abstract’s length and its clarity (and hence its quality), as this can be argued from contradictory angles. On the one hand, one may argue that a shorter length forces authors to be more explicit about their methodology; on the other hand, one could argue that a shorter length may result in more ambiguity, as authors have little space to explain their approach clearly. Instead, we started by analyzing the distribution of abstract lengths in our data sets across the three journals. We wrote a program that counts the number of words in each abstract, where words are delimited by white space characters only. As Figure 6 shows, we made a surprising finding: a very large proportion of articles did not comply with the abstract length limit.
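The whitespace-delimited word count used for this analysis is simple to reproduce; the sketch below is our own minimal rendering of it, using Python's `str.split()`, which splits on any run of whitespace.

```python
# Sketch of the abstract word count: words are delimited by whitespace only.
def abstract_length(abstract):
    """Number of whitespace-delimited tokens in an abstract."""
    return len(abstract.split())
```

Note that this counting treats punctuation as part of the adjacent word, so different tokenization choices could shift the counts slightly, but not enough to change the compliance picture described here.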

Distribution of abstract length across the three different journals.

Distribution of abstract length across the three different journals.

Figure 6 suggests that at least 50% of articles in our JASIS&T and LISR data sets exceeded the abstract word limits. The situation for JDoc is not much better. Across all three journals, there are also very long abstracts that almost double the word limit, and there are noticeably many articles with very short abstracts, such as those containing fewer than 100 words: 1 for JDoc, 34 for LISR, and 14 for JASIS&T. Overall, we do not see significantly different patterns in the distributions across the three journals. We further manually inspected a sample of 20 articles from each journal to investigate whether there were any patterns in the publication years of the articles that exceeded the word limit, because we were uncertain whether the abstract word limit changed over the history of each journal. Again, we could not find any consistent patterns. For JDoc, the distribution is 2010 (3), 2011 (3), 2013 (4), 2014 (1), 2015 (2), 2016 (2), 2017 (2), and 2018 (3). For LISR, it is 2010 (5), 2011 (1), 2012 (4), 2013 (2), 2014 (1), 2015 (4), 2016 (2), and 2018 (1). For JASIS&T, it is 2010 (3), 2011 (4), 2012 (2), 2013 (1), 2014 (4), 2015 (2), 2016 (1), 2017 (2), and 2018 (1). Articles exceeding the abstract length limit can be found in every year in all three journals. For these reasons, we argue that there is no strong evidence of an association between abstract length and the performance of our IE method. However, the lack of compliance with the journal requirements is rather concerning. While the quality of abstracts may be a factor that affects our method, it is worth noting that our method for detecting the methodology section has its limitations. Some articles do not have an explicit “methodology” section; instead, they may describe different parts of their method in several top-level sections (e.g., see Saarikoski, Laurikkala et al., 2009). Some may have a “methodology” section that is a subsection of a top-level section (e.g., the method section is within the “Case Study” section in Freeburg, 2017). A manual inspection of 50 annotated samples revealed that this method failed to identify the methodology section in 10% of articles; in other words, the method has a 10% error rate. Arguably, with a more reliable method for finding methodology sections, or more generally content that describes methodology, our IE method could perform better.

4.4.3. Error analysis

To further understand the challenges of this task, we analyzed all errors made by our IE method and explain them below. Of the errors, 67% 9 are due to keywords being used in different contexts than expected. For example, we define “classification” to be methods that use computational approaches for classifying data. However, the keywords “classify” or “classification” are also used frequently in work that may use, for example, content analysis or document analysis to study library classification systems. A frequent error of this type is when a method is mentioned as future or previous work, such as in “In future studies, e.g., families’ focus-group interviews could bring new insights.” Some 10% of errors are due to ambiguity of the keywords themselves. For example, “bibliometrics” was wrongly identified as the research method from the sentence “This paper combines practices emerging in the arts and humanities with research evaluation from a scientometric perspective…”. A further 33% of errors are due to a lack of keywords, or to a method being mentioned only implicitly, such that it can only be inferred from the context. As examples, we discussed “comparative evaluation” and “user based task studies” before. More examples include “information extraction,” a very broad topic for which it is difficult to enumerate all possible keywords; and “document analysis,” which is particularly difficult to capture because researchers rarely use distinctive keywords to describe such studies. In all these cases, substantial inference with background knowledge is required.
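To make the first failure mode concrete, here is a minimal sketch of bare keyword lookup firing regardless of context. The keyword map and function are hypothetical illustrations, not the actual rules used in this study (our full keyword lists are in the supplementary material):

```python
import re

# Hypothetical keyword map for illustration only; the study's actual
# keyword lists per research method are given in the appendix.
METHOD_KEYWORDS = {
    "classification": ["classify", "classification"],
    "interview": ["interview", "interviews"],
}

def naive_method_lookup(text):
    """Return every method whose keywords appear anywhere in the text,
    with no awareness of the surrounding context."""
    lowered = text.lower()
    found = set()
    for method, keywords in METHOD_KEYWORDS.items():
        if any(re.search(r"\b" + kw + r"\b", lowered) for kw in keywords):
            found.add(method)
    return found

# A study of library classification *systems* (content analysis, not
# computational classification) still triggers the "classification" rule:
print(naive_method_lookup(
    "We performed a content analysis of library classification schemes."))

# A method mentioned only as future work still triggers its rule:
print(naive_method_lookup(
    "In future studies, families' focus-group interviews could bring new insights."))
```

Both calls fire on keywords whose context rules out the corresponding method, which is exactly the 67% error category described above.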

5. Discussion

We discuss the lessons learned from this work with respect to our research questions, as well as the limitations of our work.

5.1. Research Method Classification

Our first research question concerns the evolution of “research methods” in LIS. We summarize three key points below.

First, following a deductive coding process informed by the literature as well as our data analysis, we developed a classification scheme that substantially extends that of Chu (2015). In particular, we refined Chu’s “experiment” category to include a range of methods that are based on computational approaches and used in the creation of procedures, algorithms, or systems. These are often found in work belonging to the “new frontier” of LIS (i.e., work that crosses boundaries with other disciplines, such as information retrieval, data mining, human computer interaction, and information systems). We also added new categories that were not included in the classification schemes of earlier studies. Overall, we believe that our significantly wider classification scheme reflects the increasing trend of diversification and interdisciplinary research in LIS. This could be seen as a strength, in that LIS draws fruitfully on a wide range of fields and influences from the humanities, social science, and science. It does not suggest a field moving towards the mature position of paradigmatic consensus, but it could be seen to reflect a healthy dynamism. More troubling may be the extent to which novelty comes largely from computational methods, suggesting a discipline without a long history of methodological development of its own, and whose direction is subordinate to that of other fields.

Second, with this widening scope comes increasing complexity in defining “research methods.” While our proposed classification scheme remains a flat structure, as is the case for the majority of studies in this area, we acknowledge that the LIS community may benefit from a hierarchical classification that reflects different perspectives on research methodology. However, as we have discussed at length earlier, it has been difficult to achieve consensus, simply because researchers in different traditions view methodology differently and use terminology differently. Although it was not an aim of this study, we anticipate that this can be partially addressed by developing a framework for defining and classifying LIS research methods from multiple, complementary perspectives. For example, a study should have a topic (e.g., “bibliometrics” could be both a method and a topic), could use certain modes of analysis and data collection methods (resonating with the “research strategy” and “data collection method” model of Järvelin and Vakkari (1990)), and could adopt a certain methodological stance (e.g., mixed methods, multimethods, quantitative) based on the mode of analysis (resonating with that of Hider and Pymm (2008)).

However, there exist significant hurdles to achieving this goal. As suggested by Risso (2016), LIS needs to disambiguate and clearly define different categories of “methods” (e.g., to address issues such as “citation analysis” being treated as both a research strategy and a data collection method in Järvelin and Vakkari (1990)). Further, there is a need to regularly update the framework to accommodate the evolution of the LIS discipline (Ferran-Ferrer et al., 2017). For this, automated IE methods may be useful in coping with the growing amount of literature. Also, significant effort needs to be devoted to encouraging the adoption of such standards. Last but not least, researchers should be encouraged to share their coding frames and the data they coded as examples for future reference. Data sharing has been an obvious gap in LIS research on research methods, compared to other disciplines such as Computer Science and Biomedicine.

Third, there is a clear pattern of different methodological emphases in the articles published by the three journals. While JDoc and LISR appear to publish more work that uses “conventional” LIS research methods, JASIS&T appears to be more open to work that uses a diverse range of methods that are experimental in nature and more common in other disciplines. This pattern may reflect the different scopes of these journals. For example, LISR explicitly states that it “does not normally publish technical information science studies … or most bibliometric studies,” whereas JASIS&T “focuses on the production, …, use, and evaluation of information and on the tools and techniques associated with these processes.” However, JDoc’s scope description is less indicative of its methodological emphasis, as it states “… welcome submissions exploring topics where concepts and models in the library and information sciences overlap with those in cognate disciplines.” This difference in scope and aims had an impact on our exploratory analysis and, therefore, on our resulting classification scheme. However, this should not be considered a limitation of our approach. If an LIS journal expands its scope to cover such a diverse range of fields, then we argue there is a need to develop a more fine-grained classification that better reflects this trend.

5.2. Automated Extraction of Research Methods

Our IE method for detecting the research methods used in a study is the first in LIS. Similar to earlier studies on key-insight extraction from scientific literature, we found this task particularly challenging. Although our method is based on simple rules, we believe it is still representative of the state of the art. This is because, on the one hand, its average performance over all methods is comparable to figures previously reported for similar tasks, even if our task is arguably more difficult. On the other hand, research so far has not shown a clear advantage of more complex methods, such as machine learning, over rule-based ones. The typical errors we found with our method would be equally challenging for typical machine learning-based methods.

Overall, our method achieved reasonable performance on only a few methods (i.e., “interview,” “questionnaire,” and “bibliometrics”), whereas its performance on most methods is rather unsatisfactory. Compared to work in a similar direction from other disciplines, we argue that research on IE of research methods from the LIS literature needs to consider some unique challenges. The first is the unique requirement of the task. As we discussed before, existing IE methods in this area aim only to identify the sentence or phrase that mentions a method (i.e., sentence- or phrase-level extraction), not to recognize the actual method used. This is of limited use when our goal is to understand the actual method adopted by a study, which may mention other methods for purposes of comparison, discussion, or reference. This implies a formulation of the task beyond the “syntactic” level to the “semantic” level, where the automated IE method needs not only to identify mentions of methods in text, but also to understand the context in which they appear in order to derive their meaning (e.g., recall the examples shown in the error analysis section).

Adding to the above, the second challenge is the complexity in defining and classifying LIS “research methods,” as we have discussed in the previous section. The need to take a multiperspective view and to identify not only the main but also secondary methods further escalates the level of difficulty for IE. There is also a lack of standard terminology for describing LIS methods. For example, from our own process of eliciting research methods, we discovered methods that are difficult to identify by keywords, such as “mixed methods” and “document analysis.”

Finally, researchers may need to cope with varying degrees of quality in research article abstracts. This is particularly important because, as we have shown, our method can benefit from well-structured abstracts. In Computer Science for example, IE of research methods has mostly focused on abstracts ( Augenstein et al., 2017 ) because they are generally deemed to be of high quality and information rich. In the LIS domain, however, we have noticed issues such as how journal publishers differ in terms of enforcing structured abstracts, and that not every study would clearly describe their method in the abstracts ( Ferran-Ferrer et al., 2017 ).

All these challenges mean that feature engineering—a crucial step for IE of research methods from texts—will be very challenging in the LIS discipline. We discuss some possibilities that may partially address this in the following section.

5.3. Other Issues

During our data collection and analysis, we discovered issues with how journal publishers categorize their articles. We have shown an extensive degree of intra- and interjournal inconsistency, as well as a lack of guidance on how to interpret these categories. This undoubtedly created difficulties for our data collection process and potential uncertainties in the quality of our data set, and will remain an obstacle for future research in this area. We therefore urge the journal publishers to be more transparent about their article categorization system, and to work on improving the quality of their categorization. It might also be useful for publishers to offer common guidelines on describing methods in abstracts and to prompt peer reviewers to examine keywords and abstracts with this in mind.

Our further analysis of the abstract lengths showed a significant extent of noncompliance, as many articles (around, or even exceeding, 50%) are published with an abstract exceeding the word limit, and a small number of articles had a very short abstract. While we were unable to confirm the association between the length of the abstracts and the performance of our IE method, such inconsistency could arguably be considered as a quality issue for the journal.

5.4. Limitations of This Study

First, our proposed classification scheme remains a flat structure and, as we discussed above, may need to be further developed into a hierarchy to better reflect different perspectives on research methods. Some may also argue that our classification diverges from the core research methods used in LIS. Given the multidisciplinary nature of LIS, do we really need to integrate method classifications that conventionally belong to other disciplines? Would it be better to simply use the classification schemes of those disciplines when a study crosses into them? These are questions to which we do not have answers, but they deserve debate given the multidisciplinary trend in LIS.

Second, our automated IE method for extracting research methods has considerable room for improvement. Similar to previous work on key-insight extraction, we have taken a classification-based approach. Our method is based on keyword lookup, which, as we have discussed, is prone to ambiguity arising from both context and terminology. As a result, its performance is still unsatisfactory. We envisage an alternative approach based on sentence- or paragraph-level classification that focuses only on sentences or paragraphs from certain areas of a paper, such as the abstract or the methodology section, when available. The idea is that sentences or paragraphs from such content may describe the method used and, compared to simple keyword lookup, provide additional context for interpretation. However, this creates a significant challenge for data annotation, because machine learning methods require a large number of examples (training data) to learn from, and for this particular task there will be a very large number of categories that need examples. We therefore urge researchers in LIS to make a collective effort towards data annotation, sharing, and reuse.
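The envisaged sentence-level alternative can be sketched as follows. This is a rough illustration under stated assumptions: the sentence splitter is crude, and `classify_sentence` is a stand-in stub where a trained text classifier would go, not a component of our actual method:

```python
from collections import Counter
import re

def split_sentences(text):
    """Crude sentence splitter; a real pipeline would use an NLP library."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def classify_article(abstract, methodology, classify_sentence):
    """Aggregate per-sentence method labels into one article-level label.

    `classify_sentence` is a pluggable function returning a method label
    or None; in practice it would be a trained sentence classifier. Only
    text from the abstract and (if present) the methodology section is
    considered, so each labeled sentence carries method-bearing context.
    """
    votes = Counter()
    for sentence in split_sentences(abstract) + split_sentences(methodology or ""):
        label = classify_sentence(sentence)
        if label is not None:
            votes[label] += 1
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical stub standing in for a trained model:
def stub_classifier(sentence):
    return "interview" if "interview" in sentence.lower() else None

label = classify_article(
    "We report a study of reference services. Semi-structured interviews were held.",
    "Interviews with 20 librarians were transcribed and coded.",
    stub_classifier,
)
print(label)  # interview
```

Majority voting over section-restricted sentences is one simple aggregation choice; the open problem noted above is obtaining enough labeled sentences per method category to train the real classifier.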

Also, our IE method only targets a single, main research method from each article. Detecting multiple research methods may be necessary but will be even more challenging, as features that are usually effective for detecting single methods (e.g., frequency) will be unreliable, and it requires a more advanced level of “comprehension” by the automated method. In addition, existing IE methods only identify the research methods themselves but overlook other parameters of the methods that may also be very interesting. For example, new researchers to LIS may want to know what a reasonable sample size is when a questionnaire is used, whether the sample size has an impact on citation statistics, or what methods are often “mixed” in a mixed method research. Addressing these issues will be beneficial to the LIS research community, but remains a significant challenge to be tackled in the future.

Finally, our work has focused on the LIS discipline. Although this offers unique value compared to the existing work on IE of research methods, which predominantly covers Computer Science and Biomedicine, the question remains as to how the method can generalize to other social science disciplines or the humanities. For example, our study shows that among the three journals, between 13% and 21% of articles are theoretical studies ( Figure 5 ). However, methods commonly used in the humanities (e.g., hermeneutics) would not be described in the same manner as empirical studies in LIS. This means that our IE method, if applied to such a discipline, could misclassify some studies that use traditional humanities methods as nonempirical, even though their authors might consider them to be empirical. Nevertheless, LIS is marked by considerable innovation in methods. This reflects wider pressures for more interdisciplinary studies to address complex social problems, as well as individual researchers’ motives to innovate in methods to achieve novelty. These factors are by no means confined to LIS, and we can anticipate that they will make the classification of methods in soft and applied disciplines equally challenging. Therefore, those working in other fields may learn something from this study.

6. Conclusion

The field of LIS is becoming increasingly interdisciplinary, as a growing number of publications draw on theory and methods from other subject areas. This leads to increasingly diverse research methods being reported in this field. A deep understanding of these methods is of crucial interest to researchers, especially those who are new to the field. While there have been studies of research methods in LIS in the past, there is a lack of consensus on the classification and definition of research methods in LIS, and there have been no studies of automated analysis of research methods reported in the literature. The latter has been recognized as being of paramount importance, and has attracted significant effort in fields that have witnessed substantial growth of scientific literature, a situation that LIS is also undergoing.

Set in this context, this work analyzed a large collection of LIS literature published in three representative journals to develop a renewed perspective of research method classification in LIS, and to carry out an exploratory study into automated methods—to the best of our knowledge, the first of this nature in LIS—for analyzing the research methods reported in scientific publications. We discovered critical insights that are likely to impact the future studies of research methods in this field.

In terms of research method classification, we showed a widening scope of research methodology in LIS, as we see a substantial number of studies that cross disciplines such as information retrieval, data mining, human computer interaction, and information systems. The implications are twofold. First, conventional methodology classifications defined by previous work can be too broad, as certain methodological categories (e.g., “experiment”) would include a significant number of studies and are too generic to differentiate them. Second, there is increasing complexity in defining “research method,” which necessitates a hierarchically structured classification scheme that reflects different perspectives on research methodology (e.g., data collection method, analysis method, and methodological stance). We also showed that different journals appear to have a different methodological focus, with JASIS&T being the most open to studies that are more quantitative, or algorithm and experiment based.

In terms of the automated method for method analysis, we tackled the task of identifying the specific research methods used in a study, one that is novel compared to previous work in other fields. Our method is based on simple rule-based keyword lookup, and it worked well for a small number of research methods. However, overall, the task remains extremely challenging for the majority of research methods. The reasons are mainly due to language ambiguity, which in turn creates challenges in feature engineering. Our data are publicly available and should encourage further studies in this direction.

Further, our data collection process revealed data quality issues reflecting an extensive degree of intra- and interjournal inconsistency with regards to how journal publishers organize their articles when making their data available for research. This data quality issue can discourage interest and effort in studies of research methods in the LIS field. We therefore urge journal publishers to address these issues by making their article categorization system more transparent and consistent among themselves.

Our future work will focus on a number of directions. First, we aim to progress towards developing a hierarchical, structured method classification scheme reflecting different perspectives in LIS. This will address the limitations of our current, flat method classification scheme proposed in this work. Second, as discussed before, we aim to further develop our automated method by incorporating more complex features that may improve its accuracy and enabling it to capture other aspects of research methods, such as the data sets involved and their quantity.

Ziqi Zhang: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Visualization, Writing—original draft. Winnie Tam: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing—review & editing. Andrew Cox: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Writing—review & editing.

The authors have no competing interests.

No funding was received for this research.

The data are available at Zenodo ( https://doi.org/10.5281/zenodo.4486156 ).

https://www.crossref.org/services/metadata-delivery/ , last retrieved in March 2020.

Their plural forms are also considered.

https://www.vosviewer.com/ . Last accessed May 2020.

All available codes are defined at: https://www.elsevier.com/__data/assets/text_file/0005/275666/ja5_art550_dtd.txt . However, no explanation of these codes can be found. A search on certain Q&A platforms found “FLA” to be “Full Length Article.”

Only up to five examples are shown. For the full list of keywords, see supplementary material in the appendix.

Data can be downloaded at https://doi.org/10.5281/zenodo.4486156 .

Average P, R, and F1 are identical because we are evaluating micro-average over all classes. Also the method predicts only one class for each article; therefore in Eqs. 1 and 2 , #total predicted positives = #total actual positives = #articles in the collection.
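The point of this footnote can be checked with a small worked example (the labels below are invented for illustration): when each article receives exactly one predicted label and has exactly one gold label, the total predicted positives and total actual positives both equal the number of articles, so micro-averaged precision, recall, and F1 all reduce to plain accuracy:

```python
def micro_prf(gold, pred):
    """Micro-averaged P/R/F1 for single-label-per-article predictions."""
    # True positives: articles whose predicted method matches the gold method.
    tp = sum(g == p for g, p in zip(gold, pred))
    total_predicted = len(pred)  # one prediction per article
    total_actual = len(gold)     # one gold label per article
    precision = tp / total_predicted
    recall = tp / total_actual
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gold = ["interview", "questionnaire", "bibliometrics", "interview"]
pred = ["interview", "interview", "bibliometrics", "questionnaire"]
print(micro_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

With 2 of 4 articles correct, precision, recall, and F1 are all 0.5, as the footnote predicts.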

Examples: 10.1108/JD-10-2012-0138, 10.1108/00220410810912415, 10.1002/asi.21694.

More than one error category can be associated with each article.

Keywords associated with each research method


  • Online ISSN 2641-3337

A product of The MIT Press


  • Open access
  • Published: 13 May 2024

What are the strengths and limitations to utilising creative methods in public and patient involvement in health and social care research? A qualitative systematic review

  • Olivia R. Phillips,
  • Cerian Harries,
  • Jo Leonardi-Bee,
  • Holly Knight,
  • Lauren B. Sherar,
  • Veronica Varela-Mato &
  • Joanne R. Morling

Research Involvement and Engagement, volume 10, Article number: 48 (2024)


There is increasing interest in using patient and public involvement (PPI) in research to improve the quality of healthcare. Ordinarily, traditional methods have been used such as interviews or focus groups. However, these methods tend to engage a similar demographic of people. Thus, creative methods are being developed to involve patients for whom traditional methods are inaccessible or non-engaging.

To determine the strengths and limitations to using creative PPI methods in health and social care research.

Electronic searches were conducted over five databases on 14th April 2023 (Web of Science, PubMed, ASSIA, CINAHL, Cochrane Library). Studies that involved traditional, non-creative PPI methods were excluded. Creative PPI methods were used to engage with people as research advisors, rather than study participants. Only primary data published in English from 2009 were accepted. Title, abstract and full text screening was undertaken by two independent reviewers before inductive thematic analysis was used to generate themes.

Twelve papers met the inclusion criteria. The creative methods used included songs, poems, drawings, photograph elicitation, drama performance, visualisations, social media, photography, prototype development, cultural animation, card sorting and persona development. Analysis identified four limitations and five strengths of the creative approaches. Limitations included the time- and resource-intensive nature of creative PPI, the lack of generalisation to wider populations and ethical issues. External factors, such as the lack of infrastructure to support creative PPI, also affected their implementation. Strengths included the disruption of power hierarchies and the creation of a safe space for people to express mundane or “taboo” topics. Creative methods are also engaging, inclusive of people who struggle to participate in traditional PPI, and can be cost and time efficient.

‘Creative PPI’ is an umbrella term encapsulating many different methods of engagement and there are strengths and limitations to each. The choice of which should be determined by the aims and requirements of the research, as well as the characteristics of the PPI group and practical limitations. Creative PPI can be advantageous over more traditional methods, however a hybrid approach could be considered to reap the benefits of both. Creative PPI methods are not widely used; however, this could change over time as PPI becomes embedded even more into research.

Plain English Summary

It is important that patients and the public are included in the research process from initial brainstorming, through design, to delivery. This is known as public and patient involvement (PPI). Their input means that research closely aligns with their wants and needs. Traditionally, to get this input, interviews and group discussions are held, but this can exclude people who find these activities non-engaging or inaccessible, for example those with language challenges, learning disabilities or memory issues. Creative methods of PPI can overcome this. This is a broad term describing different (non-traditional) ways of engaging patients and the public in research, such as through the use of art, animation or performance. This review investigated the reasons why creative approaches to PPI could be difficult (limitations) or helpful (strengths) in health and social care research. After searching 5 online databases, 12 studies were included in the review. PPI groups included adults, children and people with language and memory impairments. Creative methods included songs, poems, drawings, the use of photos and drama, visualisations, Facebook, creating prototypes, personas and card sorting. Limitations included the time, cost and effort associated with creative methods, the lack of application to other populations, ethical issues and buy-in from the wider research community. Strengths included the feeling of equality between academics and the public, the creation of a safe space for people to express themselves, inclusivity, and the fact that creative PPI can be cost and time efficient. Overall, this review suggests that creative PPI is worthwhile; however, each method has its own strengths and limitations, and the choice between them will depend on the research project, PPI group characteristics and other practical limitations, such as time and financial constraints.

Peer Review reports

Introduction

Patient and public involvement (PPI) is the term used to describe the partnership between patients (including caregivers, potential patients, healthcare users etc.) or the public (community members with no known interest in the topic) and researchers. It describes research that is done “‘with’ or ‘by’ the public, rather than ‘to,’ ‘about’ or ‘for’ them” [ 1 ]. In 2009, it became a legislative requirement for certain health and social care organisations to include patients, families, carers and communities not only in the planning of health and social care services, but in their commissioning, delivery and evaluation too [ 2 ]. For example, funding applications for the National Institute of Health and Care Research (NIHR), a UK funding body, mandate a demonstration of how researchers plan to include patients/service users, the public and carers at each stage of the project [ 3 ]. However, this should not simply be a tokenistic, tick-box exercise. PPI should help formulate initial ideas and should be an instrumental, continuous part of the research process. Input from PPI can provide unique insights not yet considered and can ensure that research and health services are closely aligned to the needs and requirements of service users. PPI also generally makes research more relevant, with clearer outcomes and impacts [ 4 ]. Although this review refers to both patients and the public using the umbrella term ‘PPI’, it is important to acknowledge that these are two different groups with different motivations, needs and interests when it comes to health research and service delivery [ 5 ].

Despite continuing recognition of the need of PPI to improve quality of healthcare, researchers have also recognised that there is no ‘one size fits all’ method for involving patients [ 4 ]. Traditionally, PPI methods invite people to take part in interviews or focus groups to facilitate discussion, or surveys and questionnaires. However, these can sometimes be inaccessible or non-engaging for certain populations. For example, someone with communication difficulties may find it difficult to engage in focus groups or interviews. If individuals lack the appropriate skills to interact in these types of scenarios, they cannot take advantage of the participation opportunities it can provide [ 6 ]. Creative methods, however, aim to resolve these issues. These are a relatively new concept whereby researchers use creative methods (e.g., artwork, animations, Lego), to make PPI more accessible and engaging for those whose voices would otherwise go unheard. They ensure that all populations can engage in research, regardless of their background or skills. Seminal work has previously been conducted in this area, which brought to light the use of creative methodologies in research. Leavy (2008) [ 7 ] discussed how traditional interviews had limits on what could be expressed due to their sterile, jargon-filled and formulaic structure, read by only a few specialised academics. It was this that called for more creative approaches, which included narrative enquiry, fiction-based research, poetry, music, dance, art, theatre, film and visual art. These practices, which can be used in any stage of the research cycle, supported greater empathy, self-reflection and longer-lasting learning experiences compared to interviews [ 7 ]. They also pushed traditional academic boundaries, which made the research accessible not only to researchers, but the public too. 
Leavy explains that there are similarities between arts-based and scientific approaches: both attempt to investigate what it means to be human through exploration, and, used together, these complementary approaches can progress our understanding of the human experience [ 7 ]. Further, it is important to acknowledge the parallels and nuances between creative and inclusive methods of PPI. Although creative methods aim to be inclusive (this should underlie any PPI activity, whether creative or not), they do not incorporate all types of accessible, inclusive methodologies, e.g., using sign language for people with hearing impairments or audio recordings for people who cannot read. Given that there was not enough scope to include an evaluation of all possible inclusive methodologies, this review focuses on creative methods of PPI only.

We aimed to conduct a qualitative systematic review to highlight the strengths of creative PPI in health and social care research, as well as the limitations, which might act as a barrier to their implementation. A qualitative systematic review “brings together research on a topic, systematically searching for research evidence from primary qualitative studies and drawing the findings together” [ 8 ]. This review can then advise researchers of the best practices when designing PPI.

Public involvement

The PHIRST-LIGHT Public Advisory Group (PAG) consists of a team of experienced public contributors with a diverse range of characteristics from across the UK. The PAG was involved in the initial question setting and study design for this review.

Search strategy

For the purpose of this review, the JBI approach for conducting qualitative systematic reviews was followed [ 9 ]. The search terms were (“creativ*” OR “innovat*” OR “authentic” OR “original” OR “inclu*”) AND (“public and patient involvement” OR “patient and public involvement” OR “public and patient involvement and engagement” OR “patient and public involvement and engagement” OR “PPI” OR “PPIE” OR “co-produc*” OR “co-creat*” OR “co-design*” OR “cooperat*” OR “co-operat*”). This search string was modified according to the requirements of each database. Papers were filtered by title, abstract and keywords (see Additional file 1 for search strings). The databases searched included Web of Science (WoS), PubMed, ASSIA and CINAHL. The Cochrane Library was also searched to identify relevant reviews which could lead to the identification of primary research. The search was conducted on 14/04/23. As our aim was to report on the use of creative PPI in research, rather than more generic public engagement, we used electronic databases of scholarly peer-reviewed literature, which represent a wide range of recognised databases. These identified studies published in general international journals (WoS, PubMed), those in social sciences journals (ASSIA), those in nursing and allied health journals (CINAHL), and trials of interventions (Cochrane Library).
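To make the construction of this Boolean query concrete, the two OR-groups above can be assembled programmatically before being adapted to each database's syntax. This is an illustrative sketch only: the helper `or_group` is ours, not part of the study's methods, although the term lists are copied verbatim from the search strategy above.

```python
# Illustrative sketch of assembling the review's Boolean search string.
# Term lists are taken from the Methods; the helper function is hypothetical.

creativity_terms = [
    "creativ*", "innovat*", "authentic", "original", "inclu*",
]
ppi_terms = [
    "public and patient involvement",
    "patient and public involvement",
    "public and patient involvement and engagement",
    "patient and public involvement and engagement",
    "PPI", "PPIE",
    "co-produc*", "co-creat*", "co-design*",
    "cooperat*", "co-operat*",
]

def or_group(terms):
    """Quote each term, join with OR, and wrap the group in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Combine the two groups with AND, as in the reported search strategy.
search_string = or_group(creativity_terms) + " AND " + or_group(ppi_terms)
print(search_string)
```

In practice each database (Web of Science, PubMed, ASSIA, CINAHL) requires its own field tags and truncation syntax, so a string like this is a starting point that must be modified per database, as the authors note.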

Inclusion criteria

Only full-text, English-language, primary research papers from 2009 to 2023 were included. This timeframe was chosen because in 2009 the Health and Social Reform Act made it mandatory for certain health and social care organisations to involve the public and patients in planning, delivering and evaluating services [2]. Only creative methods of PPI were accepted, rather than traditional methods such as interviews or focus groups. For the purposes of this paper, creative PPI included creative art or arts-based approaches (e.g., stories, songs, drama, drawing, painting, poetry, photography) used to enhance engagement. Studies had to relate to health and social care, and the creative PPI had to be used to engage with people as research advisors, not as study participants. Meta-analyses, conference abstracts, book chapters, commentaries and reviews were excluded. There were no limits concerning study location or the demographic characteristics of the PPI groups. Only qualitative data were accepted.

Quality appraisal

Quality appraisal using the Critical Appraisal Skills Programme (CASP) checklist [ 10 ] was conducted by the primary authors (ORP and CH). This was done independently, and discrepancies were discussed and resolved. If a consensus could not be reached, a third independent reviewer was consulted (JRM). The full list of quality appraisal questions can be found in Additional file 2 .

Data extraction

ORP extracted the study characteristics and a subset of these were checked by CH. Discrepancies were discussed and amendments made. Extracted data included author, title, location, year of publication, year study was carried out, research question/aim, creative methods used, number of participants, mean age, gender, ethnicity of participants, setting, limitations and strengths of creative PPI and main findings.

Data analysis

The included studies were analysed using inductive thematic analysis [ 11 ], where themes were determined by the data. The familiarisation stage took place during full-text reading of the included articles. Anything identified as a strength or limitation to creative PPI methods was extracted verbatim as an initial code and inputted into the data extraction Excel sheet. Similar codes were sorted into broader themes, either under ‘strengths’ or ‘limitations’ and reviewed. Themes were then assigned a name according to the codes.
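The coding step described above, sorting verbatim codes into broader themes under 'strengths' or 'limitations', amounts to a simple grouping operation. The sketch below illustrates it with hypothetical example codes; the theme labels echo those reported later in the Results, and nothing here is taken from the study's actual data extraction sheet.

```python
from collections import defaultdict

# Hypothetical (code, category, theme) triples illustrating the inductive
# thematic analysis workflow; these are NOT the study's real codes.
initial_codes = [
    ("analysis was time consuming", "limitation", "time and resource intensity"),
    ("output was highly individualised", "limitation", "lack of generalisation"),
    ("participants wanted to be named", "limitation", "ethical issues"),
    ("methods felt empowering and democratic", "strength", "engaging and inclusive"),
    ("prototype ready for practice", "strength", "cost and time efficiency"),
]

# Group similar codes under their category and theme, mirroring the
# described move from initial codes to broader reviewed themes.
themes = defaultdict(lambda: defaultdict(list))
for code, category, theme in initial_codes:
    themes[category][theme].append(code)

for category, grouped in themes.items():
    print(category, dict(grouped))
```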

The search yielded 9978 titles across the five databases: Web of Science (1480 results), PubMed (94 results), ASSIA (2454 results), CINAHL (5948 results) and Cochrane Library (2 results), leaving 8553 unique studies after deduplication. ORP and CH independently screened titles and abstracts, excluding those that did not meet the criteria. After assessment, 12 studies were included (see Fig. 1).
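As a quick arithmetic check of the screening figures reported above (the variable names are ours, introduced only for this sketch):

```python
# Per-database hit counts as reported in the Results.
results = {
    "Web of Science": 1480,
    "PubMed": 94,
    "ASSIA": 2454,
    "CINAHL": 5948,
    "Cochrane Library": 2,
}

total = sum(results.values())        # titles retrieved before deduplication
unique = 8553                        # unique studies after deduplication
duplicates_removed = total - unique  # records discarded as duplicates
included = 12                        # studies retained after screening

print(total, duplicates_removed, included)  # → 9978 1425 12
```

The per-database counts do sum to the reported 9978, implying 1425 duplicate records were removed before title and abstract screening.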

figure 1

PRISMA flowchart of the study selection process

Study characteristics

The included studies were published between 2018 and 2022. Seven were conducted in the UK [12, 14, 15, 17, 18, 19, 23], two in Canada [21, 22], one in Australia [13], one in Norway [16] and one in Ireland [20]. The PPI activities took place across various settings, including a school [12], social club [12], hospital [17], university [22], theatre [19], hotel [20] and online [15, 21]; however, this information was omitted in five studies [13, 14, 16, 18, 23]. The number of people attending the PPI sessions ranged from 6 to 289, although the majority (ten studies) had fewer than 70 participants [13, 14, 16, 17, 18, 19, 20, 21, 22, 23]. Seven studies did not report the age or gender of the PPI groups. Of those that did, ages ranged from 8 to 76 and participants were mostly female. The ethnicities of the PPI group members were also rarely recorded (see Additional file 3 for the data extraction table).

Types of creative methods

The type of creative methods used to engage the PPI groups were varied. These included songs, poems, drawings, photograph elicitation, drama performance, visualisations, Facebook, photography, prototype development, cultural animation, card sorting and creating personas (see Table  1 ). These were sometimes accompanied by traditional methods of PPI such as interviews and focus group discussions.

The 12 included studies were all deemed to be of good methodological quality, with scores ranging from 6/10 to 10/10 on the CASP critical appraisal tool [10] (Table 2).

Thematic analysis

Analysis identified four limitations and five strengths of creative PPI (see Fig. 2). Limitations included the time and resource intensity of creative PPI methods, their lack of generalisability, ethical issues and external factors. Strengths included the disruption of power hierarchies, the engaging and inclusive nature of the methods and their long-term cost and time efficiency. Creative PPI methods also allowed mundane and “taboo” topics to be discussed within a safe space.

figure 2

Theme map of strengths and limitations

Limitations of creative PPI

Creative PPI methods are time and resource intensive

The time- and resource-intensive nature of creative PPI methods is a limitation, most notably for the persona-scenario methodology. Valaitis et al. [22] used 14 persona-scenario workshops with 70 participants to co-design a healthcare intervention aimed at promoting optimal aging in Canada. Using the persona method, pairs composed of patients, healthcare providers, community service providers and volunteers developed a fictional character whom they believed represented an ‘end-user’ of the intervention. Due to the depth and richness of the data produced, the authors reported that it was time-consuming to analyse. Further, they commented that the amount of information was difficult to disseminate to scientific leads and present at team meetings. Additionally, highly skilled facilitators were needed to ensure the production of high-quality data, probe for details and lead group discussion. The resource-intensive nature of creative co-production was also noted in a study using persona-scenarios and creative worksheets to develop a prototype decision support tool for individuals with malignant pleural effusion [17]. With approximately 50 people involved, this too was likely to yield a high volume of data to consider.

Preparing materials for populations who cannot engage in traditional methods of PPI was also time-consuming. Kearns et al. [18] developed a feedback questionnaire for people with aphasia to evaluate ICT-delivered rehabilitation. To ensure people could participate effectively, the resources used during the workshops, such as PowerPoint slides, online images and photographs, had to be aphasia-accessible, which was labour- and time-intensive. The authors warned that this time commitment should not be underestimated.

There are further practical limitations to implementing creative PPI, such as the cost of materials for activities and of hiring spaces for workshops. For example, the included studies utilised pens, paper, worksheets, laptops, arts and craft supplies and magazines, and took place in venues such as universities, a social club and a hotel. Further, although this is not exclusive to creative PPI but applies to most studies involving the public, a financial incentive was often offered for participation, as well as food, parking, transport and accommodation [21, 22].

Creative PPI lacks generalisation

Another barrier to the use of creative PPI methods in health and social care research was the individual nature of their output. Those who participate, usually small in number, produce unique creative outputs specific to their own experiences, opinions and location. Craven et al. [13], who used arts-based visualisations to develop a toolbox for adults with mental health difficulties, commented that “such an approach might still not be worthwhile”, as the visualisations were individualised and highly personal, indicating that the output may fail to meet the needs of its end-users. Further, these creative PPI groups were based in particular geographical regions, such as Stoke-on-Trent [19], Sheffield [23], South Wales [12] or Ireland [20], which limits the extent to which the findings can be applied to wider populations, even within the same area, owing to individual nuances. Similarly, the study by Galler et al. [16] is specific to the Norwegian context, and even then perhaps only to a sub-group of the Norwegian population, as the sample was of higher socioeconomic status.

However, Grindell et al. [17], who used persona-scenarios, creative worksheets and prototype development, pointed out that the purpose of this type of research is to improve a particular place, rather than to apply findings across other populations and locations. Individualised output may therefore only be a limitation for research seeking to conduct PPI on a large scale.

If, however, greater generalisation within PPI is deemed necessary, social media may offer a resolution. Fedorowicz et al. [15] used Facebook to gain feedback from the public on the use of video-recording methodology for an upcoming project. This had the benefit of including a more diverse range of people (289 joined the closed group), spread geographically around the UK, as well as seven people from overseas.

Creative PPI has ethical issues

As with other research, ethical issues must be taken into consideration. Due to the nature of creative approaches, and the personal effort put into them, people often want to be recognised for their work. However, this compromises principles heavily instilled in research, such as anonymity and confidentiality. With the aim of exploring issues related to health and well-being in a town in South Wales, Byrne et al. [12] asked year 4/5 and year 10 pupils to create poems, songs, drawings and photographs. Community members also created a performance, mainly of monologues, to explore how poverty and inequalities are dealt with. Byrne noted the risks of these arts-based approaches, namely the possibility of over-disclosure and consequent emotional distress, as well as people’s desire to be named for their work. On one hand, anonymity reduces the sense of ownership of the output, as it no longer portrays a particular individual’s lived experience. On the other hand, it could promote a more honest account of lived experience. Supporting this, Webber et al. [23], who used the persona method to co-design a back pain educational resource prototype, claimed that the anonymity provided by this creative technique allowed individuals to externalise and anonymise their own personal experience, thus creating a more authentic and genuine resource for future users. This implies that anonymity can be both a limitation and a strength here.

The use of creative PPI methods is impeded by external factors

Despite the above limitations influencing the implementation of creative PPI techniques, perhaps the most influential is that creative methodologies are simply not mainstream [19]. This could be linked to the issues above, such as time and resource intensity, lack of generalisability and ethical issues, but it is also likely to involve more systemic factors within the research community. Micsinszki et al. [21], who co-designed a hub for the health and well-being of vulnerable populations, commented that there is insufficient infrastructure to conduct meaningful co-design, as well as a dominant medical model. Through a more holistic lens, there are “sociopolitical environments that privilege individualism over collectivism, self-sufficiency over collaboration, and scientific expertise over other ways of knowing based on lived experience” [21]. This, it could be suggested, renders creative co-design methodologies, which are founded on collectivism, collaboration and imagination, an invalid technique in a research field heavily dominated by more scientific methods offering reproducibility, objectivity and reliability.

Although we acknowledge that creative PPI techniques are not always appropriate, it may be that their main limitation is the lack of awareness of these methods or lack of willingness to use them. Further, there is always the risk that PPI, despite being a mandatory part of research, is used in a tokenistic or tick-box fashion [ 20 ], without considering the contribution that meaningful PPI could make to enhancing the research. It may be that PPI, let alone creative PPI, is not at the forefront of researchers’ minds when planning research.

Strengths of creative PPI

Creative PPI disrupts power hierarchies

One of the main strengths of creative PPI techniques, cited most frequently in the included literature, was that they disrupt traditional power hierarchies [12, 13, 17, 19, 23]. For example, the use of theatre performance blurred the lines between professional and lay roles among the community and policy makers [12]. Individuals created a monologue to portray how poverty and inequality impact daily life and presented this to representatives of the National Assembly of Wales, Welsh Government, the Local Authority, Arts Council and Westminster. Byrne et al. [12] state that this medium allowed the community to engage with the people who make decisions about their lives in an environment of respect and understanding, where hierarchies are less visible than in other settings, e.g., political surgeries. Creative PPI methods have also removed traditional power hierarchies between researchers and adolescents. Cook et al. [13] used arts-based approaches to explore adolescents’ ideas about the “perfect” condom. They utilised the “Life Happens” resource, in which adolescents drew and then decorated a person with their thoughts about sexual relationships, not dissimilar to the persona-scenario method. This was combined with hypothetical scenarios about sexuality, followed by a condom-mapping exercise in which groups shared on large pieces of paper the characteristics that make a condom “perfect”. Cook et al. [13] noted that power imbalances usually make it difficult to elicit information from adolescents; however, these imbalances were reduced through the use of creative co-design techniques.

The same reduction in power hierarchies was noted by Grindell et al. [17], who used the persona-scenario method and creative worksheets with individuals with malignant pleural effusion, with the aim of developing a prototype decision support tool to help patients consider treatment options. Although this process involved a variety of stakeholders, such as patients, carers and healthcare professionals, creative co-design was cited as a mechanism that worked to reduce power imbalances, a limitation of more traditional research methods. Creative co-design blurred the boundaries between end-users and clinical staff and enabled the sharing of ideas from multiple, valuable perspectives, meaning the prototype was able to suit user needs whilst addressing clinical problems.

Similarly, a specific creative method named cultural animation was also cited as dissolving hierarchies and encouraging equal contributions from participants. Within this arts-based approach, Kelemen et al. [19] explored the concept of “good health” with individuals from Stoke-on-Trent. Members of the group created art installations using ribbons, buttons, cardboard and straws to depict their idea of a “healthy community”, accompanied by a poem. They also created a 3D Facebook page and produced another poem or song addressing the government to communicate their version of a “picture of health”. Public participants said that they found the process empowering, honest, democratic, valuable and practical.

This dissolving of hierarchies and levelling of power is beneficial as it increases the sense of ownership experienced by the creators of the output [12, 17, 23], which in turn has been suggested to improve its quality [23].

Creative PPI allows the unsayable to be said

Creative PPI fosters a safe space for mundane or taboo topics to be shared, which may be difficult to communicate using traditional methods of PPI. For example, the hypothetical nature of condom mapping and persona-scenarios meant that adolescents could discuss a personal topic without fear of discrimination, judgement or personal disclosure [13]. The safe space allowed a greater volume of ideas to be generated among peers than might otherwise have been possible. Similarly, Webber et al. [23], who used the persona method to co-design the prototype back pain educational resource, noted how this method creates anonymity whilst allowing people the opportunity to externalise personal experiences, thoughts and feelings. Other creative methods were also used, such as drawing, collaging, role play and creating mood boards, and a cardboard cube (labelled a “magic box”) was used as a physical representation of the final prototype. These creative methods levelled the playing field and made personal experiences accessible in a safe, open environment that fostered trust, as well as understanding from the researchers.

It is not only sensitive subjects that were made easier to articulate through creative PPI. The communication of mundane, everyday experiences, typically deemed ‘unsayable’, was also facilitated, specifically in the context of describing intangible aspects of everyday health and wellbeing [11]. Graphic designers can also be used to visually represent the outputs of creative PPI. These representations captured the movement and fluidity of people, as well as the relationships between them: things that cannot be spoken but can be depicted [21].

Creative PPI methods are inclusive

Another strength of creative PPI is that it is inclusive and accessible [17, 19, 21]. The safe space it fosters, together with the dismantling of hierarchies, welcomed people from a diverse range of backgrounds and provided equal opportunities [21], especially for those with communication or memory difficulties who might otherwise be excluded from PPI. Kelemen et al. [19], who used creative methods to explore health and well-being in Stoke-on-Trent, discussed how people from different backgrounds came together to connect, discuss and reach a consensus on a topic they all had in common and which evoked strong emotions. Individuals said that the techniques used “sets people to open up as they are not overwhelmed by words”. Similarly, creative activities such as the persona method have been reported to allow people to express themselves in an inclusive environment using a common language. Kearns et al. [18], who used aphasia-accessible materials to develop a questionnaire with aphasic individuals, described how participants felt comfortable contributing to workshops (although this material was time-consuming to make; see ‘Limitations of creative PPI’).

Despite the general inclusivity of creative PPI, it can also be exclusive, particularly when online mediums are used. Fedorowicz et al. [15] used Facebook to create a PPI group; although this may rectify earlier drawbacks concerning the lack of generalisation of creative methods (as Facebook can reach a greater number of people, globally), it excluded those who are not digitally active or who have limited internet access or knowledge of technology. Online methods have other issues too: maintaining the online group was challenging, and the volume of responses required researchers to interact outside of their working hours. Despite this, online methods like Facebook are very accessible for people who are physically disabled.

Creative PPI methods are engaging

The process of creative PPI is typically more engaging and produces more colourful data than traditional methods [13]. Individuals are permitted and encouraged to explore a creative self [19], which can lead to the exploration of new ideas and increased enjoyment of the process overall. This increased engagement is particularly beneficial for younger PPI groups. For example, to involve children in the development of healthy food products, Galler et al. [16] asked 9-12-year-olds to take photos of their food and present them to other children in a “show and tell” fashion. They then created a newspaper article describing a new healthy snack. In this creative focus group, children were given lab coats to reinforce their identity as inventors. Galler et al. [16] note that the methods were highly engaging and facilitated teamwork and group learning. This collaborative approach to problem-solving was also observed in adults who used personas and creative worksheets to develop the resource for lower back pain [23]. People living with dementia have also been reported to enjoy the creative and informal approach to idea generation [20].

The use of cultural animation allowed people to connect with each other in a way that traditional methods do not [19, 21]. These connections were held in place by boundary objects, such as ribbons, buttons, fabric and picture frames, which symbolised shared meaning and an exchange of knowledge and emotion. Asking groups to create an art installation using these objects further fostered teamwork and collaboration, at both an individual and a collective level. The exploration of a creative self increased energy levels and encouraged productive discussion and problem-solving [19]. Objects also encouraged a solution-focused approach and permitted people to think beyond their usual everyday scope [17], and they allowed facilitators to probe deeper into the greater meanings carried by an object, which acted as a metaphor [21].

From the researchers’ point of view, co-creative methods gave rise to ideas they might not initially have considered. Valaitis et al. [22] found that over 40% of the creative outputs were novel ideas brought to light by patients, healthcare and community care providers, community service providers and volunteers. One researcher commented, “It [the creative methods] took me on a journey, in a way that when we do other pieces of research it can feel disconnected” [23]. Another stated that they could not return to the way they used to do research, having learnt so much about their own health and community and how they are perceived [19]. This demonstrates that creative processes benefit not only the project outcomes and the PPI group, but also facilitators and researchers. However, although engaging, creative methods have been criticised for not demonstrating academic rigour [17]. Moreover, creative PPI may also exclude people who do not like or enjoy creative activities.

Creative PPI methods are cost and time efficient

Creative PPI workshops often produce output that is visible and tangible. This can save time and money in the long run, as the output is either ready to be implemented in a healthcare setting or a first iteration has already been developed, which may offset the time and costs of implementing creative PPI. For example, the prototype decision support tool for people with malignant pleural effusion was developed using personas and creative worksheets; the end result was two tangible prototypes to drive the initial idea forward as something to be used in practice [17]. The use of creative co-design in this case saved clinician time, as well as the time it would have taken to develop the product without the help of its end-users. In the development of this particular prototype, analysis was iterative and informed the next stage of development, which again saved time. The same applies to the feedback questionnaire for the assessment of ICT-delivered aphasia rehabilitation: the co-created questionnaire, designed with people with aphasia, was ready to be used in practice [18]. This suggests that to overcome time and resource barriers, researchers should aim for creative PPI that is engaging whilst also producing tangible output.

That usable products are generated during creative workshops signals to participating patients and public members that they have been listened to and that their thoughts and opinions have been acted upon [23]. For example, the development of the back pain resource based on patient experiences implies that their suggestions were valid and valuable. Further, those who participated in the cultural animation workshop reported that the process visualises change and that it already felt as though the process of change had started [19].

The most cost- and time-efficient method of creative PPI in this review was most likely the use of Facebook to gather feedback on project methodology [15]. Although there were drawbacks, researchers could involve more people from a range of geographical areas at little to no cost. Feedback was instantaneous and no training was required. From the perspective of the PPI group, members could interact as much or as little as they wished, with no time commitment.

This systematic review identified four limitations and five strengths to the use of creative PPI in health and social care research. Creative PPI is time and resource intensive, can raise ethical issues and lacks generalisability. It is also not accepted by the mainstream. These factors may act as barriers to the implementation of creative PPI. However, creative PPI disrupts traditional power hierarchies and creates a safe space for taboo or mundane topics. It is also engaging, inclusive and can be time and cost efficient in the long term.

Something that became apparent during data analysis was that these are not blanket strengths and limitations of creative PPI as a whole. The umbrella term ‘creative PPI’ is broad and encapsulates a wide range of activities, from music and poems to prototype development and persona-scenarios, to simpler activities such as the use of sticky notes and ordering cards. Many different activities can be deemed ‘creative’, and the strengths and limitations of one do not necessarily apply to another. For example, cultural animation takes greater effort to prepare than the use of sticky notes and sorting cards, and the use of Facebook is cheaper and wider-reaching than persona development. Researchers should use their discretion to weigh up the benefits and drawbacks of each method and choose a technique that suits the project. What is a limitation of creative PPI in one project may not be in another, and in some cases creative PPI may not be suitable at all.

Furthermore, the choice of creative PPI method also depends on the needs and characteristics of the PPI group. Children, adults and people living with dementia or language difficulties all have different engagement needs and capabilities. This indicates that creative PPI is not one size fits all and that the most appropriate method will change depending on the composition of the group. The choice of method will also be determined by the constraints of the research project, namely time, money and the research aim. For example, if there are time constraints, then a method which yields a lot of data and requires a lot of preparation may not be appropriate. If generalisation is important, then an online method is more suitable. Together this indicates that the choice of creative PPI method is highly individualised and dependent on multiple factors.

Although the limitations discussed in this review apply to creative PPI, they are not exclusive to it. Ethical issues are a consideration within PPI research generally, especially when working with more vulnerable populations, such as children or adults living with a disability. Traditional PPI methods can also lack generalisability, as people who volunteer to be part of such groups are more likely to be older, middle class and retired [24]. Most research is vulnerable to this type of bias; however, it is worth noting that generalisation is not always a goal, and research remains valid and meaningful in its absence. Although online methods may somewhat combat issues related to generalisability, they still exclude people who do not have access to the internet or technology, or who choose not to use it, implying that online PPI methods may not be wholly representative of the general population. That said, the accessibility of creative PPI techniques differs from person to person: for some, online mediums may be more accessible (for example, those with a physical disability); for others, face-to-face methods may be. To combat this, a range of methods should be implemented. Planning multiple focus groups and interviews for traditional PPI is also time and resource intensive; however, the extra resources required to make PPI creative may be even greater. The rich data provided may nonetheless be worth the preparation and analysis time, which will also depend on the number of participants and workshop sessions required. PPI in general, not just creative PPI, often requires the provision of financial incentives, refreshments, parking and accommodation, which increase costs. These, however, are imperative and non-negotiable, as they increase the accessibility of research, especially to minority and lower-income groups less likely to participate.
Adequate funding is also important for co-design studies where repeated engagement is required. One barrier to implementation, which appears to be exclusive to creative methods, however, is that creative methods are not mainstream. This cannot be said for traditional PPI as this is often a mandatory part of research applications.

Regarding the strengths of creative PPI, most appear to be exclusive to creative methodologies. These methods are inclusive by nature, as multiple approaches can be taken to evoke ideas from different populations, approaches that do not necessarily rely on verbal or written communication as interviews and focus groups do. Given the anonymity provided by some creative methods, such as personas, people may be more likely to discuss their personal experiences under the guise of a general end-user, which might be more difficult to maintain when an interviewer is asking an individual questions directly. Additionally, creative methods are by nature more engaging and interactive than traditional methods, although this is a generalisation and some people may find the question-and-answer or group discussion format more engaging. Creative methods have also been cited as eliminating the power imbalances that exist in traditional research [12, 13, 17, 19, 23], imbalances that exist between researchers or policy makers and adolescents, adults and the community. Lastly, although this may occur to a greater extent in creative methods like prototype development, it could be suggested that PPI in general, regardless of whether it is creative, is more time and cost efficient in the long term than not using any PPI to guide or refine the research process. It must be noted that these are observations based on the literature; to be certain these differences exist between creative and traditional methods of PPI, direct empirical evaluation of both should be conducted.

To the best of our knowledge, this is the first review to identify the strengths and limitations of creative PPI; however, similar literature has identified barriers and facilitators to PPI in general. In the context of clinical trials, recruitment difficulties were cited as a barrier, as was finding public contributors who were free during work/school hours. Trial managers reported finding group dynamics difficult to manage, and the academic environment made some public contributors feel nervous and lack the confidence to speak. Facilitators included the shared ownership of the research - something that has been identified in the current review too - as well as planning and the provision of knowledge, information and communication [ 25 ]. Other research on the barriers to meaningful PPI in trial oversight committees included trialist confusion or scepticism over the PPI role and the difficulty of finding PPI members who had a basic understanding of research [ 26 ], although it could be argued that such members are not representative of the average patient or public member. The formality of oversight meetings and the technical language used also acted as a barrier, which may imply that the informal nature of creative methods and their lack of dependency on literacy skills could overcome this. Further, a review of 42 reviews on PPI in health and social care identified financial compensation, resources, training and general support as necessary to conduct PPI, much as the resource intensiveness of creative PPI was identified as a limitation in the current review. Other barriers were identified too, such as recruitment and the representativeness of public contributors [ 27 ]. As in the current review, power imbalances were also noted, although these were included as both a barrier and a facilitator.
Collaboration seemed to diminish hierarchies, but not always: sometimes these imbalances remained between public contributors and healthcare staff, described as a ‘them and us’ culture [ 27 ]. Although these studies complement the findings of the current review, a direct comparison cannot be made as they do not concern creative methods. They do suggest, however, that some strengths and weaknesses are shared between creative and traditional methods of PPI.

Strengths and limitations of this review

Although a general definition of creative PPI exists, deciding exactly which activities qualified as such for this review was at our discretion. For example, we included sorting cards, the use of interactive whiteboards and sticky notes; other researchers may apply more or less stringent criteria. However, two reviewers were involved in this decision, which aids the reliability of the selection. Further, it may be that some of the strengths and limitations cannot fully be attributed to the creative nature of the PPI process but rather to its co-created nature; this is hard to disentangle, however, as the included papers involved both aspects.

During screening, it was often difficult to decide whether an article was utilising creative qualitative methodology or creative PPI, as this was rarely explicitly labelled, even though both approaches involved the public/patients refining a healthcare product/service. This implies that if this review were to be replicated, others may conduct it differently, which may call for greater standardisation in the reporting of the public’s involvement in research. For example, the NIHR outlines different approaches to PPI, namely “consultation”, “collaboration”, “co-production” and “user-controlled”, which each signify an increased level of public power and influence [ 28 ]. Papers with elements of PPI could use these labels to clarify the extent of public involvement, or even explicitly state that there was no PPI. Further, given our decision to include only scholarly peer-reviewed literature, it is possible that data were missed within the grey literature. Similarly, the literature search will not have identified all papers relating to different types of accessible inclusion; however, the intent of the review was to focus solely on those within the definition of creative.

This review fills a gap in the literature and helps circulate and promote the concept of creative PPI. Each stage of this review, namely screening and quality appraisal, was conducted by two independent reviewers. However, four full texts could not be accessed during the full text reading stage, meaning there are missing data that could have altered or contributed to the findings of this review.

Research recommendations

Given that creative PPI can require effort to prepare, perform and analyse, sufficient time and funding should be allocated in the research protocol to enable meaningful and continuous PPI. This is worthwhile, as PPI can significantly change the research output so that it aligns closely with the needs of the group it is intended to benefit. Researchers should also consider prototype development as a creative PPI activity, as this might reduce future time/resource constraints. Shifting from a top-down approach within research to a bottom-up one can be advantageous to all stakeholders and can help move creative PPI towards the mainstream. This, however, is the collective responsibility of funding bodies, universities and researchers, as well as the committees who approve research bids.

A few of the included studies used creative techniques alongside traditional methods, such as interviews. This hybrid approach could be adopted by researchers who are unfamiliar with creative techniques or by those who wish to reap the benefits of both. Often the characteristics of the PPI group, such as age, gender and ethnicity, were not reported; it would be useful to include such information to assess how representative the PPI group is of the population of interest.

Creative PPI is a relatively novel approach to engaging the public and patients in research, and it has both advantages and disadvantages compared to more traditional methods. There are many approaches to implementing creative PPI, and the choice of technique will be unique to each piece of research, depending on several factors, including the age and ability of the PPI group and the resource limitations of the project. Each method has benefits and drawbacks, which should be considered at the protocol-writing stage. However, given adequate funding, time and planning, creative PPI is a worthwhile and engaging method of generating ideas with the end-users of research - ideas which may not otherwise be generated using traditional methods.

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

CASP: Critical Appraisal Skills Programme

JBI: The Joanna Briggs Institute

NIHR: National Institute for Health and Care Research

PAG: Public Advisory Group

PPI: Public and Patient Involvement

WoS: Web of Science

National Institute for Health and Care Research. What Is Patient and Public Involvement and Public Engagement? https://www.spcr.nihr.ac.uk/PPI/what-is-patient-and-public-involvement-and-engagement Accessed 01 Sept 2023.

Department of Health. Personal and Public Involvement (PPI) https://www.health-ni.gov.uk/topics/safety-and-quality-standards/personal-and-public-involvement-ppi Accessed 01 Sept 2023.

National Institute for Health and Care Research. Policy Research Programme – Guidance for Stage 1 Applications https://www.nihr.ac.uk/documents/policy-research-programme-guidance-for-stage-1-applications-updated/26398 Accessed 01 Sept 2023.

Greenhalgh T, Hinton L, Finlay T, Macfarlane A, Fahy N, Clyde B, Chant A. Frameworks for supporting patient and public involvement in research: systematic review and co-design pilot. Health Expect. 2019. https://doi.org/10.1111/hex.12888

Street JM, Stafinski T, Lopes E, Menon D. Defining the role of the public in health technology assessment (HTA) and HTA-informed decision-making processes. Int J Technol Assess Health Care. 2020. https://doi.org/10.1017/S0266462320000094

Morrison C, Dearden A. Beyond tokenistic participation: using representational artefacts to enable meaningful public participation in health service design. Health Policy. 2013. https://doi.org/10.1016/j.healthpol.2013.05.008

Leavy P. Method meets art: arts-Based Research Practice. New York: Guilford; 2020.

Seers K. Qualitative systematic reviews: their importance for our understanding of research relevant to pain. Br J Pain. 2015. https://doi.org/10.1177/2049463714549777

Lockwood C, Porritt K, Munn Z, Rittenmeyer L, Salmond S, Bjerrum M, Loveday H, Carrier J, Stannard D. Chapter 2: Systematic reviews of qualitative evidence. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-03

CASP. CASP Checklists https://casp-uk.net/images/checklist/documents/CASP-Qualitative-Studies-Checklist/CASP-Qualitative-Checklist-2018_fillable_form.pdf (2022).

Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Res Psychol. 2006. https://doi.org/10.1191/1478088706qp063oa

Byrne E, Elliott E, Saltus R, Angharad J. The creative turn in evidence for public health: community and arts-based methodologies. J Public Health. 2018. https://doi.org/10.1093/pubmed/fdx151

Cook S, Grozdanovski L, Renda G, Santoso D, Gorkin R, Senior K. Can you design the perfect condom? Engaging young people to inform safe sexual health practice and innovation. Sex Educ. 2022. https://doi.org/10.1080/14681811.2021.1891040

Craven MP, Goodwin R, Rawsthorne M, Butler D, Waddingham P, Brown S, Jamieson M. Try to see it my way: exploring the co-design of visual presentations of wellbeing through a workshop process. Perspect Public Health. 2019. https://doi.org/10.1177/1757913919835231

Fedorowicz S, Riley V, Cowap L, Ellis NJ, Chambers R, Grogan S, Crone D, Cottrell E, Clark-Carter D, Roberts L, Gidlow CJ. Using social media for patient and public involvement and engagement in health research: the process and impact of a closed Facebook group. Health Expect. 2022. https://doi.org/10.1111/hex.13515

Galler M, Myhrer K, Ares G, Varela P. Listening to children voices in early stages of new product development through co-creation – creative focus group and online platform. Food Res Int. 2022. https://doi.org/10.1016/j.foodres.2022.111000

Grindell C, Tod A, Bec R, Wolstenholme D, Bhatnagar R, Sivakumar P, Morley A, Holme J, Lyons J, Ahmed M, Jackson S, Wallace D, Noorzad F, Kamalanathan M, Ahmed L, Evison M. Using creative co-design to develop a decision support tool for people with malignant pleural effusion. BMC Med Inf Decis Mak. 2020. https://doi.org/10.1186/s12911-020-01200-3

Kearns Á, Kelly H, Pitt I. Rating experience of ICT-delivered aphasia rehabilitation: co-design of a feedback questionnaire. Aphasiology. 2020. https://doi.org/10.1080/02687038.2019.1649913

Kelemen M, Surman E, Dikomitis L. Cultural animation in health research: an innovative methodology for patient and public involvement and engagement. Health Expect. 2018. https://doi.org/10.1111/hex.12677

Keogh F, Carney P, O’Shea E. Innovative methods for involving people with dementia and carers in the policymaking process. Health Expect. 2021. https://doi.org/10.1111/hex.13213

Micsinszki SK, Buettgen A, Mulvale G, Moll S, Wyndham-West M, Bruce E, Rogerson K, Murray-Leung L, Fleisig R, Park S, Phoenix M. Creative processes in co-designing a co-design hub: towards system change in health and social services in collaboration with structurally vulnerable populations. Evid Policy. 2022. https://doi.org/10.1332/174426421X16366319768599

Valaitis R, Longaphy J, Ploeg J, Agarwal G, Oliver D, Nair K, Kastner M, Avilla E, Dolovich L. Health TAPESTRY: co-designing interprofessional primary care programs for older adults using the persona-scenario method. BMC Fam Pract. 2019. https://doi.org/10.1186/s12875-019-1013-9

Webber R, Partridge R, Grindell C. The creative co-design of low back pain education resources. Evid Policy. 2022. https://doi.org/10.1332/174426421X16437342906266

National Institute for Health and Care Research. A Researcher’s Guide to Patient and Public Involvement. https://oxfordbrc.nihr.ac.uk/wp-content/uploads/2017/03/A-Researchers-Guide-to-PPI.pdf Accessed 01 Nov 2023.

Selman L, Clement C, Douglas M, Douglas K, Taylor J, Metcalfe C, Lane J, Horwood J. Patient and public involvement in randomised clinical trials: a mixed-methods study of a clinical trials unit to identify good practice, barriers and facilitators. Trials. 2021 https://doi.org/10.1186/s13063-021-05701-y

Coulman K, Nicholson A, Shaw A, Daykin A, Selman L, Macefield R, Shorter G, Cramer H, Sydes M, Gamble C, Pick M, Taylor G, Lane J. Understanding and optimising patient and public involvement in trial oversight: an ethnographic study of eight clinical trials. Trials. 2020. https://doi.org/10.1186/s13063-020-04495-9

Ocloo J, Garfield S, Franklin B, Dawson S. Exploring the theory, barriers and enablers for patient and public involvement across health, social care and patient safety: a systematic review of reviews. Health Res Policy Sys. 2021. https://doi.org/10.1186/s12961-020-00644-3

National Institute for Health and Care Research. Briefing notes for researchers - public involvement in NHS, health and social care research. https://www.nihr.ac.uk/documents/briefing-notes-for-researchers-public-involvement-in-nhs-health-and-social-care-research/27371 Accessed 01 Nov 2023.

Acknowledgements

With thanks to the PHIRST-LIGHT public advisory group and consortium for their thoughts and contributions to the design of this work.

The research team is supported by a National Institute for Health and Care Research grant (PHIRST-LIGHT Reference NIHR 135190).

Author information

Olivia R. Phillips and Cerian Harries share joint first authorship.

Authors and Affiliations

Nottingham Centre for Public Health and Epidemiology, Lifespan and Population Health, School of Medicine, University of Nottingham, Clinical Sciences Building, City Hospital Campus, Hucknall Road, Nottingham, NG5 1PB, UK

Olivia R. Phillips, Jo Leonardi-Bee, Holly Knight & Joanne R. Morling

National Institute for Health and Care Research (NIHR) PHIRST-LIGHT, Nottingham, UK

Olivia R. Phillips, Cerian Harries, Jo Leonardi-Bee, Holly Knight, Lauren B. Sherar, Veronica Varela-Mato & Joanne R. Morling

School of Sport, Exercise and Health Sciences, Loughborough University, Epinal Way, Loughborough, Leicestershire, LE11 3TU, UK

Cerian Harries, Lauren B. Sherar & Veronica Varela-Mato

Nottingham Centre for Evidence Based Healthcare, School of Medicine, University of Nottingham, Nottingham, UK

Jo Leonardi-Bee

NIHR Nottingham Biomedical Research Centre (BRC), Nottingham University Hospitals NHS Trust, University of Nottingham, Nottingham, NG7 2UH, UK

Joanne R. Morling

Contributions

Author contributions: study design: ORP, CH, JRM, JLB, HK, LBS, VVM; literature searching and screening: ORP, CH, JRM; data curation: ORP, CH; analysis: ORP, CH, JRM; manuscript draft: ORP, CH, JRM; Plain English Summary: ORP; manuscript critical review and editing: ORP, CH, JRM, JLB, HK, LBS, VVM.

Corresponding author

Correspondence to Olivia R. Phillips .

Ethics declarations

Ethics approval and consent to participate.

The Ethics Committee of the Faculty of Medicine and Health Sciences, University of Nottingham advised that approval from the ethics committee and consent to participate was not required for systematic review studies.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Additional file 1: Search strings: Description of data: the search strings and filters used in each of the 5 databases in this review

Additional file 2: Quality appraisal questions: Description of data: CASP quality appraisal questions

Additional file 3: Table 1: Description of data: elements of the data extraction table that are not in the main manuscript

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Phillips, O.R., Harries, C., Leonardi-Bee, J. et al. What are the strengths and limitations to utilising creative methods in public and patient involvement in health and social care research? A qualitative systematic review. Res Involv Engagem 10, 48 (2024). https://doi.org/10.1186/s40900-024-00580-4

Received : 28 November 2023

Accepted : 25 April 2024

Published : 13 May 2024

DOI : https://doi.org/10.1186/s40900-024-00580-4


  • Public and patient involvement
  • Creative PPI
  • Qualitative systematic review

Research Involvement and Engagement

ISSN: 2056-7529

Cultural Relativity and Acceptance of Embryonic Stem Cell Research

There is a debate about the ethical implications of using human embryos in stem cell research, which can be influenced by cultural, moral, and social values. This paper argues for an adaptable framework to accommodate diverse cultural and religious perspectives. By using an adaptive ethics model, research protections can reflect various populations and foster growth in stem cell research possibilities.

INTRODUCTION

Stem cell research combines biology, medicine, and technology, promising to alter health care and the understanding of human development. Yet, ethical contention exists because of individuals’ perceptions of using human embryos based on their various cultural, moral, and social values. While these disagreements concerning policy, use, and general acceptance have prompted the development of an international ethics policy, such a uniform approach can overlook the nuanced ethical landscapes between cultures. With diverse viewpoints in public health, a single global policy, especially one reflecting Western ethics or the ethics prevalent in high-income countries, is impractical. This paper argues for a culturally sensitive, adaptable framework for the use of embryonic stem cells. Stem cell policy should accommodate varying ethical viewpoints and promote an effective global dialogue. With an extension of an ethics model that can adapt to various cultures, we recommend localized guidelines that reflect the moral views of the people those guidelines serve.

Stem cells, characterized by their unique ability to differentiate into various cell types, enable the repair or replacement of damaged tissues. Two primary types of stem cells are somatic stem cells (adult stem cells) and embryonic stem cells. Adult stem cells exist in developed tissues and maintain the body’s repair processes. [1] Embryonic stem cells (ESC) are remarkably pluripotent or versatile, making them valuable in research. [2] However, the use of ESCs has sparked ethics debates. Considering the potential of embryonic stem cells, research guidelines are essential. The International Society for Stem Cell Research (ISSCR) provides international stem cell research guidelines. They call for “public conversations touching on the scientific significance as well as the societal and ethical issues raised by ESC research.” [3] The ISSCR also publishes updates about culturing human embryos 14 days post fertilization, suggesting local policies and regulations should continue to evolve as ESC research develops. [4]  Like the ISSCR, which calls for local law and policy to adapt to developing stem cell research given cultural acceptance, this paper highlights the importance of local social factors such as religion and culture.

I.     Global Cultural Perspective of Embryonic Stem Cells

Views on ESCs vary throughout the world. Some countries readily embrace stem cell research and therapies, while others have stricter regulations due to ethical concerns surrounding embryonic stem cells and when an embryo becomes entitled to moral consideration. The philosophical issue of when the “someone” begins to be a human after fertilization, in the morally relevant sense, [5] impacts when an embryo becomes not just worthy of protection but morally entitled to it. The process of creating embryonic stem cell lines involves the destruction of the embryos for research. [6] Consequently, global engagement in ESC research depends on social-cultural acceptability.

a.     US and Rights-Based Cultures

In the United States, attitudes toward stem cell therapies are diverse. The ethics and social approaches, which value individualism, [7] trigger debates regarding the destruction of human embryos, creating a complex regulatory environment. For example, the 1996 Dickey-Wicker Amendment prohibited federal funding for the creation of embryos for research and the destruction of embryos for “more than allowed for research on fetuses in utero.” [8] Following suit, in 2001, the Bush Administration heavily restricted stem cell lines for research. However, the Stem Cell Research Enhancement Act of 2005 was proposed to help develop ESC research but was ultimately vetoed. [9] Under the Obama administration, in 2009, an executive order lifted restrictions allowing for more development in this field. [10] The flux of research capacity and funding parallels the different cultural perceptions of human dignity of the embryo and how it is socially presented within the country’s research culture. [11]

b.     Ubuntu and Collective Cultures

African bioethics differs from Western individualism because of the different traditions and values. African traditions, as described by individuals from South Africa and supported by some studies in other African countries, including Ghana and Kenya, follow the African moral philosophies of Ubuntu or Botho and Ukama, which “advocates for a form of wholeness that comes through one’s relationship and connectedness with other people in the society,” [12] making autonomy a socially collective concept. In this context, for the community to act autonomously, individuals would come together to decide what is best for the collective. Thus, stem cell research would require examining the value of the research to society as a whole and the use of the embryos as a collective societal resource. If society views the source as part of the collective whole, and opposes using stem cells, compromising the cultural values to pursue research may cause social detachment and stunt research growth. [13] Based on local culture and moral philosophy, the permissibility of stem cell research depends on how embryo, stem cell, and cell line therapies relate to the community as a whole. Ubuntu is the expression of humanness, with the person’s identity drawn from the “I am because we are” value. [14] The decision in a collectivistic culture becomes one born of cultural context, and individual decisions give deference to others in the society.

Consent differs in cultures where thought and moral philosophy are based on a collective paradigm. So, applying Western bioethical concepts is unrealistic. For one, Africa is a diverse continent with many countries with different belief systems, access to health care, and reliance on traditional or Western medicines. Where traditional medicine is the primary treatment, the “restrictive focus on biomedically-related bioethics” [is] “problematic in African contexts because it neglects bioethical issues raised by traditional systems.” [15] No single approach applies in all areas or contexts. Rather than evaluating the permissibility of ESC research according to Western concepts such as the four principles approach, different ethics approaches should prevail.

Another consideration is the socio-economic standing of countries. In parts of South Africa, researchers have not focused heavily on contributing to the stem cell discourse, either because it is not considered a health care or health science priority or because resources are unavailable. [16] Each country’s priorities differ given different social, political, and economic factors. In South Africa, for instance, areas such as maternal mortality, non-communicable diseases, telemedicine, and the strength of health systems need improvement and require more focus. [17] Stem cell research could benefit the population, but it also could divert resources from basic medical care. Researchers in South Africa adhere to the country’s National Health Act and Medicines Control Act as well as international guidelines; however, the Act is not strictly enforced, and there is no clear legislation for research conduct or ethical guidelines. [18]

Some parts of Africa condemn stem cell research. For example, 98.2 percent of the Tunisian population is Muslim. [19] Tunisia does not permit stem cell research because of moral conflict with a Fatwa. Religion heavily saturates the regulation and direction of research. [20] Stem cell use became permissible for reproductive purposes only recently, with tight restrictions preventing cells from being used in any research other than procedures concerning ART/IVF. Their use is conditioned on consent and is available only to married couples. [21] The community’s receptiveness to stem cell research depends on including communitarian African ethics.

c.     Asia

Some Asian countries also have a collective model of ethics and decision making. [22] In China, the ethics model promotes a sincere respect for life or human dignity, [23] based on protective medicine. This model, influenced by Traditional Chinese Medicine (TCM), [24] recognizes Qi as the vital energy delivered via the meridians of the body; it connects illness to body systems, the body’s entire constitution, and the universe for a holistic bond of nature, health, and quality of life. [25] Following a protective ethics model and traditional customs of wholeness, investment in stem cell research is heavily desired for its applications in regenerative therapies, disease modeling, and protective medicines. In a survey of medical students and healthcare practitioners, 30.8 percent considered stem cell research morally unacceptable, while 63.5 percent accepted medical research using human embryonic stem cells; of these individuals, 89.9 percent supported increased funding for stem cell research. [26] The scientific community might not reflect the overall population. From 1997 to 2019, China spent a total of $576 million (USD) on stem cell research at 8,050 stem cell programs, increased its published presence from 0.6 percent to 14.01 percent of total global stem cell publications as of 2014, and made significant strides in cell-based therapies for various medical conditions. [27] However, while China has made substantial investments in stem cell research and achieved notable progress in clinical applications, concerns linger regarding ethical oversight and transparency. [28] For example, the China Biosecurity Law, promoted by the National Health Commission and China Hospital Association, attempted to mitigate risks by introducing institutional review boards (IRBs) into the regulatory bodies; 5,800 IRBs have registered with the Chinese Clinical Trial Registry since 2021. [29] However, issues still need to be addressed in implementing effective IRB review and approval procedures.

The substantial government funding and focus on scientific advancement have sometimes overshadowed considerations of regional cultures, ethnic minorities, and individual perspectives, particularly evident during the one-child policy era. As government policy adapts to promote public stability, such as the change from the one-child to the two-child policy, [30] research ethics should also adapt to ensure respect for the values of its represented peoples.

Japan is also relatively supportive of stem cell research and therapies. Japan has a more transparent regulatory framework, allowing for faster approval of regenerative medicine products, which has led to several advanced clinical trials and therapies. [31] South Korea is also actively engaged in stem cell research and has a history of breakthroughs in cloning and embryonic stem cells. [32] However, the field is controversial, and there are issues of scientific integrity. For example, the Korean FDA fast-tracked products for approval, [33] and in another instance, the oocyte source was unclear and possibly violated ethical standards. [34] Trust is important in research, as it builds collaborative foundations between colleagues, trial participant comfort, open-mindedness for complicated and sensitive discussions, and supports regulatory procedures for stakeholders. There is a need to respect the culture’s interest, engagement, and for research and clinical trials to be transparent and have ethical oversight to promote global research discourse and trust.

d.     Middle East

Countries in the Middle East have varying degrees of acceptance of, or restrictions on, policies related to using embryonic stem cells due to cultural and religious influences. Saudi Arabia has made significant contributions to stem cell research and conducts research based on international guidelines for ethical conduct, under strict adherence to Islamic principles. Specifically, the Saudi government and people require ESC research to adhere to Sharia law. In addition to umbilical and placental stem cells, [35] Saudi Arabia permits the use of embryonic stem cells as long as they come from miscarriages, therapeutic abortions permissible by Sharia law, or are left over from in vitro fertilization and donated to research. [36] Laws and ethical guidelines for stem cell research have allowed the development of research institutions such as the King Abdullah International Medical Research Center, which has a cord blood bank and a stem cell registry with nearly 10,000 donors. [37] Such volume and acceptance are due to the ethical ‘permissibility’ of the donor sources, which do not conflict with religious pillars. However, some researchers err on the side of caution, choosing not to use embryos or fetal tissue as they feel it is unethical to do so. [38]

Jordan has a positive research ethics culture. [39] However, there is a significant lack of trust in researchers: 45.23 percent of Jordanians (38.66 percent agreeing and 6.57 percent strongly agreeing) report a low level of trust in researchers, even though 81.34 percent agree that they feel safe participating in a research trial. [40] The safety figure reflects confidence that adequate measures are in place to protect participants from harm, whereas trust in researchers represents confidence that researchers will act in participants’ best interests, adhere to ethical guidelines, provide accurate information, and respect participants’ rights and dignity. One way to improve trust would be to address communication issues relevant to ESC. Legislation surrounding stem cell research has adopted specific language, especially concerning clarification “between ‘stem cells’ and ‘embryonic stem cells’” in translation. [41] Furthermore, legislation “mandates the creation of a national committee… laying out specific regulations for stem-cell banking in accordance with international standards.” [42] This broad regulation opens the door for future global engagement and maintains transparency. However, these regulations may also constrain research direction and pace and limit the accessibility of research outcomes.

e.     Europe

In the European Union (EU), ethics is also principle-based, but the principles of autonomy, dignity, integrity, and vulnerability are interconnected. [43] This interconnection allows cohesion and concession between individuals’ thoughts and ideals, yielding a more adaptable ethics model grounded in flexible principles that relate to the human experience. The EU has put forth a framework in its Convention for the Protection of Human Rights and Dignity of the Human Being that allows member states to take different approaches. Each European state applies these principles to its specific conventions, leading to or reflecting different acceptance levels of stem cell research. [44]

For example, in Germany, Lebenszusammenhang, or the coherence of life, references integrity in the unity of human culture. Namely, the personal sphere “should not be subject to external intervention.” [45] Stem cell interventions could affect this concept of bodily completeness, leading to heavy restrictions. Under the Grundgesetz, human dignity and the right to life with physical integrity are paramount. [46] The Embryo Protection Act of 1991 made producing cell lines illegal. Cell lines may be imported, with approval from the Central Ethics Commission for Stem Cell Research, only if they were derived before May 2007. [47] Stem cell research thus respects the integrity of life for the embryo through heavy specifications and intense oversight. The situation is vastly different in Finland, where regulatory bodies find research on surplus IVF embryos more permissible, but only up to 14 days after fertilization. [48] Spain’s approach differs still, with a comprehensive regulatory framework. [49] Thus, research regulation can be culture-specific due to variations in applied principles. Diverse cultures call for various approaches to ethical permissibility. [50] Only an adaptive-deliberative model can address the cultural constructions of self and achieve positive, culturally sensitive stem cell research practices. [51]

II.     Religious Perspectives on ESC

Embryonic stem cell sources are the main consideration within religious contexts. While individuals may not regard their own religious texts as authoritative or factual, religion can shape their foundations or perspectives.

The Qur'an states:

“And indeed We created man from a quintessence of clay. Then We placed within him a small quantity of nutfa (sperm to fertilize) in a safe place. Then We have fashioned the nutfa into an ‘alaqa (clinging clot or cell cluster), then We developed the ‘alaqa into mudgha (a lump of flesh), and We made mudgha into bones, and clothed the bones with flesh, then We brought it into being as a new creation. So Blessed is Allah, the Best of Creators.” [52]

Many scholars of Islam estimate the time of ensoulment, marked by the angel breathing the soul into the individual to bring them into creation, as 120 days from conception. [53] Personhood begins at this point, and the value of life would prohibit research or experimentation that could harm the individual. Once the fetus is more than 120 days old, the point at which ensoulment is interpreted to occur under Islamic law, abortion is no longer permissible. [54] There are a few opposing opinions about early embryos in Islamic traditions. According to some Islamic theologians, there is no ensoulment of the early embryo, which is the source of stem cells for ESC research. [55]

In Buddhism, the stance on stem cell research is not settled. The main tenets, the prohibition against harming or destroying others (ahimsa) and the pursuit of knowledge (prajña) and compassion (karuna), leave Buddhist scholars and communities divided. [56] Some scholars argue stem cell research is in accordance with the Buddhist tenet of seeking knowledge and ending human suffering. Others feel it violates the principle of not harming others. Finding the balance between these two points hinges on interpretations of karmic burden within Buddhist morality. To avoid violating ahimsa with respect to the embryo, some Buddhist scholars hold that research cannot be done, as the embryo has personhood at the moment of conception and would reincarnate immediately, harming the individual’s ability to build their karmic burden. [57] On the other hand, the Bodhisattvas, those considered to be on the path to enlightenment or Nirvana, have given organs and flesh to others to help alleviate grieving and to benefit all. [58] Acceptance varies with applied beliefs and interpretations.

Catholicism does not support embryonic stem cell research, as it entails the creation or destruction of human embryos. This destruction conflicts with the belief in the sanctity of life. For example, in the Old Testament, Genesis describes humanity as being created in God’s image and multiplying on the Earth, referencing the sacred rights to human conception and the purpose of development and life. In the Ten Commandments, the tenet that one should not kill has numerous interpretations, under which killing could mean murder or any violation of the sanctity of life, demonstrating the high value of human personhood. Other books interpret the theological beginning of life as in utero, [59] highlighting the inviolability of life and its formation in vivo; on this basis, such research is accepted in only very limited forms, if at all. [60] The Vatican has released ethical directives to help apply a theological basis to modern-day conflicts. The Magisterium of the Church states that “unless there is a moral certainty of not causing harm,” experimentation on fetuses, fertilized cells, stem cells, or embryos constitutes a crime. [61] Such procedures would not respect the human person who exists at these stages, according to Catholicism. Damages to the embryo are considered gravely immoral and illicit. [62] Although the Catholic Church officially opposes abortion, surveys demonstrate that many Catholic people hold pro-choice views, whether due to the context of conception, stage of pregnancy, threat to the mother’s life, or other reasons, demonstrating that practicing members can accept some but not all tenets. [63]

Some major Jewish denominations, such as the Reform, Conservative, and Reconstructionist movements, are open to supporting ESC use or research as long as it is for saving a life. [64] Within Judaism, the Talmud, or study, gives personhood to the child at birth and emphasizes that life does not begin at conception: [65]

“If she is found pregnant, until the fortieth day it is mere fluid,” [66]

Whereas most religions focus on the status of human embryos, Halakha (Jewish religious law) holds that to save one life, most other religious laws may be set aside, because the preservation of life takes precedence. [67] Stem cell research is accepted through the application of these religious laws.

We recognize that all religions contain subsets and sects. The variety of environmental and cultural differences within religious groups requires further analysis to respect the flexibility of religious thoughts and practices. We make no presumption that all cultures ground notions of autonomy or morality in common morality theory, which asserts that a set of universal moral norms shared by all individuals provides moral reasoning and guides ethical decisions. [68] We only wish to show that the interaction with morality varies between cultures and countries.

III.     A Flexible Ethical Approach

The plurality of moral approaches described above demonstrates that there can be no universally acceptable uniform law for ESC on a global scale. Instead of developing one standard, flexible ethical applications should continue. We recommend local guidelines that incorporate important cultural and ethical priorities.

While the Declaration of Helsinki is more relevant to people in clinical trials receiving ESC products, in keeping with the tradition of protections for research subjects, consent of the donor is an ethical requirement for ESC donation in many jurisdictions including the US, Canada, and Europe. [69] The Declaration of Helsinki provides a reference point for regulatory standards and could potentially be used as a universal baseline for obtaining consent prior to gamete or embryo donation.

For instance, in Columbia University’s egg donor program for stem cell research, donors followed standard screening protocols and “underwent counseling sessions that included information as to the purpose of oocyte donation for research, what the oocytes would be used for, the risks and benefits of donation, and process of oocyte stimulation” to ensure transparency for consent. [70] The program helped advance stem cell research and provided clear and safe research methods with paid participants. Though paid participation or covering the costs of incidental expenses may not be socially acceptable in every culture or context, [71] and creating embryos for ESC research is illegal in many jurisdictions, Columbia’s program was effective because of its clear and honest communication with donors, IRBs, and related stakeholders. This example demonstrates that cultural acceptance of scientific research, and of the idea that an egg or embryo does not have personhood, likely underlies societal acceptance of donating eggs for ESC research. As noted, many countries do not permit the creation of embryos for research.

Proper communication and education regarding the process and purpose of stem cell research may bolster comprehension and garner more acceptance. Given the sensitive subject material, a complete consent process can support voluntary participation through trust, understanding, and ethical norms drawn from the cultures and morals participants value. This can be difficult for researchers entering countries of different socioeconomic stability, with different languages and different societal values. [72]

An adequate moral foundation in medical ethics is derived from the cultural and religious basis that informs knowledge and actions. [73] Understanding local cultural and religious values and their impact on research could help researchers develop humility and promote inclusion.

IV.     Concerns

Some may argue that if researchers all adhere to one ethics standard, protection will be satisfied across all borders, and the global public will trust researchers. However, defining what needs to be protected, and how, is very specific to the people to whom the standards are applied. We suggest that applying one uniform guide cannot adequately protect each individual, because we all possess our own perceptions and interpretations of social values. [74] Rather than applying one standard of ethics that fails to adjust to the moral pluralism between peoples, the issue is better resolved by building ethics models that can be adapted to different cultures and religions.

Other concerns include medical tourism, which may promote health inequities. [75] Some countries may develop and approve products derived from ESC research before others, compromising research ethics or drug approval processes. There are also concerns about the sale of unauthorized stem cell treatments, for example, those without FDA approval in the United States. Countries with robust research infrastructures may be tempted to attract medical tourists, and some customers will have false hopes based on aggressive publicity of unproven treatments. [76]

For example, in China, stem cell clinics can market to foreign clients who are not protected under the regulatory regimes. Companies employ a marketing strategy of “ethically friendly” therapies. Specifically, in the case of Beike, China’s leading stem cell tourism company with a sprawling network, ethical oversight by administrators or health bureaus at one site has “the unintended consequence of shifting questionable activities to another node in Beike's diffuse network.” [77] In contrast, Jordan is aware of stem cell research’s potential abuse and of its own status as a “health-care hub.” Jordan’s expanded regulations include preserving the interests of individuals in clinical trials and banning private companies from ESC research to preserve transparency and the integrity of research practices. [78]

The social priorities of the community are also a concern. The ISSCR explicitly states that guidelines “should be periodically revised to accommodate scientific advances, new challenges, and evolving social priorities.” [79] The adaptable ethics model extends this consideration further by addressing whether research is warranted given varying degrees of socioeconomic conditions, political stability, and healthcare accessibility and limitations. An ethical approach would require discussion about resource allocation and the appropriate distribution of funds. [80]

While some religions emphasize the sanctity of life from conception, which may lead to public opposition to ESC research, others encourage ESC research due to its potential for healing and alleviating human pain. Many countries have special regulations that balance local views on embryonic personhood, the benefits of research as individual or societal goods, and the protection of human research subjects. To foster understanding and constructive dialogue, global policy frameworks should prioritize the protection of universal human rights, transparency, and informed consent. In addition to these foundational global policies, we recommend tailoring local guidelines to reflect the diverse cultural and religious perspectives of the populations they govern. Ethics models should be adapted to local populations to effectively establish research protections, growth, and possibilities of stem cell research.

For example, in countries with strong beliefs in the moral sanctity of embryos or heavy religious restrictions, an adaptive model can allow for discussion instead of immediate rejection. In countries with limited individual rights and voice in science policy, an adaptive model ensures cultural, moral, and religious views are taken into consideration, thereby building social inclusion. While this ethical consideration by the government may not give a complete voice to every individual, it will help balance policies and maintain the diverse perspectives of those it affects. Embracing an adaptive ethics model of ESC research promotes open-minded dialogue and respect for the importance of human belief and tradition. By actively engaging with cultural and religious values, researchers can better handle disagreements and promote ethical research practices that benefit each society.

This brief exploration of the religious and cultural differences that impact ESC research reveals the nuances of relative ethics and highlights a need for local policymakers to apply a more intense adaptive model.

[1] Poliwoda, S., Noor, N., Downs, E., Schaaf, A., Cantwell, A., Ganti, L., Kaye, A. D., Mosel, L. I., Carroll, C. B., Viswanath, O., & Urits, I. (2022). Stem cells: a comprehensive review of origins and emerging clinical roles in medical practice.  Orthopedic reviews ,  14 (3), 37498. https://doi.org/10.52965/001c.37498

[2] Poliwoda, S., Noor, N., Downs, E., Schaaf, A., Cantwell, A., Ganti, L., Kaye, A. D., Mosel, L. I., Carroll, C. B., Viswanath, O., & Urits, I. (2022). Stem cells: a comprehensive review of origins and emerging clinical roles in medical practice.  Orthopedic reviews ,  14 (3), 37498. https://doi.org/10.52965/001c.37498

[3] International Society for Stem Cell Research. (2023). Laboratory-based human embryonic stem cell research, embryo research, and related research activities . International Society for Stem Cell Research. https://www.isscr.org/guidelines/blog-post-title-one-ed2td-6fcdk ; Kimmelman, J., Hyun, I., Benvenisty, N.  et al.  Policy: Global standards for stem-cell research.  Nature   533 , 311–313 (2016). https://doi.org/10.1038/533311a

[4] International Society for Stem Cell Research. (2023). Laboratory-based human embryonic stem cell research, embryo research, and related research activities . International Society for Stem Cell Research. https://www.isscr.org/guidelines/blog-post-title-one-ed2td-6fcdk

[5] Concerning the moral philosophies of stem cell research, our paper does not posit a personal moral stance nor delve into the “when” of human life begins. To read further about the philosophical debate, consider the following sources:

Sandel M. J. (2004). Embryo ethics--the moral logic of stem-cell research.  The New England journal of medicine ,  351 (3), 207–209. https://doi.org/10.1056/NEJMp048145 ; George, R. P., & Lee, P. (2020, September 26). Acorns and Embryos . The New Atlantis. https://www.thenewatlantis.com/publications/acorns-and-embryos ; Sagan, A., & Singer, P. (2007). The moral status of stem cells. Metaphilosophy , 38 (2/3), 264–284. http://www.jstor.org/stable/24439776 ; McHugh P. R. (2004). Zygote and "clonote"--the ethical use of embryonic stem cells.  The New England journal of medicine ,  351 (3), 209–211. https://doi.org/10.1056/NEJMp048147 ; Kurjak, A., & Tripalo, A. (2004). The facts and doubts about beginning of the human life and personality.  Bosnian journal of basic medical sciences ,  4 (1), 5–14. https://doi.org/10.17305/bjbms.2004.3453

[6] Vazin, T., & Freed, W. J. (2010). Human embryonic stem cells: derivation, culture, and differentiation: a review.  Restorative neurology and neuroscience ,  28 (4), 589–603. https://doi.org/10.3233/RNN-2010-0543

[7] Socially, at its core, the Western approach to ethics is widely principle-based, autonomy being one of the key factors to ensure a fundamental respect for persons within research. For information regarding autonomy in research, see: Department of Health, Education, and Welfare, & National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1978). The Belmont Report. Ethical principles and guidelines for the protection of human subjects of research.; For a more in-depth review of autonomy within the US, see: Beauchamp, T. L., & Childress, J. F. (1994). Principles of Biomedical Ethics . Oxford University Press.

[8] Sherley v. Sebelius , 644 F.3d 388 (D.C. Cir. 2011), citing 45 C.F.R. 46.204(b) and [42 U.S.C. § 289g(b)]. https://www.cadc.uscourts.gov/internet/opinions.nsf/6c690438a9b43dd685257a64004ebf99/$file/11-5241-1391178.pdf

[9] Stem Cell Research Enhancement Act of 2005, H. R. 810, 109 th Cong. (2001). https://www.govtrack.us/congress/bills/109/hr810/text ; Bush, G. W. (2006, July 19). Message to the House of Representatives . National Archives and Records Administration. https://georgewbush-whitehouse.archives.gov/news/releases/2006/07/20060719-5.html

[10] National Archives and Records Administration. (2009, March 9). Executive order 13505 -- removing barriers to responsible scientific research involving human stem cells . National Archives and Records Administration. https://obamawhitehouse.archives.gov/the-press-office/removing-barriers-responsible-scientific-research-involving-human-stem-cells

[11] Hurlbut, W. B. (2006). Science, Religion, and the Politics of Stem Cells.  Social Research ,  73 (3), 819–834. http://www.jstor.org/stable/40971854

[12] Akpa-Inyang, Francis & Chima, Sylvester. (2021). South African traditional values and beliefs regarding informed consent and limitations of the principle of respect for autonomy in African communities: a cross-cultural qualitative study. BMC Medical Ethics . 22. 10.1186/s12910-021-00678-4.

[13] Source for further reading: Tangwa G. B. (2007). Moral status of embryonic stem cells: perspective of an African villager. Bioethics , 21(8), 449–457. https://doi.org/10.1111/j.1467-8519.2007.00582.x , see also Mnisi, F. M. (2020). An African analysis based on ethics of Ubuntu - are human embryonic stem cell patents morally justifiable? African Insight , 49 (4).

[14] Jecker, N. S., & Atuire, C. (2021). Bioethics in Africa: A contextually enlightened analysis of three cases. Developing World Bioethics , 22 (2), 112–122. https://doi.org/10.1111/dewb.12324

[15] Jecker, N. S., & Atuire, C. (2021). Bioethics in Africa: A contextually enlightened analysis of three cases. Developing World Bioethics, 22(2), 112–122. https://doi.org/10.1111/dewb.12324

[16] Jackson, C.S., Pepper, M.S. Opportunities and barriers to establishing a cell therapy programme in South Africa.  Stem Cell Res Ther   4 , 54 (2013). https://doi.org/10.1186/scrt204 ; Pew Research Center. (2014, May 1). Public health a major priority in African nations . Pew Research Center’s Global Attitudes Project. https://www.pewresearch.org/global/2014/05/01/public-health-a-major-priority-in-african-nations/

[17] Department of Health Republic of South Africa. (2021). Health Research Priorities (revised) for South Africa 2021-2024 . National Health Research Strategy. https://www.health.gov.za/wp-content/uploads/2022/05/National-Health-Research-Priorities-2021-2024.pdf

[18] Oosthuizen, H. (2013). Legal and Ethical Issues in Stem Cell Research in South Africa. In: Beran, R. (eds) Legal and Forensic Medicine. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32338-6_80 , see also: Gaobotse G (2018) Stem Cell Research in Africa: Legislation and Challenges. J Regen Med 7:1. doi: 10.4172/2325-9620.1000142

[19] United States Bureau of Citizenship and Immigration Services. (1998). Tunisia: Information on the status of Christian conversions in Tunisia . UNHCR Web Archive. https://webarchive.archive.unhcr.org/20230522142618/https://www.refworld.org/docid/3df0be9a2.html

[20] Gaobotse, G. (2018) Stem Cell Research in Africa: Legislation and Challenges. J Regen Med 7:1. doi: 10.4172/2325-9620.1000142

[21] Kooli, C. Review of assisted reproduction techniques, laws, and regulations in Muslim countries.  Middle East Fertil Soc J   24 , 8 (2020). https://doi.org/10.1186/s43043-019-0011-0 ; Gaobotse, G. (2018) Stem Cell Research in Africa: Legislation and Challenges. J Regen Med 7:1. doi: 10.4172/2325-9620.1000142

[22] Pang M. C. (1999). Protective truthfulness: the Chinese way of safeguarding patients in informed treatment decisions. Journal of medical ethics , 25(3), 247–253. https://doi.org/10.1136/jme.25.3.247

[23] Wang, L., Wang, F., & Zhang, W. (2021). Bioethics in China’s biosecurity law: Forms, effects, and unsettled issues. Journal of law and the biosciences , 8(1).  https://doi.org/10.1093/jlb/lsab019 https://academic.oup.com/jlb/article/8/1/lsab019/6299199

[24] Wang, Y., Xue, Y., & Guo, H. D. (2022). Intervention effects of traditional Chinese medicine on stem cell therapy of myocardial infarction.  Frontiers in pharmacology ,  13 , 1013740. https://doi.org/10.3389/fphar.2022.1013740

[25] Li, X.-T., & Zhao, J. (2012). Chapter 4: An Approach to the Nature of Qi in TCM- Qi and Bioenergy. In Recent Advances in Theories and Practice of Chinese Medicine (p. 79). InTech.

[26] Luo, D., Xu, Z., Wang, Z., & Ran, W. (2021). China's Stem Cell Research and Knowledge Levels of Medical Practitioners and Students.  Stem cells international ,  2021 , 6667743. https://doi.org/10.1155/2021/6667743

[27] Luo, D., Xu, Z., Wang, Z., & Ran, W. (2021). China's Stem Cell Research and Knowledge Levels of Medical Practitioners and Students.  Stem cells international ,  2021 , 6667743. https://doi.org/10.1155/2021/6667743

[28] Zhang, J. Y. (2017). Lost in translation? accountability and governance of Clinical Stem Cell Research in China. Regenerative Medicine , 12 (6), 647–656. https://doi.org/10.2217/rme-2017-0035

[29] Wang, L., Wang, F., & Zhang, W. (2021). Bioethics in China’s biosecurity law: Forms, effects, and unsettled issues. Journal of law and the biosciences , 8(1).  https://doi.org/10.1093/jlb/lsab019 https://academic.oup.com/jlb/article/8/1/lsab019/6299199

[30] Chen, H., Wei, T., Wang, H.  et al.  Association of China’s two-child policy with changes in number of births and birth defects rate, 2008–2017.  BMC Public Health   22 , 434 (2022). https://doi.org/10.1186/s12889-022-12839-0

[31] Azuma, K. Regulatory Landscape of Regenerative Medicine in Japan.  Curr Stem Cell Rep   1 , 118–128 (2015). https://doi.org/10.1007/s40778-015-0012-6

[32] Harris, R. (2005, May 19). Researchers Report Advance in Stem Cell Production . NPR. https://www.npr.org/2005/05/19/4658967/researchers-report-advance-in-stem-cell-production

[33] Park, S. (2012). South Korea steps up stem-cell work.  Nature . https://doi.org/10.1038/nature.2012.10565

[34] Resnik, D. B., Shamoo, A. E., & Krimsky, S. (2006). Fraudulent human embryonic stem cell research in South Korea: lessons learned.  Accountability in research ,  13 (1), 101–109. https://doi.org/10.1080/08989620600634193 .

[35] Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: interviews with researchers from Saudi Arabia. BMC medical ethics, 21(1), 35. https://doi.org/10.1186/s12910-020-00482-6

[36] Association for the Advancement of Blood and Biotherapies.  https://www.aabb.org/regulatory-and-advocacy/regulatory-affairs/regulatory-for-cellular-therapies/international-competent-authorities/saudi-arabia

[37] Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: Interviews with researchers from Saudi Arabia.  BMC medical ethics ,  21 (1), 35. https://doi.org/10.1186/s12910-020-00482-6

[38] Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: Interviews with researchers from Saudi Arabia. BMC medical ethics , 21(1), 35. https://doi.org/10.1186/s12910-020-00482-6

Culturally, autonomy practices follow a relational autonomy approach based on a paternalistic deontological health care model. The adherence to strict international research policies and religious pillars within the regulatory environment is a strong foundation for research ethics. However, there is a need to develop locally targeted ethics approaches for research (as called for in Alahmad, G., Aljohani, S., & Najjar, M. F. (2020). Ethical challenges regarding the use of stem cells: interviews with researchers from Saudi Arabia. BMC medical ethics, 21(1), 35. https://doi.org/10.1186/s12910-020-00482-6); this decision-making approach may help inform a research decision model. For more on clinical cultural autonomy approaches, see: Alabdullah, Y. Y., Alzaid, E., Alsaad, S., Alamri, T., Alolayan, S. W., Bah, S., & Aljoudi, A. S. (2022). Autonomy and paternalism in shared decision-making in a Saudi Arabian tertiary hospital: A cross-sectional study. Developing World Bioethics, 23(3), 260–268. https://doi.org/10.1111/dewb.12355 ; Bukhari, A. A. (2017). Universal Principles of Bioethics and Patient Rights in Saudi Arabia (Doctoral dissertation, Duquesne University). https://dsc.duq.edu/etd/124; Ladha, S., Nakshawani, S. A., Alzaidy, A., & Tarab, B. (2023, October 26). Islam and Bioethics: What We All Need to Know. Columbia University School of Professional Studies. https://sps.columbia.edu/events/islam-and-bioethics-what-we-all-need-know

[39] Ababneh, M. A., Al-Azzam, S. I., Alzoubi, K., Rababa’h, A., & Al Demour, S. (2021). Understanding and attitudes of the Jordanian public about clinical research ethics.  Research Ethics ,  17 (2), 228-241.  https://doi.org/10.1177/1747016120966779

[40] Ababneh, M. A., Al-Azzam, S. I., Alzoubi, K., Rababa’h, A., & Al Demour, S. (2021). Understanding and attitudes of the Jordanian public about clinical research ethics.  Research Ethics ,  17 (2), 228-241.  https://doi.org/10.1177/1747016120966779

[41] Dajani, R. (2014). Jordan’s stem-cell law can guide the Middle East.  Nature  510, 189. https://doi.org/10.1038/510189a

[42] Dajani, R. (2014). Jordan’s stem-cell law can guide the Middle East.  Nature  510, 189. https://doi.org/10.1038/510189a

[43] The EU’s definition of autonomy relates to the capacity for creating ideas, moral insight, decisions, and actions without constraint, personal responsibility, and informed consent. However, the EU views autonomy as not completely able to protect individuals and depends on other principles, such as dignity, which “expresses the intrinsic worth and fundamental equality of all human beings.” Rendtorff, J.D., Kemp, P. (2019). Four Ethical Principles in European Bioethics and Biolaw: Autonomy, Dignity, Integrity and Vulnerability. In: Valdés, E., Lecaros, J. (eds) Biolaw and Policy in the Twenty-First Century. International Library of Ethics, Law, and the New Medicine, vol 78. Springer, Cham. https://doi.org/10.1007/978-3-030-05903-3_3

[44] Council of Europe. Convention for the protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (ETS No. 164) https://www.coe.int/en/web/conventions/full-list?module=treaty-detail&treatynum=164 (forbidding the creation of embryos for research purposes only, and suggests embryos in vitro have protections.); Also see Drabiak-Syed B. K. (2013). New President, New Human Embryonic Stem Cell Research Policy: Comparative International Perspectives and Embryonic Stem Cell Research Laws in France.  Biotechnology Law Report ,  32 (6), 349–356. https://doi.org/10.1089/blr.2013.9865

[45] Rendtorff, J.D., Kemp, P. (2019). Four Ethical Principles in European Bioethics and Biolaw: Autonomy, Dignity, Integrity and Vulnerability. In: Valdés, E., Lecaros, J. (eds) Biolaw and Policy in the Twenty-First Century. International Library of Ethics, Law, and the New Medicine, vol 78. Springer, Cham. https://doi.org/10.1007/978-3-030-05903-3_3

[46] Tomuschat, C., Currie, D. P., Kommers, D. P., & Kerr, R. (Trans.). (1949, May 23). Basic law for the Federal Republic of Germany. https://www.btg-bestellservice.de/pdf/80201000.pdf

[47] Regulation of Stem Cell Research in Germany . Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-germany

[48] Regulation of Stem Cell Research in Finland . Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-finland

[49] Regulation of Stem Cell Research in Spain . Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-spain

[50] Some sources to consider regarding ethics models or regulatory oversight of other cultures not covered here:

Kara, M. A. (2007). Applicability of the principle of respect for autonomy: the perspective of Turkey. Journal of Medical Ethics, 33(11), 627–630. https://doi.org/10.1136/jme.2006.017400

Ugarte, O. N., & Acioly, M. A. (2014). The principle of autonomy in Brazil: one needs to discuss it… Revista do Colegio Brasileiro de Cirurgioes, 41(5), 374–377. https://doi.org/10.1590/0100-69912014005013

Bharadwaj, A., & Glasner, P. E. (2012). Local cells, global science: The rise of embryonic stem cell research in India. Routledge.

For further research on the ethical and regulatory frameworks of specific European countries, we recommend this database: Regulation of Stem Cell Research in Europe. Eurostemcell. (2017, April 26). https://www.eurostemcell.org/regulation-stem-cell-research-europe

[51] Klitzman, R. (2006). Complications of culture in obtaining informed consent. The American Journal of Bioethics, 6(1), 20–21. https://doi.org/10.1080/15265160500394671; see also: Ekmekci, P. E., & Arda, B. (2017). Interculturalism and Informed Consent: Respecting Cultural Differences without Breaching Human Rights. Cultura (Iasi, Romania), 14(2), 159–172.; For why trust is important in research, see also: Gray, B., Hilder, J., Macdonald, L., Tester, R., Dowell, A., & Stubbe, M. (2017). Are research ethics guidelines culturally competent? Research Ethics, 13(1), 23–41. https://doi.org/10.1177/1747016116650235

[52] The Qur'an (M. Khattab, Trans.). (1965). Al-Mu’minun, 23:12-14. https://quran.com/23

[53] Lenfest, Y. (2017, December 8). Islam and the beginning of human life. Bill of Health. https://blog.petrieflom.law.harvard.edu/2017/12/08/islam-and-the-beginning-of-human-life/

[54] Aksoy, S. (2005). Making regulations and drawing up legislation in Islamic countries under conditions of uncertainty, with special reference to embryonic stem cell research. Journal of Medical Ethics, 31: 399–403.; see also: Mahmoud, Azza. "Islamic Bioethics: National Regulations and Guidelines of Human Stem Cell Research in the Muslim World." Master's thesis, Chapman University, 2022. https://doi.org/10.36837/chapman.000386

[55] Rashid, R. (2022). When does Ensoulment occur in the Human Foetus? Journal of the British Islamic Medical Association, 12(4). ISSN 2634-8071. https://www.jbima.com/wp-content/uploads/2023/01/2-Ethics-3_-Ensoulment_Rafaqat.pdf

[56] Sivaraman, M., & Noor, S. (2017). Ethics of embryonic stem cell research according to Buddhist, Hindu, Catholic, and Islamic religions: perspective from Malaysia. Asian Biomedicine, 8(1), 43–52. https://doi.org/10.5372/1905-7415.0801.260

[57] Jafari, M., Elahi, F., Ozyurt, S., & Wrigley, T. (2007). Religious Perspectives on Embryonic Stem Cell Research. In K. Monroe, R. Miller, & J. Tobis (Eds.), Fundamentals of the Stem Cell Debate: The Scientific, Religious, Ethical, and Political Issues (pp. 79–94). Berkeley: University of California Press. https://escholarship.org/content/qt9rj0k7s3/qt9rj0k7s3_noSplash_f9aca2e02c3777c7fb76ea768ba458f0.pdf https://doi.org/10.1525/9780520940994-005

[58] Lecso, P. A. (1991). The Bodhisattva Ideal and Organ Transplantation. Journal of Religion and Health, 30(1), 35–41. http://www.jstor.org/stable/27510629; Bodhisattva, S. (n.d.). The Key of Becoming a Bodhisattva. A Guide to the Bodhisattva Way of Life. http://www.buddhism.org/Sutras/2/BodhisattvaWay.htm

[59] There is no explicit religious reference to when life begins or how to conduct research that interacts with the concept of life. However, these are relevant verses pertaining to how the fetus is viewed. (King James Bible. (1999). Oxford University Press. (Original work published 1769))

Jeremiah 1:5 “Before I formed thee in the belly I knew thee; and before thou camest forth out of the womb I sanctified thee…”

In the prophet Jeremiah’s insight, God set him apart as a person known before birth, a theme carried into the Psalms of David.

Psalm 139:13-14 “…Thou hast covered me in my mother's womb. I will praise thee; for I am fearfully and wonderfully made…”

These verses demonstrate David’s reverence for God as one who knows all of man’s thoughts and doings even before birth.

[60] It should be noted that abortion is likewise not supported.

[61] The Vatican. (1987, February 22). Instruction on Respect for Human Life in Its Origin and on the Dignity of Procreation: Replies to Certain Questions of the Day. Congregation for the Doctrine of the Faith. https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_19870222_respect-for-human-life_en.html

[62] The Vatican. (2000, August 25). Declaration on the Production and the Scientific and Therapeutic Use of Human Embryonic Stem Cells. Pontifical Academy for Life. https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/rc_pa_acdlife_doc_20000824_cellule-staminali_en.html; Ohara, N. (2003). Ethical Consideration of Experimentation Using Living Human Embryos: The Catholic Church’s Position on Human Embryonic Stem Cell Research and Human Cloning. Department of Obstetrics and Gynecology. Retrieved from https://article.imrpress.com/journal/CEOG/30/2-3/pii/2003018/77-81.pdf

[63] Smith, G. A. (2022, May 23). Like Americans overall, Catholics vary in their abortion views, with regular Mass attenders most opposed. Pew Research Center. https://www.pewresearch.org/short-reads/2022/05/23/like-americans-overall-catholics-vary-in-their-abortion-views-with-regular-mass-attenders-most-opposed/

[64] Rosner, F., & Reichman, E. (2002). Embryonic stem cell research in Jewish law. Journal of Halacha and Contemporary Society, (43), 49–68.; Jafari, M., Elahi, F., Ozyurt, S., & Wrigley, T. (2007). Religious Perspectives on Embryonic Stem Cell Research. In K. Monroe, R. Miller, & J. Tobis (Eds.), Fundamentals of the Stem Cell Debate: The Scientific, Religious, Ethical, and Political Issues (pp. 79–94). Berkeley: University of California Press. https://escholarship.org/content/qt9rj0k7s3/qt9rj0k7s3_noSplash_f9aca2e02c3777c7fb76ea768ba458f0.pdf https://doi.org/10.1525/9780520940994-005

[65] Schenker, J. G. (2008). The beginning of human life: status of embryo. Perspectives in Halakha (Jewish Religious Law). Journal of Assisted Reproduction and Genetics, 25(6), 271–276. https://doi.org/10.1007/s10815-008-9221-6

[66] Ruttenberg, D. (2020, May 5). The Torah of Abortion Justice (annotated source sheet). Sefaria. https://www.sefaria.org/sheets/234926.7?lang=bi&with=all&lang2=en

[67] Jafari, M., Elahi, F., Ozyurt, S., & Wrigley, T. (2007). Religious Perspectives on Embryonic Stem Cell Research. In K. Monroe, R. Miller, & J. Tobis (Eds.), Fundamentals of the Stem Cell Debate: The Scientific, Religious, Ethical, and Political Issues (pp. 79–94). Berkeley: University of California Press. https://escholarship.org/content/qt9rj0k7s3/qt9rj0k7s3_noSplash_f9aca2e02c3777c7fb76ea768ba458f0.pdf https://doi.org/10.1525/9780520940994-005

[68] Gert, B. (2007). Common morality: Deciding what to do. Oxford University Press.

[69] World Medical Association. (2013). World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA, 310(20), 2191–2194. https://doi.org/10.1001/jama.2013.281053; see also: National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html

[70] Zakarin Safier, L., Gumer, A., Kline, M., Egli, D., & Sauer, M. V. (2018). Compensating human subjects providing oocytes for stem cell research: 9-year experience and outcomes. Journal of Assisted Reproduction and Genetics, 35(7), 1219–1225. https://doi.org/10.1007/s10815-018-1171-z https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6063839/; see also: Riordan, N. H., & Paz Rodríguez, J. (2021). Addressing concerns regarding associated costs, transparency, and integrity of research in recent stem cell trial. Stem Cells Translational Medicine, 10(12), 1715–1716. https://doi.org/10.1002/sctm.21-0234

[71] Klitzman, R., & Sauer, M. V. (2009). Payment of egg donors in stem cell research in the USA. Reproductive Biomedicine Online, 18(5), 603–608. https://doi.org/10.1016/s1472-6483(10)60002-8

[72] Krosin, M. T., Klitzman, R., Levin, B., Cheng, J., & Ranney, M. L. (2006). Problems in comprehension of informed consent in rural and peri-urban Mali, West Africa. Clinical Trials (London, England), 3(3), 306–313. https://doi.org/10.1191/1740774506cn150oa

[73] Veatch, R. M. (2012). Hippocratic, Religious, and Secular Medical Ethics: The Points of Conflict. Georgetown University Press.

[74] Msoroka, M. S., & Amundsen, D. (2018). One size fits not quite all: Universal research ethics with diversity. Research Ethics, 14(3), 1–17. https://doi.org/10.1177/1747016117739939

[75] Pirzada, N. (2022). The Expansion of Turkey’s Medical Tourism Industry. Voices in Bioethics, 8. https://doi.org/10.52214/vib.v8i.9894

[76] Stem Cell Tourism: False Hope for Real Money. Harvard Stem Cell Institute (HSCI). (2023). https://hsci.harvard.edu/stem-cell-tourism; see also: Bissassar, M. (2017). Transnational Stem Cell Tourism: An ethical analysis. Voices in Bioethics, 3. https://doi.org/10.7916/vib.v3i.6027

[77] Song, P. (2011). The proliferation of stem cell therapies in post-Mao China: problematizing ethical regulation. New Genetics and Society, 30(2), 141–153. https://doi.org/10.1080/14636778.2011.574375

[78] Dajani, R. (2014). Jordan’s stem-cell law can guide the Middle East. Nature, 510, 189. https://doi.org/10.1038/510189a

[79] International Society for Stem Cell Research. (2024). Standards in stem cell research . International Society for Stem Cell Research. https://www.isscr.org/guidelines/5-standards-in-stem-cell-research

[80] Benjamin, R. (2013). People’s Science: Bodies and Rights on the Stem Cell Frontier. Stanford University Press.

Mifrah Hayath

SM Candidate, Harvard Medical School; MS Biotechnology, Johns Hopkins University

Olivia Bowers

MS Bioethics, Columbia University (Disclosure: affiliated with Voices in Bioethics)
