Nursing Research and Evidence-Based Practice
These questions relate to nursing research and evidence-based practice. Answers refer to the textbook Nursing Research: Generating and Assessing Evidence for Nursing Practice (ISBN 9781605477084).
Discuss the differences between research, research utilization, and evidence-based practice. You may want to link this to the historical evolution of research in nursing.
Research refers to the systematic process of inquiry that generates knowledge about a particular topic in order to reach conclusions. Research utilization, on the other hand, is the process by which findings from research are used to guide practice. Research utilization should not, however, be confused with evidence-based practice. Evidence-based practice is an extension of research utilization: it involves finding evidence for practice, considering patient and practitioner preferences, differences, and values, and then making informed practice decisions. Research utilization involves only the application of research findings to clinical practice, whereas evidence-based practice builds on research utilization so that clinical decision-making rests on the best available evidence judged in relation to the particular context in which the decision is made. Historically, evidence-based practice was carried out through research utilization alone until Archie Cochrane, Gordon Guyatt, and David Sackett argued that, to be optimal, research evidence must be combined with clinical expertise and patient preferences (Satterfield et al., 2009). Since then, there has been a clear distinction between research as the process that generates evidence, research utilization as the simple application of evidence to practice, and evidence-based practice as the broader application of evidence (from research, clinical experience, and patient preference) to practice.
Identify and discuss 2 major ways in which qualitative research differs from quantitative research. Is one better than the other? Provide reference(s).
Qualitative and quantitative research differ chiefly in the research questions they aim to answer and in how data are collected and analyzed. Qualitative research is aimed at answering questions about the why and how of a particular phenomenon, while quantitative research is aimed at answering what, where, and when questions about a phenomenon. As stated by Creswell (2007), qualitative research explores the phenomenon in depth and is exploratory in nature. It is best for studies that aim to define problems and develop approaches to those problems, since it allows the researcher to delve into the issues of interest in depth. Data collection methods for qualitative studies include participant and non-participant observation, reflexive journals, unstructured, semi-structured, and structured interviews, focus group discussions, and field notes. Data are typically analyzed thematically using content analysis or discourse analysis. Quantitative research, on the other hand, is aimed at describing the phenomenon. Data are often collected through questionnaires and survey tools and analyzed using descriptive statistics such as percentages, mean, mode, and median (Miller & Salkind, 2002). Neither approach is inherently better; the choice depends on the research question being asked.
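The descriptive statistics mentioned above can be computed directly; a minimal sketch using Python's standard library, with hypothetical Likert-scale responses invented purely for illustration:

```python
import statistics

# Hypothetical Likert-scale responses (1 = strongly disagree, 5 = strongly agree)
responses = [3, 4, 4, 5, 2, 4, 3, 5, 4, 4]

mean = statistics.mean(responses)      # 3.8
median = statistics.median(responses)  # 4.0
mode = statistics.mode(responses)      # 4 (most frequent response)

# Percentage of respondents agreeing (rating of 4 or 5)
pct_agree = 100 * sum(r >= 4 for r in responses) / len(responses)  # 70.0
print(mean, median, mode, pct_agree)
```

In a real survey the same summaries would be run per item or per scale score rather than on a single list.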
Discuss sources of bias for both quantitative and qualitative research. For quantitative research, be sure to address both random and systematic bias. You may use examples from the articles you selected as illustrations of bias and/or preventing bias.
In quantitative studies, the major sources of bias are design bias, random bias, measurement bias, and systematic bias; in qualitative studies, bias arises from sampling, procedure, design, and the researcher. Design bias arises when the study uses a design that does not control threats to its internal and external validity. The study is then unable to identify its inherent validity problems, for example when participants selected because they scored highest or lowest on a particular variable show a regression-to-the-mean effect. Random bias occurs when the sampling procedure introduces bias, such as omitting specific minority groups from the sample or targeting only the most desirable statistics. Measurement bias occurs when the researcher does not control the effects of the data collection tools. A good example is when questions invite socially desirable answers, such as asking whether a person has been involved in criminal behavior. It may also arise when an invalid measure is used, such as the rate of adverse events instead of cure rates in efficacy trials. Procedural or systematic bias occurs when the researcher administers the research items under adverse conditions, such as paying subjects or promising items such as course credits to students. In qualitative studies, researcher bias occurs when researchers allow their personal bias to influence how the data are collected and analyzed so that the results reflect their personal values or opinions (Pannucci & Wilkins, 2010).
Researchers often identify the research problem and then go in search of a theory; discuss the disadvantages of doing this. What does the textbook recommended above suggest researchers do to ensure a true fit between theory and study design?
When a researcher uses a particular theory to generate ideas for their research, it often introduces subjectivity, since the researcher begins to think only along the lines of that theory when establishing validity and reliability information and approaches. This makes it difficult for the researcher to look at aspects outside the theory that may be simpler and more sensible. Because the theory also includes core assumptions, the researcher tends to adopt those same assumptions, which may lead to over- or under-estimation of the effect in their research. It is important to test the accuracy of a theory's core assumptions before deciding to use it to guide the research. To test them, the researcher should search the literature for other studies that have tested these assumptions or conduct a simple study to test them. Another way to ensure the theory is a good fit is to pit two theories that make opposite predictions against each other and see which better fits the study (Groves et al., 2009).
Describe the quantitative design of the article you selected. Present the strengths and limitations of this type of design according to the book mentioned above and how these are reflected in your study. Contrast the design you have selected with another design.
The quantitative study chosen was a randomized trial that allocated children aged 5 to 16 years who had been diagnosed as obese to either hospital-based or primary care. The researchers used a randomization sequence to eliminate random bias, and even when the allocation ratio was changed to allocate more patients to primary care, they altered the randomization lists to reflect this. A strength of this study is that it eliminated random bias by using a randomization sequence. Systematic bias was also reduced by ensuring that participants gave voluntary informed consent. The limitations of the study stem from its being a pilot randomized trial and thus not adequately powered. The study was also conducted at a small scale in two primary care clinics, which makes it impossible to generalize the findings to larger populations. Compared with a qualitative design, the evaluation of the two points of care, primary and hospital-based, could only be done with the quantitative design, since the aim was to evaluate the effectiveness of primary care (Banks, Sharp, Hunt, & Shield, 2012).
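The cited trial does not publish its allocation algorithm, but a randomization sequence of the general kind described above can be sketched as follows. This is a generic permuted-block sketch with a 2:1 ratio echoing the altered allocation favoring primary care; the function name, block composition, and seed are all hypothetical, not taken from the article.

```python
import random

def block_randomize(n_blocks: int, block: list) -> list:
    """Generate an allocation sequence from randomly permuted blocks.

    Permuted blocks keep the allocation ratio exact within each block,
    so the interim group sizes can never drift far from the target ratio.
    """
    sequence = []
    for _ in range(n_blocks):
        b = list(block)    # copy so the template block is untouched
        random.shuffle(b)  # random order within the block
        sequence.extend(b)
    return sequence

random.seed(7)  # fixed seed only so the example is reproducible
# 2:1 allocation: two primary-care slots and one hospital slot per block
seq = block_randomize(8, ["primary", "primary", "hospital"])
print(seq.count("primary"), seq.count("hospital"))  # 16 8
```

In practice such a list is prepared in advance and concealed from recruiters, which is what protects the trial from selection bias.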
Describe the qualitative design (or methodology) of the article you selected. Present the strengths and limitations of this type of design according to the book mentioned above and how these are reflected in your study. Contrast the design you have selected with another design.
The researchers conducted semi-structured interviews with parents whose children had completed 12 months of treatment in the childhood obesity clinical trial described earlier. The sampling criteria were clear, and the researchers aimed at maximum variation sampling to balance age, gender, and trial allocation. They used a topic guide that covered their areas of interest, such as expectations and experience in the clinic, practitioner advice, changes made after clinical advice, and other aspects. The authors tried to include children in the study by simplifying questions and engaging the children in the interview. The study design had the strength of going in depth to understand the topic areas the researchers were interested in. Its limitations include the inability to interview parents and children separately, so the input of the children was minimal. The authors also report that the participants did not have a chance to review the findings, which affects the validity and reliability of the results. Compared with the quantitative design, this design was best for exploring the why questions related to childhood obesity and for exploring and explaining the concepts the researchers defined (Banks, Cramer, Sharp, Shield, & Turner, 2013).
Read the section Questionnaires vs. Interviews on pages 305-306 in the above-mentioned textbook. How are these guidelines similar to and different from data collection by nurses when giving care? What principles did you identify that are new to you but could be important in improving your collection of clinical data?
In the routine care environment, nurses collect a great deal of data through the various forms they have to fill in. The difference between the guidelines in the book and these forms comes first from the intention of the data collection. Data collected in routine care are aimed at ensuring the patient receives the highest standard of care. Principles such as anonymity are therefore not observed, because the data are meant solely for the provision of care: forms used in routine care often carry the patient's name or number and can easily be traced to a particular patient. Secondly, in routine care, interviewer bias is not a concern, because the interviewer rarely has an angle; they are only collecting data aimed at ensuring the patient gets appropriate care. The principle of supplementary data is common to both routine data collection and research, because the nurse has to be observant about the patient's surroundings. Principles that were new to me include anonymity in data collection for research, missing information, and sample control. Missing information arises when a respondent decides not to answer a question or gives a "don't know" response, whereas in clinical data collection the patient is usually required to answer a question, even a sensitive or uncomfortable one, if it pertains to their care. In sample control, the researcher has to check that the people interviewed are the intended respondents, which is not usually a concern in clinical data collection (Polit & Beck, 2012).
You are interested in nurses' attitudes toward EBP (evidence-based practice). Which method do you think would work best to obtain this information: a questionnaire, a face-to-face interview, or a group interview? Defend your answer.
For a study of nurses' attitudes towards evidence-based practice, a group interview (focus group discussion) within a qualitative design would work best. According to Holloway and Wheeler (2002), qualitative studies are best suited to exploring the attitudes or behaviors of interest to the researcher. A qualitative design allows the researcher to dwell on the why and how of the attitudes and to go into greater detail about the specific attitudes and why they exist. Focus group discussions are favored because the interaction among participants brings out more issues than individual interviews, which often yield the same information, and far more depth than a questionnaire. The qualitative approach also recognizes the importance and complexity of these attitudes, which cannot be captured in one or a few fixed descriptions; rather, rich responses are needed to allow an in-depth analysis that brings together the complex aspects of the attitudes. The output of such a study will thus reflect the individual attitudes by showing the full picture of their differences and similarities as well as explaining them in detail.
Demographic data are collected for every study. What is the purpose of describing the demographic data?
Demographic data relate to the different characteristics of the study population. They are important because they help to categorize the study participants in case a trend is more common in one age or sex category than another, and to describe the population in general so that the inclusion criteria for the study can be compared against the participants actually included. For studies such as randomized trials, demographic data help in understanding differences between the baseline data of the recruited groups, demonstrate the robustness of the randomization procedures used in the trial, and thus improve the validity and reliability of the results. Furthermore, in certain studies, interesting trends may emerge that can only be described by subgrouping the data by demographic characteristics (Peck & Devore, 2011). For example, it is only possible to tell the impact of a particular intervention or mode of care on a vulnerable population if the results of a larger study are subset by that population. Demographic data also indicate the implications or generalizability of the findings; for example, findings from studies conducted primarily in urban populations may not be directly transferable to rural populations.
There is a tendency for novice researchers to develop their own instrument if they cannot readily find one. How might you respond to a peer or manager who asks you to help develop a new tool to collect patient data on anxiety prior to cardiac catheterization?
To develop a data collection tool, the first step is to look at the tools currently used in the hospital as well as published instruments available on the Internet. This helps in understanding the expectations and standards in tool development. It is also important to discuss the data that need to be collected in order to create a tool that meets this need, since tools available on the Internet or present in the hospital may not meet the organization's needs (Oppenheim, 2000). For this particular question of anxiety prior to cardiac catheterization, it is important to collect both qualitative and quantitative data. As a researcher, I would therefore develop a questionnaire that collects demographic data, data on episodes of anxiety, and possible reasons why patients feel anxious. To go into these issues in depth, I would prepare a semi-structured interview guide aimed at understanding patients' anxiety-related behavior before cardiac catheterization.
State in your own words what is meant by Type I and Type II errors. Why are these important? Name one thing that can be done to improve the internal validity of a study.
A Type I error occurs when a true null hypothesis is incorrectly rejected, while a Type II error occurs when the researcher fails to reject a null hypothesis that is false. The Type I error represents a false positive: an error in the research leads to the conclusion that there is an effect when in reality there is none. The Type II error represents a false negative and leads to showing no effect when one exists. In a comparison of two means, concluding that the means are different when they really are not is a Type I error, while concluding that the means are similar when they are really different is a Type II error (Sheskin, 2003). These errors are important because they determine how often a study reaches a wrong conclusion: the probability of a Type I error is capped by the chosen significance level, while the probability of a Type II error falls as sample size, and hence statistical power, increases.
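The two error rates can be made concrete with a small simulation. The sketch below (all numbers hypothetical, not drawn from the textbook) repeatedly compares two sample means with a two-sided z-test at the conventional 0.05 significance level:

```python
import math
import random

random.seed(0)

def z_test_rejects(mu1, mu2, n=30, sigma=1.0, z_crit=1.96):
    """Draw two normal samples and return True if the null (equal means) is rejected."""
    s1 = [random.gauss(mu1, sigma) for _ in range(n)]
    s2 = [random.gauss(mu2, sigma) for _ in range(n)]
    diff = sum(s1) / n - sum(s2) / n
    z = diff / (sigma * math.sqrt(2 / n))  # known-sigma two-sample z statistic
    return abs(z) > z_crit

trials = 2000
# Null is true (equal means): every rejection is a Type I error (false positive)
type1 = sum(z_test_rejects(0.0, 0.0) for _ in range(trials)) / trials
# Null is false (true difference of 0.5): every non-rejection is a Type II error
type2 = sum(not z_test_rejects(0.0, 0.5) for _ in range(trials)) / trials
print(type1, type2)  # type1 hovers near 0.05; type2 is the miss rate (1 - power)
```

Raising the sample size `n` shrinks `type2` while `type1` stays pinned near the significance level, which is exactly the trade-off the text describes.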
Internal validity can be increased by blinding in a study. Blinding participants and research personnel reduces bias in the interpretation of results, because the personnel cannot behave in ways that influence the outcomes or their interpretation. It decreases the effects of the research personnel on the study results, thus improving the validity of the study (Brewer, 2000).
An example of a multivariate procedure is analysis of covariance (ANCOVA). Explain what is meant by the following statement: ANCOVA offers post hoc statistical control. Provide an example.
ANCOVA offers a statistical alternative to blocking and matching: instead of controlling pre-test scores through the design, it computes a regression equation relating the pre- and post-test scores of each of the chosen groups and uses it to answer the question of what the difference in post-test scores would be if the pre-test scores were held constant. (Blocking is when the researcher selects participants within a similar range of pre-test scores, while matching is when the researcher finds pairs of participants with the same pre-test scores (Sheskin, 2003).) By using ANCOVA, the researcher increases the power of the prediction, because the correlation between the pre- and post-test scores is accounted for: the researcher is not looking at the post-test scores alone but is using the pre-test scores to explain them. For example, suppose the post-treatment weights of participants in three treatment groups are 170.0, 169.4, and 155.6 for diets A, B, and C. Looking at the post-treatment weights only, the researcher might interpret the result as treatment differences. Using ANCOVA, however, the researcher would examine the participants' pre-treatment weights against their post-treatment scores to determine whether the difference remains once baseline weight is controlled.
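A minimal sketch of that adjustment, using NumPy least squares on simulated data (the diet labels, sample sizes, and effects are invented, not taken from any cited trial): the model regresses post-treatment weight on group indicators plus pre-treatment weight, so the group coefficients are the post-test differences holding pre-test weight constant.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40  # participants per diet group (hypothetical)

# Simulated true weight-change effects relative to diet A (kg)
effects = {"A": 0.0, "B": -2.0, "C": -8.0}
rows = []
for diet, eff in effects.items():
    pre = rng.normal(170.0, 10.0, n)            # pre-treatment weights
    post = pre + eff + rng.normal(0.0, 2.0, n)  # post = pre + effect + noise
    rows += [(diet, p, q) for p, q in zip(pre, post)]

# Design matrix: intercept, dummies for B and C (A is the reference), pre-test covariate
X = np.array([[1.0, d == "B", d == "C", p] for d, p, _ in rows], dtype=float)
y = np.array([q for _, _, q in rows])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[1], beta[2]: adjusted B-A and C-A post-test differences (near -2 and -8)
# beta[3]: pre/post slope (near 1), the correlation ANCOVA exploits
print(beta.round(2))
```

Because the pre-test covariate soaks up baseline variation, the group coefficients have much smaller standard errors than a comparison of raw post-test means would.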
In the final section of study reports, there is a section on implications and recommendations. Describe the difference between these terms. Provide examples from one of the studies that you critiqued.
The implications section of a study report articulates the meaning of the study findings for practice and research and how they extend scientific knowledge. The implications may concern the development of methods or theory, new efficacy results, or content and topic areas in which further research needs to be conducted; for clinical studies, the section describes the implications of the findings for clinical practice (Wikman, 2006). The implications section of the selected study states that primary care has the potential to be as effective as hospital care in providing training and support for weight management of children. Recommendations, on the other hand, are what the authors feel should follow from the findings in order to maximize the study's impact, such as future research that needs to be conducted. In the Banks et al. (2012) study, the recommendation provided is for further research to evaluate the appropriateness of primary care.
Researchers have a responsibility to identify the limitations of a study. What is meant by limitation? Provide examples from one of the studies that you critiqued.
The limitations section allows readers to understand the weaknesses of the study that they should take into account when interpreting the findings. According to Drotar (2009), when authors disclose the limitations of their study, it allows them to be critical about their research and to present counterarguments that future research should consider. It also presents issues that threaten the validity of the findings presented in the study, and it offers suggestions for new research agendas for the future. In the Banks et al. (2012) study, the researchers discuss the lack of statistical power and the localized nature of the findings so that those interpreting the findings understand that they may not be generalizable.
Discuss what is meant by mixed-methods designs. What are the limitations of these designs?
A mixed-methods design is one in which qualitative and quantitative designs are used together. The method leverages the advantages of both designs, and including both offsets their respective disadvantages. One limitation of the method is that it is time- and resource-intensive: since a mixed-methods design uses both a qualitative and a quantitative design, one must often be completed before the other can be used to explore the concepts that emerged, which also makes it expensive. It is also difficult for a single researcher to learn multiple methods and use both effectively. Finally, interpreting conflicting results is difficult, as is analyzing quantitative data qualitatively (Ponterotto, Matthew, & Raughley, 2013).
Banks, J., Cramer, H., Sharp, D.J., Shield, J.P., & Turner, K.M. (2013). Identifying families’ reasons for engaging or not engaging with childhood obesity services: A qualitative study. Journal of Child Health Care. doi: 10.1177/1367493512473854
Banks, J., Sharp, D.J., Hunt, L.P., & Shield, J.P. (2012). Evaluating the transferability of a hospital-based childhood obesity clinic to primary care: a randomised controlled trial. Br J. Gen Pract, 62(594), e6-12. doi: 10.3399/bjgp12X616319
Brewer, M. (2000). Research Design and Issues of Validity. In H. Reis & C. Judd (Eds.), Handbook of Research Methods in Social and Personality Psychology. Cambridge: Cambridge University Press.
Creswell, J.W. (2007). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, California: SAGE Publications.
Drotar, D. (2009). Editorial: How to Write an Effective Results and Discussion for the Journal of Pediatric Psychology. Journal of Pediatric Psychology, 34(4), 339-343. doi: 10.1093/jpepsy/jsp014
Groves, R.M., Fowler, F.J., Jr., Couper, M.P., Lepkowski, J.M., Singer, E., & Tourangeau, R. (2009). Survey Methodology. New York: John Wiley & Sons.
Holloway, I., & Wheeler, S. (2002). Qualitative Research in Nursing. New York: Wiley.
Miller, D.C., & Salkind, N.J. (2002). Handbook of Research Design and Social Measurement. Thousand Oaks, California: SAGE Publications.
Oppenheim, A.N. (2000). Questionnaire Design, Interviewing and Attitude Measurement. London: Bloomsbury.
Pannucci, C.J., & Wilkins, E.G. (2010). Identifying and Avoiding Bias in Research. Plastic and Reconstructive Surgery, 126(2), 619-625. doi: 10.1097/PRS.0b013e3181de24bc
Peck, R., & Devore, J. (2011). Statistics: The Exploration & Analysis of Data. Stamford, Connecticut: Cengage Learning.
Polit, D.F., & Beck, C.T. (2012). Nursing Research: Generating and Assessing Evidence for Nursing Practice (9th ed.). Connecticut: Wolters Kluwer Health/Lippincott Williams & Wilkins.
Ponterotto, J.G., Matthew, J.T., & Raughley, B. (2013). The Value of Mixed Methods Designs to Social Justice Research in Counseling and Psychology. Journal for Social Action in Counseling and Psychology, 5(2), 42-68.
Satterfield, J.M., Spring, B., Brownson, R.C., Mullen, E.J., Newhouse, R.P., Walker, B.B., & Whitlock, E.P. (2009). Toward a Transdisciplinary Model of Evidence-Based Practice. Milbank Quarterly, 87(2), 368-390. doi: 10.1111/j.1468-0009.2009.00561.x
Sheskin, D.J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures (3rd ed.). Taylor & Francis.
Wikman, A. (2006). Reliability, Validity and True Values in Surveys. Social Indicators Research, 78(1), 85-110.