Which of the following are conditions for determining causality in research studies?


External Validity

L.C. Leviton, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1.3 The Challenge of Complex Interactions

Causal relationships in real-world settings are complex, and statistical interactions of variables are assumed to be pervasive (e.g., Brunswik 1955, Cronbach 1982). This means that the strength of a causal relationship is assumed to vary with the population, setting, or time represented within any given study, and with the researcher's choices about treatments and measurement of outcomes. Without a sensitive assessment of such interactions, true effects can be obscured or causal claims can be overgeneralized to a wider range of people, settings, times, treatments, or outcome constructs than is warranted. Unfortunately, the number of possible interactions is endless, posing problems to the analyst that are insuperable, at least in theory (Cook 1993). The challenge is: how do we cope with this complexity?

The practical task is to assess the most plausible interactions within a given research area. Interactions among treatment, populations, measures, and context occur often and pose the most plausible threats to external validity (Campbell and Stanley 1966, Cook and Campbell 1979). Because external validity specifies the conditions under which an internally valid relationship can be reproduced, threats to external validity necessarily invoke internal validity.

The interaction of testing and treatment suggests that an effect size varies by the conditions of measurement.

The interaction of selection and treatment hypothesizes that an effect size varies by population studied.

The interaction of setting and treatment describes effects that vary by setting.

The interaction of history and treatment deals with the extent to which a causal relationship replicates across different times.

Other interactions between features of the study and treatment sometimes occur, but their plausibility depends on the individual study context and on what is generally known about these study features (see Internal Validity).

URL: https://www.sciencedirect.com/science/article/pii/B0080430767007312

Path Analysis

Christy Lleras, in Encyclopedia of Social Measurement, 2005

Direct and Indirect Causal Relationships

Causal relationships between variables may consist of direct and indirect effects. Direct causal effects are effects that go directly from one variable to another. Indirect effects occur when the relationship between two variables is mediated by one or more variables. For example, in Fig. 1, school engagement affects educational attainment directly and indirectly via its direct effect on achievement test score. Maternal education and parental income also have indirect effects on both achievement and educational attainment. Their indirect effects on achievement occur through their direct effects on school engagement. Their indirect effects on educational attainment occur through their influence on school engagement, through their influence on achievement, and through their effects on achievement and engagement, combined.

The magnitude of an indirect effect is determined by taking the product of the path coefficients along the pathway between the two causally related variables. The total indirect effect between two variables in a path model then equals the sum of these products across all indirect pathways. For example, the child's school engagement affects educational attainment indirectly through its effect on achievement. Thus, the magnitude of the indirect effect between engagement and attainment can be estimated by multiplying the paths from school engagement to achievement and from achievement to educational attainment, (pEA × pAS).

Calculating the total indirect effect between mother's education and child's educational attainment is a bit more complicated but follows the same logic. Maternal education affects educational attainment indirectly through child's achievement and the magnitude of the indirect effect is (pEA × pAM). Maternal education also indirectly influences educational attainment via child's school engagement and the magnitude of the effect is (pES × pSM). In addition, mother's education influences child's educational attainment both through its effect on school engagement and on achievement. The magnitude of this indirect effect is (pEA × pAS × pSM). Thus, the total indirect effect of mother's educational attainment on child's educational attainment is the sum of all of these indirect effects, (pEA × pAM) + (pES × pSM) + (pEA × pAS × pSM). Since mother's education is also correlated with parental income, all of these indirect effects also occur via this correlation.
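
The products-and-sums arithmetic just described is easy to sketch in code. In this illustration the coefficient values are invented; only the p(to, from) subscript convention follows the text:

```python
# Hypothetical path coefficients, named p_<to><from> as in the text:
# S = school engagement, A = achievement, E = educational attainment,
# M = maternal education. All numeric values are invented.
p_AS = 0.40  # engagement (S) -> achievement (A)
p_EA = 0.50  # achievement (A) -> attainment (E)
p_AM = 0.30  # maternal education (M) -> achievement (A)
p_SM = 0.20  # maternal education (M) -> engagement (S)
p_ES = 0.25  # engagement (S) -> attainment (E)

# Indirect effect of engagement on attainment via achievement:
engagement_indirect = p_EA * p_AS

# Total indirect effect of maternal education on attainment:
# M -> A -> E, plus M -> S -> E, plus M -> S -> A -> E.
maternal_indirect = (p_EA * p_AM) + (p_ES * p_SM) + (p_EA * p_AS * p_SM)

print(round(engagement_indirect, 4))  # 0.2
print(round(maternal_indirect, 4))    # 0.24
```

Each product traces one pathway in Fig. 1; summing them gives the total indirect effect, exactly as in the expression above.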

URL: https://www.sciencedirect.com/science/article/pii/B0123693985004837

Causal Inference

Alberto Abadie, in Encyclopedia of Social Measurement, 2005

Introduction

Establishing causal relationships is an important goal of empirical research in social sciences. Unfortunately, specific causal links from one variable, D, to another, Y, cannot usually be assessed from the observed association between the two variables. The reason is that at least part of the observed association between two variables may arise by reverse causation (the effect of Y on D) or by the confounding effect of a third variable, X, on D and Y.

Consider, for example, a central question in education research: “Does class size affect the test scores of primary school students? If so, by how much?” A researcher may be tempted to address this question by comparing test scores between primary school students in large and small classes. Small classes, however, may prevail in wealthy districts, which may have, on average, higher endowments of other educational inputs (highly qualified teachers, more computers per student, etc.). If other educational inputs have a positive effect on test scores, the researcher may observe a positive association between small classes and higher test scores, even if small classes have no direct effect on students' scores. As a result, an observed association between class size and average test scores should not be interpreted as evidence that small classes are effective in improving students' scores.
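
The confounding story can be made concrete with a toy simulation in which class size has no effect on scores at all, yet a naive comparison still favors small classes. All coefficients, thresholds, and noise levels below are invented for illustration:

```python
import random

random.seed(0)

small_scores, large_scores = [], []
for _ in range(10_000):
    wealth = random.gauss(0, 1)                        # confounder X: district wealth
    class_size = 25 - 3 * wealth + random.gauss(0, 2)  # D: wealthier -> smaller classes
    score = 60 + 5 * wealth + random.gauss(0, 5)       # Y: wealth raises scores;
                                                       # class_size plays no role at all
    (small_scores if class_size < 25 else large_scores).append(score)

# The naive comparison finds "small classes do better" -- a spurious association,
# since the simulation gave class size no causal effect on scores.
print(round(sum(small_scores) / len(small_scores), 1))
print(round(sum(large_scores) / len(large_scores), 1))
```

Because wealth drives both variables, conditioning on class size alone recovers the wealth effect, not a class-size effect.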

This gives the rationale for the often-invoked mantra “association does not imply causation.” Unfortunately, the mantra does not say a word about what implies causation. Moreover, the exact meaning of causation needs to be established explicitly before trying to learn about it.

URL: https://www.sciencedirect.com/science/article/pii/B0123693985001821

Validity, Data Sources

Michael P. McDonald, in Encyclopedia of Social Measurement, 2005

Internal and External Validity

The causal relationship of one concept to another is sometimes also discussed in terms of validity. Internal validity refers to the robustness of the relationship between one concept and another within the research question under study. Much of the discussion in the sections on threats to validity and tests for validity is pertinent to the internal validity of a measure, vis-a-vis another concept with which it is theoretically correlated. External validity refers to the generalizability of the relationship between the two concepts beyond the study itself. Is the uncovered relationship applicable outside of the research study?

The relationship between one measure and another may be a true relationship, or it may be a spurious relationship caused by invalid measurement of one of the measures. That is, the two measures may appear related because of improper measurement, and not because they are truly correlated with one another. Similarly, a true relationship between two measures may go undetected because invalid measurement prevents the discovery of the correlation. By now, the reader should be aware that no measure is perfectly valid; the hope is that the error induced in projecting theory onto the real world is small and unbiased, so that relationships, whether findings that two measures are or are not correlated, are correctly determined.

All of the threats to validity apply to the strength of the internal validity of the relationship between two measures, as both measures must be valid in order for the true relationship between the two, if any exists, to be determined. Much of the discussion of tests of content and convergent validity also applies to internal validity. In addition, researchers should consider the rules of inference in determining whether a relationship is real or spurious. Are there uncontrolled confounding factors driving the relationship? A classic example in time-series analysis is cointegration, the moving of two series together over time, such as the size of the population and the size of the economy, or any other measures that grow or shrink over time. In the earlier example of voter turnout, the confounding influence of a growing ineligible population led researchers to incorrectly attribute a largely invalid measure of decreasing voter turnout to negative advertising, a decline of social capital, the rise of cable television, campaign financing, the death of the World War II generation, globalization, and a decline in voter mobilization efforts by the political parties.
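
The cointegration trap is easy to reproduce: two series that each trend upward for unrelated reasons correlate almost perfectly even though neither influences the other. A minimal pure-Python sketch, with invented series and numbers:

```python
import random

random.seed(1)

# Two independent series, each drifting upward with its own noise
# (stand-ins for, say, population size and economic output).
a, b = [0.0], [0.0]
for _ in range(199):
    a.append(a[-1] + 1.0 + random.gauss(0, 1))
    b.append(b[-1] + 1.0 + random.gauss(0, 1))

def corr(x, y):
    """Pearson correlation, pure Python."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# Strong correlation with no causal link: both series simply trend.
print(corr(a, b) > 0.9)  # True
```

The shared time trend, not any causal mechanism, produces the correlation, which is why trending series demand special care before inferring a relationship.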

External validity refers to the generalizability of a relationship outside the setting of the study. Perhaps the characteristic that most distinguishes the social sciences from the hard sciences is that social scientists do not have the luxury of performing controlled experiments. One cannot go back in history and change events to determine hypothetical counterfactuals, whereas physicists may repeatedly bash particles together and observe how changing conditions alter outcomes. The closest the social sciences come to controlled experiments is in laboratory settings, where human subjects are observed responding to stimuli in controlled situations. But are these laboratory experiments externally valid to real situations?

In a classic psychology experiment, a subject seated in a chair is told that the button in front of them is connected to an electric probe attached to a second subject. When the button is pushed an increasing amount of voltage is delivered. Unknown to the subject, the button is only hooked to a speaker, simulating screams of pain. Under the right circumstances, subjects are coerced into delivering what would be fatal doses of voltage.

Such laboratory experiments raise the question of whether, in real situations, subjects would respond in a similar manner and deliver a fatal charge to another person; that is, is the experiment externally valid? Psychologists, sociologists, political scientists, economists, cognitive theorists, and others who conduct social science laboratory experiments painstakingly make the laboratory as close to the real world as possible in order to control for the confounding influence that people may behave differently when they know they are being observed. For example, this may take the form of one-way windows for observing child behavior. Unfortunately, the laboratory atmosphere is sometimes impossible to remove, as with subjects engaged in computer simulations, and subjects are usually aware prior to engaging in a laboratory experiment that they are being observed.

External validity is also an issue in forecasting, where models based on observed relationships may fail to predict hypothetical or unobserved events. For example, economists often describe the stock market as a random walk. Despite analyst charts that graph levels of support and simple trend lines, no model exists to predict what will happen in the future. For this reason, mutual funds come with the disclaimer, “past performance is no guarantee of future returns.” A successful mutual fund manager is likely to be no more successful than another in the next business quarter.

The stock market is perhaps the best example of a system that is highly reactive to external shocks. Unanticipated shocks are the bane of forecasting. As long as conditions remain constant, modeling will be at least somewhat accurate, but if the world fundamentally changes, the model may fail. Similarly, forecasts of extreme values outside the scope of the research design may fail, and when the world acts within the margin of error of the forecast, predictions, such as the winner of the 2000 presidential election, may be indeterminate.

URL: https://www.sciencedirect.com/science/article/pii/B0123693985000463

Scientific Foundations

I. Scott MacKenzie, in Human-computer Interaction, 2013

4.8 Relationships: circumstantial and causal

I noted above that looking for and explaining interesting relationships is part of what we do in HCI research. Often a controlled experiment is designed and conducted specifically for this purpose, and if done properly a particular type of conclusion is possible. We can often say that the condition manipulated in the experiment caused the changes in the human responses that were observed and measured. This is a cause-and-effect relationship, or simply a causal relationship.

In HCI, the variable manipulated is often a nominal-scale attribute of an interface, such as device, entry method, feedback modality, selection technique, menu depth, button layout, and so on. The variable measured is typically a ratio-scale human behavior, such as task completion time, error rate, or the number of button clicks, scrolling events, gaze shifts, etc.

Finding a causal relationship in an HCI experiment yields a powerful conclusion. If the human response measured is vital in HCI, such as the time it takes to do a common task, then knowing that a condition tested in the experiment reduces this time is a valuable outcome. If the condition is an implementation of a novel idea and it was compared with current practice, there may indeed be reason to celebrate. Not only has a causal relationship been found, but the new idea improves on existing practice. This is the sort of outcome that adds valuable knowledge to the discipline; it moves the state of the art forward. This is what HCI research is all about!

Finding a relationship does not necessarily mean a causal relationship exists. Many relationships are circumstantial. They exist, and they can be observed, measured, and quantified. But they are not causal, and any attempt to express the relationship as such is wrong. The classic example is the relationship between smoking and cancer. Suppose a research study tracks the habits and health of a large number of people over many years. This is an example of the correlational method of research mentioned earlier. In the end, a relationship is found between smoking and cancer: cancer is more prevalent in the people who smoked. Is it correct to conclude from the study that smoking causes cancer? No. The relationship observed is circumstantial, not causal. Consider this: when the data are examined more closely, it is discovered that the tendency to develop cancer is also related to other variables in the data set. It seems the people who developed cancer also tended to drink more alcohol, eat more fatty foods, sleep less, listen to rock music, and so on. Perhaps it was the increased consumption of alcohol that caused the cancer, or the consumption of fatty foods, or something else. The relationship is circumstantial, not causal. This is not to say that circumstantial relationships are not useful. Looking for and finding a circumstantial relationship is often the first step in further research, in part because it is relatively easy to collect data and look for circumstantial relationships.

Causal relationships emerge from controlled experiments. Looking for a causal relationship requires a study where, among other things, participants are selected randomly from a population and are randomly assigned to test conditions. A random assignment ensures that each group of participants is the same or similar in all respects except for the conditions under which each group is tested. Thus, the differences that emerge are more likely due to (caused by) the test conditions than to environmental or other circumstances. Sometimes participants are balanced into groups where the participants in each group are screened so that the groups are equal in terms of other relevant attributes. For example, an experiment testing two input controllers for games could randomly assign participants to groups or balance the groups to ensure the range of gaming experience is approximately equal.
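
Random assignment itself takes only a couple of lines. The sketch below assigns twenty hypothetical participants to the two game-controller conditions from the example; the participant names are made up:

```python
import random

random.seed(42)

participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)          # randomize the order...
group_a = participants[:10]           # ...then split into two groups of 10
group_b = participants[10:]

# Every participant lands in exactly one condition, and on average the
# groups are similar on attributes such as gaming experience.
print(len(group_a), len(group_b))     # 10 10
print(set(group_a) & set(group_b))    # set()
```

Balancing, by contrast, would first screen participants on the relevant attribute and then assign within strata rather than shuffling the whole pool.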

Here is an HCI example similar to the smoking versus cancer example: A researcher is interested in comparing multi-tap and predictive input (T9) for text entry on a mobile phone. The researcher ventures into the world and approaches mobile phone users, asking for five minutes of their time. Many agree. They answer a few questions about experience and usage habits, including their preferred method of entering text messages. Fifteen multi-tap users and 15 T9 users are found. The users are asked to enter a prescribed phrase of text while they are timed. Back in the lab, the data are analyzed. Evidently, the T9 users were faster, entering at a rate of 18 words per minute, compared to 12 words per minute for the multi-tap users. That’s 50 percent faster for the T9 users! What is the conclusion? There is a relationship between method of entry and text entry speed; however, the relationship is circumstantial, not causal. It is reasonable to report what was done and what was found, but it is wrong to venture beyond what the methodology gives. Concluding from this simple study that T9 is faster than multi-tap would be wrong. Upon inspecting the data more closely, it is discovered that the T9 users tended to be more tech-savvy: they reported considerably more experience using mobile phones, and also reported sending considerably more text messages per day than the multi-tap users who, by and large, said they didn’t like sending text messages and did so very infrequently. So the difference observed may be due to prior experience and usage habits, rather than to inherent differences in the text entry methods. If there is a genuine interest in determining if one text entry method is faster than another, a controlled experiment is required. This is the topic of the next chapter.

One final point deserves mention. Cause and effect conclusions are not possible in certain types of controlled experiments. If the variable manipulated is a naturally occurring attribute of participants, then cause and effect conclusions are unreliable. Examples of naturally occurring attributes include gender (female, male), personality (extrovert, introvert), handedness (left, right), first language (e.g., English, French, Spanish), political viewpoint (left, right), and so on. These attributes are legitimate independent variables, but they cannot be manipulated, which is to say, they cannot be assigned to participants. In such cases, a cause and effect conclusion is not valid because it is not possible to avoid confounding variables (defined in Chapter 5). Being a male, being an extrovert, being left-handed, and so on always brings forth other attributes that vary systematically across levels of the independent variable. Cause and effect conclusions are unreliable in these cases because it is not possible to know whether the experimental effect was due to the independent variable or to the confounding variable.

URL: https://www.sciencedirect.com/science/article/pii/B9780124058651000042

On Causality in Nonlinear Complex Systems

James A. Coffman, in Philosophy of Complex Systems, 2011

Summary

Science seeks to delineate causal relationships in an effort to explain empirical phenomena, with the ultimate goal being to understand, and whenever possible predict, events in the natural world. In the biological sciences, and especially biomedical science, causality is typically reduced to those molecular and cellular mechanisms that can be isolated in the laboratory and thence manipulated experimentally. However, increasing awareness of emergent phenomena produced by complexity and non-linearity has exposed the limitations of such reductionism. Events in nature are the outcome of processes carried out by complex systems of interactions produced by historical contingency within dissipative structures that are far from thermodynamic equilibrium. As such, they cannot be adequately explained in terms of lower level mechanisms that are elucidated under artificial laboratory conditions. Rather, a full causal explanation requires comprehensive examination of the flow networks and hierarchical relationships that define a system and the context within which it exists.

The fact that hierarchical context plays a critical role in determining the outcome of events reinvigorates Aristotelian conceptions of causality. One such perspective, which I refer to as developmentalism, views all non-random causality as a product of development at some level. Development (‘self-organization’) occurs via the selective agency of autocatalytic cycles inherent in certain configurations of processes, which competitively organizes a system as resources become limiting. In this view bottom-up causality (the concern of reductionism) holds sway mainly in immature systems, whereas top-down causality (organizational or informational constraint) dominates mature systems, the functioning of which is less dependent (and more constraining) on the activities of their lower-level parts. Extrapolating the developmentalist perspective to the limit, one might posit that the ultimate arbiters of causality, the ‘laws of physics’, are themselves no more than organizational constraints produced by (and contingent upon) the early development of the universe. The causal relationships that define chemistry and biology are more highly specified organizational constraints produced by later development. Developmentalism helps resolve a number of long-standing dialectics concerned with causality, including reductionism/holism, orthogenesis/adaptation, and stasis/change.

In biological sciences, developmentalism engenders a discourse that overcomes barriers imposed by the still-dominant paradigms of molecular reductionism on the one hand and Darwinian evolution on the other. With regard to the former, it provides a better interpretive framework for the new science of ‘systems-biology’, which seeks to elucidate regulatory networks that control ontogeny, stem cell biology, and the etiology of disease. With regard to the latter, it provides an intelligible bridge between chemistry and biology, and hence an explanation for the natural origin of life. Finally, developmentalism, being an inherently ecological perspective, is well-suited as a paradigm for addressing problems of environmental management and sustainability.

URL: https://www.sciencedirect.com/science/article/pii/B9780444520760500109

Business, Social Science Methods Used in

Gayle R. Jennings, in Encyclopedia of Social Measurement, 2005

Experimental and Quasi-experimental Methods

Experiments enable researchers to determine causal relationships between variables in controlled settings (laboratories). Researchers generally manipulate the independent variable in order to determine its impact on a dependent variable. Such manipulations are also called treatments. In experiments, researchers attempt to control confounding variables and extraneous variables. Confounding variables may mask the impact of another variable. Extraneous variables may influence the dependent variable in addition to the independent variable. Advantages of experiments include the ability to control variables in an artificial environment. Disadvantages include the mismatch between reality and laboratory settings and the focus on a narrow range of variables at any one time. Laboratory experiments enable researchers to control experiments to a greater degree than experiments conducted in simulated or real business environments. Experiments in the field (business and business-related environments) may prove challenging due to issues of gaining access and ethical approval. However, field experiments (natural experiments) allow the measurement of the influence of the independent variable on the dependent variable within a real-world context, although not all extraneous variables are controllable. The classical experimental method involves independent and dependent variables, random sampling, control groups, and pre- and posttests. Quasi-experiments omit aspects of the classical experimental method (such as a control group or a pretest).
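
A classical design of the kind just listed (random assignment, a control group, pre- and posttests) can be sketched as a simulation. The +5 treatment effect and all noise levels below are invented for illustration:

```python
import random

random.seed(7)

TRUE_EFFECT = 5.0  # built into the simulation; "unknown" to the researcher

def run_subject(treated):
    """One subject: pretest, (possible) treatment, posttest; return the gain."""
    pre = random.gauss(50, 5)
    post = pre + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 2)
    return post - pre

subjects = list(range(40))
random.shuffle(subjects)                      # random assignment
treatment, control = subjects[:20], subjects[20:]

gain_t = [run_subject(True) for _ in treatment]
gain_c = [run_subject(False) for _ in control]

# Difference in average gains estimates the treatment effect.
estimate = sum(gain_t) / len(gain_t) - sum(gain_c) / len(gain_c)
print(round(estimate, 1))  # close to the built-in effect of 5
```

Dropping the control group or the pretest from this sketch would turn it into one of the quasi-experimental variants mentioned above, at the cost of weaker causal claims.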

URL: https://www.sciencedirect.com/science/article/pii/B012369398500270X

Volume 4

Nina Holling, ... Nick Maskell, in Encyclopedia of Respiratory Medicine (Second Edition), 2022

Air and Blood in the Pleural Space

Studies have long debated the causal relationship between the presence of air and blood and accumulation of eosinophils in the pleural space (Ferreiro et al., 2011; Krenke et al., 2009; Rubins and Rubins, 1996; Adelman et al., 1984). EPEs appear to be a common finding in post-traumatic pleural effusions, for example following cardiothoracic surgery (Krenke et al., 2009; Martinez-Garcia et al., 2000; Rubins and Rubins, 1996; Wysenbeek et al., 1985). In post-traumatic pleural effusions eosinophilia is induced by local synthesis of IL-5 by T helper 2 lymphocytes as well as activation of the complement cascade (de Blay et al., 1997; Schandene et al., 1993).

On this basis, it was believed that repeated thoracentesis should also lead to EPEs. Chung et al. found that patients undergoing repeat thoracentesis for malignant pleural effusions had higher incidences of EPE and attributed this to the local release of proinflammatory cytokines such as tumor necrosis factor-α, IL-1β, IL-8, vascular endothelial growth factor and plasminogen-activator inhibitor-1 (Chung et al., 2007; Chung et al., 2003). However, recent studies have shown that repeated thoracentesis does not increase the incidence of EPEs (Ferreiro et al., 2011; Krenke et al., 2009; Martinez-Garcia et al., 2000; Rubins and Rubins, 1996).

Furthermore, some studies have even suggested that repeated thoracentesis (within 2–12 weeks) could reduce the incidence of EPE (Rubins and Rubins, 1996). Ferreiro et al. (2011) studied 50 patients with EPE and found that a high eosinophil count could be related to blood in the pleural fluid; a significant correlation between the number of red blood cells and the number of eosinophils in the EPE was identified. This correlation has not been confirmed in other studies (Rubins and Rubins, 1996).

URL: https://www.sciencedirect.com/science/article/pii/B9780081027233000287

Experimental research

Jonathan Lazar, ... Harry Hochheiser, in Research Methods in Human Computer Interaction (Second Edition), 2017

Abstract

Experimental research allows the identification of causal relationships between entities or events. Successful experimental research depends on well-defined research hypotheses that specify the dependent variables to be observed and the independent variables to be controlled. After a hypothesis is constructed, the design of an experiment consists of three components: treatments, units, and the assignment method. In an experiment, the process of sample selection and the assignment of treatments or experiment conditions needs to be randomized or counterbalanced. Significance testing is used to judge whether the observed group means are truly different. All significance tests are subject to Type I errors and Type II errors. It is generally believed that Type I errors are worse than Type II errors; therefore, the alpha threshold that determines the probability of making a Type I error should be kept low.
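
The Type I error rate named by alpha can be demonstrated directly: when two groups are drawn from the same distribution (so the null hypothesis is true), a test at alpha = 0.05 still rejects about 5% of the time. A pure-Python sketch with invented sample sizes:

```python
import random

random.seed(3)

def false_alarm(n=50):
    """z-test on two samples from the SAME unit-variance distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5              # standard error of the difference (variance known)
    return abs(diff / se) > 1.96     # reject at alpha = 0.05?

# Fraction of "significant" results when no real difference exists:
rate = sum(false_alarm() for _ in range(2000)) / 2000
print(round(rate, 2))  # roughly 0.05
```

Lowering alpha shrinks this false-alarm rate, at the cost of a higher Type II error rate for any fixed sample size.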

URL: https://www.sciencedirect.com/science/article/pii/B9780128053904000029

21st European Symposium on Computer Aided Process Engineering

Erzsébet Németh, ... Ian T. Cameron, in Computer Aided Chemical Engineering, 2011

2.4 Cause-implication graphs

The adopted structured language, including the causal relationships captured as semantic triplets during the BLHAZID workflow, facilitates the determination of failure propagation through the system, and the determination of potential root causes and possible consequences of a deviation using backward and forward reasoning. As a graphical representation of the causality information, we introduced the cause-implication graph: the nodes of the graph are failures, and each edge represents a causal relationship between nodes.

Beyond the graphical illustration of the causality information, graph-based analyses can be used to trace consistency-checking issues such as missing causal relations, orphaned failures, or disconnected sub-graphs within the blended method, and they provide a formal means of auditing hazard identification across the life cycle. Cause-implication graphs have great utility in operator training, on-line diagnosis, and design decisions.
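
The forward and backward reasoning described above reduces to graph traversal. In this sketch the failure names and edge list are hypothetical; following edges forward yields consequences, and following them over the reversed graph yields potential root causes:

```python
# Edges run cause -> effects; nodes are failures (names invented).
EDGES = {
    "pump_failure": ["low_flow"],
    "valve_stuck": ["low_flow"],
    "low_flow": ["overheating"],
    "overheating": ["trip"],
}

def reachable(node, graph):
    """All nodes reachable from `node` by following edges forward."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def reverse(graph):
    """Flip every edge so traversal runs effect -> causes."""
    rev = {}
    for src, dsts in graph.items():
        for d in dsts:
            rev.setdefault(d, []).append(src)
    return rev

consequences = reachable("low_flow", EDGES)          # forward reasoning
root_causes = reachable("low_flow", reverse(EDGES))  # backward reasoning
print(sorted(consequences))  # ['overheating', 'trip']
print(sorted(root_causes))   # ['pump_failure', 'valve_stuck']
```

A node with no incoming edges in the reversed traversal that should have a cause, or a node unreachable from any other, would surface exactly the orphaned-failure and missing-relation consistency issues mentioned above.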

URL: https://www.sciencedirect.com/science/article/pii/B9780444537119502145

What are the 3 conditions for causality?

There are three conditions for causality: covariation, temporal precedence, and control for “third variables.” The latter comprise alternative explanations for the observed causal relationship.

How do you determine causality in research?

There are three widely accepted preconditions to establish causality: first, that the variables are associated; second, that the independent variable precedes the dependent variable in temporal order; and third, that all possible alternative explanations for the relationship have been accounted for and dismissed.

What are the criteria for determining causality?

The first three criteria are generally considered requirements for identifying a causal effect: (1) empirical association, (2) temporal priority of the independent variable, and (3) nonspuriousness. You must establish these three to claim a causal relationship.

What allows researchers to determine causality?

Answer and Explanation: The only way for a research method to determine causality is through a properly controlled experiment.