
Structural Contingency Theory

Lex Donaldson, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Abstract

Structural contingency theory holds that the effect on organizational performance of organizational structure depends upon how far the structure fits the contingencies, such as uncertainty, strategy, and size. Organizations facing low uncertainty are fitted by specialized and centralized hierarchical structures, whereas organizations facing high uncertainty are fitted by lower specialization and decentralization (i.e., decisions being taken at lower levels of the hierarchy). Undiversified strategy is fitted by a functional structure, whereas diversified strategy is fitted by a multidivisional structure. Larger size is fitted by more specialized and decentralized structure. Various changes over time in focus are identified, such as from differentiation to interdependence.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868731102

Organizations: Authority and Power

Cynthia Hardy, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Managerialist Studies of Power in OMT

Although conducted separately and with a different rationale, managerialist studies of power in OMT adopted a view of power similar to that of the literature described above, insofar as researchers explored how individuals in organizations exercised power as part of a deliberate strategy to achieve intended outcomes in the face of opposition. In particular, researchers focused on uncovering the sources of power that allowed actors who did not possess legitimate authority to influence decision outcomes. In this way, researchers began to differentiate between formal, legitimate power – authority – and informal, illegitimate power.

Crozier (1964) studied how maintenance workers in a French state-owned tobacco monopoly had a high degree of power in relation to their position in the hierarchy because they controlled the uncertainty associated with breakdowns of equipment. This and similar work contributed to the strategic contingencies theory of intraorganizational power (Hickson et al., 1971), which argued that subunits within an organization were powerful when they were able to cope with greater amounts of uncertainty, were central, and were not easily substitutable – in other words, when other subunits depended upon them. Similar assumptions underpinned the resource dependency view of power.

This theory states that power resides among a set of interdependent subunits or organizations that exchange resources with each other. The value of the resources that a subunit/organization controls and the extent to which those resources can be obtained elsewhere (i.e., the subunit/organization's substitutability in the exchange relationship) determine the terms of exchange, and thus the power in relation to other subunits/organizations. If the value that a subunit/organization provides can be replaced (i.e., substituted), then there is little dependence on that subunit/organization, which consequently has little power in that social relationship.

(Magee et al., 2008: p. 362)
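The resource-dependence logic above – power rising with the value of the resources a subunit controls and falling with their substitutability – can be sketched as a toy calculation. The function, its scale, and the example numbers are illustrative inventions, not part of the theory's formal apparatus:

```python
def power_score(resource_value, substitutability):
    """Toy index of subunit power: rises with the value of the resources
    the subunit controls, falls as those resources become easier to
    obtain elsewhere. Both inputs are assumed to lie in [0, 1]."""
    return resource_value * (1.0 - substitutability)

# Crozier-style maintenance workers: valued, hard-to-replace control
# over the uncertainty of equipment breakdowns.
maintenance = power_score(resource_value=0.9, substitutability=0.1)

# A subunit whose equally valued contribution is easily substituted.
routine = power_score(resource_value=0.9, substitutability=0.9)

# maintenance > routine: low substitutability, not resource value
# alone, drives the power differential in this sketch.
```

The point of the sketch is simply that two subunits controlling equally valued resources end up with very different power scores once substitutability is taken into account.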

Researchers began to identify the resources that constituted the bases or sources of power, such as information, expertise, credibility, prestige, access to higher echelon members, and the control of money, rewards, and sanctions (Pettigrew, 1973). They also questioned whether simply possessing scarce resources was enough to confer power, arguing that actors also had to be aware of their contextual pertinence and sufficiently skilled to deploy them. This process of mobilizing power was typically referred to as ‘politics’, which further reinforced the idea that the use of power was illegitimate, dysfunctional, and self-serving.

Researchers using this approach to power concentrated on the link between power and the control of scarce resources – or dependencies – that helps actors to influence decision-making processes in the face of opposition. Those individuals with the greatest access to power sources and the greatest skill in putting them to use are the most likely to prevail in decision making. Decision outcomes produced in this way are described as political, rather than rational, and are typically assumed to be at odds with the interests and objectives of those in formal authority. Power is only used in the first place because of the existence of conflict.

[It] is clear that political activity is activity which is undertaken to overcome some resistance or opposition. Without opposition or contest within the organization, there is neither the need nor the expectation that one would observe political activity.

(Pfeffer, 1981: p. 7)

In this way, the use of power is inextricably linked to the situations of overt conflict associated with struggles among actors seeking to protect vested interests.

This managerialist work on power thus converges theoretically with the study of community power. First, both community and OMT researchers challenged the existing models of decision making. Studies of community power refuted the élitist views of decision making suggested by some sociologists to propose a pluralist perspective; while management studies of power challenged the existence of the rational model in organizations by saying that decision making was, in fact, political (although, whereas political scientists tended to promote pluralism – and the attendant politics – over élitism, management researchers clung to rationality as the desirable mode of functioning). Second, both bodies of literature adopted a behavioral approach that focused on the overt exercise of power in the decision-making arena, and both argued that decision outcomes were influenced by actors' access to and expertise in using power (although OMT researchers seemed uninterested in the concept of nondecision making, which did not gain much traction in this literature). Third, both bodies of work employed similar definitions of power, i.e., the ability to get others to do what you want them to, if necessary against their will (Weber, 1947) or to do something they otherwise would not do (Dahl, 1957).

While managerialist approaches in OMT have produced insight into some aspects of power, existing organizational arrangements – particularly in the guise of legitimate authority – are largely excluded from analysis. Power represents dysfunctional interruptions in the far more desirable flow of rational decisions by self-interested actors who engage in its political mobilization, a term whose negative connotations helped to reinforce the view that power is illegitimate and dysfunctional (except, perhaps, when used by managers in defense of organizational goals). As power and authority were pulled further apart, authority was increasingly taken for granted; with researchers rarely stopping to consider how it got there or whose interests it serves. Power was what counted – and what fascinated – researchers (Hardy and Clegg, 2013).

Power, related to one's control over valued resources, transforms individual psychology such that the powerful think and act in ways that lead to the retention and acquisition of power.

(Magee et al., 2008: p. 351)

As the managerialist literature has grown, it has increasingly conceptualized power in highly individualized terms: as individual control over valued resources that are asymmetrically distributed in social relations. Confidence in the predictive ability of power mechanisms such as hierarchical position, perceptions of competence, self-enhancement, hedonic sense-making, and the desire to win has led researchers like Jeffrey Pfeffer (2013) to claim their universality as determinants of organizational outcomes.

Not only does power induce different motivations in individuals, but the powerful soon learn that they can get away with things that others cannot.

(Pfeffer, 2013: p. 277)

In contrast, Willmott (2013a: p. 281) argues, “power and influence reach well beyond the surface mechanisms identified and naturalized by Professor Pfeffer.” He argues that managerialist work ignores and obscures the ways in which power is institutionalized and reproduced – to the extent that this approach can be said to constitute uncritical management studies.

[Power] tends to be conceived shallowly as an attribute of individuals and groups who possess it, and to be manifest in zero-sum games involving acquisition, exercise, and potential loss of power [which] reflects and reinforces a highly individualized and individualizing representation of an episodic manifestation of power that largely displaces consideration of its relational formation and systemic operation.

(Willmott, 2013a: p. 285)

So, let us now turn to critical management studies to see how work in this area has treated power and authority.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868731138

People in Organizations

Jennifer A. Chatman, Jack A. Goncalo, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Some Individuals Can Effect Change More Than Others

Early leadership research focused on the physiological and psychological traits thought to be associated with exceptional leaders. These ‘great man’ theories of leadership examined the effects of personal characteristics such as height, physical appearance, and intelligence on leaders' emergence and effectiveness. This stream of research has its counterpart in more current studies examining the effects of self-confidence, extraversion, and energy level (e.g., House, 1988). The aim of this approach has been to identify a leadership personality. However, it leaves many crucial questions unanswered, such as whether certain personal characteristics become more important than others depending on the organizational context, and why, regardless of formal authority, followers perceive some people as leaders and not others.

Contingency theories of leadership were advanced to explain how certain personal characteristics made a leader effective in certain situations (e.g., House and Baetz, 1979). For example, leaders who initiated structure raised the productivity and satisfaction of a group working on a boring or simple task but lowered the productivity and satisfaction of a group working on a complex task, while a considerate leader raised the satisfaction and productivity of a group engaged in a boring task but had little effect on a group engaged in a task they found intrinsically interesting. Additionally, research showed that allowing members to participate in decision making increased commitment but depended on the amount of trust the leader had in his or her subordinates as well as the urgency of task completion (Vroom and Jago, 1978). Thus, contingency theories of leadership were more comprehensive than trait theories; however, they still did not account for the interactive effects of leader characteristics and their situational contexts.

Recent research has focused on charismatic and transformational leadership, demonstrating that some individuals influence situations more than others. This research takes an interactional approach by conceptualizing leadership as a personal relationship between the leader and his or her followers. A leader must have certain interpersonal skills in order to inspire followers to set aside their goals and to pursue a common vision. Charismatic leaders are thought to have the ability to change their circumstances by increasing followers' motivation and commitment and, sometimes, to change the direction of the entire organization (e.g., Meindl et al., 1985). However, a leader is only charismatic if followers recognize him or her as such; followers must identify with the vision articulated by the leader. In one particularly exhaustive laboratory study of charismatic leadership (Howell and Frost, 1989), confederates were trained to display qualities of a charismatic leader, such as projecting a dominant presence, articulating a large overarching goal, and displaying extreme confidence in followers' ability to accomplish this goal. In addition, norms were created in each group for either high or low productivity. In contrast to participants working under a considerate or structuring leader, participants working under the charismatic leader displayed higher task performance regardless of the group productivity norm. This finding suggests that leaders mold their styles in response to the situation. Moreover, some leaders are capable of changing the situation itself by changing followers' perceptions and motivation.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868730200

Organizations, Sociology of

Erhard Friedberg, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Efficiency and Legitimacy as Unifying Forces Shaping Organizational Forms

The other tradition, to which we will now turn, has a different starting point. It sees organizations as structural forms, the nature, characteristics, and dynamics of which have to be explained. Thus, it starts with organizations and focuses on the variation of their forms: organizations are its basic units of analysis, and it tries to analyze the social dynamics at the inter-organizational, sectoral, or societal level in order to explain organizational form. These dynamics are traced to two distinct constraints, efficiency and legitimacy, each of which is stressed by a different strand of analysis.

Efficiency is the constraint stressed by structural contingency theory. This paradigm emerged in the middle of the 1960s and developed as a critical reaction to the theoretical and methodological perspective characteristic of organizational sociology of the 1950s. With regard to methodology, the then dominant qualitative case-study method was criticized because it provided merely a thick description but no grounds for generalizations or for the construction of a general theory of organizations. With regard to theory, it was argued that the over-emphasis on motivations and human relations characteristic of organizational thinking so far had had two consequences detrimental to our understanding of organizations: the role of structure and its influence on these relations had been downplayed, while, and this was seen as even more important, the context of an organization and the way its characteristics condition an organization's structure and functioning had been largely ignored.

As a consequence, structural contingency theory set out on a different program. Its focus was not on action or behavior within organizations, but on organizations as structured entities whose characteristics and change over time have to be explained using quantitative methods for the statistical study of samples of organizations, in order to list, describe, and, if possible, measure the influence which the main dimensions of an organization's context exert on its structures, its functioning, and its performance. In other words, this paradigm was concerned with two main questions: Which dimensions of context affect an organization's (mainly structural) characteristics, and to what extent? And what is the influence of each of these characteristics on the performance of an organization?

Structural contingency theory was the dominant paradigm in the field of organization studies from the middle of the 1960s up to the first half of the 1980s, especially in the Anglo-Saxon world. It generated an immense effort to determine and measure the impact of the various dimensions of context. Let us mention a few particularly significant research programs. The influence of technology on the structure of organizations was explored by J. Woodward (1958, 1965, 1970) and C. Perrow (1967, 1970); the Aston group around D. Pugh (1963) and D. Hickson et al. (1969), as well as P. Blau in the United States (Blau and Schoenherr, 1971), explored the link between size in particular (among other, less central variables) and organizational structure. The impact of the technical, economic, and social characteristics of an organization's environment on its structure and mode of functioning has been independently studied in the seminal work of Burns and Stalker in England (1961) and Lawrence and Lorsch in the United States (1967). Last but not least, the more conceptual work of J. Thompson (1967) has also been very influential, especially in regard to his conceptualization of what he called the 'task environment' of an organization. This approach is developed further by the population ecology of organizations, which builds on the seminal work of Aldrich (1979) as well as Hannan and Freeman (1977) and which aims at studying the contextual conditions which explain the emergence, the diffusion, and the disappearance of populations of organizations which share the same characteristics and which fit certain contextual conditions or ecological niches.

The main contribution of this quantitative and apparently more scientific approach to the study of organizations has been to demonstrate empirically the impossibility of defining a single best way of structuring an organization. The good, i.e., the efficient, structure cannot be defined in general and beforehand. It is a function of the context and can only be defined after the different dimensions of this context have been recognized and taken into account in the organizational design. However, this very important contribution should not obscure the theoretical and empirical shortcomings of the approach. According to the reasoning on which it is based, context becomes a constraint because organizations are viewed as driven by the constraint of efficiency: organizations have to adjust to their contexts because their performance depends on this fit; in order to survive, they have to be efficient and, in order to be efficient, they have to adjust to the demands of their context. Although there is certainly some truth in this hypothesis, the empirical diversity of organizations with similar contexts has shown that structural contingency theory vastly overestimated the unifying power of the constraint of efficiency. It has enlarged our understanding of the forces which shape organizations, but in the process it has overstated its case and has been proven wrong by empirical analysis.

Against this reductionism, which claims to analyze organizations from a purely technical or economic viewpoint (the pressure of the constraint of efficiency), the neo-institutional school in organizational analysis has promoted a more sociological perspective. It emphasizes the symbolic and normative dimensions of action in and between organizations, and stresses the role of their culture, i.e., a set of cognitive and normative frames, to explain their mode of functioning. In other words, it puts forward a less intentional and rational perspective on organizations. In this view, organizations are neither the simple tools of their masters nor machines to maximize efficiency. They are also institutions, i.e., social worlds with their specific identity and culture, which, once created, take on a life of their own and develop their own ends, which can never be reduced to mere considerations of efficiency.

Sociological neo-institutionalism builds on, and tries to integrate, several theoretical perspectives: the work of P. Selznick (1943, 1949) on organizations as institutions, the work of H. Simon and his group at Carnegie on bounded rationality and cognitive frames, and the work of Berger and Luckmann on the processes of institutionalization understood as processes of the social construction of reality. In organizational analysis, it brings together the work of American and European sociologists, such as N. Brunsson, P. DiMaggio, N. Fligstein, J. G. March, J. Meyer, J. P. Olsen, W. W. Powell, B. Rowan, W. R. Scott, and L. G. Zucker, whose analysis of organizations is based on roughly three main premises.

First, an organization is an institution because it is structured by a set of cognitive, normative, and symbolic frames, which shape the behavior of its members by providing them with the tools necessary to observe and perceive the world around them, to interpret and understand their counterparts' behavior, and to construct their own interests as well as possible ways to further them. Through their structures – formal (organizational forms, procedures, institutional symbols) as well as informal (myths, rituals, social norms) – organizations shape the perceptions, calculations, reasoning, interpretations, and actions of their members by defining acceptable and legitimate behavior, i.e., behavior which is appropriate in the context of its culture.

Second, no organization exists independently of other organizations which share the same characteristics and which together form an organizational field: e.g., the organizational field of universities, hospitals, schools, airlines, museums, etc. Such organizational fields have their own producers and consumers, their cognitive and normative frames, their power structures, and their control mechanisms. In short, they have their own institutional structure and their own dynamics, which are brought about by competition as well as interdependence among the constituent organizations, by processes of professionalization (i.e., the establishment of cognitive and normative frames for the field), by government intervention, and the like. The dynamics of these organizational fields exert unifying pressures on the individual organizations, which, in order to enhance their legitimacy, tend to adopt similar, if not identical, institutional forms and procedures.

Third, while recognizing the importance of the technical and economic environment which was stressed by structural contingency theory, the neo-institutionalist perspective is interested mainly in the influence of the societal and institutional environment. The institutional environment concerns the characteristics of the organizational field of which an organization is a part, and the rules it has to follow if it wants to obtain resources from the field and strengthen its legitimacy in it. The societal environment designates the norms and values of modern societies which, according to DiMaggio and Powell (1983) or Meyer (Meyer and Scott, 1994), are characterized by processes of rationalization (to a certain extent, this can be understood as a restatement of Max Weber's process of the disenchantment of the world) and the diffusion of standardized norms as a consequence of increased intervention by states, professions, science, and organizational fields. Both institutional and societal environments constitute a sort of unifying matrix for organizations, which have to conform to their pressures if they want to be accepted and thus able to draw resources for their functioning.

In short, the neo-institutional perspective stresses the constraint of legitimacy, as opposed to the constraint of efficiency. Rational structures (formal organizations) do not dominate the modern world because they are efficient. They adopt rationalized institutional norms because these will enable them to obtain the resources necessary for their success and their survival as they increase their legitimacy in a wider cultural environment (rationalist Western society and culture).

Taken together, the two strands of reasoning mentioned in the opening remarks provide a complete panorama of the forces shaping organizations: efficiency and legitimacy are the constraints within which the games being played in and between organizations are embedded, but the games, which depend upon the cognitive and relational capacities of the individuals playing them, in turn mediate these constraints. The unifying forces stressed by contingency theory and the neo-institutionalist perspective should therefore never be overestimated: they are themselves subject to differentiating pressures stemming from the cognitive and relational capacities of the humans who play games in contexts structured by constraints of efficiency and legitimacy. These relational and cognitive capacities are never determined and never final: they are the motor of the infinite variation observable in organizational life, and the motor of the innovation which succeeds in destabilizing even the best-established technical or institutional environments.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868321043

Ecology: Organizations

Joel A.C. Baum, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Niche-Width Dynamics

Niche-width theory (Hannan and Freeman, 1977) focuses on two aspects of environmental change – variability and grain – to explain the differential survival of specialists, those who possess few slack resources and concentrate on a narrow range of customers, and generalists, those who attempt to appeal to the mass market and exhibit tolerance for more varied environments. Variability refers to the size of environmental fluctuations over time. Grain refers to the patchiness of these fluctuations, with fine-grained fluctuations being frequent and coarse-grained ones infrequent. The key prediction is that in fine-grained environments with large-magnitude fluctuations, specialists ride out the fluctuations, outcompeting generalists, who are unable to respond quickly enough to operate efficiently. Niche-width theory thus challenges the classic prediction that uncertain environments always favor generalists that spread their risk (see Structural Contingency Theory).

In contrast to niche-width theory, which implies an optimal strategy for each population, Carroll (1985) proposes that, in environments characterized by economies of scale, competition among generalists to occupy the center of the market frees peripheral resources that can be used by specialists; a process he refers to as resource partitioning. His model predicts that in concentrated markets specialists can exploit more resources without engaging in direct competition with generalists, and by implication, while increasing market concentration increases the failure of generalists, it lowers the failure of specialists.

Although the specialist–generalist distinction is common in ecological research, tests of niche-width theory are scarce, and resource-partitioning studies do not typically contrast niche-width and resource-partitioning predictions. Yet, there is a conflict between these two theories. They offer distinct views of specialism and generalism and the tradeoff between organizational survival and niche width. The theories also focus on different kinds of niches, niche-width theory emphasizing fundamental niches (i.e., the set of environments in which an organization can survive in the absence of competition), and resource-partitioning theory emphasizing realized niches (i.e., the subset of the fundamental niche in which an organization can survive in the presence of competitors). It is thus unclear whether the two theories differ fundamentally or can be integrated into a unified theory. However, there have been early efforts to link the two theories (Dobrev et al., 2001) by combining niche-width theory's emphasis on environmental dynamics with resource partitioning's emphasis on scale advantages.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868731114

Generalization: Conceptions in the Social Sciences

Thomas D. Cook, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Extrapolating to Nonsampled Entities

Some social scientists probably think it silly to use samples for generalizing to populations that are not even proximally similar to the samples used. So, many animal researchers abjure all talk of generalizing to humans. But their language is often anthropomorphic, and their research is usually funded because others think it is relevant to humans. So, elaborate reasoning has evolved over time to justify generalizing from animals to humans (Steel, 2008).

In this regard, consider the use of cats in visual research (Blake, 1988). The rationale for their use is in two parts. The first is entirely pragmatic and stresses the conveniences of working with cats (or Drosophila in genetics research), usually because of cost and ethics. The second is theoretical and is based both on cats sharing some anatomical and physiological features with humans and on behavioral research showing that cats and humans behave similarly when confronted with similar visual stimuli. However, the empirical work necessary for showing such correspondences has also revealed that cats are much less like humans in some physical ways – especially features related to night vision – and that some behaviors are not replicated across species when the same stimulus is presented. So, a contingency theory has evolved of the conditions under which cats are better or worse substitutes for humans.

But when asked, honest vision researchers will avow that convenience is the paramount rationale for using cats. Would they rather work with chimpanzees? Undoubtedly, since the chimps' visual system is much closer to humans' than the cats'. But the expense of maintaining chimps precludes using them. Would the researchers rather use humans than chimps? Undoubtedly, if ethical research procedures are possible – which they often are not. However, now that new imaging technologies are available that allow observing the human eye reacting to planned stimuli in great microdetail and in real time, many vision researchers are turning to work with humans rather than cats because the pragmatic obstacles to using humans now hardly apply. Thus, within the community of vision researchers, the preference for generalization to humans via within-species proximal similarity is clear. So, too, is the distaste for arguments that depend on the (only partial) proximal similarity between cat and human on some vision-related attributes, given their obvious dissimilarity in others. A similar story could be told for conducting research in the laboratory rather than in the field (Shadish and Cook, 2009). Convenience is paramount, and the reasons given for preferring the lab often contain elements of rationalization. But even so, extrapolation across species and settings is surely superior when there is an empirically defensible contingency theory of the conditions under which the extrapolation is better and worse warranted.

How is extrapolation otherwise achieved? Causal explanation is often held to play a major role, explaining how or why two or more things are related (Mackie, 1974). Once an explanatory process has been identified, the argument is that it can be reproduced in other settings, with other kinds of people, at future times, and with unique ways of instantiating the process. Indeed, this is the major public rationale for basic science. Once one knows why fire happens, one can replicate it with new kinds of materials (say, some newly invented polymers), in new settings (an uninhabited desert island), and with people who have never seen fire before (so long as they can understand the explanatory recipe). The problem with this view is that full explanation is hard to conceptualize and achieve. It is difficult enough to use data for discriminating among explanatory claims even at one level of explanation, let alone for simultaneously investigating factors at the biological, individual, and social levels. Even so, causal explanation is the holy grail of science, not just for its aesthetic parsimony, but also for the utilitarian causal generalization it promotes in the form of extrapolation.

Extrapolation is also achieved through careful induction. If patient education reduces the length of hospital stays and does so with patients suffering from 25 different diagnoses, then it is likely to do so with the 26th. Of course, the logical warrant for such induction is limited, and seems more reasonable the more numerous and heterogeneous are the patient types already examined. If all 25 were different forms of cancer diagnosis, or involved patients aged between 25 and 40, or were restricted to men, this limited heterogeneity would reduce our confidence in any inductive-empirical extrapolation. Meta-analysis is particularly important in this regard, especially when it permits replication across multiple different sources of irrelevancy. A relationship that has proven to be robust in whatever form it was operationalized is more likely to reoccur when other ways of operationalizing its components are attempted – more likely, but not certain. We see here the value traditionally assigned in science to discovering both general dependencies and the contingencies that describe when a given relationship does and does not hold.

The final context for extrapolation relates to formal modeling. When subgroups can be arrayed along a single continuum, the functional form of a causal relationship can then be described. Is it stable? Is there a linear increase or decrease? If a pattern clearly emerges, one might want to argue that the strength of the relationship can be inferred for unobserved subpopulations with values that fall within the continuum range. Simple interpolation is involved here, while the strength of the relationship for values beyond the continuum is inferred by extrapolating from the observed function. But comfort with such extrapolation depends on how distant the nonobserved subpopulation values are from the sampled ones.
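The logic above can be made concrete with a minimal sketch. All of the numbers here are hypothetical: effect sizes observed for subgroups arrayed along a single continuum (say, patient age), a straight line fitted to them, and predictions made both inside the sampled range (interpolation) and beyond it (extrapolation).

```python
import numpy as np

# Hypothetical effect sizes for subgroups arrayed along one continuum
# (e.g., mean treatment effect observed at patient ages 20..60).
x_observed = np.array([20, 30, 40, 50, 60], dtype=float)
effect = np.array([0.42, 0.38, 0.33, 0.29, 0.25])

# Describe the functional form -- here, a straight line fitted by
# least squares.
slope, intercept = np.polyfit(x_observed, effect, deg=1)
predict = lambda x: slope * x + intercept

# Interpolation: a value inside the sampled range (relatively safe).
print(predict(45))

# Extrapolation: values beyond the range. The warrant weakens with
# distance from the sampled values, exactly as the text notes.
print(predict(65))   # just beyond the observed range
print(predict(90))   # far beyond -- much weaker warrant
```

The fitted line is only as good as the assumed functional form; if the true relationship curves outside the sampled range, both distant predictions will be systematically wrong, which is the sense in which comfort with extrapolation depends on distance from the observed values.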

Model-based extrapolation is always tricky, but especially so when the models are more complex. Consider much of macroeconomics. Models of a given economic system are developed from theory and past findings, and changes in parameter values are introduced to see how the system reacts to this perturbation. This is basically an extrapolation exercise to discover what would happen if some details of the economic system were not as they actually are. Can we extrapolate from what we think we know, in order to learn what would happen under novel circumstances? The validity of any extrapolation depends on the model being correctly specified with respect to the variables included, the relationships postulated between them, and the strengths of these relationships. Be wrong about any of these assumptions and any conclusions about the effects of the novel perturbation will be biased to some unknown extent. The assumption that the model is complete and accurate is difficult to test, though crucial to the credibility of the results (Glymour et al., 1987). Still, the creation of valid multivariate models of causal interdependency is another major goal of science, and it is not difficult to see why. It promises the extrapolation of results beyond whatever circumstances were studied in the past.
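The sensitivity to specification described above can be illustrated with a deliberately toy model, not an actual macroeconomic one. The sketch below uses a textbook Keynesian-cross equilibrium with entirely hypothetical parameter values: a fiscal perturbation is introduced, the model's predicted response is computed, and then the same perturbation is run with one mis-specified parameter to show how far the extrapolated conclusion can drift.

```python
# Toy equilibrium model: Y = C + I + G with consumption C = a + b*Y,
# which solves to Y = (a + I + G) / (1 - b). Parameter values are
# hypothetical and chosen only to illustrate model-based extrapolation.
def equilibrium_income(a, b, investment, government):
    return (a + investment + government) / (1.0 - b)

# Perturbation exercise: what would happen if government spending
# rose by 20, given the assumed marginal propensity to consume b = 0.8?
baseline = equilibrium_income(a=50, b=0.8, investment=100, government=200)
perturbed = equilibrium_income(a=50, b=0.8, investment=100, government=220)
predicted_effect = perturbed - baseline   # 20 / (1 - 0.8) = 100

# If the model is mis-specified -- the true b is 0.6, not 0.8 -- the
# same perturbation yields half the predicted response.
true_effect = (equilibrium_income(a=50, b=0.6, investment=100, government=220)
               - equilibrium_income(a=50, b=0.6, investment=100, government=200))
print(predicted_effect, true_effect)
```

A single wrong parameter halves the extrapolated effect here; in a genuinely multivariate model, errors in the included variables, the postulated relationships, and their strengths compound in ways that are far harder to diagnose.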

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780080970868440286

Leadership in Organizations, Sociology of

Manfred Kets de Vries, Alicia Cheak-Baillargeon, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

A Definitional Confusion

The Anglo-Saxon etymological origin of the words lead, leader, and leadership is laed, which stands for ‘path’ or ‘road.’ The verb laeden means ‘to travel.’ Thus a leader is one who shows fellow travelers the way by walking ahead. This metaphor of the leader as helmsman is still very much on the mark. Unfortunately, the clarity of leadership's etymology is rarely matched with clarity of meaning. When we plunge into the organizational literature on leadership, we quickly become lost in a labyrinth: there are endless definitions, countless articles, and never-ending polemics. Papers, books, and articles claiming to delineate leadership proliferate, yet their conclusions can be confusing and even conflicting.

The most recent handbooks on leadership (Bass and Bass, 2008; Bryman et al., 2011) demonstrate the richness of leadership studies in their multiplicity of perspectives (social, psychological, historical, political, cultural, and even military) and approaches (theoretical, empirical, interdisciplinary, and policy-centered). Among the more popular are descriptions in terms of traits, behavior, relationships, and follower perceptions. Prevalent themes in the last decade include personality-based approaches, contingency theories, transformational leadership, leader-followership, innovation and creativity, the role of emotions, and the shadow side of leadership. More recently, with the financial crisis that began in 2008 and the poignancy of a number of high-profile corporate scandals, the nature and integrity of leadership practices have also come under the spotlight. The number of academic journals partially or fully devoted to the study of leadership continues to increase (Leadership Quarterly, Leadership, Journal of Leadership Studies, International Journal of Leadership Studies, Journal of Leadership and Organizational Studies, to name a prominent few), reflecting the diverse and creative discourse on leadership study, as has the number of practitioner and commercial books on leadership, indicating popular interest in the subject.

We continue to see a movement away from laboratory experiments, observations of leaderless groups, or the activities of lower level supervisors toward what leaders at a higher level are doing in the context of their work environment. In Harvard Business Review's 10 Must Reads on Leadership (2011), all of which include case studies with CEOs and top executives, the opinions on what makes a great leader may vary, but the authors generally agree that leadership is not a gift inherent to a chosen few, but something which can be cultivated. This includes fine-tuning one's emotional intelligence, implementing key leadership behaviors, adopting effective strategies for adapting to crisis and leading through change, the ability to find meaning in and to learn from extremely difficult events, and practicing authenticity.

While much progress has been made, the proliferation of studies has only generated more questions. The observations and profiles of actual leaders show a theatrical gamut of personalities, attributes, strengths, and weaknesses, with different contexts calling for different leadership styles. A review by Yammarino et al. (2005) cited in Crossan and Mazutis (2008) identified at least 17 different leadership theories, offering effectiveness remedies ranging from classical approaches to more contemporary forms such as charismatic and transformational leadership. However, this review excludes more recent streams of leadership studies in areas such as strategic leadership or shared leadership, as well as more positive forms such as authentic, spiritual, ethical, or responsible leadership. While we continue to broaden our understanding of leadership, great leadership, while consisting of many teachable components, remains elusive.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780080970868730807

Bureaucracy, Sociology of

Amanda K. Damarin, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Bureaucracy Today

More recently, numerous social scientists have suggested that bureaucracy's era is over and that it is rapidly being replaced by other organizational forms. The primary culprit (or hero, to some) is increased economic turbulence since the mid-1970s, a product of new technologies and increasingly international, specialized, and competitive markets. Drawing on the logic of contingency theory, many scholars have surmised that under such conditions, both mass production and the bureaucratic forms that typically accompany it have grave disadvantages because their large size and complex hierarchical structures make them slow to innovate and adapt to change (Piore and Sabel, 1984; Saxenian, 1994). Thus, the world described by Whyte and Mills has given way to a new economy comprised of ‘postbureaucratic’, ‘postindustrial’, ‘post-Fordist’, ‘flexible’, and/or ‘networked’ organizations (Heydebrand, 1989; Piore and Sabel, 1984; Powell, 2001; Wood, 1989). Regardless of nomenclature, these new forms share some characteristics of the nonbureaucratic ‘organic’ organizations originally identified by contingency theorists Burns and Stalker (1961): flexible and multiskilled jobs rather than rigid divisions of labor, control via collaboration in a relatively ‘flat’ networked structure rather than through a command hierarchy, and an emphasis on worker participation and commitment. They also have features that Burns and Stalker did not anticipate, notably including heavy use of contingent labor such as temporary, freelance, and subcontracted work (Heydebrand, 1989; Powell, 2001). Whether these turns away from bureaucracy are positive, particularly for workers, is the subject of rather polarized debate, with some linking them to increases in skill and autonomy while others emphasize the potential for labor intensification, neopatrimonialism, and insecure employment (see Powell, 2001; Smith, 1997; Vallas, 1999; Wood, 1989 for reviews).

Further, an even more recent line of scholarship suggests that bureaucracy's downfall has been overestimated. Ritzer (1993) claims that small eddies of postindustrialism have done little to stem the rising tide of McDonaldization, in which logics of bureaucratization and rationalization are greatly intensified through the application of the fast-food principles – efficiency, predictability, quantification, and control – to all areas of social life, including workplaces but also health, education, recreation, and the family. Similarly, Head (2003) points out that rather than embracing flexibility and collaboration, many firms have responded to economic distress by adopting new, hi-tech productivity practices such as business process reengineering and enterprise resource planning, which extend bureaucratic regulation to new levels of exactitude and new categories of workers. Others argue that bureaucracy may have waned but is today making a resurgence, albeit often in new, more flexible hybridized forms, and that contrary to popular opinion this is not necessarily for the worse (du Gay, 2005; Piore, 2011). For workers, the insecurity, particularism, and insidious normative and peer controls found in ‘flexible’ firms may make formal structure look enticing (Barker, 1993; Kunda, 1992). More generally, after roughly three decades of experience with the consequences of ever-less-regulated market capitalism – Enron, the Dotcom bubble, the mortgage crisis and Great Recession, spiraling economic inequality, the Bangladeshi factory collapse, ongoing environmental degradation – many see bureaucratic emphases on planning, structure, accountability, transparency, and universalism as virtues rather than vices. For better and for worse, bureaucracy remains a persistent feature of the contemporary world.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780080970868320116

Industrial/Organizational Psychology across Cultures

Zeynep Aycan, in Encyclopedia of Applied Psychology, 2004

2 The Role of Culture in Industrial and Organizational Psychology

Organizations are complex systems that operate under the influence of multiple environmental forces that are both internal (e.g., size, type of work, industry and production, type of workforce, technology, stage of development) and external (e.g., political, legal, educational, institutional) to the organization. The challenge of cross-cultural I–O psychology is to disentangle the impact of culture (i.e., sociocultural context) from other internal and external environmental forces.

Since the early 1960s, comparative studies of organizations have focused on noncultural factors influencing organizational structure and practices. Among the most popular noncultural approaches is the “contingency” approach, within which there were four main streams. The first, referred to as the “logic of industrialization,” asserts that industrialization has a homogenizing effect on organizations around the world, irrespective of the political, economic, and cultural contexts. The second thesis, generally referred to as “technological implications,” suggests that technological advancement and automation lead to a transformation of social relations and attitudes at work (e.g., more control over work schedule and work processes, increased emphasis on developing social networks). The “culture-free contingency theory of organizations” approach emphasizes the role of contextual elements such as size of the organization, industry, and dependence on other organizations. The final thesis emphasizes the role of strategic development of organizations, according to which organizations are transformed from small, less hierarchical, and domestic structures to large, complex, professional, and international structures. This transformation has implications for practices such as planning, diversification, and role differentiation.

Another type of noncultural approach stems from the political–economy perspective. In this perspective, organizations in the same sociopolitical systems (e.g., capitalism vs socialism) are assumed to have similar characteristics, especially with respect to organizational objectives, control strategies, and degree of centralization and decision making. The final noncultural approach, namely the “societal effect approach,” takes into account the social context in which organizations operate, with specific emphasis on the educational system, the system of industrial relations, and the role of the state.

Critics of the noncultural approaches are concerned with the deterministic orientation of these approaches as well as their underestimation of the role of culture in explaining organizational phenomena. Some scholars take an interactionist perspective, suggesting that culture influences some aspects of organizational practices more than it does others. For instance, whereas organizational contingencies influence the “formal” characteristics of organizations (e.g., centralization, specialization, span of control), cultural variables influence the “interpersonal” aspects (e.g., power and authority structure, delegation, consultation, communication patterns) or the “organizational processes” (i.e., the way in which organizations function). Others assert that culture has a moderating effect on organizations; that is, even though the contingencies help to determine the organizational structure, culturally driven preferences influence the exercise of choice among alternative structures. For instance, the effect of industrialization on organizations is not homogenous in every country (e.g., the case of Japan). Similarly, within the capitalist system, there is wide variety among organizational and managerial practices. Researchers argue that similarities in socioeconomic systems resulted in similar organizational objectives, but the ways in which these objectives were materialized differed depending on the cultural contexts. The preceding discussion, therefore, underlines the important role that culture plays in understanding organizational phenomena. However, culture’s effect is more salient in some aspects of organizations than in others. The following sections present the state of affairs in those areas of I–O psychology where the impact of culture is significant.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0126574103006905

Work and Industry, Sociology of

Tony J. Watson, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Work Organizations

If we build upon the insights into changing divisions of labor associated with industrialization introduced by Durkheim and Marx, it is possible to suggest that there are two fundamental principles of work structuring in industrialized societies. One is the occupational principle of work structuring (to be considered later) and the other is the organizational principle in which work is patterned as the outcome of institutional arrangements in which some groups of people conceive of and design work and then recruit, reward, coordinate, and control the efforts of other groups. Sociological study of work organizations not only examines them as parts of the wider social organization of societies, it also recognizes that organizational structures involve a great deal more than the formal arrangements represented by managers' organization charts, rule books, and operating procedures. This accords with Weber's emphasis on the unintended consequences of human social actions and means that unofficial/informal aspects are studied alongside official or formal arrangements. Despite the power of such ways of looking at organizations, certain writers have argued that postbureaucratic organizations are now emerging. These are variously characterized as ‘entrepreneurial,’ ‘networked,’ or ‘culture-led.’ Sociologists have reacted to this type of thinking by arguing, in effect, that although there may be new variants of bureaucracies appearing, this does not warrant abandoning the analysis of such arrangements as bureaucracies (e.g., Reed, 2005).

Sociologists generally are continuing to develop forms of analysis which give balanced attention to human agency and to the structural and cultural circumstances and contingencies that both constrain and enable human initiative taking in the organizational setting. This has been reflected in a shift of emphasis by sociologists of work and industry away from the so-called contingency theory, in which an organization's contingencies were, implicitly at least, seen to determine organizational forms, to one in which the contingent circumstances of any organization (such as its ownership, market circumstances, technology, and type of people employed) are recognized as being mediated by the ‘strategic choices’ made by managers (Child, 2005). Yet these managers are by no means members of unified groups, as is demonstrated by studies of organizational micropolitics (Burns, 1961). Managers are frequently in competition with each other in their careers, and researchers adopting a negotiated order perspective (a notion beginning in the symbolic interactionist theoretical tradition; Strauss, 1978) apply this insight much more broadly. Negotiated orders are the patterns of activity emerging over time in an organization as an outcome of the interplay of the various understandings, interests, initiatives, and reactions of the individuals and groups (including and beyond those of managers) who are involved in that organization.

It is within this broad theoretical emphasis, together with a strong interest in cultural as well as structural aspects of organizations, that sociologists of work and industry have examined changing managerial and corporate initiatives in the managing of work organizations. The work of sociologists studying work and work organizations has often been inspired by a perceived need to counter the assumptions and analyses of managerial thinkers and practitioners, this perhaps most famously occurring with the investigations made at the Hawthorne plant of the Western Electric Company in Chicago in the 1920s and 1930s (Roethlisberger and Dickson, 1939). These investigations countered the assumptions and recommended practices of the ‘scientific management’ movement or ‘Taylorism’ (Taylor, 1911). The spokesperson of the counter movement, the so-called human relations school, inspired by the Hawthorne studies, was Elton Mayo, an individual influenced by Durkheimian thinking. But where Durkheim hoped for the achievement of ‘social integration’ through an emphasis on occupational moral communities, Mayo looked to work organizations and, especially, to work groups for the fostering of positive social sentiments and moral stability in society.

This broadly critical role for the sociology of work and industry continues to be significant as it relates a variety of corporate and organizational initiatives to broad processes of work restructuring, in which patterns of work experience, organizational, and occupational activity change as part of the general economic, political, and cultural shifts occurring across the world. This again involves countering such discourses as those of team working, lean production, delayering, flexible working, and a managerialist ‘human resource management.’ Research shows that high commitment/indirect control human resourcing and work design practices (using an organic or flexible bureaucratic style) are not the outcome of a straightforward ‘progressive’ movement among employers which will simultaneously achieve greater business effectiveness and all round fairness, as much ‘Human Resource Management’ writing implies. Managers, in fact, are equally ready to pursue rigidly bureaucratic low-commitment and direct control practices when the circumstances allow it (say when they are using a simple technology requiring low worker skills in a context of low-state regulation of employment practices).

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780080970868320712

What is defined as the process of planning, organizing, and controlling operations to reach objectives efficiently and effectively?

Management is the planning, organizing, leading, and controlling of resources to achieve goals effectively and efficiently.

What management function refers to the process of anticipating problems?

Planning is the management function that refers to the process of anticipating problems, analyzing them, estimating their likely impact, and determining the actions that will lead to the desired outcome.

What refers to the learning that is provided in order to improve performance on the present job?

Training refers to the learning that is provided in order to improve performance on the present job.

What describes how to determine the number of service units that will minimize both customer waiting time and the cost of service?

Queuing theory is a quantitative model for decision-making that describes how to determine the number of service units that will minimize both customer waiting time and the cost of service.
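The trade-off queuing theory addresses can be sketched numerically. The example below is a hypothetical illustration, not part of the source: it assumes an M/M/c queue (Poisson arrivals, exponential service times, c servers), uses the standard Erlang C formula for the probability of waiting, and picks the number of servers that minimizes total hourly cost (server wages plus a cost placed on customer waiting). All rates and cost figures are invented.

```python
import math

def erlang_c(c, a):
    """Probability an arrival must wait in an M/M/c queue, a = lambda/mu."""
    summation = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) * (c / (c - a))
    return top / (summation + top)

def total_cost_per_hour(c, lam, mu, server_cost, wait_cost):
    a = lam / mu
    if c <= a:                              # unstable queue: line grows without bound
        return math.inf
    wq = erlang_c(c, a) / (c * mu - lam)    # mean time waiting in queue (hours)
    lq = lam * wq                           # mean number waiting (Little's law)
    return c * server_cost + lq * wait_cost

# Hypothetical figures: 18 arrivals/hour, each server completes 5/hour,
# a server costs 25/hour, and each waiting customer is costed at 40/hour.
lam, mu = 18.0, 5.0
costs = {c: total_cost_per_hour(c, lam, mu, server_cost=25, wait_cost=40)
         for c in range(1, 9)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 2))
```

Too few servers and the waiting cost explodes; too many and idle server cost dominates; the minimum sits in between, which is exactly the balance the definition above describes.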