
Ecotechnology

  • Astrid Schwarz, Brandenburg University of Technology Cottbus
  • https://doi.org/10.1093/acrefore/9780199389414.013.134
  • Published online: 19 October 2022

Ecotechnology is both broad and widespread, yet it has never been given a universally shared definition; this remains the case even in the early 21st century. Given that it is used in the natural, engineering, and social sciences, as well as in design studies, in the philosophy and history of technology, and in science policy, perhaps this is not surprising. Indeed, it is virtually impossible to come up with an unambiguous definition of ecotechnology: It should be understood rather as an umbrella term that facilitates connections among different scientific fields and science policy and, in so doing, offers a robust trading zone of ideas and concepts. The term is part of a cultural and sociopolitical framework and, as such, wields explanatory power. Ecotechnology approaches argue for the design of ensembles that embed human action within an ecologically functional environment and mediate this relationship by technological means. Related terms, such as ecotechnics, ecotechniques, ecotechnologies, and eco-technology, are used similarly.

In the 1970s, “ecotechnology,” along with other terms, gave voice to an unease and a concern about sociotechnical transformations. This eventually gave rise to the first global environmental movement expressing a comprehensive eco-cultural critique of society-environment relations. Ecotechnology was part of the language used by activists, as well as by social theorists and natural scientists working in the transdisciplinary field of applied ecology. The concept of ecotechnology helped to both establish and “smooth over” environmental matters of concern in the worlds of economics, science, and policymaking. The process of deliberation about a green modernity is still ongoing and characterizes the search for a constructive intermediation between artificial and natural systems following environmentally benign design principles.

During the 1980s, disciplinary endeavors flourished in the global academic world, lending ecotechnology more and more visibility. Some of these endeavors, such as restoration ecology and ecological engineering, were rooted in the engineering sciences but mobilized quite different traditions, namely population biology and systems biology. To date, ecotechnology has by and large been replaced by other terms in applied ecology. Another strand of work resulted in the discipline of social ecology, which developed different focal points, most notably critical political economy and a concern with nature-culture issues in the context of cultural ecology. Finally, and more recently, ecotechnology has been discussed in several branches of philosophy that offer different narratives about the epistemic and ontological transformations triggered by an “ecologization” of societies and a theoretical turn toward relationality.

  • environmental management
  • ecological engineering
  • restoration ecology
  • sociotechnical transformation
  • ecosystem theory
  • social ecology
  • ecological design
  • philosophy of technology
  • environmental ethics

Drawing “Eco” and “Techno” Together

Ecotechnology can be considered a cipher for the vision of adapting human activities more skillfully to ecosystem functions. This encompasses various issues ranging from the production of ecological knowledge, through modalities of technical relations, to sociopolitical settings including different policy styles. Ecotechnology also draws together two terms frequently regarded as existing in opposite camps. The prefix “eco” comes from the Greek “οίκος” (oikos), meaning house, household, or dwelling place and, in a wider sense, family (Schwarz & Jax, 2011, p. 145). The word technology derives from the Greek “τέχνη” (technè), roughly translatable as having skills in craftsmanship and technology but also as artistic ability and dexterity. Behind it lies the Indo-European root tekhn-, assumed to mean woodwork or carpentry, which can be found in similar stem formations in many other languages (Mitcham, 1994, p. 117). It has been pointed out that technè was already an ambiguous term in Greek philosophy (Mersch, 2018, p. 5) because it can be identified with the famous figure of Prometheus, who, full of confidence in the practice and championing of technical skills, inevitably drags along his less capable brother Epimetheus, who only causes harm when dabbling in technical practices—a reference, in other words, to the side effects of technology, including “ecological ills” (Odum, 1972, p. 164) and environmental disasters. Bringing together “eco” and “techno,” then, seems to force a marriage of principles that are in opposition in various ways, the first of these being the intrinsic “fraternal tension” embodied in problem-oriented technological solutions and their often-unexpected consequences; the field of ecological design is no exception (Gross, 2010). Second, there is the tension between the natural order as it is represented in ecosystem research, with its neatly settled routines and “balanced budgets,” and the innovative force of Promethean agency.

In the following, historical snapshots are offered as a means to carve out the principal paths of this rather overloaded and overdetermined term, the intention being to shed some light on the conceptual formation of ecotechnology and its emergence from antecedent scientific and policy contexts. The present article assesses the uses …

In all the diversity described, a generic vision can be identified: a call for an appropriate conceptualization of the human habitat, seen as an entanglement of natural, social, and technical relations and objects. Ecotechnology stands for a sociopolitically informed ideal of relating knowledge about social, material, and energy relations by following ecological principles, so as to integrate ecosystem functions on a material basis in the environment. In this sense, one might say that a conceptual scheme of ecotechnology also lies implicitly underneath discussions about functional relationships in sustainable technologies and ecosystem services, or even urban planning, while as a concept it is mainly elaborated in disciplinary fields such as ecological engineering, ecological restoration, and ecological design.

Environmental Management and Ecotechnology

The field of environmental management, including the international regulatory system, changed substantially in the 1970s. In this setting, some of the discursive and institutional activities around ecotechnology ultimately resulted in the establishment of engineering disciplines such as environmental and ecological engineering and industrial and restoration ecology. This process of institutionalization happened in the academic sector all over the world as well as in governmental institutions and nongovernmental organizations. Ecosystem research, systems theory, and engineering issues merged with the demands of science policy and the need to resolve environmental problems caused by industrial excesses. The names of localities such as Santa Barbara (oil spill, 1969), Seveso (release of dioxin, 1976), Bhopal (gas leak, 1984), and Chernobyl (radioactive plume, 1986) are just a few examples of a steadily growing number of global environmental disasters caused by technological dysfunction, most of them resulting in substantial ruptures in international environmental policy and legislation (see the Seveso directives in EU legislation) as well as in the initiation of national programs for research and technology development in the ecological sciences. At the same time, these catastrophic events advanced transformations toward greater environmental literacy in science and society (Scholz, 2011). An increasingly successful implementation of ecotechnological practices picked up pace, while powerful instruments were developed to restore and ameliorate degraded plots of land and, eventually, to create “new natures” (Blok & Gremmen, 2016; Hughes, 2004; McHarg, 1971). Even as the field of applied ecology blossomed, however, the concept of ecotechnology itself was successively replaced during the 1990s by other concepts formed around design principles (e.g., ecological design, Bergen et al., 2001; Ross et al., 2015) and ecological restoration (Berger, 1990), or else it became a synonym of ecological engineering (Mitsch & Jørgensen, 1989) and of biomanipulation (Kasprzak et al., 1993). A similar development can be observed in the field of science policy and political economy: Here, the word “ecotechnology” disappeared even before it had exerted any significant impact as a concept. Some of the central issues associated with ecotechnology during the 1970s and 1980s were included in the concept of sustainable development (Brundtland, 1987) and in sustainability science, which emerged subsequently. An exception is perhaps the derivative term “ecotechnie,” which became stabilized in policymaking in the field of environment and development to the extent that it was established as an eponymous program, a joint effort between UNESCO Man and the Biosphere and the Cousteau Society (UCEP), launched in 1994. Thus, in the ecological sciences the term ecotechnology began to fade away when the “undeniable successes of ecological modernisation strategies” gained a foothold (Blühdorn & Welsh, 2007, p. 194). This is at least the case for the Western scientific topology; in the Asian context, ecotechnology took a different path.

The overall development of the concept can be described as a piece of transgressive boundary work that stretches across antipodal fields such as technology versus nature, artificial versus natural, and also applied versus basic research. The development of and reflection on ecotechnological principles and techniques cuts through these categories and was from the very beginning an object of interest not only for engineers and natural scientists but also for philosophers, sociologists, and, beyond the academic field, environmental activists. This is not terribly surprising, given that the word “ecology” and later also “sustainability” underwent a similar journey through different scientific and sociopolitical fields. All these notions can be identified with the attempt to express an unease with the highly ambivalent process of modernization (Beck, 2010); one of the reactions was the proposal of a framework for a “politics of unsustainability” in a postecologist European era to recast well-established conditions and constellations (Blühdorn & Welsh, 2007, p. 196). An enormous body of scientific literature extending across the sciences and the humanities has been produced, beginning in the postwar period of the 1950s, to express the discomfort, to say the least, with this gargantuan elephant in the room. This will be discussed later in more detail in the section about social/political ecology, with its focus on the disciplinary transversal conjunctions that were rendered possible by conceptualizations of ecotechnology during a historical interim phase of about 20 years starting in the 1970s. This work intends to fill a research gap, identified around the turn of the millennium, namely that “the origins of the new uses of ‘green’ and ‘eco-’ in regard to technology have not been adequately addressed” (Jørgensen, 2001, p. 6393), a deficit pointed out in “Greening of Technology and Ecotechnology” in the International Encyclopedia of the Social and Behavioral Sciences.

In the following, three approaches will be pursued to provide a more detailed epistemological picture and a historically grounded understanding of the term “ecotechnology,” the research practices associated with it, and the management policies embraced by it. To begin, the history of the concept is presented: its different conceptual uses, the main lines of demarcation from other concepts, and the orienting narratives involved. These are discussed by focusing on its development in the ecological sciences. In the section “Sociopolitical Imaginaries and Agency in International Networking,” the sociopolitical and socioeconomic issues are unpacked and scrutinized in the context of the disciplinary formation of social ecology in the 1980s, which developed in parallel in different national contexts; among its lasting impacts is the continued articulation of ecotechnological visions to this day. In the third section, “Another Semantic Turn of Ecotechnology/Ecotechnics,” different theoretical approaches are discussed, mainly in the context of more recent philosophical uses of ecotechnics and ecotechnology, offering a number of considerations regarding the meaning and understanding of the technicity of humans’ relations toward their oikos.

Buzzword, Umbrella Term, or Proper Definition?

In the 1970s, the term “ecotechnology” was in the air, emerging simultaneously in public print media and in futuristic literature in the United States. In the scientific arena, ecotechnology first arose in a Japanese (Aida, 1971, cited in Aida, 1995) and in an American context (Bookchin, 1977), before spreading further into different national and disciplinary spheres. The proclamation of “ecological engineering” as a discipline in 1962 by Howard T. Odum (Odum, 1962) certainly also prepared the ground for “the design of sustainable ecosystems that integrate human society with its natural environment for the benefit of both” (Mitsch, 2012, p. 6). A consolidation of the conceptual work took place with the first textbook, Ecological Engineering: An Introduction to Ecotechnology, in 1989 and the founding of the journal Ecological Engineering in 1993. A quantitative analysis of the use of the term has confirmed that its attractiveness increased during the 1980s and peaked around the turn of the last millennium (Haddaway et al., 2018). Another bibliographical analysis has shown that ecotechnology was much less in use than ecological engineering or ecological services (Barot et al., 2012). The attractiveness of the term also seemed limited by the fact that ecotechnology had been identified as a buzzword. To clearly delineate its deployment as a “useful concept unifying and gathering efforts around a common vision” (Haddaway et al., 2018, p. 247), a study was conducted on bibliographic databases, from which all those articles were filtered that offered explicit definitions of ecotechnology or its derivatives. As a result, an evidence-based terminological toolbox was proposed, and the authors set about constructing a conceptual consensus model for their own project, suggesting the following definition: “Ecotechnologies are human interventions in social-ecological systems in the form of practices and/or biological, physical and chemical processes designed to minimize harm in the environment and provide services of value to society” (Haddaway et al., 2018, p. 260). Unfortunately, the authors provide no clue in their article as to what they mean by “services of value” or what exactly is meant by “harm in the environment.” The definition may well include the building blocks identified, but the question remains what it offers beyond a balanced combination of separate elements. The conceptual context that explains the terms and their semantic environment, thus making the definition work, remains empty. It seems difficult to carry out properly the strategy of exempting definitions of ecotechnology from their suspected status as buzzwords.
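
To make the kind of bibliometric exercise reported here concrete, a minimal sketch follows of how term frequencies can be counted per publication year in a database export. It is purely illustrative: the file name records.csv and its columns (year, title, abstract) are hypothetical stand-ins, not the actual pipelines used by Haddaway et al. (2018) or Barot et al. (2012).

```python
# Illustrative sketch of a term-frequency trend analysis over a
# bibliographic export. "records.csv" and its columns are hypothetical.
import csv
from collections import Counter

TERMS = ["ecotechnology", "ecological engineering", "ecosystem services"]

def count_terms_per_year(path):
    """Return, for each term, a Counter mapping year -> number of records
    whose title or abstract mentions the term."""
    counts = {term: Counter() for term in TERMS}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = (row["title"] + " " + row["abstract"]).lower()
            for term in TERMS:
                if term in text:
                    counts[term][row["year"]] += 1
    return counts

if __name__ == "__main__":
    for term, by_year in count_terms_per_year("records.csv").items():
        trend = ", ".join(f"{y}: {n}" for y, n in sorted(by_year.items()))
        print(f"{term:25s} {trend}")
```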

A more promising approach might be to analyze “ecotechnology” as an umbrella term, because this makes clear from the beginning that it is the context that needs to be taken into consideration semantically. This ultimately helps us better understand the movements, trends, and discursive tactics of a complex term that not only represents but also gathers up meaning, wields explanatory power, and presents a dynamic and innovative potential. A term becomes an umbrella term when it has great potential to link and translate different discourses and conceptual practices. “Umbrella terms start out as a fragile proposal by means of which a variety of research areas and directions can be linked up with one other” (Rip & Voß, 2013, p. 40) and with certain societal concerns and policy issues. Accordingly, an umbrella term mediates between different arenas such as scientific research, society, and policy, each of which follows a different logic. As a mediator, the umbrella term not only travels between already existing fields of science, technology, and policy but may also elicit, and finally become constitutive of, new epistemic and institutional formations. “Sustainability” is a good example of an umbrella term that came into being to reconcile matters of concern about the global environment with critical issues about economic growth and to overcome the array of antagonistic voices in society and in science. The term “sustainability” became one of the most successful outcomes of the Brundtland (1987) report, which states that the “sustainability of ecosystems on which the global economy depends must be guaranteed” (p. 32) and that “sustainable development requires the unification of economics and ecology in international relations” (p. 74). This promise has become a successful commodity not only in the policy world but also in a nascent scientific arena increasingly concerned with conceptualizing sustainable development and terms like “resilience,” an arena that finally became established, in the first decade of the new millennium, as “sustainability science.” Its epistemic program became the study of the interrelatedness of social and ecological systems, their dynamics, and how to govern these. Interestingly enough, ecotechnology does not appear in the Brundtland report: technology and ecology are never linked directly. Instead, they are imagined as being mediated by economics: Most industries rely essentially on natural resources even while they seriously pollute the environment, and it is these changes that have locked economy and ecology together, mainly on a global scale.

To conclude, umbrella terms are not necessarily a drawback. On the contrary, they gain their persuasive power as normatively oriented concepts by being radically inclusive and thus providing a conceptual framework that indicates, among other things, when science policy and research have hitched up together successfully. For a while they are highly innovative in their impacts, and this has also been the case for ecotechnology, as will be discussed in the following.

Conceptualizing Ecotechnology—The Main Path and Some Sidelines

“Ecotechnology” has been an ambiguous term from the very beginning and was never a purely technical term in the scientific world. It occurs across different semantic, disciplinary, and sociopolitical settings, referring to a plenitude of environmental problems and research practices. Further, “ecotechnology” is quite often substituted by other similar terms such as “eco-technology” (Aida, 1983; Leff, 1986; Oesterreich, 2001), Eco Technology (Aida, 1986), or “ecotechnics” (Grönlund et al., 2014; Miller, 2012; Nancy, 1991), and it also appears in compound terms such as “ecological technologies” or “living technologies” (Todd & Josephson, 1996), “environmental technologies” (Banham, 1965), or “ecotechnic future” (Greer, 2009). In other languages ecotechnology becomes (to give just the most obvious terms in the debate) “eko tekunorojī” (Japanese), “écotechnique” (French), “ekoteknik” (Swedish), “Ökotechnologie” and “Ökotechnik” (German), “eco-tecnologia” (Spanish), or “milieutechnologie” (Dutch). These different formulations should not only be understood as a task of linguistic translation to be coped with but must also be considered in terms of deliberate semantic differentiation and conceptual delimitation in geopolitical and disciplinary contexts. The same applies to the spelling of the term “ecotechnology.” For instance, to date “eco-technology” is mainly used in Japan in the context of ecological design. A more detailed discussion of “national ecotechnologies” is offered in the sections “Ecotechnology in an Asian Context (Japan, China, Taiwan)” and “An Ecotechnological Rationality for Latin America.”

Ecotechnology in the Ecosciences

A conceptual ambiguity is also admitted by the scientific community of ecologists. In his article “Ecotechnology as a New Means for Environmental Management,” and after about a decade of conceptual work, modeler Milan Straškraba (1993, p. 311) states that there never was a “common terminology with respect to ecological engineering, ecotechniques and ecotechnology.” This is mirrored in the struggle of the ecological community to define a more authoritative system of principles and rules to enhance the practicability and uptake of methods and tools, and thus to establish a standard for linking ecosystem theory and engineering practices. Straškraba proposes a theory of ecosystems consisting of seven principles that correspond to a theory of ecotechnology, and he spells out 17 rules for a “sound management of the environment” (p. 317). Other authors suggested eight (Mitsch, 1992) or 12 principles (Jørgensen & Nielsen, 1996), while other numbers and categories were also proposed (Bergen et al., 2001), indicating that the field was still in a phase of competing classificatory systems rather than in a hypothesis-driven phase. Recently, a redefinition of ecological engineering was suggested in the sense of a holistic approach to problem-solving; ecotechnology was merely included in a literature search but not conceptualized (Schönborn & Junge, 2021). Here, too, seven ecological principles are proposed to which good engineering practice should commit (Schönborn & Junge, 2021, p. 388).

The system suggested by Straškraba is distinct insofar as he places scientific ecology and ecotechnology on an equal footing with respect to the potential for theory building. Mathematical and computer models cannot simply be applied, he argues; they must be calibrated and matched to the specific situation, a step that requires theoretical input. He recommends using “decision support systems” and an individual selection of what he calls ecotechniques, such as “river restoration” or “changed agricultural practices” (Straškraba, 1993, p. 327). Thus, for him, modeling is the means to integrate science and engineering, and therefore also theoretical and applied ecology. Straškraba was able to rely on a fundamental consensus in the growing community that only those ecotechniques should be used where the costs of the intervention and “their harm to the global environment are minimized” (p. 311). He underlines this commonality, including its global vision, by referring to one of the first articles to use the term “ecotechnology” explicitly to name a new engineering discipline based on knowledge about biological structures and processes (Uhlmann, 1983, p. 109). Dietrich Uhlmann, head of the water science department at Technical University Dresden, referred explicitly to Marx’s theory of metabolic rift. Though the German Democratic Republic had proclaimed 1983 the Marx Year, the motive for citing a longer Marx passage is equally justified by the reflections it offers on the necessity to “include environmental requirements in the development of societal needs” (Uhlmann, 1983, transl. AS) and by the call for the reconciliation of anthropogenic impacts with the laws of development and active principles in nature. This alludes to Marx’s phrase that a society is not only the owner and beneficiary of the earth but also has “to bequeath earth as boni patres familias to the following generations improved” (Marx, 1964, p. 748), which is the passage cited in the article. Thus inspired, Uhlmann suggests the following program for ecotechnology: “Ecological standards must be created and enforced in the technosphere to ensure environmental conditions that promote human health and well-being. This means that a technology must be created that is integrated into the natural material cycles” (Uhlmann, 1983, p. 109, transl. AS).
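
Straškraba’s insistence that models must be calibrated to the specific situation can be made concrete with a toy decision-support calculation. The sketch below uses the classic Vollenweider-type phosphorus mass balance for a lake or reservoir; it is offered only as an illustration of the genre, not as Straškraba’s own decision support system, and all parameter values are invented.

```python
# Toy decision-support calculation for lake restoration, using a
# Vollenweider-type steady-state phosphorus budget. Parameter values
# are invented for illustration.
import math

def steady_state_phosphorus(areal_load, mean_depth, residence_time):
    """Steady-state in-lake total phosphorus concentration (mg/m^3).

    areal_load     -- annual P loading per unit lake surface (mg/m^2/yr)
    mean_depth     -- mean depth of the water body (m)
    residence_time -- hydraulic residence time (yr)
    """
    q_s = mean_depth / residence_time  # areal hydraulic loading (m/yr)
    return areal_load / (q_s * (1.0 + math.sqrt(residence_time)))

# Compare the current loading of a hypothetical reservoir with a reduction
# scenario -- the kind of question against which an ecotechnique such as
# "changed agricultural practices" would be evaluated.
for load in (800.0, 300.0):  # mg P / m^2 / yr
    p = steady_state_phosphorus(load, mean_depth=10.0, residence_time=2.0)
    status = "eutrophic" if p > 35 else "mesotrophic or better"  # ~OECD boundary
    print(f"loading {load:5.0f} -> in-lake P ~ {p:5.1f} mg/m^3 ({status})")
```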

This in some way anticipates what became the central paradigm of ecological engineering as formulated by William J. Mitsch and Sven Erik Jørgensen (1989) in the very first textbook to establish the field of ecological engineering, written mainly by the editors. They stated: “We define ecological engineering and ecotechnology as the design of human society with its natural environment for the benefit of both,” and they continued, “it is a technology with the primary tool being self-designing ecosystems. The components are all of the biological species of the world” (Mitsch & Jørgensen, 1989, p. 4). The shift from Uhlmann’s definition lies in the emphasis on who has to adapt to whom: Uhlmann says that societies need to adapt to the natural material cycles, whereas Mitsch and Jørgensen tend to put the environment into the service of human society.

In their preface the authors asserted that their approach was intended to bring about a “cooperation between humans and nature” (Mitsch & Jørgensen, 1989, p. ix) that “will encourage a symbiotic relationship between humans and their natural environment” (p. x), a “partnership with nature” (p. 11). They also refer to Straškraba and Uhlmann, repeating the minimally invasive strategy already established in the growing community of applied ecology. No conceptual distinction is made between ecological engineering and ecotechnology: The terms are virtually interchangeable. In an article published three years later, the word ecotechnology appears only on the first page, after the goals have been reeled off in mantra-like fashion (Mitsch, 1992, p. 28). As has been suggested, Mitsch and colleagues identified ecotechnology with the development of ecological solutions to environmental engineering problems, particularly in waste management (Bergen et al., 2001, p. 202), whereas Straškraba formed ecotechnology into a conceptual tool for environmental management. Mitsch eventually abandons the term ecotechnology completely and thus fails to define its conceptual contours, an omission not tackled in later publications by the author collective Mitsch and Jørgensen either.

Is Ecotechnology Less Invasive Than Technology?

A closer look at the “collection of principles and case studies” in Mitsch and Jørgensen (1989, p. 11) illustrates that ecotechnological methods and tools must be just as invasive, at least initially, as the industries, mining, or agrotechnologies that brought forth the environmental problems. The examples given are coal mine reclamation, the restoration of lakes, or the recycling of wetlands. These illustrate vividly that constructing the desired ecosystems and gaining the desired control over the material and energy flows—and thus recycling industrial waste and residues—is an elaborate, technology-intensive enterprise. It involves the use of heavy machines, a massive amount of earth forming, chemical interventions in the ground, the introduction of biological species, and high-tech inputs, including, from the very start of a project, the digital modeling tools needed to manage it. Ultimately, it seems that the difference between ordinary technological and ecotechnological engineering comes down to insisting that “ecosystems are used for the benefit of humankind without destroying the ecological balance, that is, utilization of the ecosystem on an ecologically sound basis” (Mitsch & Jørgensen, 1989, p. 15). The key question then becomes one of where and how to fix the ecological balance so as to enable natural ecosystems to be used both as resources for commodities and as amenities. It is acknowledged that there is a rising demand for “ecological services,” which can be attributed to “the lack of markets for what are the essentially free services supplied by natural ecosystems” (Maxwell & Costanza, 1989, p. 58). This view displays a clear commitment to environmental design in the service of economic factors that “determine how natural ecosystems are manipulated by humans” (Maxwell & Costanza, 1989, p. 61). The authors do assure the reader that there is also a feedback loop affecting the “attributes of ecosystems that are valued by individuals,” so that “both demand and production (supply) relationships must be considered” (p. 61). Nevertheless, the “benefit for both” and the cooperative aspects of the human-nature relationship (the central claim of ecotechnology as well as ecological engineering) ring rather hollow in the face of a clear commitment to the laws of the market, which treat nature as a mere resource. The following description of an incidental observation provides a dramatic insight into this rather fraught partnership with nature in an ecotechnological context:

Along State Highway 100 between Palatka and Bunnell, Florida, is a business in which old cars are dumped into wetlands, and parts removed for sale. The used car dump is mostly hidden by the wetland trees. From what we know about wetlands absorbing and holding heavy metals (Odum et al., 2000a, b), this may not be a bad arrangement, a kind of ecological engineering (Odum & Odum, 2003, p. 352).

It would take a very sympathetic reader to find any irony here. Moreover, a prior statement made by the same author that “ecological engineering reduces costs by fostering nature’s inputs” (Odum, 1989, p. 81) does leave a rather bad taste in the mouth.

Thus, when Odum’s idea of a “partnership with nature” is invoked in the current debate about an appropriate design for ecological engineering (Schönborn & Junge, 2021, p. 384), the question arises whether what is provided here is a conceptualization that can actually convince through its attribution of intrinsic values to nature and its holistic method.

Ecotechnology and the Self-Design of Nature

Howard T. Odum was one of the leading figures in the field, the idea of a “self-design” of ecosystems being one of his most important contributions to ecological engineering. As far back as the 1960s, he suggested that ecological engineering might be a viable opportunity to manipulate systems in which “the main energy drives are still coming from natural sources” (cited in Mitsch & Jørgensen, 1989, p. 4). The idea was that such systems can be converted into self-organized systems by applying a new ecosystem design that uses “the work contributions of the environment” (Odum, 1989, p. 81). This top-down perspective, of reducing parts of nature to resource packages driven by energy input and output and transforming them into the service of human society, goes back to general systems theory, which in turn was inspired by cybernetic thinking. However, this is not the whole story: Behind the idea of reducing the environment to a working unit lurks a capitalist strategy. “‘The economy’ and ‘the environment’ are not independent of each other. Capitalism is not an economic system; it is a way of organizing nature” (Moore, 2015, p. 2). “Capitalism—or, if one prefers, modernity or industrial civilization—emerged out of Nature. It drew wealth from Nature. It disrupted, degraded, or defiled Nature” (Moore, 2015, p. 5).

Howard T. Odum and Eugene Odum were both influential in establishing these ideas about self-organized and engineered ecosystems. Eugene Odum founded the independent Institute of Ecology at the University of Georgia in 1967 (now referred to as the Odum School), and he was the author of Fundamentals of Ecology (1953), an influential textbook in ecology to which his brother Howard T. Odum contributed the sections on energy flow and biogeochemistry (Hagen, 2021). In the 1980s Eugene Odum commented that “the possibility that ecosystems do function as general systems with self-organizing properties is to me a very exciting, unifying theory” (Odum, 1984, p. 559). This focus on energy flow diagrams inspired by systems theory as the only basic process in natural and human systems has been criticized by numerous authors as a reductionist approach. Landscape ecologist Zev Naveh called the strategy of reducing everything to countable units a “real danger”; he noted that the Odums’ ecosystem approach provides only simplistic “ecological” explanations for human systems, explanations that could be interpreted as a “new kind of neo-materialistic ‘energy marxism’” (Naveh, 1982, p. 199). Ecology theorist Ludwig Trepl classified the Odum program as the technocratic branch of ecology that seeks perfection in dominating nature, commenting bitingly that “this was the latest attempt so far to grasp that which eludes predictability” (Trepl, 1987, p. 22).

It was precisely this unifying theory that Howard T. Odum had in mind when he developed experimental microcosm systems to apply findings about self-organizational principles to larger ecosystems. Here, “self-organization” refers to the manipulation and monitoring of a succession observed in an experimental microcosm, consisting of the interaction of a limited number of species inside a vessel of limited size. Odum had also offered this mesocosm concept to NASA in the 1970s as an experimental system designed to find out more about self-supporting life-support systems (Odum & Odum, 2003, p. 147). Although the concept was rejected, Odum continued to explore the possibility of domesticating ecosystems so that they can be “enclosed in concrete boxes to become the mainstays of environmental engineering” (p. 148).

Whereas in the 21st century mesocosm studies are successfully used to monitor the impact of climate change (Cavicchioli et al., 2019), in the 1980s not much was yet known about upscaling processes and their effects. Odum had to admit that “most self-organization has been happenstance, often in spite of management efforts in some other direction” (1989, p. 85). Based on rather weak experimental evidence, this statement reveals that his “self-organization” is more of a descriptive term derived from observing succession processes in very limited settings (Kangas & Adey, 1996) and that the dynamics involved in upscaling processes were virtually unknown. Accordingly, one might be inclined to conclude that this concept of self-organization is driven primarily by an economy of promise and is fueled by Promethean visions of governing “new ecosystems” using systems theory and a set of engineering design tools.
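
The descriptive sense of “self-organization” noted above, a species pool in a closed setting sorting itself into a smaller persistent community, can be conveyed by a deliberately crude simulation. The following generalized Lotka-Volterra sketch is a generic toy model with arbitrary parameters, not a reconstruction of Odum’s microcosm experiments.

```python
# Toy "self-organization": a randomly assembled pool of competing species
# settles into a smaller persistent community. Generic generalized
# Lotka-Volterra dynamics with arbitrary parameters; illustration only.
import numpy as np

rng = np.random.default_rng(seed=1)

S = 12                              # species in the initial "seeding" pool
r = rng.uniform(0.5, 1.0, S)        # intrinsic growth rates
A = -rng.uniform(0.1, 0.6, (S, S))  # random competition coefficients
np.fill_diagonal(A, -1.0)           # self-limitation

x = np.full(S, 0.1)                 # small initial abundances
dt = 0.01
for _ in range(200_000):            # Euler steps: dx_i/dt = x_i (r_i + (A x)_i)
    x += dt * x * (r + A @ x)
    x[x < 1e-9] = 0.0               # treat tiny populations as extinct

print(f"{np.count_nonzero(x)} of {S} species persist after the transient")
print("final abundances:", np.round(x, 3))
```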

With “Living Machines”—simultaneously a concept and a technical artifact—an idea of symbiotic, self-organized systems was carried forward in industrial ecology (Zelov et al., 2001). However, the “ecology cells” created by the “New Alchemy Institute” were intended from the beginning to be applied in the limited context of wastewater treatment facilities or even individual households (Todd & Josephson, 1996). This more modest approach was also confirmed by theoretical reflections on industrial ecology, which concluded that it is not possible to define specific measures and practical actions for achieving an overarching vision of sustainability in industrial society by relying on general ecosystem theory. Rather, a focus on “local, situational and case specific” practices and models was emphasized, in other words, an approach that conceptually refuses a universal or global application (Korhonen, 2005, p. 37). An influential author in the field (Allenby, 2006) has echoed this view, arguing that industrial ecology was perhaps one of the first fields not only to be aware of the “complex relationship between the normative and the objective” but also to contribute theoretically to a concept of mixed ontology, as it would be called from a philosophical perspective, “even without considering social science” (Allenby, 2006, p. 31), as he candidly admits.

Eventually, even as Odum capitalized on the ecotechnological impetus in environmental management fueled by the idea of a partnership with nature, he also contributed to the demise of the concept: “Ecotechnology may not be a good synonym for ecological engineering because it seems to omit the ecosystem part,” meaning the “self-regulating processes of nature that make ecological self-designs low energy, sustainable, inexpensive, and different” (Odum & Odum, 2003, p. 240). Again, there is no conceptual demarcation here between self-organization and self-design (Mitsch, 1992; Odum, 1989; Odum & Odum, 2003), just as ecotechnology and ecological engineering are used interchangeably (Mitsch, 1992; Mitsch & Jørgensen, 1989). This could also be an indication that the epistemological status of technology in science and engineering is never clarified, leading to a constant confusion of values and categories: Instrumental schemes based on physical descriptions (such as self-organization and black boxing) are thus turned into prescriptive rules for pieces of nature without considering the constructive technicity either of the ecosystem scheme or of the imagined ecosystem in nature.

Other concepts of self-design refer more explicitly to ecotechnology and are intended as a basis for constructing a concise framework of principles, although the authors explicitly acknowledge that they are using them as a “combination of axioms, heuristics and suggestions” (Bergen et al., 2001, p. 204). The heuristic character of these ecological engineering design principles plays out clearly when the authors go back and forth in their arguments between design and ecosystem discourses, being quite explicit both about the ambivalence of engineers designing ecosystems as one of their primary activities and about the importance of including a value framework. It is in this context that the claim about an environment capable of being domesticated and made to serve human needs is turned into the more modest question: “What will nature help us to do?” This is explored in a discussion of the upstream and downstream effects of design decisions and of stakeholder participation, including the need for a strategy to deal with uncertainty and ignorance. It is argued, for instance, that “diversity provides insurance against uncertainty in addition to contributing to ecological resilience” (Bergen et al., 2001, p. 208). This links back to the concept of self-design. The fundamental ecotechnological claim of working for the benefit of society and nature is firmly attached here to an ethical framework that includes a commitment to risk management and to reflection on values during decision-making processes; such considerations encompass, for example, an equitable distribution of risk, intergenerational equity, and a concern for nonhuman species in particular. It might be an interesting follow-up question to ask whether a concept of self-design deriving from systems theory still makes sense in a discourse that addresses design questions within a framework of adaptive management. Ross et al. argue, for example, “that any ecosystem design is likely to require adjustments over its lifespan, and indeed the most effective ecosystem designs are likely to be those that explicitly acknowledge the lack of any definite endpoint in time” (2015, p. 435).

In conclusion, one might speculate about what might emerge if a conceptual framework for ecotechnological ecosystem design—beyond elaborate ecological knowledge about species and sites—also included (a) a consideration of ethical issues from the very start; (b) an acknowledgment that we are living in an anthropocentric world; (c) a recognition that design is a goal-oriented practice, meaning that ecosystem functions must be prioritized; and (d) a treatment of the existence of ecosystems and species and their historical conditions not only as scientific but also as philosophical issues (including the value of embodied time) and as lifeworld issues (such as the pleasure of interrelationships and caring). It is likely that consideration of these criteria offers a viable way to address the urgent need for the localized modulation or assimilation of humans into their limited world. Understood as basic tools for ecotechnological practices, they could serve as a guide for making ecological design and restoration ecology, industrial ecology, and ecological engineering more credible, socially resonant, and robust.

Ecotechnology in an Asian Context (Japan, China, Taiwan)

Among the first scientists to use the term “eco-technology” was Shuhei Aida, an academic working in the field of systems engineering at the University of Electro-Communications in Tokyo. He suggested the term in the early 1970s (1971, cited in Aida, 1995; 1973, cited in Aida, 1983). Unfortunately, these early writings could not be found in international literature source systems. Interestingly, the references circulate rather phantom-like as the first mention of ecotechnology in the scientific literature. Aida himself regularly referred to this earlier work. For example, an article published in 1995 presents a definition of ecotechnology in a box-like text format:

Professor S. Aida proposed the following definition for Ecotechnology in 1971. Ecotechnology is the use of technology for ecosystem management. Ecotechnology is based upon a deep ecological understanding of mutual symbiosis in natural relationships. Ecotechnology is a mechanism for minimizing entropy production and the damage done to society and the environment by the products of entropy. The minimum entropy production concept attempts to optimize efficiency and effectiveness in society. Efficiency and effectiveness describe the various interactions which define our relationships with society and the environment. Ecotechnology is technology oriented towards ecology. (Aida, 1995, p. 1456)

The first universally verifiable source for the term “eco-technology” is the book The Humane Use of Human Ideas, edited by Aida and published in 1983 by Pergamon Press on behalf of the Honda Foundation. In the chapter “Fundamental Concepts of Eco-technology,” most of the terms used in the 1971 definition, such as efficiency, effectiveness, and entropy, do not appear. An exception to this is the term “symbiosis,” which is used in a dual sense: first, to characterize a “symbiosis of nature and artificials,” realized by means of “eco-mechanisms” and “construction of nature,” both of which are meant to characterize the eco-technology of the future (Aida, 1983, Figure 16.8, p. 301); second, to refer to the “symbiosis of man and society,” this being considered a “holistic function of culture” that eventually culminates in a “synthesis of culture with technology that is Eco-technology” (Aida, 1983, p. 308). This metaphor of symbiosis, in both senses, is also used in other national and disciplinary ecotechnological contexts, indicating an attempt to conceptualize the entanglement of different materials and energy as well as cultural artifacts. For instance, in industrial ecology there is a reference to “industrial symbiotic systems” (Graedel & Allenby, 2010, p. 232) and in ecological engineering to the symbiotic relationship between humans and their natural environment (Mitsch & Jørgensen, 1989). In the Chinese context, the principle of symbiosis is asserted in ecological engineering and in industrial ecology as well as in emerging technologies (Li, 2018; Ma, 1988; Yu & Zhang, 2021; Zhang et al., 1998).

Eco-Technology and Ecotechnology

Aida’s (1995) article promotes the application of ecotechnology in the AIES project (adaptive intelligent energy systems) and suggests that it can function as a blueprint for a symbiotic technology based on artificial intelligence. This ecotechnology is expected to enable the development of “sustainable, adaptable energy systems for the future,” mainly the construction of power facilities (1995, p. 1458). The development and application of an “adaptive intelligence” is crucial for this project: The principle of ecological symbiosis is linked to the law of entropy and eventually results in a symbiotic self-organization process that enables environmentally benign design in many areas of society, technology, and the economy. It is interesting to note that, at about the same time, the concept of the “eco-thermodynamics” of natural resource depletion gained a certain momentum. It was pointed out that it is not the “finiteness of resource stocks, but the fragility of self-organized natural cycles that we have to fear. Unfortunately, the services provided by these cycles are part of the global commons. They are priceless, yet ‘free’” (Ayres, 1996, p. 11). Symbiotic processes of self-organization were expected to reinforce a mimetic ecotechnology that, so it was claimed, would initiate a third industrial revolution. Accordingly, it is not surprising that this technoscientific vision rests on a triple helix model, namely a consortium consisting of Cranfield University in the United Kingdom, the Japanese International Foundation for Artificial Intelligence (IFAI) and, finally, TEPCO, the Tokyo Electric Power Company. Ecotechnology in this context becomes a program conceived as “ecological optimization in nature” (Aida, 1995, p. 1458) and thus a mode of technological design with nature, in which the human-built world, including industrial production, affords a better kind of nature than nature itself could ever build.

In the 1980s, Aida (1983) used the term “eco-technology” to express his unease with the “arrogance found in today’s technology” (p. 286), which he identified particularly with a supposedly inevitable interlocking of economics and technology, as indeed was presented shortly afterward in the Brundtland report. Aida aligns himself instead with the Club of Rome report, explicitly seeking a “humanised approach” (p. 286) that should be based on a “productive collaboration between technology and ecology” (p. 286, emphasis in original) “to establish a new technological philosophy, based on ecological concepts and involving every aspect of scientific technology” (p. 288). For Aida, the key function of the “eco-technology” concept is to provide a “new ‘all-sided and multi-layered’ philosophy of science” (p. 286), offered within a holistic framework. This reference to a holistic view of ecotechnology has become common currency in many Asian countries and is still visibly present. References can be found, for example, to concepts such as “holism, coordination, recycle and regeneration” in ecotechnology and ecological engineering, whose methods and practices should be based on principles of holistic planning and design (Zhang et al., 1998, p. 18). In Taiwan, ecotechnological methods and practices are promoted by the government based on a holistic view of problem-solving (Chou et al., 2007, p. 270). Additionally, the message contained in the very first preface of the Chinese journal Environmental Science and Ecotechnology could be summed up by the key sentence that human civilization and nature are intertwined—“as inseparable as mind and body” (Qu, 2020, p. 1).

This holistic framing is not new to scientific ecology, to Western science and philosophy in general, or to international science policy. What is new, however, is the suggestion that technology is a means of enabling humans to become adapted to—almost molded into—their environment, which is ecologically limited. This clearly goes beyond the idea of technology as mediating between abstract ideas and material forms, while simultaneously referring to the concept of technè in Greek philosophy. Aida’s work provides a powerful reminder of the 1970s debate on the limits to growth, and his plea to give up the “‘confrontational aspect’ of science and technology” (Aida, 1983, p. 283) leads him to the proposal that ecology should be understood as an all-embracing science because “all modern scientific technology, in the biological world around us, must be in harmony with, and a component of, nature” (p. 282). In Aida’s historical reconstruction of science and technology, it is the predominance of physical science that has sent science down the wrong route and thus brought about a “mistaken evolution of technological methodologies,” both moves being due to the “Western approach of confrontation and conquest of nature” (p. 283). Thus, the current global crisis of humanity’s oikos must necessarily be identified as being largely a consequence of the hubris of Western-style technology.

To address this drawback, Aida suggests that we turn to traditional Oriental, especially Chinese, philosophies that emphasize instead “the need for mankind to unite and cooperate with nature, so that both may continue in harmonious coexistence” (Aida, 1983, p. 283). Confucian concepts, he argues, could help us to (re-)introduce “the spirit” into a world of technology that is largely imagined and managed in its materialistic dimension. Aida believes that an ecotechnological philosophy, meaning an “all-round ecological approach to the future,” could bring about this turnaround and close the gap between the material and the ethical, the materialistic and the spiritual, in science and technology (Aida, 1983, p. 290). Environmental pollution and excessive industrial production could only take hold to such an extent, he argues, because ethical and spiritual issues have been pushed firmly into the background behind material and economic growth in the development of scientific technology.

Aida offers some thoughts on how an ecotechnological philosophy might work to nurture the role of a nonmaterial dimension in science and technology and to combat “Western-made” problems. He suggests that we distinguish between “hard pollution,” the contamination of the physical environment, and “mental pollution,” the latter being the more critical form, particularly since it is more difficult to perceive and control. In a visual representation reminiscent of the cybernetics-inspired ecosystem figures of the Odum brothers, “eco-technology” is portrayed as the means of connecting society, energy, and natural resources so that it becomes a means to short-circuit the “problem of human mind,” “future society,” and “technical control” (Aida, 1983, p. 291). The resulting eco-technological system then covers the totality of individuals, minds, ambitions, and actions bound together in a society in which the spheres of matter, energy, and information are closely interconnected. Such a modern society, Aida argues in conclusion, is organized by the work of men and machines, “involving many different kinds of interaction between technology, nature and art” (p. 298). In this societal model, ecological science can “offer essential knowledge from nature to form an environmentally harmonious system” (p. 300). Correspondingly, it is eco-technology, the “adjustment technique,” that is expected to synthesize culture and technology, both conceptually and functionally (p. 308). As far as the problem with so-called mental pollution in this eco-technological system is concerned—how it might be characterized, detected, and handled—nothing further is mentioned, and thus the dilemma of an eco-tech-culture persists. In any case, the all-embracing understanding of eco-technology as a technique of adjustment in a world framed in terms of cybernetics acquires something of the uncanny.

Technology and Nature in Harmony?

“Eco-technology” has continued to be used more recently in the Asian context, on the one hand in general reflections about sustainable science and, on the other hand, in the sense of establishing ecotechnological practices. Some elements of the narrative discussed above have disappeared, such as the concept of hard versus mental pollution, or the entropy models that conflate the spheres of matter and energy, animate and inanimate, society and nature. Others have persisted, however: Western philosophy is still criticized for building “a human empire enslaving nature” and for foregrounding an anthropocentric worldview that “does not allow for any restraint in relation to nature, and thus led to the creation of severe environmental constraints” (Ishida & Furukawa, 2013, p. 135). A new kind of technology is expected to unfold based on Japanese Buddhist philosophies referring to the core idea that “all living things–mountains and rivers, grasses and trees, and all the land, are imbued with the Buddhist spirit,” and therefore all “living things including humans are seen to be part of the same cycle of life” (Ishida & Furukawa, 2013, p. 142). Rethinking the relationship between human beings and nature in the light of environmental constraints thus also means creating a new form of technology that “helps people live wholesome, spiritually fulfilling lives” (Ishida & Furukawa, 2013, p. 143) and developing new lifestyles in this limited world. The notion of adjustment as a technique resonates in these ideas, with the individual never dissolving fully into general categories or physical quantities, as is the case in the more technocratic ecotechnological philosophy of Shuhei Aida. Instead, technological potential is seen in the cultivation of a playful and skillful appropriation of things and ways of acting in society. It is this technology incorporated into culture that balances the spiritual and the material sphere, eventually resulting in an industrial revolution that embraces a more relational view of nature. The authors propose a philosophical approach they call “Nature Technology” that revolves around the following four claims: (a) Technology “realizes high function/ultra-low environmental impact with nature as a point of departure,” (b) “is simple and easy to understand,” (c) “encourages communication and community,” and (d) “inspires attachment and affection” (Ishida & Furukawa, 2013, p. 153). Each of these elements additionally provides interesting linkages to approaches developed in social and political ecology in Europe, Latin America, and North America (see the section “Social/Political Ecology”), as well as to new materialism and posthuman theories (see the section “Another Semantic Turn of Ecotechnology/Ecotechnics”).

Sustainability and Ecotechnological Agency

The term “eco-technology” appears on the scene when it comes to the socio-technical implementation of sustainable products and to behavior in everyday life, particularly in the world of consumption. Japan is considered to have a diversified and economically impressive market in highly advanced eco-technologies. At the same time, Japanese citizens are said to have the highest environmental awareness among industrialized countries, so that one might “rightly expect a synergy between the launch of eco-products and high citizen awareness” (Ishida & Furukawa, 2013, p. 12) and an improved situation in general. However, it has been shown that the “eco-dilemma”—the steady degradation of the global environment—cannot be solved with eco-technologies because increasing consumption is cancelling out the positive effects of green technology, particularly when greenwashing takes over. This has been identified as a kind of rebound effect and has led to a call to change the precondition of eco-technologies, namely their alliance with the socioeconomic formula “people’s desires = convenience and comfort = a prosperous life” (p. 17). Accordingly, it is argued that partially optimized eco-technologies are not sufficient, particularly if they only involve replicating the technological fix of conventional manufacturing, with a layer of green camouflage added on. Technology that truly acknowledges the existence of environmental constraints, then, should be understood as a socio-technical contribution to innovative lifestyles in which new forms of prosperity must be developed and where values and virtues such as responsibility and self-restraint are incorporated into the creation of products and lifestyles alike. However, this would require going beyond recent eco-technologies. Thus, in the Japanese context too, it seems that eco-technology has lost its heuristic persuasiveness as well as its socio-technical power. By contrast, in the Chinese and Taiwanese contexts ecotechnology is gaining momentum: It is being incorporated into governmental road maps and is becoming more and more visible in institutional settings. Scientific journals, research centers, and businesses have been established that carry the term “ecotechnology” in their name, and most of them see themselves in the tradition of ecological engineering or restoration ecology.

Sociopolitical Imaginaries and Agency in International Networking

The ground was well prepared for the emergence of ecotechnology in the 1970s in terms of the sociopolitical imaginaries being linked to ecological thinking. From the beginning, issues about humankind and its habitat were seen in their international dimension, not least as a result of the science policymaking organized by various scientific, industrial, or philanthropic foundations or in the context of activities initiated by the United Nations. Still under the impact of World War II, the volume Man’s Role in Changing the Face of the Earth was published in 1956 , the voluminous outcome of an interdisciplinary conference funded by the American Wenner-Gren Foundation for Anthropological Research and the U.S. National Science Foundation. The term “ecotechnology” did not appear, but “man as an agent of change” who should “strive toward a condition of equilibrium with its environment” ( Sears, 1956 , p. 473) was the dominant leitmotif. The concern put forward by American public intellectual Lewis Mumford captured the zeitgeist when he called for a self-transformation of the conditions of the Anthropos, that is, of humankind itself, pointing out that

what will happen to this earth depends very largely upon man’s capacities as a dramatist and creative artist, and that in turn depends in no slight measure upon the estimate he forms of himself. What he proposes to do to the earth, utilizing its soils, its mineral resources, its water, its flows of energies, depends largely upon his knowledge of his own historic nature and his plans for his own further self-transformations. ( Mumford, 1956 , p. 1146)

In the context of ethics, the idea of nature and technology being in a cooperative ( Allianztechnologie , or “alliance technology”) rather than a confrontational relationship was—and still is—popular, as expressed in Ernst Bloch’s often-quoted analogy of present-day technology standing in nature like an occupying army in enemy territory, knowing nothing of the interior ( Bloch, 1985 ). In a further ethical twist, the debate was also linked with the holism debate present in the ontological strand of ecology from the beginning ( Bergandi, 2011 , p. 36). An ecological ethic of using technology to harmonize humanity’s relationship with nature was appealing and was often linked to values such as the integrity of the biosphere or the use of nature in an ecologically sound manner. At the same time, it was very common to criticize the Promethean quest of using technology to dominate nature (e.g., Bookchin, 1977 ). Accordingly, in the 1960s and 1970s, the call to action grew louder as the widely exploitative and destructive character of humankind came to be sensed by many as both menacing and dehumanizing; in this context, the global environmental movement gained momentum. This was by no means a uniform phenomenon. Instead, there were different “policy styles” that evolved out of different national contexts, spawned by the interactions among stakeholders in government administrations, the economy, academia, and civil society ( Jamison, 2001 , p. 102). In the U.S. environmental movement, political philosopher and environmental activist Murray Bookchin was a creative “dramatist” and multiplier at the same time. He fleshed out the concept of “ecotechnology” while preparing for the UN Conference on Human Settlements in 1974 and issued the following statement: “If the word ‘ecotechnology’ is to have more than a strictly technical meaning, it must be seen as the very ensemble itself functionally integrated with human communities as part of a shared biosphere of people and non-human life forms” ( Bookchin, 1977 , p. 79). In parallel to the conceptualization of ecotechnology, interdisciplinary collaboration and the trading of concepts and theories were stimulated during an interim phase of about 20 years, starting in the 1970s.

The publication of the first report commissioned by the Club of Rome almost coincided with the first environmental summit of the United Nations, held in Stockholm in 1972 , in a sense the birth of environmental diplomacy. The summary of the general debate notes—not without a hint of drama—that “the Conference was launching a new liberation movement to free men from the threat of their thraldom to environmental perils of their own making” ( UN Conference on the Human Environment, 1973 , p. 45). Shortly after the conference, the United Nations Environment Programme (UNEP) was founded and became the coordinating body for the United Nations’ environmental activities. One of the dominant topics became the limits to growth and economic development, particularly in less developed countries. The term “eco-development” was introduced by Maurice Strong, the first executive director of UNEP, as an alternative form of economic development to the globally occurring pattern of economic expansion; the term seeped rapidly into debates about social and political theories. It appeared repeatedly in various bodies belonging to international organizations and was also adopted by research centers affiliated with these. The debates about models of economic growth and limited resources were dominant for a while, eventually leading in the 1980s to a debate about environmental pollution ( Moll, 1991 ; Radkau, 2014 ). Perhaps ironically, this issue had already been “forecast” in The Limits to Growth , a book that, apart perhaps from the Bible, had become one of the most hotly disputed and successful books ever published.

‘Unlimited’ resources thus do not appear to be the key to sustaining growth in the world system. Apparently, the economic impetus such resource availability provides must be accompanied by curbs on pollution if a collapse of the world system is to be avoided ( Meadows et al., 1972 , p. 133).

Less extreme categories emerged subsequently to combine environmental care and economic growth: The time for the term “sustainability” had come. In 1983 , the United Nations established the World Commission on Environment and Development. It was headed by the Norwegian Gro Harlem Brundtland, who in 1987 presented the report “Our Common Future,” which made “sustainability” its pivotal point. Eco-development and ecotechnology did not appear here explicitly. Instead, key issues associated with ecotechnology were included in the concept of sustainable development.

Social/Political Ecology

In the 1970s ecologically oriented debates, political economy, and social theories came together in various frameworks, and ecotechnology became a key concept in different settings. Authors involved in these debates referred to either social ecology or political ecology, but the differences were determined less by content or a particular set of theories than by institutional settings. Similarly, in historical reconstructions of social ecology ( Luke, 1987 ) or political ecology ( Escobar, 2010 ), the same authors (such as Ernst Friedrich Schumacher, Amory Lovins, and Murray Bookchin) might be claimed as important scholars. It has also been pointed out that utopian political thought comes from the tradition of social philosophy with its historical roots in the 18th century , in the political ideas of Jean-Jacques Rousseau, for example, and can also be located in the utopian literature of the 19th century , in the writings of William Morris, Peter Kropotkin, or Henry David Thoreau.

There are several possible narrative strands available to tell the story of the conjuncture of ecotechnology and social ecology. One important nexus is certainly an experimental project in the 1970s, the “Vermont Installation,” which served to combine ecotechnology with ecocommunity. It was launched by Murray Bookchin, an American political philosopher, social theorist, and activist, who called for new forms of knowledge with respect to the use of technology. Most likely, he was also the first to explicitly link the idea of an ecologically informed technology with the project of “social ecology.” He considered his project an ensemble that “has the distinct goal of not only meeting human needs in an ecologically sound manner—one which favors diversity within an ecosystem—but of consciously promoting the integrity of the biosphere” ( Bookchin, 1977 , p. 79). Like many of his contemporaries, he criticized the Promethean attitude that sees technology as a means of dominating and colonizing nature, ultimately leading to energy-, pollutant-, and capital-intensive growth. What is required instead, he argued, is an ecological ethic to tame technological excess in order ultimately to harmonize humanity’s relationship with nature:

Ecotechnology would use the inexhaustible energy capacities of nature—the sun and wind, the tides and waterways, the temperature differentials of the earth and the abundance of hydrogen around us as fuels—to provide the ecocommunity with non-polluting materials or wastes that could be easily recycled ( Bookchin, 1980 , p. 69).

Bookchin advocated a transformation of both capitalism and socialism toward a radical social ecology, the utopia of a “post-scarcity anarchism” ( 1986 ) that would ultimately create a more humane and balanced society capable of caring properly for its organic and inorganic environment. He considered two agents of change particularly promising in the quest to generate ecotechnology as a liberatory technology: first, the implementation of the principle “small is beautiful” ( Schumacher, 1973 ) in technological devices and machines, and, second, the integration of ecotechnologies into local environments and everyday practices. He argued that self-management, community empowerment, and household production are crucial for compliance with the ecological constraints of every bioregion ( Bookchin, 1980 , p. 27). Small-scale agriculture and scaling down industry to the needs of a community would not only mimic ecosystems/nature but also become a self-sustaining ecosystem, a basic communal unit of social life. This ecocommunity would be guided by a permanent, critically verifying and reifying process of “making” a liberated self “capable of turning time into life, space into community, and human relationships into the marvelous” ( Bookchin, 1986 , p. 66).

Bookchin was not alone in seeking new forms of social organization and technological practices and a revival of personal moral responsibility and democratic citizenship in the practices of everyday life. Other social ecologists—even if they did not use the term explicitly—also advocated ecotechnology in the sense of a less destructive technological approach toward nature and a transformation of the prevailing economic order. However, the range of positions was remarkably broad and varied and included fairly radical, direct-action programs such as Bookchin’s, the quest to awaken a moral consciousness that views nature as a moral force ( Schumacher, 1973 ), and the call for a simplification of everyday life ( Illich, 1975/2014 ). More moderate ideas included environmental policy reform—among them calls for a new class of experts, or “ecomanagers” (Amory B. Lovins or Hazel Henderson)—and Marxist positions aimed at dealing with nature more efficiently while preventing the overproduction of commodities (André Gorz). All these positions entailed enlisting different agents of change, each willing to work toward an ecological future out of their own motivations. Timothy W. Luke suggests that this rather complex situation can be divided into two strands of political strategy. The first of these places its hope in the educational impact of political actions and writings that would ultimately enable agents of change to tackle the ecological crisis. This so-called soft path is characterized mainly by an appeal to individual decision-making, moral insight, and bottom-up processes of social change. The hard path, by contrast, considers the state to be a key agent of social change, one that uses “bureaucratic coercion, material incentives, and scientific persuasion” ( Luke, 1987 , p. 305) from its operational toolbox to solve the environmental and technological problems identified. Luke is not very optimistic that the full potential of social ecology will be realized in practice, yet he does see some potential in the European Green Parties “to provide a practical model for the effective politicization of social ecology” (p. 314). A more recent critique has pointed out that the declarations that emerged from progressive oppositional politics in the 1970s and 1980s to explain environmental degradation made reference “solely to human-to-human hierarchies and oppressions” and not to a broader network of actors, and that this “can look like an evasion of the need to accord to the nonhuman a disconcerting agency of its own” ( Clark, 2012 , p. 152).

“Soziale Naturwissenschaft” (Social Natural Science)—Another Ecotechnology?

Another closely related narrative strand can be identified in the German-speaking context, even though it did not directly address the connection between ecotechnology and the story of social ecology—in fact, the word ecotechnology was not even used. However, the concept of a “social natural science” acquired a certain momentum in the 1980s and established a connection to the more general discourse in philosophy and sociology about the transformation of science, technology, the economy, societal institutions, and personal lifestyles. Other authors were rather skeptical about these suggestions for “ways of expanding ecology” ( Böhme & Grebe, 1985 ) or about claiming it as a so-called key science ( Leitwissenschaft ). Historian and philosopher of ecology Ludwig Trepl pointed out that the history of ecology itself

shows most clearly that there is nothing one could unproblematically ‘latch onto’ theoretically: neither the traditional natural history route nor even the strand modernized by systems theory and cybernetics displays the characteristics of an ‘alternative, non-dominating etc. relation to nature’ ( Trepl, 1987 , p. 227; emphasis in original).

The working group “Soziale Naturwissenschaft” at the Technical University in Darmstadt was actively involved in case studies looking at water management projects in Egypt and Germany that were highly problematic in technological and ethical terms. In the course of addressing these concerns, they formulated the need for a new type of knowledge that they dubbed “basic research for applied science” ( Anwendungsgrundlagen ) ( Böhme & Grebe, 1985 , p. 38). They argued that pressing environmental problems cannot be solved by scientific communities organized along traditional disciplinary lines but that new epistemic forms and practices need to be established that are oriented toward problem-solving and include a normative element of theoretical reflexivity. This idea of a “nature policy for the whole society” ( gesamtgesellschaftliche Naturpolitik ) ( Böhme & Grebe, 1985 , p. 38) ultimately set in motion a reorientation and a reassessment of interdisciplinary and later transdisciplinary research that addressed this issue in different research programs. These dealt with questions regarding human-nature metabolism, and most of them shared the assumptions that, first, humans exert a significant impact on nature (as Marx had noted), second, this relation emerged historically—that is, nature itself has a history—and, third, accordingly, the human-nature relation is produced and not just given, necessitating a normative framework. Recent work on socio-ecological transformation takes these ideas, substantiates them, and implements them in design principles relating to society and biodiversity, such as “focusing on relationships between society and nature,” “enabling coexistence,” and “strengthening resilience,” as well as in the pursuit of a critically constructive and democratic participatory development of technology ( Jahn et al., 2020 ). Accordingly, the concept of socio-ecological design in the Anthropocene clearly stands in more than a merely analogous or metaphorical relationship to the concept “ecotechnology” as discussed in social ecology by, among others, Bookchin. The same goes for other important research programs dealing with issues of society-nature metabolism. In addition to the Institute for Socio-Ecological Research (ISOE) in Frankfurt am Main, Germany, where the research discussed above was conducted, there are other institutions, such as the research platform for socio-ecological transformations at the Institute of Social Ecology in Vienna, Austria, or the Institute for Social Ecology in Vermont, United States. If one had to name a common denominator among all the actors working in the field of socio-ecological transformation, it may be the commitment to transformation in the present (rather than in the future) and to enabling the political implementation of principles for socio-technical design and decision-making.

Social Ecology Interwoven With Industrial Ecology

Another narrative strand of social ecology is that of ecotechnics ( Ökotechnik ), which clearly signals the notion of technological innovation as ecological modernization. The concept of ecotechnics was first developed out of industrial ecology, and its proponents claim that it arose not “from ideological preference, but from the geo- and biospheric reality of societal metabolism” ( Huber, 1986 , p. 283). Ecotechnics and ecological sustainability are two sides of the same coin, the former providing the ecologically informed technology that serves the official government credo in industrialized countries regarding the ecological modernization of society. With ecotechnics, the greening of technology and science goes hand in hand with both a mechanization and a monetarization of ecological contexts. In an ecotechnic context, a naturally balanced system is disrupted by technological means and replaced by the technological production of an artificial eco-equilibrium. Proponents of ecotechnics are aware that this constitutes a far-reaching manipulation of the metabolism of materials and energy that, ultimately, would transform planetary water cycles and also the earth’s climate ( Huber, 1986 , p. 86). This idea of transforming the metabolism of natural systems in favor of industrial production fits perfectly into the strategy—criticized as being a capitalist strategy—of reducing the environment to a matter of managing labor and resources and, ultimately, of reorganizing nature.

It is openly asserted that ecotechnics has the character of a breakthrough technology (similar to biotechnology); accordingly, its aim is not to adapt industrial processes, structures, or products to eco-cycles that have hitherto been given by nature (an idea attributed to conservative parts of the ecology movement). Instead, ecotechnics “breaks up natural materials and their interrelationships, breaks them down, breaks through them and tries to reconstruct them according to its own will” ( Huber, 1986 , p. 86). This rather “bellicose” description is generally countered immediately by the comment that it is a constitutive part of human activity to intervene in nature and thereby change both nature and itself to some extent as a result. This echoes the philosophical idea that humans have never encountered a pristine nature but rather are always dealing with an environment that is nature already transformed and that they appropriate through work.

Clearing, burning, hunting, digging furrows, diverting water, rummaging through the earth for mineral resources, producing garbage, thus consuming, changing and substituting natural resources, man appropriated nature from the beginning, made it his environment. His culture always already created a nature-culture, for good as for ill ( Mittelstraß, 1992 , p. 21).

This anthropological determination reinforces the bellicose tone and leaves little room for hope or for any credibility of the claim that ecotechnics can also mean development in an “intelligent and cultivated way” ( Huber, 1986 , p. 86). With this ecotechnics, ecological modernization is positioned in the tradition of progressive technology development that is open-ended, the key being new technologies such as renewable clean energy, new materials, and new modes of production and practices. This resembles the widespread picture of a technological development that comes up against ecological limits to growth while at the same time discovering ways to shift these limits and to permanently “increase the ecological carrying capacity of the geosphere and biosphere for humans” ( Huber, 1986 , p. 279). This ecotechnics fits quite well with the strategy of the Brundtland report, which was published just one year later: In an anthropocentric world structured by hierarchies and colonialisms, nature is dealt with accordingly.

Ecotechnics as Problem-Based Learning

The study program Ecotechnics/Ecoteknik was launched at the university college in Östersund, Sweden, in 1983 and became a pioneering model for combining theoretical knowledge with practical action. What eventually emerged was a problem-based learning method. After the turn of the millennium the program was renamed “Ecotechnology,” thus promoting a concept of sustainable development intended to link ecological, economic, and technological elements in a cooperative and productive way with an entrepreneurial focus. The program specialized in environmental science and environmental engineering, and courses were also offered on socioeconomic issues and on national and international environmental policy structures. Key topics included a number of important instruments for the sustainable use of bioresources in society, such as life cycle and environmental impact assessments, as well as international environmental management systems (EMAS, ISO, etc.) and environmental law ( Grönlund et al., 2014 ). Later, the program was split into three strands: first, ecoengineering, an interdisciplinary course with an engineering focus; second, ecoentrepreneurship, designed to impart special skills in social entrepreneurship and green production; and, third, ecotechnology, which mediated between the two other strands. The participants attending Ecotechnics ’95, the International Symposium on Ecological Engineering in Östersund, agreed that “ecotechnics is defined as the method of designing future societies within ecological frames” ( Thofelt & Englund, 1995 , p. xvi).

One of the core values of the study program is that knowledge must be turned into practical action. Students are taught how biological and ecological systems work and, at the same time, how to handle complex systems and the sustainable use of local resources. Another important value is the development of the concept of resilience, not only to understand resilient socio-ecological systems theoretically but also to develop self-management skills. Resilience is understood here in the sense of a general theory of adaptive systems. The concept, developed for the modeling of ecological systems ( Holling, 1973 ), was transformed, extended, and applied to the teaching formats in the ecotechnics program. Resilient systems are considered to exhibit similar patterns as they accumulate resources, increase connectedness, or decrease resilience, and they are able to compensate for periods of crisis and transformation. Accordingly, resilience can be understood as an approach to adapting to changing environments, including coping with daily practical life by developing “ego resilience” ( Cohn et al., 2009 , p. 362). Finally, resilience is considered a way of thinking that could be used to analyze social-ecological systems and be applied to social, management, and individual systems. One of the important lessons afforded by the study program is that learning skills takes more time than learning facts, so that more time must be allowed for this—even if it comes at the expense of theoretical knowledge. Students who applied to study for this degree had a reputation for not knowing much but for being good problem-solvers, which was considered an advantage in terms of interdisciplinary project work and problem-solving capacity ( Grönlund et al., 2014 ). “During the period when the Ecotechnics/Ecotechnology was a 2-year education program one employer even said: ‘These Ecotechnics students, they don’t know much, but they always solve the problem you give them!’” ( Grönlund et al., 2014 , p. 18). Meanwhile, the popularity of problem-based learning has increased enormously, and it has become an established method in academic teaching, not only in Sweden. To conclude, it is interesting to note that one of the discursive strands of ecotechnics led to a successful general teaching method based on combining ecology-inspired theories (mainly resilience) with attention to the wisdom of everyday practices.
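
The resilience concept invoked here can be given a minimal numerical illustration. The following Python sketch is not taken from the study program or from Holling’s paper; the model (a resource with logistic growth and a saturating loss term, a standard textbook companion to Holling’s argument) and all parameter values are assumptions chosen for illustration. It shows a system with two alternative stable states; resilience then corresponds to how large a disturbance the system can absorb before being pushed across the threshold into the other basin of attraction.

```python
# Minimal sketch of resilience in Holling's (1973) sense: a system with
# two alternative stable states. Model and parameters are illustrative
# assumptions, not taken from the sources cited above.

def dxdt(x, r=1.0, K=10.0, c=2.1):
    """Logistic growth minus a saturating loss term (e.g., grazing)."""
    return r * x * (1.0 - x / K) - c * x**2 / (1.0 + x**2)

def settle(x0, dt=0.01, steps=50_000):
    """Integrate forward with explicit Euler until the state settles."""
    x = x0
    for _ in range(steps):
        x += dt * dxdt(x)
    return x

for x0 in (0.5, 2.0, 2.5, 8.0):
    print(f"start at {x0:3.1f} -> settles near {settle(x0):4.2f}")

# Starting points 0.5 and 2.0 end up at a low-density equilibrium,
# 2.5 and 8.0 at a high-density one. The threshold between the two
# basins (here roughly at 2.3) is what resilience measures distance
# from: a disturbance that carries the state across it causes a
# regime shift rather than a smooth recovery.
```

Read loosely, this is also the picture behind the program’s transfer of resilience to management and individual systems: Coping with crisis means staying within, or deliberately rebuilding, a basin of attraction.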

An Ecotechnological Rationality for Latin America

Latin American social ecology has been embedded from the start in a discourse about the decolonization of scientific knowledge and about eco-development, as put forward at the first UN conference on the human environment, held in Stockholm in 1972. It was clearly seen that the models and concepts developed in fully industrialized countries were not an appropriate fit for Latin American contexts. Accordingly, the publication of “Limits to Growth,” which elaborated a so-called world model, was matched in 1976 by a Latin American world model entitled “Catastrophe or New Society?,” written by a group of scholars coordinated by the Argentinian geologist Amílcar Herrera. Poverty and environmental deterioration were identified as the main problems, underlining the need to design and apply proposals based on eco-development. In the years that followed, Latin America became an important player on the international scene. For example, the United Nations Economic Commission for Latin America and the Caribbean (ECLAC) brought together an interdisciplinary group of ecologists, economists, and scholars from other disciplines to study the particular environmental problems of the different regions. A Latin American group for the Analysis of Ecological Systems was set up in 1980 and published “The Ecological Future of a Continent: A Prospective Vision of Latin America,” which in some ways foreshadowed the Brundtland report of 1987 . A couple of years later, the Brundtland publication “Our Common Future” was similarly matched by a Latin American study, “Our Own Agenda” ( 2005 ), which received support from the United Nations Development Programme (UNDP) and the Inter-American Development Bank.

It is important to note that Latin American social ecology has always been a search for an epistemological concept of environment. The concern behind such a concept is to help deconstruct the nonsustainable rationality of modernity and instead construct “alternative sustainable worlds guided by an environmental rationality” ( Leff, 2010 , p. 10). For many actors working within international organizations, Latin America seemed to be a useful real-world laboratory in which to apply and explore the ideas contained in eco-development. One of the main proponents was political economist Ignacy Sachs, who successfully disseminated the concept in Latin America and promoted eco-development in different institutions such as universities, municipalities, and government agencies. The creation of the Center for Eco-development in Mexico was one of the outcomes of this networking campaign, the aim being to foster the generation of policies for development “in harmony with ecosystem conditions in Mexico” ( Leff, 2010 , p. 6). Following this, the environmental issue was debated in many Latin American countries, including the problem of how to produce forms of knowledge suited to tackling environmental management issues. Accordingly, identifying socio-environmental problems always meant combining economic, political, and social analysis with specific case studies on deforestation, biodiversity loss, soil and nutrient erosion, and, later on, climate change.

Enrique Leff, a Mexican economist and environmental sociologist, pointed out that a simple transfer of technostructures from temperate industrialized regions to tropical underdeveloped countries poses particular problems on social, economic, and biological levels. He noted critically that “the social productive forces created through the technological harnessing of nature’s laws become a force destructive of the material processes that are their source of wealth and development” ( Leff, 1986 , p. 686). This constitutes an argument against a productive process dominated by extraction, exploitation, and a general technological transformation of natural resources that goes far beyond the capacity of ecological conditions to maintain resilience. The term “technostructure” already implies the work of adaptation and integration into the productivity of a particular ecological system. It denotes a technological system defined—and constrained—by the ecological conditions of natural productivity and by the productivity of individuals and collectives in a social entity in their quest to appropriate the technological means of production. It is important to note that this is imagined as a two-way process of adaptation that follows a repertoire of heuristics, affords new skills and new knowledge, and is accompanied by the development of monitoring instruments that eventually enable self-management.

The conceptualization of an ecotechnological ( Leff, 1986 ) or environmental ( Leff, 2010 ) rationality receives support from the idea of an ecological rationality as suggested by the ABC Research Group, based at the Max Planck Institute for Psychological Research in Munich and later at the Max Planck Institute for Human Development in Berlin and dedicated to studying adaptive cognition and behavior ( Todd et al., 2012 ). Their main thesis is that ecological rationality is driven in part by the use of simple heuristics and in part by the structure of the environment: “In what environment does a given heuristic perform better than a complex strategy, and when is the opposite true? This is the question of the ecological rationality of a heuristic” ( Todd et al., 2012 , p. 5). Ecotechnological and ecological rationality both assume that it is only rational to rely on the local environment and a proven pattern of thinking, and that adaptive behavior emerges from a dynamic interaction between mind and world.
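
What it means for a simple heuristic to rival a complex strategy can be shown in a few lines of code. The following Python sketch is a schematic illustration, not an implementation from Todd et al.; the cue weights are hypothetical and deliberately noncompensatory (each cue outweighs all less valid cues combined), an environment structure in which the ABC group’s take-the-best heuristic is known to reproduce the choices of a full weighted-sum strategy.

```python
# Illustrative sketch of ecological rationality: in a noncompensatory
# environment, the frugal take-the-best heuristic decides exactly like
# a full weighted-sum strategy. Weights are hypothetical assumptions.
import random

random.seed(1)
WEIGHTS = [8, 4, 2, 1]  # noncompensatory: each weight > sum of all later ones

def score(cues):
    """The environment: the criterion is a weighted sum of binary cues."""
    return sum(w * c for w, c in zip(WEIGHTS, cues))

def take_the_best(a, b):
    """Check cues in order of validity; the first discriminating cue decides."""
    for ca, cb in zip(a, b):
        if ca != cb:
            return a if ca > cb else b
    return a  # no cue discriminates: default to the first option

def weighted_sum(a, b):
    """The 'complex' strategy: integrate all cues before deciding."""
    return a if score(a) >= score(b) else b

trials, agree = 10_000, 0
for _ in range(trials):
    a = [random.randint(0, 1) for _ in WEIGHTS]
    b = [random.randint(0, 1) for _ in WEIGHTS]
    agree += take_the_best(a, b) == weighted_sum(a, b)
print(f"agreement: {agree / trials:.3f}")  # prints 1.000 in this environment
```

Changing WEIGHTS to a compensatory pattern such as [2.0, 1.9, 1.8, 1.7] makes the two strategies diverge, which is exactly the point of the question quoted above: How well a heuristic performs depends on the structure of the environment in which it operates.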

In an ecotechnological process designed this way, cultural values are embedded in workflows and in the design of technological artifacts, while, conversely, a transformation of values takes place during the process of resource exploitation as imposed by external political and market forces (government, international economic conditions, etc.). In this way, a system of carefully interrelated natural and technological resources is generated that is attuned to the order of cultural values provided by the local political and economic conditions ( Leff, 1994 , p. 6). This adjustment process is based on an eco-technological rationality that relies on the idea of integrative ecotechnological principles; that is, productive potential is based on an ecosystemic organization of resources and new socioeconomic formations ( Leff, 1994 , p. 3). This ultimately generates technological innovation, accompanied by a reorganization and relocation of industrial production, including societal action and innovative products. Eco-technological rationality thus emerges from a historical, cultural, and political process that provides orientation for a form of ecotechnological production rooted in social values and lifestyles, produces socio-technical innovation, and affords an institutional transformation. In this sense eco-technological rationality is the precondition for potential eco-development.

Leff’s suggestion of an “ecotechnological paradigm” became an important working concept with which to explore the new field of knowledge around a prudent and sustainable development of socioeconomic formations, cultural knowledge, and ecological resources. The conceptual basis for implementing this comprehensive program was constituted by three independent spheres of productivity, namely, the cultural, the ecological, and the technological ( Leff, 1986 , p. 691). The skill required in the planning process is to discover, define, and evaluate the relevant technostructure that has already internalized the necessary ecosystem services. This technostructure then takes shape and acquires a specific technical materialization. It is then presented to the community in such a way that people can accept and assimilate the new knowledge and are empowered to participate in the management processes of their own productive resources.

An Epistemology of Ecotechnology?

Founded in 1976 , the Mexican Association of Epistemology held its first conference on the topic of eco-development models. Unsurprisingly, the conference issued the statement that the environmental crisis is a consequence of the established hegemonic economic and epistemic orders. A need was identified to seek out “environmental rationalities through a dialogue of knowledges with the critical Western thinking now underway in science, philosophy and ethics” ( Leff, 2010 , p. 2). As a consequence, a new culture of epistemological practices emerged that transformed European theories and concepts while at the same time creating a specific concept of knowledge that emphasized the “ecological potentials and the cultural diversity of our continent” ( Leff, 2010 , p. 9). Leff points to the productive engagement with French philosophy, in particular with authors such as Bachelard, Canguilhem, or Derrida, which ultimately led to an understanding of environment as otherness. This allowed an empirical–functional concept of environment to emerge in contrast to more holistic–systemic ones. From the very beginning, investigation of the environment of a certain population (its milieu) included economic and social issues and was not reduced to a mere natural science perspective, one associated with a seemingly value-free collection of data. This is what Leff addressed as being beyond “the logocentrism of science, as the ‘other’ of established scientific theories” ( Leff, 2010 , p. 8). In this framework, nature is still seen as a distinct ontological domain, but it is now acknowledged that it has become inextricably hybridized with culture and technology and that it is also produced by knowledge systems.

All this opened up new fields of political ecology in Latin America, which began working with concepts such as “environment as potential” and “environmental complexity,” the latter understood in terms of self-organization, emergence, nonhierarchy, and nonlinear dynamic processes. Anthropologist Arturo Escobar identifies in Leff’s works a “neo-realism derived from complexity” that might allow for a “different reading of the cultural dimension of nature–culture regimes” ( Escobar, 2010 , p. 97). This, he points out, could afford a political ecology in which knowledge is considered a product of lived experience and is co-produced in an environment that is characterized first and foremost by an indifferent relational potentiality toward the cultural and the natural world, without immediately drawing an epistemological line between the two. However, as Escobar admits, it is still difficult to maneuver between the interpretive frameworks of constructivists and essentialists, and the move toward a better understanding of relationality, incorporating multiple modes of knowing, is not yet clearly spelled out. Further elaboration of the concept of ecotechnology could indeed be a worthwhile path to pursue.

Another Semantic Turn of Ecotechnology/Ecotechnics

That humans encounter an environment that is already nature transformed and that is subject to progressive technological development is a narrative expounded with differing points of emphasis. Whereas philosopher Mittelstraß (1992) argues that nature has never been part of the living world of humans—because they transform nature into their environment through work—environmental writer Bill McKibben, in his well-known book The End of Nature , brings in a temporal dimension: “We have ended the thing that has, at least in modern times, defined nature for us—its separation from human society” ( McKibben, 1990 , p. 80). Another point of view is put forward by French philosopher Jean-Luc Nancy, who argues that if we regard nature as that which fulfills its purpose by itself, “then we must also regard technology as a purpose of nature, because from it comes the animal that is capable of technology—or needs it”—that is, the human being ( Nancy, 2011 , p. 55). Accordingly, he suggests locating technology at the center of nature rather than constructing it as its opposite or as other. Technology has its own developmental dynamic and finds its own order in that it responds to demands and needs. The breeding of plants and animals, new chemical elements, and the construction of technical infrastructures are examples of this momentum, which may or may not be triggered by humans and cannot be controlled by them. All this is summed up in the term “ Ökotechnie ,” denoting the technological becoming of the world ( Nancy, 1991 , p. 38). It is a technoscientific world of possibilities, unstable and plastic, consisting of highly interwoven and nested assemblages in which “ends and means incessantly exchange their roles” ( Nancy, 2011 , p. 56), and the idea of a greater order has been abandoned: There is no longer any intelligent design. Instead, the world has become a technosphere and is compounded of innumerable bits and pieces, all of them somehow related to or sprouting from the well-known technologically armored animal that has itself become part of a network of intelligence ( Hörl, 2011 , p. 17). This dynamic structure with a common though not constructed origin in Homo faber is what Nancy calls an “ecosystem, which is an ecotechnology” ( Nancy, 2011 , p. 66) endowed with the potential to permanently renew and revitalize itself. The concept is not developed further here, but in his earlier writings Nancy had put forward a critique of instrumentalized nature:

So-called ‘natural life,’ from its production to its conservation, its needs, and its representations, whether human, animal, vegetal, or viral, is henceforth inseparable from a set of conditions that are referred to as ‘technological,’ and which constitute what must rather be named ecotechnology ( Nancy, 2007 , p. 94).

The only nature that exists is thus the one already de-structured and recombined by ecotechnologies. When one speaks of “nature,” one refers to a representation of nature that is already remodeled by ecotechnology. Accordingly, ecotechnology is a thoroughgoing technological manipulation, and humans are the subject of an ecotechnological creation. Humans’ ecotechnological activities establish the conditions for any appearance or dynamics of nature, outside or inside the laboratory, for humans and for humans’ milieu, mediated or not through a particular medium. Even when one engages physically with nature outside, this is already ecotechnologized nature, be it a historical cultural landscape, a nature reserve, or the ever more visible heralds of climate change. This conceptualization stands in stark contrast to the alliance technology discussed above and to ecotechnology for the benefit of humans and nature, whatever that may mean in detail and however manipulative it may be in ecological engineering or restoration.

Another technological layer is added when humans engage with visual representations of this ecotechnologized nature outside. Weather, for instance, has—at least for a large part of urban populations—become a phenomenon that takes place mainly on a computer or television screen. The same goes for experiences of nature, for encounters with nondomesticated animals, and, of course, for the greenhouse effect and the hole in the ozone layer. Media technologies can be considered naturalized in that they offer simulations of nature and may become the only points of reference for experiences and knowledge of nature. In this way, many interactions with the biosphere—including measurements as well as representations—not only become part of an ecotechnologically mediated global information turnover but also crucially raise the problem of how nature is perceived and narrated at all.

The latter has also become an issue in educational programs at international and national levels, where ignorance of the sorely needed shift from the usual nature-culture separation toward ecotechnology in Nancy’s terms has been criticized for distorting reality. To counteract this conceptual habit, education scholars Anette Gough and Noel Gough suggest that “we need to attend much more closely to the micro-politics of subjective life. . . to participate more fully, self-critically, and reflexively in the cultural narratives within which identity, agency, knowledges and ecotechnologies are discursively produced” ( 2014 , p. 6). They conclude that environmental education should move away from expounding common but misleading ideas about nature. Instead, they argue, there should be a focus on narrating environmental issues through the ecotechnological framework, as this provides a more compelling way of preparing people for sustainable development, which depends on the interconnectedness of cultural, economic, and environmental issues and on practices of the self and its milieu.

Complementing Ecotechnology With Ecoscience

The juxtaposition of two umbrella terms, ecotechnology and ecoscience, has been suggested as a way to map the variegated scientific “eco” world from a philosophy of ecology perspective ( Schwarz, 2014 , p. 141). Ecotechnology is regarded as an instance of use-inspired basic research. Good examples of this include restoration ecology, ecological engineering, industrial ecology, and sustainability science. It can be understood as a technoscience that principally develops local theories and practices. In contrast, ecoscience is suggested as an instance of pure basic research, that is, the search for basic understanding with no interest in application ( Stokes, 1998 ). It is characterized by the development of general concepts and theories, something that is done in theoretical ecology, for instance, which has generated the competitive exclusion principle as well as models depicting predator-prey relationships and ecosystem theories. Ecoscience also includes systematic work on biotopes and plant/animal communities, on ecophysiology, and on parts of hydrology and geology, for example, studies of ion exchange in soil and the dynamics of turbulence in running water. One might say that ecoscience seeks to overcome the dimension of singularity and instead to describe rules of connectedness using more general concepts, models, and sometimes even laws. This stands in distinct contrast to ecotechnology, which is about developing tailored solutions and site-specific practices.
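
The best-known instance of such a general predator-prey model, cited here only as a type and not written out in the source, is the classic Lotka-Volterra system:

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,
\]

where x is the prey density, y the predator density, α the prey growth rate, β the predation rate, γ the predator mortality rate, and δ the efficiency with which consumed prey are converted into predator growth. Its site-independent population cycles exemplify the general, law-like results that distinguish ecoscience, in this mapping, from the tailored, site-specific solutions of ecotechnology.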

In another philosophical approach, ecotechnology is proposed as a third cornerstone of ecology along with applied science and basic science ( Mahner & Bunge, 2000 , p. 190). The boundary is drawn, it seems, at the threshold to the laboratory: Inquiring into the ecological connectedness of an aphid is basic science, looking into the control of the aphid population in the laboratory is applied science, and going outside to combat the aphid in the cabbage plot is ecotechnology. In this model, scientists know about the possibilities of things, and technologists bring them into the world by placing them in a context of action, that is, society at large; accordingly, doctors, lawyers, biotechnologists, and planners are all technologists. This conceptualization relies on top-down knowledge transfer as a one-way street, yet this is not adequate for dealing with the variegated landscape of knowledge forms (and never was, in fact). The planning, production, operation, maintenance, and monitoring of things or processes are also part of scientific work and are themselves epistemic practices.

The program of the technosciences, and therefore also of ecotechnology, is to improve the conditions of human life through innovation. It is this permanent process of reforming ways of knowing and manufacturing that Hannah Arendt refers to when she places such great emphasis on “fabricating experiments,” as she calls them; at issue, for her, is the making of an artifact, of a “work” and, more generally, a shift from asking “what” and “why” toward asking “how” ( Arendt, 1994 , p. 288). Arendt points out that it is the success of technology and science, and, particularly, of their alliance that bears witness to the fact that the act of producing or manufacturing is inherent in the experiment: It makes available the phenomena one wishes to observe. However, it is not Homo faber but rather Arendt’s animal laborans who inhabits the ecotechnological world, a world in which an exuberance of energy and materials and the relentless production and consumption of goods is the driving force. All these largely industrially produced artifacts (cars, domestic appliances, hardware, etc.) must be consumed and used up as quickly as possible lest they go to waste, just as natural things decay unused unless they are integrated into the endless cycle of the human metabolic exchange with nature. “It is as though we have torn down the protective walls by which, throughout all the ages past, the world—the edifice made by human hand—has shielded us against nature” ( Arendt, 1994 , p. 115). Here Arendt offers a pessimistic vision of the human-environment relationship and sounds an ecotechnological warning. The “specifically human homeland” is endangered, she cautions, mainly because we erroneously think we have mastered nature by virtue of sheer human force, which is not only part of nature but “perhaps the most powerful natural force” ( Arendt, 1994 , p. 115). She thus anticipates a constituent component of the Anthropocene and of ecotechnology as the technological becoming of the world in the 21st century , as discussed above.

However, technological and social innovation seems to be needed more than ever because the relationship between humans and their material environment—artificial or not—is not yet sufficiently developed. Ecotechnology thus means enabling an adaptive design that is compatible with social and political values and norms as well as with the nonhuman requirements of a particular site. Historian Thomas Hughes points out that “we” (humanity) have failed to take responsibility “for creating and maintaining aesthetically pleasing and ecologically sustainable environments” and that humans should, at long last, accept responsibility to design a more “ecotechnological environment, which consists of intersecting and overlapping natural and human-built environments” ( Hughes, 2004 , p. 153). This appeal is addressed mainly to engineers, architects, and environmental scientists whom Hughes considers the experts suited to design and construct the ecotechnological environment.

Around the turn of the millennium an issue of Trialog (a journal for planning and building in the Third World) was dedicated to “eco-technology.” It highlighted the importance of traditional cultures and their wisdom when it comes to dealing with the uncertainty resulting from upheavals. Eco-technology is proposed as a means to support ecologically compatible and culturally acceptable development, including a conscious process of self-development toward sustainability, supported by democratic consensus-building ( Oesterreich, 2001 ). Meanwhile, research on traditional ecological knowledge (TEK) has become established in many places and regions around the world, ecotechnology being a part of its conceptual framework. TEK is considered a body of knowledge, practices, and beliefs that has evolved by adaptive processes over longer time periods and thus is, in a sense, empirically saturated. It is also about relationships among living beings, including humans, both with one another and with their environment ( Martin et al., 2010 ). The cultural transmission of practices, of material and immaterial heritage, is an important issue and includes the investigation of wisdom as an epistemic category ( Ingold, 2000 ).

It can be noted that space and place as an oikos in the mode of experimentation is the recurrent theme that links recent debates on climate change, green lifestyles, restoration ecology and industrial ecology, as well as historically more distant issues such as blue sky campaigns (against air pollution in industrialized countries), efforts to combat water pollution in the 19th century and well into the 20th century , the management of dying forests, and space ecology. Accordingly, it is hardly surprising that the space-oikos theme developed mainly in the context of sustainability discourse, without always being explicitly spelled out.

Ecotechnology Diplomacy

The term “ecotechnology” may be used (a) in the sense of a heuristic strategy in the natural and engineering sciences or in international policymaking, (b) as an umbrella term that travels between already existing fields of science, technology, and policy, (c) to label a disciplinary and institutional enterprise, such as ecotechnics, ecological engineering, or ecotechnology, or (d) to refer to an epistemic program that has been spelled out in philosophy, in the educational sciences, and in political anthropology. Accordingly, there is no simple answer to the question, “What is ecotechnology?” Rather, the question to be asked is how ecotechnology is conceptualized in each case and in what way this umbrella term then organizes an epistemic, institutional, or sociopolitical field. The diplomatic aspect of ecotechnology comes in when it is used to foster international relations in scientific cooperation, that is, when science is used for diplomacy, as in the context of UN programs, for example. Ecotechnology as a science is used for diplomatic purposes when international and technical cooperation is fostered between countries, as was the case in the 1990s when ecotechnology became a cipher for sustainable and computerized production. Finally, ecotechnology performs diplomacy when ecotechnologically justified findings, processes, or objects are used to support foreign policy objectives.

As a technoscience operating at the intersection of science and technology, ecotechnology was prolific for some time during the 1980s and 1990s but then gradually lost its heuristic power and was absorbed into the up-and-coming sustainability sciences. An institutional consolidation never happened in the United States or Japan, where the term was coined and some of the conceptual work took place. However, the issues, theories, and practices of ecotechnology migrated into other disciplinary fields such as ecological engineering or industrial ecology. In the 21st century it is mainly in China and some other Asian countries that “ecotechnology” appears explicitly in the names of institutions and their research programs.

The association of ecotechnology with a holistic approach or a partnership with nature is a claim that is frequently encountered, particularly in the engineering sciences, although it is barely operationalized in the sense of particular tools or practices. The most convincing ecotechnological principles are those embodied in specific machines or objects and are based on ideas of a circular economy or the cradle-to-cradle design principle. The conceptual opposition between technology and nature is generally upheld in these approaches. More recent ideas in philosophy about an ecotechnics involving the use of technology in nature might contribute to solving the problems of incoherent conceptualization if they were to provide a foundation for a philosophy of science and technology in practice.

In the field of political anthropology a conceptual framework has been developed around ecotechnology, particularly in the French and the Latin American context, and an ecotechnological rationality has been used to argue against the widespread colonialist and exploitative rationale. This discourse mainly argues against a capitalist productive process that is dominated by the technological transformation of natural resources and operates far beyond the resilience capacity of the given ecological conditions. It calls instead for adaptation and integration into the productivity of a particular ecological system. Technostructures should be defined by the ecological conditions of natural productivity and the productivity of the individuals and collectives of a social entity in order to appropriate the technological means of production. This adaptive and integrative process adheres to a repertoire of heuristics, affords new skills and new knowledge, and is accompanied by the development of monitoring instruments that eventually enable self-management. This concept of ecotechnology is seen as a viable path (albeit one that is not yet completely spelled out) for moving toward a better understanding of relationality and incorporating multiple modes of knowing about human beings in their environment.

Further Reading

  • BlĂźhdorn, I. , & Welsh, I. (2007). Eco-politics beyond the paradigm of sustainability: A conceptual framework and research agenda. Environmental Politics , 16 (2), 185–205.
  • Bookchin, M. (1977). The concept of ecotechnologies and ecocommunities . In I. Tinker & M. Buvinic (Eds.), The many facets of human settlements (pp. 73–85). Pergamon.
  • Escobar, A. (2010). Postconstructivist political ecologies. In M. Redclift & G. Woodgate (Eds.), International handbook of environmental sociology (pp. 91–105). Elgar.
  • Hughes, T. P. (2004). Human-built world: How to think about technology and culture . University of Chicago Press.
  • Leff, E. (2010). Latin American environmental thought: A heritage of knowledge for sustainability. ISEE Publicación Ocasional , 9 , 1–16.
  • Luke, T. W. (1987). Social ecology as critical political economy. The Social Science Journal , 24 (3), 303–315.
  • Mitsch, W. J. , & Jørgensen, S. E. (2003). Ecological engineering: An introduction to ecotechnology . Wiley.
  • Mitsch, W. J. , & Jørgensen, S. E. (1989). Ecological engineering: Introduction to ecotechnology. In W. J. Mitsch & S. E. Jørgensen (Eds.), Ecological engineering: An introduction to ecotechnology (pp. 3–12). Wiley.
  • Nancy, J.-L. (2007). The creation of the world or globalization . State University of New York Press.
  • Ross, M. R. V. , Bernhardt, E. S. , Doyle, M. W. , & Heffernan, J. B. (2015). Designer ecosystems: Incorporating design approaches into applied ecology. Annual Review of Environment and Resources , 40 , 419–443.
  • Schwarz, A. (2014). Experiments in practice . Pickering & Chatto.

References

  • Aida, S. (Ed.). (1983). Humane use of human ideas: The discoveries project and eco-technology . Pergamon Press.
  • Aida, S. (1986). Eco technology . TBS-Britannica.
  • Aida, S. (1995). An introduction to ecotechnology and its application to the AIES project. Pattern Recognition , 28 (10), 1455–1458.
  • Allenby, B. (2006). The ontologies of industrial ecology? Progress in Industrial Ecology—An International Journal , 3 (2/3), 28–40.
  • Arendt, H. (1994). Vita activa . Piper.
  • Ayres, R. U. (1996). Eco-thermodynamics: Economics and the second law (INSEAD Working Paper Series). INSEAD.
  • Banham, R. (1965). A home is not a house. Art in America , 2 (2), 70–79.
  • Barot, S. , Lata, J. C. , & Lacroix, G. (2012). Meeting the relational challenge of ecological engineering within ecological sciences. Ecological Engineering , 18 , 13–23.
  • Beck, U. (2010). Climate for change, or how to create a green modernity? Theory, Culture & Society , 27 (2–3), 254–266.
  • Bergandi, D. (2011). Multifaceted ecology between organicism, emergentism and reductionism. In A. Schwarz & K. Jax (Eds.), Ecology revisited: Reflecting on concepts, advancing science (pp. 31–44). Springer.
  • Berger, J. (Ed.). (1990). Environmental restoration . Island Press.
  • Bergen, S. D. , Bolton, S. M. , & Fridley, J. L. (2001). Design principles for ecological engineering. Ecological Engineering , 18 (2), 201–210.
  • Bloch, E. (1985). Das Prinzip Hoffnung . Suhrkamp.
  • Blok, V. , & Gremmen, B. (2016). Ecological innovation: Biomimicry as a new way of thinking and acting ecologically. Journal of Agricultural Environmental Ethics , 29 (2), 203–217.
  • BĂśhme, G. , & Grebe, J. (1985). Soziale Naturwissenschaft—Über die wissenschaftliche Bearbeitung des Stoffwechsels Mensch—Natur. In G. BĂśhme & E. Schramm (Eds.), Soziale Naturwissenschaft: Wege zu einer Erweiterung der Ökologie (pp. 19–41). Suhrkamp.
  • Bookchin, M. (1980). Toward an ecological society . Black Rose Books.
  • Bookchin, M. (1986). Post-scarcity anarchism . Black Rose Books.
  • Brundtland, G. B. (1987). Report of the World Commission on Environment and Development: Our common future [Document A/42/427]. United Nations General Assembly.
  • Cavicchioli, R. , Ripple, W. J. , Timmis, K. N. , Azam, F. , Bakken, L. R. , Baylis, M. , Behrenfeld, M. J. , Boetius, A. , Boyd, P. W. , Classen, A. T. , Crowther, T. W. , Danovaro, R. , Foreman, C. M. , Huisman, J. , Hutchins, D. A. , Jansson, J. K. , Karl, D. M. , Koskella, B. , Welch, D. B. M. , & Webster, N. S. (2019). Scientists’ warning to humanity: Microorganisms and climate change. Nature Reviews Microbiology , 17 , 569–586.
  • Chou, W.–C. , Lin, W.–T. , & Lin, C.–Y. (2007). Application of fuzzy theory and PROMETHEE technique to evaluate suitable ecotechnology method: A case study in Shihmen Reservoir Watershed, Taiwan. Ecological Engineering , 31 , 269–280.
  • Clark, T. (2012). Scale: Derangements of scale. In T. Cohen (Ed.), Theory in the era of climate change (pp. 148–166). Open Humanities Press.
  • Cohn, M. A. , Frederickson, B. L. , Brown, S. L. , Mickels, J. A. , & Conway, A. M. (2009). Happiness unpacked: Positive emotions increase life satisfaction by building resilience. Emotion , 9 (3), 361–368.
  • Gough, A., & Gough, N. (2014). The denaturation of environmental education: Exploring the role of ecotechnologies [Conference paper]. National Conference of the Australian Association for Environmental Education, Sustainability: Smart Strategies for the 21C, Hobart, Tasmania.
  • Graedel, T. E., & Allenby, B. (2010). Industrial ecology and sustainable engineering (International ed.). Pearson.
  • Greer, J. M. (2009). The ecotechnic future: Envisioning a post-peak world. New Society.
  • Grönlund, E., Barthelson, M., & Englund, A. (2021). The creation of independent, problem solving students—The pedagogic legacy of Dr. Lars Thofelt in sustainability teaching at Mid Sweden University [Ecotechnology Working Paper 2021-1a, Inst. f. Ekoteknik och hållbart byggande]. Mid Sweden University.
  • Grönlund, E., Barthelson, M., Englund, A., Carlman, I., Fröling, M., Jonsson, A., & Van den Brink, P. (2014, June 18–20). Ekoteknik (Ecotechnics/ecotechnology)—30 years of experience in interdisciplinary education [Conference session]. Proceedings of the 20th International Sustainable Development Research Conference, Trondheim. Norwegian University of Science and Technology.
  • Gross, M. (2010). Ignorance and surprise: Science, society, and ecological design. MIT Press.
  • Haddaway, N. R., McConville, J., & Piniewski, M. (2018). How is the term “ecotechnology” used in the research literature? A systematic review with thematic synthesis. Ecohydrology & Hydrobiology, 18(3), 247–261.
  • Hagen, J. (2021). Eugene and Howard Odum. Oxford Bibliographies.
  • Herrera, A. O., Scolnik, H. D., Chichilnisky, G., Gallopin, G. C., Hardoy, J. E., Mosovich, D., Oteiza, E., de Romero Brest, G. L., Suarez, C. E., & Talavera, L. (1976). Catastrophe or new society? A Latin American world model. International Development Research Centre.
  • Holling, C. S. (1973). Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4, 1–23.
  • Hörl, E. (2011). Die technologische Bedingung: Zur Einführung. In E. Hörl (Ed.), Die technologische Bedingung: Beiträge zur Beschreibung der technischen Welt (pp. 7–53). Suhrkamp.
  • Huber, J. (1986). Die verlorene Unschuld der Ökologie. Fischer.
  • Hughes, T. P. (2004). Human-built world: How to think about technology and culture. University of Chicago Press.
  • Illich, I. (2014). Selbstbegrenzung: Eine politische Kritik der Technik. C. H. Beck. (Original work published 1975)
  • Ingold, T. (2000). The perception of the environment. Routledge.
  • Ishida, E., & Furukawa, R. (2013). Nature technology: Creating a fresh approach to technology and lifestyle. Springer.
  • Jahn, T., Hummel, D., Drees, L., Liehr, S., Lux, A., Mehring, M., Stieß, I., Völker, C., Winkler, M., & Zimmermann, M. (2020). Sozial-ökologische Gestaltung im Anthropozän. Gaia, 29(2), 93–97.
  • Jamison, A. (2001). The making of green knowledge: Environmental politics and cultural transformation. Cambridge University Press.
  • Jørgensen, S. E., & Nielsen, S. N. (1996). Application of ecological engineering principles in agriculture. Ecological Engineering, 7, 373–381.
  • Jørgensen, U. (2001). Greening of technology and ecotechnology. In N. Smelser & P. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 6393–6396). Elsevier.
  • Kangas, P., & Adey, W. (1996). Mesocosms and ecological engineering. Ecological Engineering, 6, 1–5.
  • Kasprzak, P., Krienitz, L., & Koschel, R. (1993). Biomanipulation: A history of success and disappointment. In R. de Bernardi, R. Pagnotta, & A. Pugnetti (Eds.), Strategies for lake ecosystems beyond 2000 (pp. 151–169). Memorie dell’Istituto Italiano di Idrobiologia.
  • Korhonen, J. (2005). Theory of industrial ecology: The case of the concept of diversity. Progress in Industrial Ecology, 2(1), 35–72.
  • Leff, E. (1986). Ecotechnological productivity: A conceptual basis for the integrated management of natural resources. Social Science Information, 25(3), 681–702.
  • Leff, E. (1994). Ecología y capital: Racionalidad ambiental, democracia participativa y desarrollo sustentable. Siglo XXI Editores.
  • Li, X. (2018). Industrial ecology and industry symbiosis for environmental sustainability. Palgrave Pivot.
  • Ma, S. (1988). Development of agro-ecological engineering in China. In S. Ma, A. Jiang, R. Xu, & D. Li (Eds.), Proceedings of the International Symposium on Agro-Ecological Engineering (pp. 1–13). Ecological Society of China.
  • Mahner, M., & Bunge, M. (2000). Philosophische Grundlagen der Biologie. Springer.
  • Martin, J. F., Roy, E. D., & Diemont, S. A. (2010). Traditional ecological knowledge (TEK): Ideas, inspiration, and designs for ecological engineering. Ecological Engineering, 36(7), 839–849.
  • Marx, K. (1964). Kritik der politischen Ökonomie: Der Gesamtprozeß der kapitalistischen Produktion (Vol. 3; F. Engels, Ed.). Dietz Verlag.
  • Maxwell, J., & Costanza, R. (1989). An ecological economics for ecological engineering. In W. J. Mitsch & S. E. Jørgensen (Eds.), Ecological engineering: An introduction to ecotechnology (pp. 57–77). Wiley.
  • McHarg, I. L. (1971). Design with nature. Natural History Press/Doubleday.
  • McKibben, B. (1990). The end of nature. Anchor Books.
  • Meadows, D. H., Meadows, D. L., Randers, J., & Behrens, W. W., III. (1972). The limits to growth. Universe Books.
  • Mersch, D. (2018). Ökologie und Ökologisierung. Internationales Jahrbuch für Medienphilosophie, 4(1), 187–220.
  • Miller, J. H. (2012). Ecotechnics: Ecotechnological Odradek. In T. Cohen (Ed.), Telemorphosis: Theory in the era of climate change (Vol. 1, pp. 65–103). University of Michigan Library.
  • Mitcham, C. (1994). Thinking through technology: The path between engineering and philosophy. University of Chicago Press.
  • Mitsch, W. J. (1992). Landscape design and the role of created, restored and natural riparian wetlands in controlling nonpoint source pollution. Ecological Engineering, 1(1–2), 27–47.
  • Mitsch, W. J. (2012). What is ecological engineering? Ecological Engineering, 45, 5–12.
  • Mitsch, W. J., & Jørgensen, S. E. (1989). Ecological engineering: Introduction to ecotechnology. In W. J. Mitsch & S. E. Jørgensen (Eds.), Ecological engineering: An introduction to ecotechnology (pp. 3–12). Wiley.
  • Mittelstraß, J. (1992). Leonardo-Welt: Über Wissenschaft, Forschung und Verantwortung. Suhrkamp Verlag.
  • Moll, P. (1991). From scarcity to sustainable futures. Peter Lang.
  • Moore, J. W. (2015). Capitalism in the web of life. Verso.
  • Mumford, L. (1956). Prospect. In W. L. Thomas (Ed.), Man’s role in changing the face of the earth—International Symposium Wenner-Gren Foundation for Anthropological Research (pp. 1141–1152). University of Chicago Press.
  • Nancy, J.-L. (1991). Der Preis des Friedens: Krieg, Recht, Souveränität—techné. Lettre International, 14, 34–45.
  • Nancy, J.-L. (2011). Von der Struktion. In E. Hörl (Ed.), Die technologische Bedingung: Beiträge zur Beschreibung der technischen Welt (pp. 54–72). Suhrkamp.
  • Naveh, Z. (1982). Landscape ecology as an emerging branch of human ecosystem science. In F. A. Macfadyen (Ed.), Advances in ecological research (pp. 189–237). Academic Press.
  • Odum, E. P. (1984). The mesocosm. BioScience, 34(9), 558–562.
  • Odum, H. T. (1962). Man and the ecosystem. In P. E. Waggoner & J. D. Ovington (Eds.), Proceedings of the Lockwood conference on the suburban forest and ecology (pp. 57–75). Connecticut Agricultural Experiment Station.
  • Odum, H. T. (1972). Ecosystem structure and function. Oregon State University Press.
  • Odum, H. T. (1989). Ecological engineering and self-organization. In W. J. Mitsch & S. E. Jørgensen (Eds.), Ecological engineering: An introduction to ecotechnology (pp. 79–101). Wiley.
  • Odum, H. T., Wojcik, W., Pritchard, L., Jr., Ton, S., Delfino, J. J., Wojcik, M., Patel, J. D., Leszczynski, S., Doherty, S. J., & Stasik, J. (2000a). Heavy metals in the environment: Using wetlands for their removal. Lewis Publishers.
  • Odum, H. T., Doherty, S., Scatena, F., & Kharecha, P. (2000b). Emergy evaluation of reforestation alternatives. Forest Science, 46(4), 521–530.
  • Odum, H. T., & Odum, B. (2003). Concepts and methods of ecological engineering. Ecological Engineering, 20(5), 339–361.
  • Oesterreich, J. (Ed.). (2001). Eco-technology. Trialog, 71(4), 2.
  • Qu, G. (2020). Preface to the inaugural issue of Environmental Science & Ecotechnology. Environmental Science & Ecotechnology, 1, 1.
  • Radkau, J. (2014). The age of ecology: A global history. Polity.
  • Rip, A., & Voß, J. P. (2013). Umbrella terms as mediators in the governance of emerging science and technology. Science, Technology & Innovation Studies, 9(2), 39–59.
  • Ross, M. R. V., Bernhardt, E. S., Doyle, M. W., & Heffernan, J. B. (2015). Designer ecosystems: Incorporating design approaches into applied ecology. Annual Review of Environment and Resources, 40, 419–443.
  • Scholz, R. W. (2011). Environmental literacy in science and society: From knowledge to decisions. Cambridge University Press.
  • Schönborn, A., & Junge, R. (2021). Refining ecological engineering in the context of circular economy and sustainable development. Circular Economy and Sustainability, 1, 375–394.
  • Schumacher, E. F. (1973). Small is beautiful: Economics as if people mattered. Blond & Briggs.
  • Schwarz, A., & Jax, K. (2011). Etymology and original sources of the concept “ecology.” In A. Schwarz & K. Jax (Eds.), Ecology revisited: Reflecting on concepts, advancing science (pp. 145–148). Springer.
  • Sears, P. B. (1956). The process of environmental change by man. In W. L. Thomas (Ed.), Man’s role in changing the face of the earth—International Symposium Wenner-Gren Foundation for Anthropological Research (pp. 471–484). University of Chicago Press.
  • Stokes, D. (1998). Pasteur’s quadrant: Basic science and technological innovation. Brookings Institution Press.
  • Straškraba, M. (1993). Ecotechnology as a new means for environmental management. Ecological Engineering, 2(4), 311–331.
  • Thofelt, L., & Englund, A. (1995). Preface. In L. Thofelt & A. Englund (Eds.), Proceedings from Ecotechnics 95—International Symposium on Ecological Engineering (pp. xv–xviii). Mid Sweden University.
  • Todd, J. (1997/2005). The new alchemists. In C. Zelov, P. Cousineau, & B. Danitz (Eds.), Design outlaws on the ecological frontier (pp. 172–183). Knossus Publishing.
  • Todd, J., & Josephson, B. (1996). The design of living technologies for waste treatment. Ecological Engineering, 6, 109–136.
  • Todd, P. M., Gigerenzer, G., & ABC Research Group. (2012). Ecological rationality: Intelligence in the world. Oxford University Press.
  • Trepl, L. (1987). Geschichte der Ökologie. Athenäum Verlag.
  • Uhlmann, D. (1983). Entwicklungstendenzen der Ökotechnologie. Wissenschaftliche Zeitschrift der Technischen Universität Dresden, 32(6), 109–116.
  • UN Conference on the Human Environment. (1973, June 5–16). Report of the United Nations Conference on the Human Environment [Document A/Conf.48/14/Ref.1]. United Nations.
  • Yu, X., & Zhang, Y. (2021). An economic mechanism of industrial ecology: Theory and evidence. Structural Change and Economic Dynamics, 58, 14–22.
  • Zelov, C., Cousineau, P., & Danitz, B. (Eds.). (2001). Design outlaws on the ecological frontier. Knossus Project.
  • Zhang, R., Ji, W., & Lu, B. (1998). Emergence and development of agro-ecological engineering in China. Ecological Engineering, 11, 17–26.

Related Articles

  • The Anthropocene

Printed from Oxford Research Encyclopedias, Environmental Science. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 10 June 2024

  • Cookie Policy
  • Privacy Policy
  • Legal Notice
  • Accessibility
  • [185.126.86.119]
  • 185.126.86.119

Character limit 500 /500

Examples

Strong Thesis Statement

Ai generator.

thesis statement recycling paper

Navigating the vast ocean of ideas, every writer seeks that anchor – a robust thesis statement. This statement not only provides direction to the essay but also gives it a foundation. It distills the essence of the argument into a concise format, guiding the reader and asserting the writer’s perspective. To craft a compelling thesis statement, one must grasp its intricacies and nuances. This guide illuminates the path to creating magnetic thesis statements , backed by compelling examples and actionable tips.

What is a Powerful Thesis Statement?

A powerful thesis statement is a concise, specific declaration that presents the main point or argument of a piece of writing. It acts as a roadmap for the reader, outlining the central theme and position the writer will adopt or argue for. A potent final thesis statement not only states a fact but also takes a stance, establishes a perspective, and gives a hint about the line of reasoning the essay will adopt.

What is an Example of a Strong Thesis Statement?

“While global climate change is influenced by natural phenomena, predominant evidence indicates that human activities, especially the emission of greenhouse gases, are the primary contributors to the accelerated rate of global warming experienced in the last century.”

This statement presents a clear position, is debatable (therefore, not a mere fact), and hints at the reasoning that will be laid out in the essay or paper.

100 Strong Thesis Statement Examples

Strong Thesis Statement Examples

Size: 278 KB

Crafting a robust thesis statement is pivotal for any successful essay or research paper. This statement should encapsulate your main argument, presenting readers with a clear insight into your stance and the direction of your work.   you may also be interested to browse through our other thesis statement for research paper . Below are some strong thesis statement examples that provide a firm foundation for compelling arguments:

  • “The rise of electronic communications in the modern era has diminished the significance of face-to-face interaction.” This highlights the impact of technology on human relationships.
  • “Despite its perceived threats, artificial intelligence can be a beneficial tool when used ethically and can revolutionize sectors such as healthcare, finance, and education.” Here, AI’s advantages are emphasized despite potential pitfalls.
  • “Mandatory voting laws can potentially undermine democratic processes by forcing uninformed voters to make decisions.” This statement questions the efficiency of compulsory voting.
  • “Organic foods aren’t necessarily healthier than non-organic ones, but their production is more environmentally friendly and ethical.” A take on the broader implications of organic farming.
  • “The portrayal of women in media has evolved over the decades, yet it still adheres to aged stereotypes.” A comment on gender representation in media.
  • “Modern education must evolve with technological advancements, integrating digital literacy as a core component.” This underscores the importance of technology in today’s education.
  • “While many argue the death penalty serves as a deterrent to crime, studies have shown that states without it have lower murder rates.” A statement countering a popular belief.
  • “The cultural shift towards plant-based diets can lead to positive health outcomes and combat climate change.” This advocates for dietary change for health and environmental reasons.
  • “Remote working, although beneficial for work-life balance, can hinder team cohesion and organizational culture.” A nuanced view on the rise of remote work.
  • “Childhood vaccinations should be mandatory because they prevent outbreaks of contagious diseases, supporting herd immunity.” A statement emphasizing public health.
  • “Banning single-use plastics can drastically reduce ocean pollution and promote sustainable consumer behaviors.” An environmental call to action.
  • “Financial literacy education should be integrated into high school curricula to prepare students for real-world challenges.” Advocating for essential life skills in education.
  • “Despite its historical significance, Christopher Columbus’ celebration ignores the negative impact of his expeditions on indigenous populations.” A call for a more nuanced historical perspective.
  • “Excessive screen time can lead to a myriad of health issues in children, including impaired sleep and developmental issues.” A health-focused stance on technology.
  • “Urban green spaces not only enhance city aesthetics but also promote mental well-being and biodiversity.” Emphasizing the multifaceted benefits of urban greenery.
  • “A four-day workweek can boost productivity, improve mental health, and promote a better work-life balance.” A modern perspective on work culture.
  • “The gender pay gap persists not solely due to discrimination but also societal norms and occupational segregation.” A multifaceted look at wage disparities.
  • “Animal testing, while controversial, has led to numerous medical breakthroughs, but alternative methods should be explored more rigorously.” Balancing the pros and cons of a debated practice.
  • “The digital age’s advent, while promoting connectivity, has also escalated mental health issues due to increased isolation.” A dual-sided view of technology’s impact.
  • “Affirmative action, although divisive, is essential for redressing historical racial and ethnic injustices in higher education.” Advocating for a policy with historical context.
  • “Corporate social responsibility initiatives benefit not only the community but also companies themselves by improving their public image.” An insight into business ethics.
  • “E-sports should be recognized at the Olympic level, given their global popularity and demand for strategic mental agility.” Advocating for the evolving nature of sports.
  • “Experiencing art through virtual reality can democratize access but may diminish the genuine essence of artworks.” Balancing tech advancement with traditional experiences.
  • “The educational system should prioritize teaching emotional intelligence to foster healthier interpersonal relationships and decision-making.” Emphasizing holistic education.
  • “Although nuclear energy presents potential dangers, its efficiency and low carbon footprint make it essential for a sustainable future.” A balanced view on energy resources.
  • “Language learning should be compulsory in schools, fostering global understanding and cognitive development.” Advocating for a global perspective in education.
  • “The gig economy, despite offering flexibility, can undermine workers’ rights and financial security.” A take on modern employment trends.
  • “Fast fashion’s allure, from its affordability to trendiness, masks its detrimental environmental impact and exploitative production methods.” A statement on sustainable consumerism.
  • “Universal basic income can be a solution to growing automation, ensuring financial stability in the evolving job landscape.” A futuristic economic perspective.
  • “While social media platforms foster global connectivity, they can also perpetuate echo chambers and spread misinformation.” Highlighting the double-edged sword of technology.
  • “Mindfulness practices in the workplace can enhance productivity, mental well-being, and job satisfaction.” Advocating for holistic approaches to work.
  • “A holistic approach to criminal justice, focusing on rehabilitation rather than punishment, can lead to reduced recidivism rates.” A call for reform.
  • “Solar and wind energy, given their sustainability and decreasing costs, should be central in future energy policies.” A sustainable view on future energy.
  • “Despite its challenges, homeschooling offers personalized education, fostering in-depth knowledge and independence.” A take on alternative education methods.
  • “Space exploration, beyond its scientific merits, can unite humanity under a shared goal and perspective.” A broader perspective on space endeavors.
  • “Cultural appropriation, when done disrespectfully, not only offends but can erase the significance of traditional practices.” A statement on cultural sensitivity.
  • “Declining bee populations, if unchecked, threaten global food systems and biodiversity.” Emphasizing an often-overlooked environmental issue.
  • “Biographies, while insightful, can sometimes unintentionally perpetuate biases and inaccuracies of their subjects.” A take on the nature of historical recounting.
  • “The rise of autonomous vehicles can revolutionize urban infrastructure and sustainability but introduces new ethical dilemmas.” Balancing innovation with ethics.
  • “While international tourism boosts economies, it’s essential to balance it with local culture and environment preservation.” A sustainable view on tourism.
  • “Introducing coding and digital literacy from primary education prepares students for the modern workforce and fosters logical thinking.” Advocating for a tech-savvy curriculum.
  • “Consumerism during holidays, while boosting the economy, detracts from genuine cultural and familial significance.” A reflection on modern-day celebrations.
  • “Genetically modified organisms (GMOs), when regulated, can address food insecurity without compromising ecological balance.” A stance on biotechnology.
  • “Telemedicine, propelled by the pandemic, can revolutionize healthcare accessibility but also poses challenges in personal rapport and diagnosis accuracy.” A modern medical perspective.
  • “Biodiversity’s decline, more than just species loss, compromises ecosystem services and resilience.” Highlighting the broad implications of species conservation.
  • “3D printing in medicine holds the potential to revolutionize transplants and prosthetics but raises ethical concerns.” On the frontier of medical technology.
  • “Sports not only foster physical health but also cultivate teamwork, discipline, and resilience.” A multifaceted view of sports’ significance.
  • “Blockchain, beyond cryptocurrency, can enhance transparency and efficiency in sectors like supply chain and public records.” Broadening the scope of a tech trend.
  • “While antibiotics revolutionized medicine, their overuse threatens a rise in resistant superbugs, necessitating judicious use.” A cautionary medical perspective.
  • “Hydroponic and vertical farming, leveraging urban spaces, can meet food demands sustainably.” Innovations in agriculture
  • “Digital detox, in an era of constant connectivity, can rejuvenate mental well-being and restore personal relationships.” Emphasizing the need for tech boundaries.
  • “Mandatory voting laws, though seemingly undemocratic, can foster a more engaged and informed citizenry.” A new perspective on electoral participation.
  • “Plant-based diets, beyond personal health benefits, play a pivotal role in addressing climate change and resource conservation.” Food’s role in sustainability.
  • “Augmented reality (AR) in education can make learning immersive but requires careful integration to not overshadow foundational skills.” Balancing tech with foundational learning.
  • “Remote work, while offering flexibility, requires robust digital infrastructure and new strategies to maintain team cohesion.” Navigating the new work paradigm.
  • “Music therapy has proven benefits in cognitive rehabilitation, emotional well-being, and even physical recovery.” The therapeutic powers of melodies.
  • “Zero-waste lifestyles, more than a trend, embody a critical approach to sustainable consumption and waste management.” Advocating for conscious living.
  • “Classical literature, despite being rooted in bygone eras, offers timeless insights into human nature and society.” The enduring power of classics.
  • “Urban green spaces, beyond recreational benefits, enhance air quality, biodiversity, and even property values.” A case for urban planning with nature.
  • “Affordable housing initiatives, while challenging to implement, can revolutionize urban landscapes and socioeconomic equity.” Addressing urbanization challenges.
  • “Virtual reality (VR) in therapy holds potential for exposure treatment, phobia management, and even PTSD rehabilitation.” A dive into therapeutic tech innovations.
  • “Fair trade practices not only ensure equitable pay but also promote sustainable farming and community development.” Making a case for conscious consumerism.
  • “Pet therapy has demonstrated efficacy in reducing stress, anxiety, and even improving cardiovascular health.” Pets’ unrecognized therapeutic roles.
  • “Desalination, despite high costs, is a promising solution to freshwater scarcity in coastal regions.” Addressing global water challenges.
  • “Blended learning models, combining traditional and online methods, cater to diverse learning styles and enhance engagement.” Reinventing modern education.
  • “Incorporating mindfulness practices in schools can significantly reduce stress, increase focus, and foster emotional intelligence among students.” For holistic education.
  • “Agroforestry, blending agriculture with forestry, offers a sustainable approach to land use, ensuring productivity and biodiversity.” A green thumb approach to farming.
  • “While cryptocurrency promises decentralization and financial inclusivity, it also poses significant volatility and regulatory challenges.” A balanced financial perspective.
  • “Emphasizing soft skills in education, from empathy to problem-solving, prepares students for modern collaborative workspaces.” Beyond the traditional curriculum.
  • “Local farmers’ markets, more than just community hubs, support sustainable agriculture and strengthen local economies.” A fresh take on shopping sustainably
  • “Community gardens not only provide fresh produce but also foster neighborhood ties and promote sustainable practices.” Cultivating more than just vegetables.
  • “Artificial intelligence in healthcare can streamline diagnosis and treatment but raises ethical concerns about data privacy and decision-making autonomy.” The double-edged sword of AI.
  • “Multilingual education not only promotes linguistic skills but also enhances cognitive flexibility and cultural understanding.” Celebrating linguistic diversity.
  • “Adopting renewable energy sources isn’t just environmentally prudent; it can drive job creation and reduce dependency on fossil fuels.” A brighter, greener future.
  • “While e-commerce offers convenience, supporting local businesses is crucial for community sustainability and personalized shopping experiences.” Balancing online with local.
  • “Stem cell research, despite controversy, has the potential to revolutionize medicine, offering treatments for previously incurable conditions.” Pushing medical boundaries.
  • “Intergenerational programs, bringing together the young and old, can bridge cultural gaps and combat age-related stereotypes.” Mending age-old divides.
  • “Public transportation infrastructure investments not only ease urban congestion but also reduce carbon emissions and foster social equity.” A move towards sustainable mobility.
  • “Incorporating financial literacy programs in school curricula prepares students for real-world challenges and fosters responsible money management.” The value of early financial education.
  • “Biophilic design in urban planning, integrating nature with architecture, can enhance residents’ well-being and reduce urban heat islands.” Designing with nature in mind.
  • “The circular economy model, emphasizing recycling and reuse, is not just eco-friendly but also a sustainable business strategy.” Rethinking consumption patterns.
  • “Investing in mental health services in workplaces can increase productivity, reduce absenteeism, and foster overall well-being among employees.” Prioritizing mental well-being at work.
  • “Personalized learning, tailoring education to individual needs, can cater to diverse learners and elevate overall educational outcomes.” A customized approach to education.
  • “Restorative justice practices, focusing on reconciliation, can transform traditional punitive systems, fostering community healing and offender rehabilitation.” Rethinking justice.
  • “Microfinance initiatives not only provide capital to the underserved but also empower communities, especially women, towards financial independence.” Small loans, significant impacts.
  • “Nature-based tourism, if managed responsibly, can boost local economies while promoting environmental conservation.” Travelling with purpose.
  • “Public libraries, beyond being knowledge repositories, act as community hubs, fostering inclusivity, and lifelong learning.” The unsung heroes of communities.
  • “Co-working spaces, beyond their modern appeal, facilitate networking, foster collaboration, and can even promote work-life balance.” Redefining the modern workspace.
  • “Inclusion of arts in education can stimulate creativity, enhance critical thinking, and foster holistic intellectual development.” Championing the arts.
  • “Edible landscaping, integrating food crops with ornamental plants, can transform urban spaces into productive, sustainable ecosystems.” Gardens that feed communities
  • “Cultural exchange programs at the student level can promote global understanding, fostering peace and cooperation for future generations.” Bridging global divides early on.
  • “Telehealth, although born out of necessity in many cases, has the potential to revolutionize healthcare accessibility, especially in remote areas.” Healthcare at your fingertips.
  • “Urban farming initiatives not only provide local produce but also combat the urban heat island effect and promote biodiversity.” Cityscapes turned green.
  • “Ethical consumerism isn’t just a trend; it can drive businesses to adopt sustainable and socially responsible practices.” Voting with your wallet.
  • “The integration of mindfulness practices in schools can enhance student focus, reduce stress, and promote emotional intelligence.” Breathing life into education.
  • “Pet therapy, beyond the evident joys, can significantly aid in emotional healing, reducing anxiety and depression.” Healing with a paw or a purr.
  • “Preserving indigenous languages is essential, not only for cultural heritage but also for the unique worldviews they offer.” Linguistic treasures of humanity.
  • “Supporting women in STEM fields is not just about gender equality; it enriches research and drives innovation through diverse perspectives.” Science, enhanced by diversity.
  • “Green rooftops, apart from being aesthetically pleasing, can significantly reduce energy consumption and support urban wildlife habitats.” Elevating green solutions.
  • “Incorporating urban greenways promotes physical health, fosters community interactions, and enhances the overall livability of cities.” Paths to a healthier urban future.

A strong thesis statement paves the way for well-researched and impactful discussions, guiding readers through the intended narrative of the work.

Strong vs Weak Thesis Statement Examples

An impactful thesis statement captures the essence of an argument concisely, presenting a clear stance on an issue. On the other hand, a weak thesis might be vague, lacking a definitive point of view. Comparing strong vs. weak thesis statements helps in understanding the difference in depth, clarity, and precision. Below is a table illustrating this contrast:

Strong Thesis Statement Weak Thesis Statement
Childhood obesity can be directly linked to the consumption of sugary beverages and fast food. Childhood obesity is a problem.
Online education offers flexibility and a personalized learning experience. Online education is good.
Renewable energy sources like wind and solar can help in reducing global carbon emissions. We should use renewable energy.
Mandatory military service can instill discipline and a sense of responsibility in youth. Military service is beneficial.
The prohibition era in the 1920s led to the rise of organized crime. Prohibition had some bad outcomes.
Veganism can lead to health benefits and a lower carbon footprint. Veganism is better than other diets.
Shakespeare’s “Macbeth” delves into the destructive nature of ambition. “Macbeth” is a play about ambition.
Artificial intelligence will revolutionize healthcare diagnostics and patient care. AI will change healthcare.
The digital divide exacerbates socioeconomic disparities in urban communities. Technology differences are evident.
Mandatory vaccinations are essential for public health and herd immunity. Vaccinations are important.

Strong Thesis Statement Examples for Argumentative Essay

A potent argumentative essay thesis statement explicitly presents an argument, setting the foundation for persuasion. Here are ten examples:

  • The death penalty is an outdated form of punishment and should be abolished due to its potential for irreversible mistakes.
  • Animal testing is not only cruel but also ineffective and should be replaced by alternative research methods.
  • GMOs, when regulated and appropriately used, can help combat world hunger and reduce pesticide usage.
  • The gender wage gap is not only a matter of equality but also an economic imperative for a progressive society.
  • Censoring media under the guise of national security restricts freedom of expression and curtails democratic discourse.
  • The normalization of gig economy jeopardizes worker rights, leading to exploitation.
  • The Second Amendment shouldn’t be an excuse against common-sense gun regulations.
  • The electoral college system is archaic and does not truly represent the democratic wishes of the modern American populace.
  • Legalizing marijuana can aid in reducing the burden on the legal system and provide medicinal and economic benefits.
  • Privatization of essential services like water and electricity often leads to monopolies that neglect public welfare.

Strong Thesis Statement Examples for High School

Crafting a compelling thesis statement in high school lays the groundwork for effective argumentation and critical thinking. Here are ten examples:

  • The influence of social media on teenagers’ self-esteem has more negative implications than positive ones.
  • The “Great Gatsby” is not just a tragic love story but a portrayal of the decay of the American Dream.
  • Extracurricular activities in high school are essential for holistic development and should be equally emphasized as academics.
  • The portrayal of women in classic literature often reflects societal biases, as evident in Austen’s novels.
  • Adolescents should have a say in their educational curriculum to foster engagement and passion.
  • Cyberbullying in high schools is a by-product of technology misuse and requires stringent school policies for prevention.
  • Early school start times adversely affect student health and academic performance.
  • Parental involvement, while beneficial, can become counterproductive if excessively intrusive in high school education.
  • Shakespeare’s “Romeo and Juliet” exemplifies the dangers of impulsive decisions in youth.
  • Implementing financial literacy courses in high school is crucial for preparing students for adulthood.

Strong Thesis Statement Examples for Cyberbullying

Cyberbullying is a contemporary issue with profound effects. A strong thesis on this topic should address the nuances of digital harassment:

  • The rise of social media platforms has inadvertently created an arena for cyberbullying, impacting mental health.
  • Anonymity on the internet emboldens bullies, making it essential to have stricter online identity verifications.
  • Cyberbullying can lead to long-term psychological trauma, highlighting the need for robust support systems.
  • Schools need to adapt and incorporate cyberbullying awareness in their curriculum, preparing students for digital citizenship.
  • Legislation against cyberbullying is not just a necessity but a testament to recognizing online spaces as extensions of our society.
  • Online platforms should bear a shared responsibility in combating cyberbullying through better content monitoring.
  • The normalization of trolling culture on the internet is a gateway to more severe forms of cyberbullying.
  • The psychological impacts of cyberbullying are intensified due to the permanence and ubiquity of digital content.
  • Parents and guardians should be educated about the signs of cyberbullying to protect and support their children effectively.
  • Digital literacy programs should incorporate cyberbullying prevention as a foundational element to foster respectful online interactions.

How to Start a Strong Thesis Statement?

Understanding the Essence: Before you even begin, understand that a thesis statement encapsulates the main point of your paper in a concise manner. It’s not just a simple statement; it needs to be arguable and definitive.

1. Choose a Clear Topic: To craft a potent thesis statement, you first need a clear and specific topic. This could be anything from the subject of your research to the argument you wish to defend or refute.

2. Take a Stand: Your thesis statement shouldn’t merely state a fact. Instead, it should take a position or make an assertion.

3. Be Precise: Narrow down your statement to be as specific as possible. Avoid vague words and ensure your statement clearly expresses what you intend to discuss.

Good vs. Strong Thesis Statement

A good thesis statement might be clear and take a position, but a strong thesis statement goes further. It’s:

  • Debatable: There should be a genuine controversy surrounding your statement, not something universally agreed upon.
  • Specific: It uses concrete facts, figures, or points and doesn’t rely on generalities.
  • Concise: It gets to the point quickly, avoiding unnecessary words.
  • Informed: It shows a deep understanding of the topic.

For instance, a good statement might be: “Reading helps brain development in children.” A strong version would be: “Regular reading of literature from diverse genres between the ages 5 to 10 significantly boosts neural connectivity and cognitive flexibility in children.”

How to Write a Strong Thesis Statement?

1. Ask Questions: Start by posing questions about your topic. Your answers might form a preliminary version of your thesis statement.

2. Avoid the Obvious: Don’t just state a fact. Push yourself to think critically about the subject.

3. Use Strong Language: Avoid wishy-washy language. Use definitive language that shows you are making an assertion.

4. Test it Out: Before finalizing, test your thesis. Can you argue against it? If not, it might not be strong enough.

5. Revise as Necessary: Your first draft of a thesis statement won’t always be the best one. As your paper evolves, ensure that your thesis evolves with it.

Tips for Writing a Strong Thesis Statement

  • Stay Focused: Your thesis should be specific enough to stay within the boundaries of your paper.
  • Position it Right: Typically, your thesis statement should be the last one or two sentences in your introductory paragraph.
  • Stay Objective: A thesis statement shouldn’t be a subjective judgment. Instead, it should be based on evidence.
  • Seek Feedback: Before finalizing, get opinions from peers or mentors. Fresh eyes might catch ambiguities or areas of improvement.
  • Avoid ClichĂŠs: Make your statement original, even if the topic is common. Avoid predictable thoughts and challenge existing viewpoints if possible.

Remember, your thesis statement is the backbone of your paper. Invest time in crafting a strong one, and your paper will stand tall and clear.

Twitter

Text prompt

  • Instructive
  • Professional

10 Examples of Public speaking

20 Examples of Gas lighting

Purdue Online Writing Lab Purdue OWLÂŽ College of Liberal Arts

APA Sample Paper

OWL logo

Welcome to the Purdue OWL

This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

Note:  This page reflects the latest version of the APA Publication Manual (i.e., APA 7), which released in October 2019. The equivalent resource for the older APA 6 style  can be found here .

Media Files: APA Sample Student Paper  ,  APA Sample Professional Paper

This resource is enhanced by Acrobat PDF files. Download the free Acrobat Reader

Note: The APA Publication Manual, 7 th Edition specifies different formatting conventions for student  and  professional  papers (i.e., papers written for credit in a course and papers intended for scholarly publication). These differences mostly extend to the title page and running head. Crucially, citation practices do not differ between the two styles of paper.

However, for your convenience, we have provided two versions of our APA 7 sample paper below: one in  student style and one in  professional  style.

Note: For accessibility purposes, we have used "Track Changes" to make comments along the margins of these samples. Those authored by [AF] denote explanations of formatting and [AWC] denote directions for writing and citing in APA 7. 

APA 7 Student Paper:

Apa 7 professional paper:.

Thesis Statement Generator: Free & Precise

Looking for a thesis statement generator? The free online tool we offer will make a thesis in no time! Our thesis sentence generator will suit argumentative, informative, and comparative essays. All you need to do is look at the examples and add the necessary information.

☑️ How to Use the Thesis Generator?

  • 📝 Essay Thesis
  • ✍️ Research Paper Thesis
  • 📜 Dissertation Thesis
  • 🙊 Thesis For a Speech

💡 Make a Thesis with Our Tips

🏆 10 best thesis generators, ⭐ thesis statement maker: the benefits, 🔗 references, 🔧 thesis generator: what is it.

Sometimes it can be challenging to come up with a topic, research question, or a thesis statement for your paper. An excellent solution is to use online topic makers, problem statement generators, and thesis topic generators, such as ours! Our free online generator will help you create the perfect thesis statement! Follow the steps below to get thesis statements relating to your topic:

  • Introduce your topic. It can also be the title of your paper (e.g., the benefits of online education).
  • State the main idea about this topic. It is the specific point of view that you will discuss in your paper (e.g., online learning is beneficial)
  • Make an argument supporting your point of view. It must be a strong and valid argument. Don't claim something that you can't back with facts (e.g., online learning is flexible)
  • Make another argument supporting your point of view (e.g., online learning is affordable).
  • Make an argument against your point of view. Make sure you don't just dismiss it, but acknowledge its validity (e.g., online learning is not always taken seriously)
  • Decide on the topic of your paper.
  • Think about the main idea that you will express in your paper. It will also be the conclusion.
  • Choose arguments that can support your point of view. Also, think of at least one counterargument. It will help you discuss your topic better.
  • Enter this information into respective fields. Use short sentences. Do not use punctuation or capital letters.
  • Click on the "Generate Thesis" button to get samples.
  • Choose the sample you like best!

📍 Why Make a Thesis Statement?

You might have already heard about theses and thesis statements. Well, the main difference is: a thesis is the key point or argument of your assignment. And the thesis statement is this point expressed in one sentence.

Here’s one crucial thing you should always keep in mind when you write this sentence: it should meet the professor’s requirements.

There are two types of thesis statements:

  • Direct. It states the exact reasons for your paper. For example, "I do not support vegan lifestyle because animals do not have feelings, this lifestyle is too expensive, and a vegan diet is not healthy." Such a thesis sentence would tell the reader what each body paragraph or section is going to be about.
  • Indirect. Unlike the direct thesis statement, it does not state clear arguments. Here’s the sample: "I do not support vegan lifestyle for three reasons." The fact “I do not support vegan lifestyle” is the topic, and "three reasons" represent an indirect thesis statement. The assignment will contain these three reasons.

Most kinds of academic papers require a thesis statement, which can also be considered as your answer to the research question.

Now that you've learned the basics let's see what can help you to create an excellent thesis statement for anything: from history research to a critique paper!

📝 Essay Thesis Statement

You will probably write many essays as a high school or college student. Writing an essay is quite easy: it doesn't require any serious research on your part, and the resulting text is usually short. That's why you choose a narrow thesis statement that you can talk about in 4-5 paragraphs.

Your choice of a thesis statement depends on what type of essay you're writing. Here are some examples:

In an expository essay , you explain the topic logically, using your analytical skills. This type of essay relies only on facts, without any reference to the writer's personal opinion. The topic statement is the most critical part of an expository essay. It should be short and manageable so that you can describe it in just a few paragraphs. As you can see from the definition, it also should be based on facts and not on the writer's position. This category includes compare and contrast essays, definition essays , and others:

e.g., While online education is not always taken seriously, it is beneficial because of its flexibility and affordability.

On the contrary, argumentative essays are centered on the writer's personal opinion. This type of essay is also called persuasive because your aim is to persuade people that your idea is right. The thesis statement should reflect this:

e.g., Vegan lifestyle should not be promoted because it's expensive and not healthy.

Note: it's better not to use the word "I," because it may appear as too subjective. Remember: a strong thesis statement means an excellent essay!

✍️ Research Paper Thesis Statement

Unlike essays, research papers require more information, and they are lengthier than essays. That's why a research paper thesis statement should be slightly broader. This way, you make sure that you have a lot to discuss and can demonstrate your more profound knowledge on the topic.

Research paper thesis statements can be simple or more complex, depending on the purpose of your paper. Simple thesis statements can be formulated with the help of the outlines:

Something is true because of these reasons .

The US Constitution is not outdated because it's an integral part of the country's identity.

Despite these counterarguments , something is true.

e.g., Despite not being outdated, the US Constitution needs many amendments to keep up with the changing times.

You can make more complex thesis statements by combining several arguments:

e.g., The US Constitution is not outdated, because it's a part of the country's identity; still, some amendments need to be made.

Remember: it is essential to stay on topic! Avoid including unnecessary and random words into your statement. Our online thesis creator can help you in writing a statement directly connected with your theme.

Our thesis statement generator can help writing a thesis for your research. Create a short, catchy thesis statement, and you are one step closer to completing a perfect research paper!

📜 Dissertation Thesis Statement

Writing a master's thesis or a Ph.D. dissertation is not the same as writing a simple research paper. These types of academic papers are very lengthy. They require extensive analysis of information, as well as your ideas and original research.

Besides, you only have limited time for writing a dissertation, so you'll have to work on it systematically.

That's why it's better to come up with a thesis statement as early as possible . It will help you always stay on topic and not to waste your time on irrelevant information.

A dissertation can have an even broader thesis statement because of how lengthy your work should be. Make sure it's something you can study extensively and from different points of view:

e.g., The use of memory techniques at school can boost children's abilities and revolutionize modern teaching.

Don't forget to include a statement showing why your dissertation is interesting and relevant!

🙊 Thesis Statement For a Speech

Similarly, the thesis statement for a speech should be catchy and exciting . If you include it in the introduction, you will provide your audience with a sense of direction and make it easier to concentrate. The audience will know what to expect of your speech, and they will pay more attention.

Speech, unlike a research paper, includes only the most relevant information . If your speech is based on a paper, use your thesis statement to decide what to leave out. Remember that everything you say should be connected to your thesis statement! This way, you'll make your speech consistent, informative, and engaging.

Another useful tip is to rehearse your speech several times before deciding that it's finished. You may need to make some corrections or even rephrase the thesis statement. Take your time and make sure you do your best!

Now, we will concentrate on your thesis writing. We’ve prepared six tips that would help you to master your thesis statement regardless of the paper type you were assigned to:

  • Formulate your topic. Here’s the secret: the good topic makes half of the success when you write a paper. It defines your research area, the degree of your involvement, and, accordingly, how good will the result be at the end. So what is the topic of an essay? Basically, it’s a phrase that defines the subject of your assignment. Don’t make it too broad or too specific.
  • Determine the key idea. It will help you get an understanding of your essay subject. Think about things you are trying to state or prove. For example, you may write down one main idea; consider a specific point of view that you’re going to research; state some facts and reasons you will use in your assignment, or express your opinion about the issue.
  • Choose the central argument to support your thesis. Make a list of arguments you would use in your essay. This simple task has at least two benefits. First, you will get a clear understanding on what you’re going to write. It will wipe out the writer’s block. Second, gathering arguments for the topic will help you create an outline for your assignment.
  • Generate other arguments to support the thesis. Free thesis generators suggest you proceed with a few arguments that support your topic idea. Don’t forget to prepare some logical evidence!
  • Come up with a counterargument to the main idea. You might find this exercise a bit hard, but still, if you're dreaming of writing an excellent paper, think of another side of the argument. To complete this task, you should conduct preliminary research to find another standpoint and evidence behind it.
  • Provide your thesis statement as early as possible in your paper. If you're writing a short paper, put your thesis in the introductory paragraph. For more extended essays, it is acceptable to write it in the second paragraph. And avoid phrases like, "The point of my essay is…"
  • Make your thesis statement specific. Remember to keep it short, clear, and specific. Check if there are two broad statements. If so, think about settling on one single idea and then proceed with further development. Avoid making it too broad. Your paper won’t be successful if you write three pages on things that do not disclose the topic and are too generic.

Original thesis:

There are serious objections to abortions.

Revised thesis:

Because of the high risk of breast cancer or subsequent childbearing, there should be broadly implemented the informed consent practice that certifies that women are advised of such risks prior to having an abortion.

When writing your thesis, you use words that your audience will understand:

  • Avoid technical language unless you’re writing a technical report.
  • Forget about jargon.
  • Avoid vague words: “exciting,” “interesting,” “usual,” “difficult,” etc.
  • Avoid simply announcing the topic. Share your specific “angle” and show why your point on the issue matter.
  • Do not make judgments that oversimplify complex topics.
  • If you use judgment call in your thesis, don’t forget to specify and justify your reasoning.
  • Don't just report facts. Instead, share your personal thoughts and ideas on the issue.
  • Explain why your point matters. When you’re writing a thesis, imagine that your readers ask you a simple question: “So what?” Instead of writing something general, like "There are a lot of pros and cons of behaviorism", tell your readers why you think the behaviorism theory is better than cognitivist theory.
  • Avoid quotes in your thesis statement. Instead of citing someone, use your own words in the thesis. It will help you to grab the reader's attention and gain credibility. And the last advice: change your thesis as you write the essay. Revise it as your paper develops to get the perfect statement. Now it's time to apply this knowledge and create your own thesis! We believe this advice and tools will be useful in your essay writing!
Thesis Generator Tool Type of paper Free/ Paid Outline option Ads Hints and examples IvyScore
Any Free Yes None Instructions, questionnaire 5 out of 5
Any Free None None Instructions, examples 4 out of 5
Any Free None None Examples, guide 4 out of 5
Persuasive Free Yes None Instructions, questionnaire 4 out of 5
Any Free None None Guides, examples 4 out of 5
Argumentative Free None None Instructions, examples 4 out of 5
Argumentative Free (up to 1000 words/week) Yes None Hints, a short guide on different thesis types 4 out of 5
Any Free None Too many Hints, guide 3 out of 5
Any Free None Too many Hints, samples, thesis statement examples on various topics 3 out of 5
Persuasive, research, compare and contrast Free None None A short guide on each type of thesis, questionnaire 3 out of 5

To ease your writing, we prepared an IvyPanda thesis statement generators. Check the list below:

1. Thesis Statement Generator

Thesis Statement Generator is a simple online tool which will guide you through the thesis statement creation. To get your thesis, you will have to provide the following information: the topic, your personal opinion, the qualification, and reason sentences. Then press the button “My Thesis” to see the final draft, edit it and print or save it on your computer.

Also, you can make an outline for your future paper within a couple of clicks. The tool works with any type of paper.
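
Under the hood, tools of this kind are essentially fill-in-the-blank templates. As a rough illustration only (this is not the actual tool's implementation, and the function name, field names, and wording are all hypothetical), here is a minimal Python sketch of how four inputs like these could be combined into a draft thesis:

```python
# Hypothetical sketch of a template-based thesis generator; the four
# parameters mirror the kinds of inputs the tool described above asks for.
def build_thesis(topic: str, opinion: str, qualification: str, reason: str) -> str:
    """Combine the four user inputs into a one-sentence draft thesis."""
    return f"Although {qualification}, {topic} {opinion} because {reason}."

# Example run with made-up inputs:
print(build_thesis(
    topic="single-use plastics",
    opinion="should be banned from retail packaging",
    qualification="they are cheap and convenient",
    reason="they dominate coastal litter and are rarely recycled",
))
# -> "Although they are cheap and convenient, single-use plastics should be
#    banned from retail packaging because they dominate coastal litter and
#    are rarely recycled."
```

The point of the sketch is that any generator's output is only a first draft: a template can guarantee that the thesis contains a claim, a qualification, and a reason, but the wording still needs your own revision.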

2. Grammarly AI Thesis Statement Generator

Grammarly is known for its superb grammar-checking software, but it has recently added various AI-powered tools; the AI Thesis Statement Generator is one of them. To use this tool, specify your audience and briefly describe your paper type and topic. After a few seconds, Grammarly will provide three thesis statement options.

However, as with any AI writing tool, you should be critical of the output. We therefore recommend checking the generated thesis statements for inaccuracies before using them in your writing.

3. HelpfulPapers Thesis Statement Checker

HelpfulPapers Thesis Statement Checker is another free service that requires no registration and allows unlimited attempts at thesis creation. To create a thesis statement, enter a topic, your main conclusion about it, two arguments, and a counterargument. Then click the "Make a thesis statement" button, and you will get a few thesis examples to choose from.

On the page, you will also find a comprehensive guide on thesis statement writing with good and bad samples. This website doesn’t allow its users to create an outline draft. However, the HelpfulPapers blog contains lots of useful articles on writing.

4. Thesis Builder

Thesis Builder is a service by Tom March that has been available to students since 1995. This ad-free tool allows you to generate a persuasive thesis and create your essay outline. The web app is completely free, so fill in the boxes and write your assignment. You can print the result or send it by email.

5. Thesis Statement Creator

The next tool on our list is Thesis Statement Creator. The service is ad-free and offers unlimited attempts to generate a thesis statement. It works with any type of paper and requires no registration. Users will also find a short guide and thesis statement prompts, and the app allows printing the result.

6. UAGC Thesis Generator

The University of Arizona Global Campus has designed a convenient tool for crafting compelling argumentative thesis statements. Just follow the prompts on the website to fill in all the boxes and get a strong and focused thesis.

If you want to learn more about developing thesis statements, the university invites you to follow the link to their thesis writing guide. From there, you’ll learn how to craft not only argumentative thesis statements but also analytical and expository ones.

7. HIX.AI Thesis Statement Generator

HIX.AI is an AI-powered thesis statement generator. To use the tool, enter your topic, specify the main idea and supporting evidence, and add a counterargument. You can also choose your audience, tone of voice, and language. Then, click the button and check your thesis.

HIX.AI offers a free plan: you can generate up to 1,000 words per week without charge. Although that is not much, it is enough to craft 20-25 thesis statements a week, so you are highly likely to get one that suits you.

8. Editpad Thesis Statement Generator

Editpad Thesis Statement Generator is another AI-powered tool for crafting thesis statements. Yet, it has a much simpler interface: you only have to enter your topic and click the button to get your thesis statement.

If you’re looking for a quick, unsophisticated tool or haven’t identified your main point, evidence, and counterargument yet, the Editpad thesis generator can be just what you need. However, if you want a more customizable option, you’d better choose something different from our list.

9. Thesis Statement Maker

Thesis Statement Maker is similar to the previous tool. The page contains hints on thesis writing and four fields to fill in to get a thesis, and it works with any type of paper. As a bonus, you will find a list of thesis statements on various topics.

The key drawback is the same too: lots of ads and no paper outline option.

10. Thesis Generator | SUNY Empire State College

The most academic tool on our list is the SUNY Empire State College Thesis Generator, where students can find a lot of useful information on thesis writing. To generate a thesis, choose the type of paper you are going to write, fill in the form, and get your statement. The website is ad-free and provides a short guide on the most common types of thesis.

Its drawbacks: only three supported types of thesis statements and no outline generation.

  • 🧭 Intuitive: Use the prompts and look at the examples to make a thesis.
  • 📍 Customizable: Generate a thesis statement for an argumentative or analytical essay.
  • 💰 Free: Don't pay anything with this thesis statement generator.
  • 🌐 Online: No need to waste precious space on your devices with this tool.

Updated: Dec 19th, 2023

  • Argumentative Essays: Purdue OWL
  • Developing A Thesis: Harvard College Writing Center
  • 5 Types of Thesis Statements: University of Guelph
  • The Ultimate Guide to Writing a Thesis Statement: Grammarly
  • Expository Essays: Purdue OWL
  • How to Write a Thesis Statement: Indiana University Bloomington
  • Thesis Statements: UNC Writing Center
  • Thesis Statements: Texas A&M University Writing Center

If you need help writing a thesis for your paper, this page gives you plenty of resources to do that. You'll learn the essentials of thesis statements, along with tips on how to write the statement properly. Most importantly, this page contains reviews of and links to online thesis generators.


100 Effective Argumentative Essay Topics for Your Next Assignment

Writing an argumentative essay can be difficult at times, since it involves the careful presentation of factual data. To ensure that your perspective is heard, this form of essay requires a clear and engaging thesis and a well-structured argument supported by evidence. In this blog, we will discuss how to write an argumentative essay and present 100 topic ideas to get you started. Understanding the process of building a compelling argument can improve your writing considerably, and mastering this type of essay can benefit you in various academic and professional settings. Let's discuss the fundamentals of writing a successful argumentative essay.

What is an Argumentative Essay?

An argumentative essay is a type of writing that requires extensive research to support its claims. The basic goal of an argumentative essay is to persuade the reader of a specific point of view. It relies on fact-based evidence and unbreakable logic to demonstrate that its thesis is valid. Unlike other forms of writing that may include the writer's thoughts and opinions, an argumentative essay must focus on facts that support its central claim to provide strong insights.

An effective argumentative essay presents the writer's point of view and then substantiates it with supportive facts. Since the purpose is to engage in a logically structured debate, the essay should be constructed in a manner that allows for the examination of the argument from multiple perspectives, making it debatable yet grounded in factual accuracy.

How to Write an Argumentative Essay, Step by Step

Choosing a Topic

Selecting a topic is the first and most critical stage. Find a topic that interests you and will keep you engaged while writing. Ensure the subject is debatable, so you can easily find facts to support your argument. Also, pick a topic with plenty of research material and one that is relevant to current issues. This will make your essay more interesting and easier to support with evidence.

Research and Gathering Evidence

Research and gathering evidence are crucial steps in writing an argumentative essay. You need to understand what you are writing about and include well-supported facts to support your argument. To do so, you can find reliable sources such as websites, journal articles, and books.

Structuring Your Essay

Structuring your essay properly is essential for clarity and effectiveness. Organise your points logically, starting with an introduction, then body paragraphs presenting your arguments and evidence, and ending with a strong conclusion. Creating a simple outline can help you manage your findings and the material you have noted for your essay.

Editing and Revising

Editing and revising are important for improving the quality of your essay. Carefully review your work for any grammatical errors, ensure that your arguments are coherent, and verify that all evidence is accurately presented and properly cited. Make sure your essay matches the required format and standards of your university.

Guide to Argumentative Essay Structure

  • Introduction

The introduction sets the stage for your essay. It should include a hook to grab the reader's attention, some background information on the topic, and a clear thesis statement outlining your main argument. Write it so the reader has no trouble understanding the matter you are discussing, and provide a proper overview of the assignment.

  • Body Paragraphs

The body paragraphs present your arguments and supporting evidence. An argumentative essay typically works in a three-body-paragraph format, so make sure to use it correctly. Each paragraph should focus on a single point, beginning with a topic sentence, followed by evidence and examples, and ending with a transition to the next paragraph. Addressing refutation strengthens your argument and ensures that the reader understands the point you want them to take away.

  • Conclusion

The conclusion highlights your important points and restates your thesis. It should emphasize the significance of your argument and end with a closing thought or call to action. Avoid introducing fresh information at the end, and be careful to close clearly so the reader is left without confusion.

  • References

Whether you're writing a short report or an argumentative essay, you must always include references that show you've given proper credit to your sources. Choose credible sources and cite them correctly to demonstrate to your reader that you have done extensive research and put in a lot of blood, sweat, and tears to compose a brilliant argumentative essay.

100 Effective Argumentative Essay Topics for Your Next Assignment in 2024

Science Argumentative Essay Topics

  • Is space exploration worth the cost?
  • Should vaccinations be mandatory?
  • Should animal testing be banned?
  • Should there be a limit to scientific research?
  • Is packaged food safe to consume?
  • Is technology making us smarter or dumber?
  • Would the world be safer if we eliminated nuclear weapons?
  • Is it possible to grow completely organic food?
  • Is the overuse of antibiotics dangerous?
  • Are humans to blame for the extinction of species?

Education Argumentative Essay Topics

  • Single-Sex Schools: Advantages and Disadvantages
  • Should Mental Health Education be Necessary in Educational Institutions?
  • The Impact of Music Education on Student Growth.
  • Are students overburdened with too much schoolwork?
  • Should colleges and institutions compensate student-athletes?
  • Are standardized assessments effective?
  • Is our current educational system beneficial?
  • Should there be no cost for college education?
  • Are private schools superior to public ones?
  • Is a college degree required for success?

Technology Argumentative Essay Topics

  • Should individuals take legal action against hate speech on social media?
  • Is social media making people feel more lonely?
  • Do smartphones make people rely on one another?
  • Can technology help people live better lives?
  • Will conventional textbooks disappear?
  • Is Internet education more effective than traditional?
  • Should cell phones be prohibited in cars?
  • Is social media hurting mental health?
  • Should data gathering regulations be stricter?
  • Are self-driving vehicles safe?

Environmental Argumentative Essay Topics

  • Do electric vehicles reduce pollution overall?
  • Is climate change the cause of frequent natural disasters?
  • Should disposable bags be prohibited?
  • Is recycling successful? Discuss.
  • Should petroleum and petroleum products be banned?
  • Are we doing enough to protect our planet from destruction?
  • Is urbanization harmful to the environment?
  • Does conserving water make sense?
  • Should single-use materials be banned?
  • Does maintaining natural environments have any significance?

Economic Argumentative Essay Topics

  • Would you support national basic income?
  • Is it appropriate to waive student loans?
  • Should all people have access to free electricity?
  • Should authorities regulate the economy?
  • Should the salary of Directors be capped?
  • Is open trade advantageous?
  • Is border-free taxation a good concept?
  • Is income disparity a big problem?
  • Should employers have to provide paid maternity leave?
  • Is working from home more efficient than a workplace job?

Sports Argumentative Essay Topics

  • Should sports that involve violence be prohibited?
  • Do collegiate athletes deserve compensation?
  • Are sports vital in educational institutions?
  • Should sports betting be legalized?
  • Should high schools have mandated athletic programs?
  • Should the Olympics include more sports?
  • Are sports overly competitive for young children?
  • Is e-sports a true sport?
  • Are pro athletes overpaid?
  • Should women be allowed to play on men's teams?

Travel and Tourism Argumentative Essay Topics

  • Is it safe to travel to disputed nations?
  • Should travel blogs be trusted?
  • Is it preferable to travel by air or train?
  • Should travel prohibitions be implemented during epidemics?
  • Is it worth spending money on luxury travel?
  • Travelling domestically or abroad: which is a better choice?
  • Is travelling alone secure?
  • Should tourism be limited to popular locations?
  • Is tourism beneficial to local economies?

Philosophy Argumentative Essay Topics

  • Is there an afterlife?
  • Does life exist in a black hole?
  • Is it acceptable to consume creatures?
  • Is there a moral code that applies universally?
  • Should we be afraid of intelligent machines?
  • Should we favour science over feelings?
  • Is enjoyment the end purpose of life?
  • Are individuals accountable for their happiness?
  • Should wealthy individuals be compelled to assist those in need?
  • Is it appropriate for parents to influence their children's beliefs?

Law and Justice Argumentative Essay Topics

  • Should we abolish the death penalty?
  • Should marijuana be made legal?
  • Is the criminal justice system just and fair?
  • Are mandatory minimum sentences beneficial?
  • Should the voting age be reduced?
  • Is the war on drugs achieving its goals?
  • Should laws on cybercrime be stricter?
  • Is the legal system biased against minority groups?
  • Are privacy laws excessively stringent?
  • Is capital punishment morally acceptable?

Politics and Government Argumentative Essay Topics

  • Is democracy the best form of governance?
  • Should political parties be abolished?
  • Should the government provide free healthcare?
  • Is nationalism helpful or harmful?
  • Should voting be made mandatory?
  • Is socialism an effective political system?
  • Is conservatism a threat to freedom of speech?
  • Should the government be able to control family size?
  • Should the national government repeal all laws that criminalize drug manufacture and use?
  • The government should restrict the pricing of medicines.

In conclusion, this blog has discussed how to write an argumentative essay, including its structure and essential components. We covered the importance of a clear thesis statement, logical transitions, well-supported arguments, and a compelling conclusion, and we explored 100 potential topics for an argumentative essay. Follow these guidelines to ensure your essay is coherent and persuasive, and you will be well equipped to craft a high-quality argumentative essay.

Frequently Asked Questions

How do you write an argumentative essay?

  • Selecting a topic is the first and most important stage.
  • Research and gathering evidence are crucial steps in writing an argumentative essay.
  • Structuring your essay properly is essential for clarity and effectiveness.
  • Editing and revising are important for improving the quality of your essay. Carefully review your work.

What makes a good argumentative essay?

There are three key elements of a good argumentative essay:

  • Start with a strong introduction that hooks the reader.
  • Add a formal and specific central claim: your thesis.
  • Base the essay on comprehensive, properly cited research; that is what makes it convincing.

What is the main goal of an argumentative essay?

The main goal of an argumentative essay is to persuade the reader of a specific point of view, using fact-based evidence and sound logic to demonstrate that the thesis is valid.

When do I write an argumentative essay?

The majority of university essays are argumentative essays. Unless otherwise stated, you can presume that the purpose of every essay you're required to write is argumentative: to prove your position to the reader through evidence and reasoning. In composition classes, you may be given assignments that specifically assess your ability to produce an argumentative essay. Look for prompts with directions like "argue," "assess," or "discuss" to see if this is the purpose.

What is an argumentative essay structure?

An argumentative essay structure includes:

  • Introduction
  • Body Paragraphs
  • Conclusion
  • References

How to find an argumentative topic?

Here are some strategies to help you find an argumentative essay topic:

  • Reflect on your interests and passions
  • Brainstorm around a general theme
  • Explore online resources
  • Check that the topic is debatable, relevant, and backed by enough research material


Essay Checker

With Ginger’s Essay Checker, correcting common writing errors is easier than ever. Try it free online!

Avoid Common Writing Mistakes with the World’s Top Essay Checker

The Ginger Essay Checker helps you write better papers instantly. Upload as much text as you want – even entire documents – and Essay Checker will automatically correct any spelling mistakes, grammar mistakes, and misused words. Ginger Essay Checker uses patent-pending technology to fix essays, improving your writing just like a human editor would. Take advantage of the most advanced essay corrector on the market. You’ll benefit from instant proofreading, plus you’ll automatically improve your writing skills as you view highlighted errors side by side with Ginger Essay Checker’s corrections.

Check Essays Fast with Ginger Software

You’ve selected a topic, constructed an outline, written your thesis statement, and completed your first draft. Don’t let your efforts go to waste. With Ginger Software’s Essay Checker, you’ll be the only one to see those little mistakes and perhaps even those glaring errors peppering your paper. The tedious task of checking an essay once had to be done by hand – and proofreading sometimes added hours of work to large projects. Where writers once had to rely on peers or editors to spot and correct mistakes, Essay Checker has taken over. Better yet, this innovative online paper checker does what other free essay corrector programs can’t do: Not only does it flag errors so you can learn from your mistakes, it automatically corrects all spelling and grammar issues at lightning speed.

Stop Wasting Time and Effort Checking Papers

You have a heavy workload, and the last thing you need to do is waste time staring at an essay you've just spent hours writing. Proofreading your own work – especially when you're tired – lets you catch a few mistakes, but some errors inevitably go unnoticed no matter how much time you spend re-reading what you've written. The Ginger Essay Checker lightens your workload by eliminating the need for hours of tedious self-review. With Ginger's groundbreaking Essay Checker, a vast array of grammar mistakes and spelling errors is detected and corrected with unmatched accuracy. While most online paper checkers claiming to correct essays simply flag mistakes and sometimes suggest fixes, Essay Checker goes above and beyond, picking up on issues such as tense usage errors, singular vs. plural errors, and more. Even the most sophisticated sentence structures are checked accurately, ensuring no mistake is overlooked even though all you've done is make a single click.

Essay Checker Paves the Way to Writing Success

Writing has always been important, and accuracy has always been sought after. Getting your spelling, grammar, and syntax right matters, whether your audience is online or off. Error-free writing is a vital skill in the academic world, and it’s just as important for conducting business. Casual bloggers need to maintain credibility with their audiences, and professional writers burn out fast when faced with mounds of work to proofread. Make sure your message is conveyed with clarity by checking your work before submitting it to readers – no matter who they are.

Checking essays has never been easier. With Ginger Essay Checker, you’ll save time, boost productivity, and make the right impression.



Paragraphs: Topic Sentences

Topic Sentences Video Playlist

Note that these videos were created while APA 6 was the style guide edition in use. There may be some examples of writing that have not been updated to APA 7 guidelines.

  • Academic Paragraphs: Introduction to Paragraphs and the MEAL Plan (video transcript)
  • Academic Paragraphs: Examples of the MEAL Plan (video transcript)

The best way to understand the role of the topic sentence in paragraph development is to imagine that any given paragraph is a miniature essay that has its own thesis, support, and conclusion. The parts of a paragraph easily correspond to the parts of an essay:

Essay | Paragraph
Thesis statement | Topic sentence
Body paragraphs | Supporting details, explanation, analysis
Conclusion | Wrap-up sentence(s)

Just as an effective essay starts off with an introduction that presents the paper's thesis statement and indicates the specific claim or argument that the essay will develop, each paragraph should begin with a topic sentence that indicates the focus of that paragraph, alerting the reader to the particular subtopic that the paragraph will provide evidence to support.

A strong topic sentence should be placed at or near the beginning of a paragraph. In addition, this sentence should focus on a specific issue, avoid the use of direct quotations, and leave room for support and analysis within the body of the paragraph. Read on to learn more about creating an effective topic sentence.

The topic sentence does not have to be the first sentence in the paragraph; however, it should come early in the paragraph in order to orient the reader to the paragraph's focus right away. Occasionally a writer may place a transition sentence before the topic sentence, to create continuity between topics.

Topic Sentence to begin paragraph:

In the novel Sula, Morrison uses the physical bonds of female friendship to propel her characters into self-awareness.

Transition Sentence + Topic Sentence to begin paragraph:

However, Morrison does not only use the emotional and spiritual bonds between her female characters to initiate their coming-of-age. In addition, the author uses the physical bonds of female friendship to propel her adolescent protagonists into self-awareness.

Specificity

Your topic sentence should be more narrowly focused than your thesis sentence, and you will want to make sure the claim you are making can be supported, argued, and analyzed within the body of your paragraph.

Example: In the novel Sula, Morrison uses the physical bonds of female friendship to propel her characters into self-awareness.

In this topic sentence, the essayist is arguing that physical bonds of friendship, specifically, make the female characters more self-aware. Because this idea can be refuted or supported by readers (based on how successfully the essayist persuades his or her readers with examples and analysis from the novel), and because the claim is narrow enough to address within a single paragraph, the above sentence is a successful topic sentence.

Direct Quotations (Are Best Avoided)

Although it might be tempting to begin a paragraph with a compelling quotation, as a general rule, topic sentences should state the main idea of the paragraph in your own words. Direct quotations have a place later in the paragraph, where they may be incorporated to support the topic sentence.

Needs Improvement: As Morrison (1982) conveyed, the girls' "friendship let them use each other to grow on…they found in each other's eyes the intimacy they were looking for" (p. 52).
Better: In the novel Sula, Morrison uses the physical bonds of female friendship to propel her characters into self-awareness. Pointing to the connection of eyes meeting and bodies growing together, Morrison makes coming-of-age an interactive physical process between the adolescent protagonists. Specifically, Morrison describes how Sula and Nel have used "each other to grow on…they found in each other's eyes the intimacy they were looking for" (p. 52).

In this second paragraph, the topic sentence appears first, immediately orienting readers to the main focus (or topic) of the paragraph. The quotation is used later in the paragraph as a form of evidence or support for the topic sentence.

If you are finding it challenging to create effective topic sentences, you might consider outlining before beginning to write a paper. The points and subpoints of an outline can then become the topic sentences for the paper's paragraphs.

Additionally, because the topic sentence functions similarly at the paragraph level to the thesis at the essay level, you may also find it helpful to check out our thesis statement construction information. Our resource on paragraphs has helpful information about the scope of a paragraph, as well.



Research Methodology – Types, Examples, and Writing Guide

Research Methodology

Definition:

Research methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data in order to answer research questions or solve research problems. Research methodologies are, moreover, the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An Example of Research Methodology could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
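
To make the quantitative step concrete, here is a minimal sketch (illustrative only, not part of the study protocol; all scores and group sizes below are invented) of how the basic between-group comparison of BDI-II change scores might be run in Python with SciPy:

```python
# Illustrative sketch: comparing symptom change between the CBT group and
# the control group. All numbers are made up for demonstration purposes.
import numpy as np
from scipy import stats

# Change in BDI-II score (baseline minus post-test); larger positive
# values mean a greater reduction in depressive symptoms.
cbt_change = np.array([12, 9, 15, 7, 11, 14, 8, 10, 13, 6])
control_change = np.array([3, 1, 4, 0, 2, 5, 1, 3, 2, 4])

# Descriptive statistics for each group.
print(f"CBT: mean {cbt_change.mean():.1f}, SD {cbt_change.std(ddof=1):.1f}")
print(f"Control: mean {control_change.mean():.1f}, SD {control_change.std(ddof=1):.1f}")

# Welch's independent-samples t-test (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(cbt_change, control_change, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A full analysis of the design described above would extend this with a mixed-model ANOVA across the three time points (baseline, post-test, 3-month follow-up), for example using statsmodels; the sketch only shows the simplest group comparison.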

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it's important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology vs. Research Methods

Research Methodology | Research Methods
Refers to the philosophical and theoretical frameworks that guide the research process. | Refers to the techniques and procedures used to collect and analyze data.
Concerned with the underlying principles and assumptions of research. | Concerned with the practical aspects of research.
Provides a rationale for why certain research methods are used. | Determines the specific steps that will be taken to conduct research.
Broader in scope; involves understanding the overall approach to research. | Narrower in scope; focuses on the specific techniques and tools used in research.
Concerned with identifying research questions, defining the research problem, and formulating hypotheses. | Concerned with collecting data, analyzing data, and interpreting results.
Concerned with the validity and reliability of research. | Concerned with the accuracy and precision of data.
Concerned with the ethical considerations of research. | Concerned with the practical considerations of research.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


AHelp Essay Rewriter

Fast and efficient paraphrase generator.

  • 7 paraphrasing modes
  • Smart rewording algorithms
  • Plagiarism-free content

New ideas at the tips of your fingers.

Have you ever felt like your assignments are never-ending? That every time you finish one, another is creeping up on you from around the corner? There is never enough time, but you still have to come up with new and original ideas to make your paper really shine. We are pretty sure you have found yourself in that position at one time or another. It is absolutely exhausting having to come up with exciting ways to engage your audience while rephrasing the same information again and again. And don't even get us started on grammar checks, structure, outline, vocabulary, and many other things. Learning is supposed to be fun and interesting, and one way to make it so is to use a paraphrasing tool to optimize your routine.

Let’s discuss what paraphrasing is all about, the benefits of trying one, and how you can work with it effectively!

AHelp Paraphraser – Your Go-To Study Buddy

Want to know more about our tool? The Paraphraser by AHelp, equipped with intelligent rewording algorithms, is a handy companion for anyone regularly engaged with writing tasks. Whether you're drafting a formal email, an essay, or a research proposal, or you simply strive for clearer communication, this tool is versatile, with seven different paraphrasing modes.

More than just rearranging words or altering sentence structures, it is a pro at keeping the core meaning of the text while making it more engaging and easy to read. Our Paraphraser delivers content that is free from plagiarism, allowing you to present your work with confidence. For writers, students, and professionals looking to present high-quality, original content without the risk of copying, this paraphrasing tool proves itself to be a go-to resource.

The Benefits of Using a Paraphrase Tool

Have you ever found yourself stuck trying to rephrase your writing to make it clearer or more engaging? This is where paraphrase tools come into play, offering a handy way to breathe new life into your text. Here are five reasons why these tools can be incredibly useful:

  • Clearer communication. Sometimes, the way we phrase things can be a bit complex, especially when juggling technical terms or intricate ideas. Paraphrase tools help simplify and clarify your writing, so your audience can easily grasp what you're trying to convey. This results in stronger engagement and improves the overall impact of your message, which is always a plus.
  • Steering clear of plagiarism. We all know the importance of keeping our work original, especially when strict academic rules are in place. Paraphrase tools are great for rewording content while maintaining the meaning of the original text. This way, you can avoid plagiarism and keep your integrity intact, whether you're crafting a research paper, writing up a report, or putting together a simple essay.
  • Sparking creativity. Ever feel like you're just recycling the same phrases over and over? A paraphrase tool can suggest fresh ways to express your ideas, which can be particularly refreshing when you're stuck. It's a bit like shaking up a can of soda and watching new bubbles pop up, bringing a creative fizz to your work! This can be a major boost when you're looking to innovate or add a unique twist to your writing.
  • Time saving. Let's face it, rewording content manually can eat up a lot of time. With a paraphrase tool, you can rework a piece in a fraction of the time, freeing you up to focus on other parts of your writing or even just giving you a break to catch your breath. This efficiency not only speeds up the editing process but also helps you maintain a flow of ideas without getting bogged down by details.
  • Better learning comprehension. Paraphrase tools do more than just alter text: they provide an opportunity to see different ways to structure sentences and use language. This can be particularly beneficial for non-native English speakers or those looking to improve their language skills. Users can learn new vocabulary and different sentence constructions, improving their overall language level by observing variations in phrasing.

With that being said, our Paraphraser doesn’t just shuffle words around—it helps make your writing more effective, engaging, and accessible. Whether you’re a student polishing an essay or a professional fine-tuning a project, trying a paraphrase tool can elevate your writing game significantly.

How to Effectively Write with a Paraphrasing Tool

There’s no such thing as a free lunch, so simply copying the entire text into your assignment file won’t do. Of course, using a paraphrasing tool can boost your writing performance, but it can be even more effective if you try some extra techniques. Remember, it’s not just about plugging in and cranking out text; it’s about using the tool thoughtfully to get the best results. Here’s what you can do.

First off, consider the original message of your text. A paraphrasing tool isn’t just about finding fancy synonyms; it’s about reshaping your message in a way that might be clearer or more engaging. Give the tool solid sentences to work with, and it’s more likely to give you something back that’s ready to go with little need for tweaks. This can be especially handy when you’re tackling topics that are complex or filled with specific terminology.

Next, keep an eye out for plagiarism. Even the slickest tools can occasionally echo too much of your source. Using the paraphrased text as a starting point, make sure to add your own flair so that the final product stands apart. This step is absolutely necessary, as it keeps your integrity intact and your content original.

Also, remember that while synonyms can spice up your writing, relying on them too heavily can backfire, leading to odd phrases or even misuse of terms. It’s important to not just swap out words but to play with the structure of sentences for a smoother flow. Engage with the tool’s output by tweaking and refining it. This way, your style shines through, keeping the text natural and relatable.
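
To make the synonym caveat concrete, here is a minimal sketch in Python; the tiny synonym table and the function are our own invented illustration, not any real paraphrasing tool's algorithm:

```python
# A toy sketch of naive synonym swapping (invented for illustration,
# not how any real paraphrasing tool works).

SYNONYMS = {"run": "operate", "help": "facilitate", "big": "substantial"}

def naive_paraphrase(text: str) -> str:
    """Swap each word for a listed synonym, ignoring context entirely."""
    return " ".join(SYNONYMS.get(word.lower(), word) for word in text.split())

print(naive_paraphrase("I run a shop and run every morning"))
# -> "I operate a shop and operate every morning"
```

The second "operate" is exactly the kind of odd phrasing this section warns about: context-blind substitution preserves words, not meaning, which is why reshaping the sentence structure matters too.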

Lastly, treat your paraphrasing tool as your assistant, not your replacement. Use it to enhance your understanding of how sentences can be shaped and ideas expressed differently. You’ll not only avoid potential pitfalls but also polish your skills in composition when you stay involved in the editing process.

If you stick to these practices, you can use paraphrasing tools to their fullest potential, ensuring your writing remains sharp, effective, and uniquely yours. Whether you’re a student or a professional, these tools can help elevate your writing and make it more accessible and engaging for everyone.

Using a Paraphrase Generator for Work and Study

Paraphrasing tools aren’t just for tackling essays or schoolwork—they’re also incredibly useful in the workplace. Whether you’re drafting emails, writing reports, or preparing proposals, a paraphraser can be your secret weapon for clear and professional communication. In any job, getting your message across clearly directly affects your (and others’) performance. A paraphrasing tool helps polish your words so they’re easy to understand. This is especially handy when you need to explain complex topics simply and clearly, making technical language accessible to everyone.

Paraphrasers also spark creativity. If you find yourself using the same phrases over and over, this tool can mix things up, suggesting new ways to say the same info. This keeps your writing fresh and interesting, which is great for things like marketing or customer communications where catching and keeping attention is key. And let’s not forget about time-saving! Paraphrasing tools allow you to quickly revise drafts and free up your schedule to focus on other important tasks. This can be a lifesaver in a fast-paced work environment where every minute counts.

So, it’s clear that paraphrasing tools are not just for students. They offer a lot of value in professional settings, too, helping to boost readability, stir up creativity, and save precious time. Whatever your role, incorporating a paraphraser can make a big difference in how effectively you communicate.


What is the best free online paraphrasing tool?

One of the top free options for paraphrasing online is the AHelp Paraphraser. It offers many rewriting modes that cater to a wide range of needs and doesn’t take much time to complete the task. This makes it ideal for students and professionals looking to improve the clarity and creativity of their text or simply cast it in a new light.

What is the best free AI to paraphrase?

The best free AI for paraphrasing is AHelp's Paraphrasing Tool. It uses advanced algorithms to restructure and refine text while keeping its authentic meaning, making it an invaluable resource for producing unique and engaging content.

Is using AI to paraphrase cheating?

Using AI to paraphrase is not necessarily cheating; it depends on how you use it! If used to understand and rephrase text for clarity or learning, it's a legitimate tool. However, passing off AI-paraphrased work as entirely your own in contexts where originality is expected can be considered unethical, and you may even face penalties for it.


How to Write an Introduction For a Research Paper

Learn how to write a strong and efficient research paper introduction by following a suitable structure and avoiding typical errors.


An introduction is sometimes mistaken for a mere beginning; in fact, it is intended to present your chosen subject to the audience in a way that makes it more appealing and leaves your readers thirsty for more information. After the title and abstract, the introduction is the next thing your audience will read, so it’s critical to get off to a solid start.

This article includes instructions on how to write an introduction for a research paper that engages the reader in your research. You can produce a strong opening for your research paper if you stick to the format and a few basic principles.

What Is an Introduction to a Research Paper?

An introduction is the opening section of a research paper and the section that a reader is likely to read first, in which the objective and goals of the subsequent writing are stated. 

The introduction serves numerous purposes. It provides context for your research, explains your topic and objectives, and provides an outline of the work. A solid introduction will establish the tone for the remainder of your paper, enticing readers to continue reading through the methodology, findings, and discussion. 

Even though introductions generally appear at the beginning of a document, we must distinguish an introduction from the beginning of your research. An introduction, as the name implies, is supposed to introduce your subject without expanding on it; detailed information and facts belong in the body and conclusion, not the introduction.

Structure of an Introduction

Before explaining how to write an introduction for a research paper, it’s necessary to understand the structure that will make your introduction stronger and more straightforward.

A Good Hook

A hook is one of the most effective research introduction openers. A hook’s objective is to stimulate the reader’s interest in the research paper. There are various approaches you may take to generate a strong hook: startling facts, a question, a brief overview, or even a quotation.

Broad Overview

Following an excellent hook, you should present a wide overview of your major issue and some background information on your research. If you’re unsure about how to begin an essay introduction, the best approach is to offer a basic explanation of your topic before delving into specific issues. Simply put, you should begin with general information and then narrow it down to your relevant topics.

After offering some background information regarding your research’s main topic, go on to give readers a better understanding of what you’ll be covering throughout your research. In this section of your introduction, you should swiftly clarify your important topics in the sequence in which they will be addressed later, gradually introducing your thesis statement. The following are some critical questions to address in this section of your introduction: Who? What? Where? When? How? And why?

Thesis Statement

The thesis statement is the most important component of your research, and it must be stated early in your introduction, since your entire research revolves around it.

A thesis statement presents your audience with a quick overview of the research’s main assertion; it is the key argument that you will develop and defend in the body of your work. An excellent thesis statement is succinct, accurate, explicit, clear, and focused. Typically, your thesis should come at the end of your introductory paragraph or section.

Tips for Writing a Strong Introduction

Aside from the good structure, here are a few tips to make your introduction strong and accurate:

  • Keep in mind the aim of your research and make sure your introduction supports it.
  • Use an appealing and relevant hook that catches the reader’s attention right away.
  • Make it obvious to your readers what your stance is.
  • Demonstrate your knowledge of your subject.
  • Provide your readers with a road map to help them understand what you will address throughout the research.
  • Be succinct – it is advised that your introduction make up around 8-9 percent of your article’s total word count (for example, about 160 words for a 2,000-word essay); see the sketch after this list.
  • Make a strong and unambiguous thesis statement.
  • Explain why the article is significant in 1-2 sentences.
  • Remember to keep it interesting.
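
As a quick illustration of the length guideline above, here is a small, hypothetical Python helper; the function and its defaults are ours, not a formal rule:

```python
# A hypothetical helper for the 8-9 percent introduction-length guideline.

def intro_word_budget(total_words: int, low: float = 0.08, high: float = 0.09):
    """Return the suggested (minimum, maximum) word count for an introduction."""
    return round(total_words * low), round(total_words * high)

print(intro_word_budget(2000))  # -> (160, 180) for a 2,000-word essay
```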

Mistakes to Avoid in Your Introduction

Now that you know the structure and how to write an introduction for a research paper, check out what to avoid.

  • Lacking a feeling of direction or purpose.
  • Giving out too much.
  • Creating lengthy paragraphs.
  • Excessive or insufficient background, literature, and theory.
  • Including material that should be placed in the body and conclusion.
  • Not writing enough or writing excessively.
  • Using too many quotes.

Unleash the Power of Infographics with Mind the Graph

Do you believe your research is not communicating precisely or is not aesthetically appealing? Use the Mind The Graph tool to create great infographics and add more value to your research.



About Jessica Abbadia

Jessica Abbadia is a lawyer who has been working in Digital Marketing since 2020, improving organic performance for apps and websites in various regions through ASO and SEO. She currently develops scientific and intellectual content for the community's benefit. Jessica is an animal rights activist who enjoys reading and drinking strong coffee.


espace - Curtin’s institutional repository


For a philosophy of good construction: a learning experience


For a long time, the knowledge of construction techniques was handed down through manuals and codes of practice. The manuals of the past not only supported construction with technical information but also expressed a 'philosophy' of good construction by transferring construction principles and rules into the project. The themes of good construction were enriched in the twentieth century by numerous objectives, among which the most significant are the industrialization and systematisation of building processes and the challenges of sustainability, from energy efficiency to the recycling of materials to building regeneration. In university education, however, the transmission of knowledge on construction has stayed limited to lessons on building elements and construction techniques as applied to the various materials. While the recent global spread of computerization has ensured the wide availability of technical information sources online, this phenomenon has not produced, per se, innovative, integrated and sustainable building solutions. The authors hypothesise that today's technical information is not ethically committed to clarifying the complex aspects of construction in sustainable terms. The proposed thesis considers architecture, like medicine, a “practice based on science and operating in a world of values” (Cosmacini, 2008).



Earth911 Reader: Sustainability, Recycling, Business, and Science Articles for Concerned Citizens


By Mitch Ratcliffe


Every week, the Earth911 team combs news and research for interesting ideas and stories about the challenges of creating a sustainable world. We pick a few science, sustainability, recycling, and business stories, along with ideas you can act on to support the environment and Earth-friendly initiatives. Sometimes it is good news we can all celebrate, sometimes it is bad news or a seemingly intractable challenge that should make us double down on finding new solutions. We call it the Earth911 Reader and we hope you find it useful.

In Sustainability

California To Ban Gas and Diesel Car Sales by 2035

The sixth-largest market for cars in the world, California, will ban the sale of gasoline and diesel cars by 2035, CleanTechnica reports. Governor Gavin Newsom, who has battled the pandemic and wildfires all year, signed an executive order to stop new sales, though not an outright ban on internal combustion vehicles. California’s big challenge is rethinking its infrastructure. It must change how it builds cities and suburbs for car dependence and reimagine the car culture that helped define the state’s identity. Many barriers to success, from building a charging infrastructure to generating energy from renewable sources at sufficient capacity to fuel the state’s driving, are detailed in the coverage.

What’s Next Now That Climate Disruption Is Permanent?

The New York Times delivered a comprehensive explanation of the many large and small impacts that climate change will have worldwide, using recent California events to illustrate this new peril. People, property, and ways of life are at risk. Still, the report does hold out hope that a late awakening to climate change damage can spur innovation and social changes to build a sustainable economy. Yet it is painfully apparent for those on the West Coast that we cannot change fast enough to avoid devastating climate damage.

The Wealth Gap Extends to CO2 Emissions, Too

Phys.org reports on a recent Oxfam study that found that global greenhouse gas emissions rose 60% between 1990 and 2015. It also showed that the wealthiest one percent of humans — about 63 million people — account for more CO2 output than the poorest half of the planet’s population. Economic growth since 1970 was increasingly unequal. The lopsided share of emissions created by the wealthy shows that debates about the trade-offs between prosperity and sustainability may be resolved by emphasizing equality when exploring green economic strategies. “It’s clear that the carbon-intensive and highly unequal model of economic growth over the last 20-30 years has not benefited the poorest half of humanity,” said Tim Gore, head of policy, advocacy and research at Oxfam. “It’s a false dichotomy to suggest that we have to choose between economic growth and (fixing) the climate crisis.” It starts with counting and taxing carbon, we think.

California Wildfires Are a Carbon Dioxide Disaster

We are only two-thirds through 2020, but California has already broken its record for wildfire CO2 emissions for any year since records were first kept in 2003, Quartz reported. The state has produced one-third more emissions than in any other year: the massive wildfires that have scorched more than 7 million acres this year released 83 million metric tons of CO2. Wildfire smoke is more polluting than all of California’s vehicle and industrial output. The implications are dire. If every year sees fires of the same magnitude, the carbon released will require other sources to be cut to prevent accelerated warming. This is the price of having waited to stop emissions, and now the choices will be even more challenging. California’s decision to end gas-burning vehicle sales in 2035 is remarkably conservative in light of the evidence of recent weeks.

U.S. Electrical Producers Must Stop Relying on Natural Gas To Achieve Decarbonization

GreenTechMedia reports on a recent Deloitte analysis of U.S. electric utilities, which continue to rely on natural gas as an interim step from coal to renewables. This is the first time, Deloitte points out, that the industry has tried to transition from one energy source to another in only three decades, though the transition should have started two decades earlier. “There are significant gaps between decarbonization targets and the scheduled fossil-fuel plant retirements, renewable additions, and flexibility requirements needed to achieve full decarbonization,” Deloitte writes. “The math doesn’t yet add up.” The researchers propose a three-phase transition, with short-term efforts focused on moving off coal and beginning carbon capture and storage projects. After 2030, the utilities can start to reshape their generation systems, achieve a flexible grid, and migrate to solar and renewable gases.

Shoppers Will Get Sustainable Products Help From Amazon

In a first step toward enabling sustainable shopping decisions, Amazon announced this week that it will put a “Climate Pledge Friendly” tag on as many as 25,000 products. Unfortunately, as Eco Watch reports , the reality is that the products are a tiny fraction of the 120 million items for sale on the site. The Environmental Defense Fund wrote skeptically but positively about the announcement: “Amazon’s sustainable shopping site is a critical first step, and the potential to raise awareness across the globe is unparalleled.” Actually tracking carbon through the supply chain is much more easily promised than done, the EDF argues. And Amazon is not currently addressing social justice, governance, or other critical elements of humane sustainability, which are essential to ethical decisions, too.


Cutting Plastic Pollution Won’t Keep Up With Plastic Growth

The use of plastic is increasing faster than efforts to reduce plastic pollution can offset the additional waste that reaches the oceans, a new study in Science Magazine finds. The volume of plastic made will continue to increase, meaning that even as improvements are made in recycling, the volume of plastic pollution will rise. We need to be much better at collecting plastic, but the current plastic recycling system is broken, as EcoWatch wrote this week. A key idea called out in the article is that collecting plastic from households will require “waste pickers” to be integrated into the economy. Globally, these menial laborers who pull plastic from dumps and garbage bins accounted for “58% of post consumer plastic waste collected for recycling in 2016.” Residents of the developed world disdain this work, yet it could be instrumental in preserving nature and human prosperity. Perhaps it is time to recognize that anyone working a full-time job to create sustainable economic outcomes needs to be paid a generous living wage.

The Gulf Stream Is Weakening Due to Climate Change

The Gulf Stream, or Atlantic Meridional Overturning Circulation (AMOC), which moves warm water into the North Atlantic and cold water toward the equator, shows signs of slowing. The amount of energy transferred by the AMOC is greater than annual human energy use. If it fails, it could result in higher salinity in the Southern Atlantic and weather changes across the Northern Hemisphere on both sides of the Atlantic. The slowdown was long predicted, and Real Climate reports that it is another signal confirmation of climate science’s predictive power.

Arctic Ice Reaches Second-Lowest Level in Recorded History

A blue sea throughout the summer at the North Pole is becoming more probable every year, even though it was not predicted for several more decades. Nature reports that high summer temperatures in Siberia following a warm winter produced a Sept. 15, 2020 ice sheet of only 1.44 million square miles (3.74 million square kilometers) in size. Only 2012 saw less ice coverage on the Arctic Ocean. This March, Arctic ice covered only 5.4 million square miles, the 11th lowest coverage on record.
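
As a quick sanity check on that unit conversion (our own arithmetic, not part of the Nature report), in Python:

```python
# Check that 3.74 million square kilometers is about 1.44 million square miles.

KM2_PER_MI2 = 2.58999  # square kilometers in one square mile

def km2_to_mi2(area_km2: float) -> float:
    """Convert an area from square kilometers to square miles."""
    return area_km2 / KM2_PER_MI2

print(round(km2_to_mi2(3.74e6) / 1e6, 2))  # -> 1.44 (million square miles)
```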

In Business

Making Carbon Pledges Count

Grist’s Emily Pontecorvo writes that corporate pledges to become carbon neutral or negative are often self-deluding or deceptive because companies selectively count emissions. She points to Google’s statements that it has been carbon neutral since 2007. But that claim is based on the performance of its offices, data centers, and employee commutes and travel, which account for only 27% of Google's emissions. It doesn't count manufacturing-related emissions for its smart home and computing products or supplier carbon footprints, which Google has acknowledged. The Science Based Targets initiative recently announced a project to establish global standards for net-zero pledges. Such a standard could be applied with confidence that action will translate into reduced carbon emissions.

GE Abandons Coal-Fired Power Plant Construction

Perhaps it was just bowing to market realities, but General Electric said this week it will stop building coal-powered electricity generation plants, according to Environmental Leader .

Tesla Underwhelms With Battery Day

The eagerly anticipated Tesla “Battery Day” failed to deliver a significant breakthrough in battery technology, The Guardian reported. Still, Elon Musk promised to cut the cost of Tesla battery technology by 56% over the next three years. At that level, around $80 per kilowatt-hour (kWh) compared to today’s $156/kWh cost for Tesla batteries, electric vehicles will achieve economic parity with internal combustion vehicles. The world and Wall Street were expecting bigger things, including a rumored million-mile battery. Tesla shares fell 13.4% between Friday, Sept. 18, and Thursday, Sept. 24, even after gaining 1.95% on Thursday.

Plastic Polluters Quantified

Statista offers a graphic that shows the pollution volume from the “world’s worst offenders for plastic pollution.” Coca-Cola, Pepsi, and Nestle are the top three plastic polluters, accounting for 2.9 million tons, 2.3 million tons, and 1.7 million tons of plastic packaging produced, respectively. Seriously, folks, the easiest way to reduce your plastic footprint is to abandon the PET bottle in favor of other packaging. For additional insight into the plastic pollution produced and the energetic marketing of half-measures and non-measures announced by polluters, take some time to read Talking-Trash.com’s study. They examined corporate plastic pollution in 15 nations. The plastic industry’s strategy, they write, is to “distract, delay, and derail legislation” that would require them to clean up their packaging and the waste it produces.

Infographic: The World's Worst Offenders For Plastic Pollution | Statista

Big Oil’s Paltry Climate Plans

Oil Change International this week released a study that examines the climate plans at the largest oil and gas companies on the planet, Big Oil Reality Check — Assessing Oil And Gas Climate Plans. In a nutshell, “none come close to aligning their actions with the urgent 1.5°C global warming limit as outlined by the Paris Agreement.” A graphic summarizing eight oil companies’ plans is an incredibly depressing sea of red, which represents “grossly insufficient” efforts. Last week, we wrote about BP and ExxonMobil’s recent declarations that they will move into renewables, which may be the first sign of real change. Yet, there is literally nothing to be hopeful about in this report.


Bamboo Toilet Paper Is the New Unicorn Wannabe Industry?

Uber’s CEO, Shark Tank’s Mark Cuban, Robert Downey Jr., and others have piled into a bamboo toilet paper investment, Cloud Paper, TechCrunch reports . They put a collective $3 million into the company that promises to reduce deforestation using bamboo instead of virgin timber for TP. Earth911 is assembling a buyer’s guide to sustainable toilet paper. While bamboo has advantages, we are not convinced that bamboo products shipped from Asia are fundamentally more sustainable than recycled toilet paper. Recycled TP can be made from recovered domestic post-consumer paper, avoiding overseas shipping. But investors are human and like to chase appealing stories that could net them huge profits. We have identified at least nine companies doing the same thing. And Business Green reports that climate-conscious venture capital investments are “soaring,” which is an excellent sign for green startups. We all need to get ready to ignore all the talk and wait to see which companies actually succeed.

Morgan Stanley Prepares To Stop Funding for CO2-Intensive Companies

After investing $91 billion in fossil fuel companies between 2016 and 2019, investment bank Morgan Stanley is pledging to stop providing funding for companies that generate CO2 as a by-product of their business. That’s good news, but the bank will not achieve “zero financed emissions” until 2050, Business Green reports . As noted above, the bank points to measuring CO2 emissions as a problem it must solve to enact its pledge. This is an important decision. We hope Morgan Stanley follows through, immediately ending its financing of firms that clearly contribute to global warming. The bank should only fund fossil fuel company projects that move away from burning petroleum.

Walmart Announces 2040 Zero-Emissions, Reforestation Goals

In a press release, Walmart president and CEO Doug McMillon said: “We want to play an important role in transforming the world’s supply chains to be regenerative. We face a growing crisis of climate change and nature loss, and we all need to take action with urgency.” To accomplish that, the company will transition to 100% renewable energy by 2040, switch to an all-electric vehicle fleet by 2040, transition its cooling systems to “low-impact” refrigerants, and restore 50 million acres of forest in the same timeframe. It currently uses renewable energy for about 29% of its operations. ClimateFriendlySupermarkets.org, however, points to Walmart’s 15% progress toward sustainability goals for supermarkets as evidence the retailer is far from fulfilling the promise. Once again, we are promised change and need exact measurements that can be used to hold businesses accountable. Otherwise, these announcements are nothing but marketing spin.


In Recycling

Innovators Racing To Add Smart Tech to Recycling Systems

The recycling system relies on a 1950s collection infrastructure and increasingly sophisticated sorting technology at the materials recovery facility (MRF) where recycling is processed. Now developers are introducing artificial intelligence devices that plug into existing sensors and tracking-plus-incentive systems based on blockchain technology to refine collection and sorting processes. As plastic pollution soars amid the pandemic, it’s time to rethink recycling . The on-demand delivery world could provide new local recovery options. However, it will still fall to consumers to send clean, dry, and well-sorted materials to their recycling program. As noted in a story above, “trash picking” is an essential component of successful materials recovery. That labor is performed by the very poorest people in other countries. Technology can assist consumers with sorting and even provide financial or promotional rewards from brands for successful recycling. A distributed, on-demand recycling system could be invented during this decade. It is still early days.

China’s Waste Import Bans Questioned by U.S. Recyclers

The Institute for Scrap Recycling Industries (ISRI), an industry organization representing recyclers, has called on the U.S. Trade Representative (USTR) to press China for greater transparency in developing materials importation rules. Amid the growing trade war between China and the U.S., Chinese leadership has introduced increasingly “overly strict” rules about what can and cannot be sent to the country for recycling. The policy threatens not only U.S. recyclers, ISRI argues, but also the Chinese firms that rely on U.S. materials, notably aluminum, brass, and copper, to produce new items. “[I]t is our general understanding that the Chinese Government intends to ban all “solid waste” by 2021, but there has been no transparency on such a policy, leading to great uncertainty in the marketplace,” ISRI wrote to the USTR. Many of China’s decisions were justified by environmental concerns, but after years of rising tensions, arbitrary trade decisions are preventing both nations from making use of recycled materials.

Rethink “Single-Use Stuff” To Rethink the Economy

Bloomberg’s Adam Minter, author of Junkyard Planet and Secondhand: Travels in the New Global Garage Sale , keynoted WasteExpo last week. He suggested that the rising tide of single-use products and packaging breaks an emerging global circular economy. Waste360 reports on his presentation, which may be available soon for viewing. The core of his thesis is that developed economies naturally shed old materials that can be successfully reused or recycled in emerging economies. He pointed to a Vermont-based recycler that has successfully exported old electronics to Ghana. “The Ghanese people view these used electronics as ‘stuff,’ not as waste or recycling, as many Americans would,” Waste 360 reported Minter said. During COVID, Americans have begun to understand the value of repair and reuse, Minter said. “During COVID, we’re seeing a lot of people taking up the cause of mending and fixing their clothes. Now that’s a small-scale movement, but people are starting to think more in terms of durability; in terms of their things lasting longer.”

Corrugated Cardboard and Tissue Recycling on the Rise

Kelly McNamara of Numera Analytics told the Institute for Scrap Recycling Industries (ISRI) that corrugated cardboard — known in the industry as OCC, or “old corrugated containers” — is being recycled more despite China’s ban on contaminated cardboard imports, Recycling Today reports. Other Asian nations have begun to import cardboard. Concurrently, consumer demand for recycled paper personal care products is reshaping paper recycling in the United States. Demand for printing paper and newsprint has fallen by 21% and 47%, respectively. At the same time, OCC now accounts for almost two-thirds of recovered paper globally. Tissue paper, driven by consumer demand for recycled products, has soared by 34% over the last decade. The article also discusses packaging innovation, including smart packaging that helps return post-consumer materials to recycling. It is a good read to get an informed perspective on why some materials are no longer recycled: they are not profitable. And because that material isn’t going away, we need to reorganize U.S. recycling to eliminate the unnecessary waste that unrecycled materials contribute to landfills and incineration-related pollution.

Actions You Can Take

Pledge To Vote Earth

The Earth Day Foundation asks everyone to vote this November. It will send reminders and news updates to voters who pledge to vote for environmental progress. Visit the site to sign up or text VOTEEARTH or VOTE to 202-816-5784.

Get Active With Only One

Only One is a new learning and citizen action site that calls for ocean protections and restoration during the 2020s — they want to protect 30% of our oceans by 2030. Although only just launched, it features informative stories about ocean pollution and preservation along with actions you can support and fund with a few clicks. For example, you can sign a petition to world governments demanding protections for Antarctic waters and biodiversity. They promise to keep you tuned into what you can do to support our planet’s seas.

Watch the Second Annual Global Climate Restoration Forum

Carbon capture and storage can contribute to the restoration of the environment to pre-industrial CO2 levels. These technologies and natural tactics, such as seeding sections of the ocean with milled iron to encourage algae blooms that capture CO2, were discussed during the Second Annual Global Climate Restoration Forum. The entire event is now available to stream for free on YouTube.

Register for 24 Hours of Reality

Vice President Al Gore and a group of activists from around the world will present a 24-hour climate call-to-action event, 24 Hours of Reality, starting at 4 PM EDT on Oct. 11. You can join in and watch conversations about the climate crisis, racial justice, and the impact of COVID-19 on environmental policy and personal behavior. Declaring that “We are at an inflection point,” the group is dedicated to helping society make the turn to sustainability. Want to know more? Local presentations are available for schools and organizations; just reach out and ask to schedule a free virtual event.

Does Earth911 Reader help you understand sustainability, recycling, and climate issues?

We’ve published the Earth911 Reader, a collection of short article summaries, every Saturday for the past month. Does the Reader help you?


Mitch is the publisher at Earth911.com and Director of Digital Strategy and Innovation at Intentional Futures, an insight-to-impact consultancy in Seattle. A veteran tech journalist, Mitch is passionate about helping people understand sustainability and the impact of their decisions on the planet.


The Anarchist Library

Kevin Carson

The Homebrew Industrial Revolution: A Low-Overhead Manifesto

     Preface

     Chapter One: A Wrong Turn

       A. Preface: Mumford’s Periodization of Technological History

       B. The Neotechnic Phase

       C. A Funny Thing Happened on the Way to the Neotechnic Revolution

     Chapter Two: Moloch: The Sloanist Mass Production Model

       Introduction

       A. Institutional Forms to Provide Stability

       B. Mass Consumption and Push Distribution to Absorb Surplus

       C. State Action to Absorb Surplus: Imperialism

       D. State Action to Absorb Surplus: State Capitalism

       E. Mene, Mene, Tekel, Upharsin (a Critique of Sloanism’s Defenders)

       F. The Pathologies of Sloanism

       G. Mandatory High Overhead

     Chapter Three: Babylon is Fallen

       A. Resumption of the Crisis of Overaccumulation

       B. Resource crises (Peak Oil)

       C. Fiscal Crisis of the State

       D. Decay of the Cultural Pseudomorph

       E. Failure to Counteract Limits to Capture of Value by Enclosure of the Digital Commons

       F. Networked Resistance, Netwar, and Asymmetric Warfare Against Corporate Management

       Appendix: Three Works on Abundance and Technological Unemployment

     Chapter Four: Back to the Future

       A. Home Manufacture

       B. Relocalized Manufacturing

       C. New Possibilities for Flexible Manufacturing

     Chapter Five: The Small Workshop, Desktop Manufacturing, and PowerCube Household Production

       A. Neighborhood and Backyard Industry

       B. The Desktop Revolution and Peer Production in the Immaterial Sphere

       C. The Expansion of the Desktop Revolution and Peer Production into the Physical Realm

       D. The Microenterprise

       Appendix: Case Studies in the Coordination of Networked Fabrication and Open Design

     Chapter Six: Resilient Communities and Local Economies

       A. Local Economies as Bases of Independence and Buffers Against Economic Turbulence

       B. Historical Models of Resilient Community

       C. Resilience, Primary Social Units, and Libertarian Values

       D. LETS Systems, Barter Networks, and Community Currencies

       E. Community Bootstrapping

       F. Contemporary Ideas and Projects

     Chapter Seven: The Alternative Economy as a Singularity

       A. Networked Production and the Bypassing of Corporate Nodes

       B. The Advantages of Value Creation Outside the Cash Nexus

       C. More Efficient Extraction of Value from Inputs

       D. Seeing Like a Boss

       E. The Implications of Reduced Physical Capital Costs

       F. Strong Incentives and Reduced Agency Costs

       G. Reduced Costs from Supporting Rentiers and Other Useless Eaters

       H. The Stigmergic Non-Revolution

       I. The Singularity

       Appendix: The Singularity in the Third World

     Bibliography

To my mother, Ruth Emma Rickert, and the memory of my father, Amos Morgan Carson.

In researching and writing my last book, Organization Theory: A Libertarian Perspective , I was probably more engaged and enthusiastic about working on material related to micromanufacturing, the microenterprise, the informal economy, and the singularity resulting from them, than on just about any other part of the book. When the book went to press, I didn’t feel that I was done writing about those things. As I completed that book, I was focused on several themes that, while they recurred throughout the book, were imperfectly tied together and developed.

In my first paper as research associate at Center for a Stateless Society, [1] I attempted to tie these themes together and develop them in greater detail in the form of a short monograph. I soon found that it wasn’t going to stop there, as I elaborated on the same theme in a series of C4SS papers on industrial history. [2] And as I wrote those papers, I began to see them as the building blocks for a stand-alone book.

One of the implicit themes in Organization Theory which I have attempted to develop since, and which is central to this book, is the central role of fixed costs—initial capital outlays and other overhead—in economics. The higher the fixed costs of an enterprise, the larger the income stream required to service them. That’s as true for the household microenterprise, and for the “enterprise” of the household itself, as for more conventional businesses. Regulations that impose artificial capitalization and other overhead costs, the purchase of unnecessarily expensive equipment of a sort that requires large batch production to amortize, the use of stand-alone buildings, etc., increase the size of the minimum revenue stream required to stay in business, and effectively rule out part-time or intermittent self-employment. When such restrictions impose artificially high fixed costs on the means of basic subsistence (housing and feeding oneself, etc.), their effect is to make cheap and comfortable subsistence impossible, and to mandate ongoing external sources of income just to survive. As Charles Johnson has argued,

If it is true (as Kevin has argued, and as I argued in Scratching By [3] ) that, absent the state, most ordinary workers would experience a dramatic decline in the fixed costs of living, including (among other things) considerably better access to individual ownership of small plots of land, no income or property tax to pay, and no zoning, licensing, or other government restraints on small-scale neighborhood home-based crafts, cottage industry, or light farming/heavy gardening, I think you’d see a lot more people in a position to begin edging out or to drop out of low-income wage labor entirely—in favor of making a modest living in the informal sector, by growing their own food, or both... [4]

On the other hand, innovation in the technologies of small-scale production and of daily living reduces the worker’s need for a continuing income stream. It enables the microenterprise to function intermittently and to enter the market incrementally, with no overhead to be serviced when business is slow. The result is enterprises that are lean and agile, and can survive long periods of slow business, at virtually no cost; likewise, such increased efficiencies, by minimizing the ongoing income stream required for comfortable subsistence, have the same liberating effect on ordinary people that access to land on the common did for their ancestors three hundred years ago.
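
To make the overhead argument concrete, here is a toy sketch in Python; the figures are invented for illustration, not drawn from the book:

```python
# A toy model (all numbers invented) of the overhead argument above: the
# income stream an enterprise must earn just to service its fixed costs
# is set by those costs, regardless of how much it actually produces.

def breakeven_units(fixed_costs_per_month: float, margin_per_unit: float) -> float:
    """Units that must be sold each month merely to cover overhead."""
    return fixed_costs_per_month / margin_per_unit

# A conventional shop: rent, licensing, amortization of batch-scale equipment.
print(breakeven_units(4000.0, 20.0))  # -> 200.0 units/month before any profit

# A household microenterprise: spare room, hand tools, no licensing fees.
print(breakeven_units(200.0, 20.0))   # -> 10.0 units/month; slow months cost little
```

The point is the ratio: cutting fixed costs by a factor of twenty cuts the income stream needed merely to stay solvent by the same factor, which is what makes intermittent, part-time operation viable.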

The more I thought about it, the more central the concept of overhead became to my analysis of the two competing economies. Along with setup time, fixed costs and overhead are central to the difference between agility and its lack. Hence the subtitle of this book: “A Low Overhead Manifesto.”

Agility and resilience are at the heart of the alternative economy’s differences with its conventional predecessor. Its superiorities are summed up by a photograph I found at Wikimedia Commons, which I considered using as a cover image: a tiny teenage Viet Cong girl leading an enormous captured American soldier. I’m obliged to Jerry Brown (via Reason magazine’s Jesse Walker) for the metaphor: guerrillas in black pajamas, starting out with captured Japanese and French arms, with a bicycle-based supply train, kicking the living shit out of the best-trained and highest-technology military force in human history.

[5] But Governor Brown was much more of a fiscal conservative than Governor Reagan, even if he made arguments for austerity that the Republican would never use. (At one point, to get across the idea that a lean organization could outperform a bloated bureaucracy, he offered the example of the Viet Cong.)

I have since decided to go with the picture of the RepRap 3-D printer that you now see at the beginning of this preface, but a guerrilla soldier is still an appropriate symbol for all the characteristics of the alternative economy I’m trying to get across. As I write in the concluding chapter of the book:

Running throughout this book, as a central theme, has been the superior efficiency of the alternative economy: its lower burdens of overhead, its more intensive use of inputs, and its avoidance of idle capacity. Two economies are fighting to the death: one of them a highly-capitalized, high-overhead, and bureaucratically ossified conventional economy, the subsidized and protected product of one and a half century’s collusion between big government and big business; the other a low capital, low-overhead, agile and resilient alternative economy, outperforming the state capitalist economy despite being hobbled and driven underground. The alternative economy is developing within the interstices of the old one, preparing to supplant it. The Wobbly phrase “building the structure of the new society within the shell of the old” is one of the most fitting phrases ever conceived for summing up the concept.

I’d like to thank Brad Spangler and Roderick Long for providing me the venue, at Center for a Stateless Society, where I wrote the series of essays this book is based on. I couldn’t have written this without all the valuable information I gathered as a participant in the P2P Research email list and the Open Manufacturing list at Google Groups. My participation (no doubt often clueless) was entirely that of a fanboy and enthusiastic layman, since I can’t write a line of code and can barely hammer a nail straight. But I thank them for allowing me to play the role of Jane Goodall. And finally, thanks to Professor Gary Chartier of La Sierra University, for his beautiful job formatting the text and designing the cover, as well as his feedback and kind promotion of this work in progress.

Chapter One: A Wrong Turn

A. Preface: Mumford’s Periodization of Technological History

Lewis Mumford, in Technics and Civilization , divided the progress of technological development since late medieval times into three considerably overlapping periods (or phases): the eotechnic, paleotechnic, and neotechnic.

The original technological revolution of the late Middle Ages, the eotechnic, was associated with the skilled craftsmen of the free towns, and eventually incorporated the fruits of investigation by the early scientists. It began with agricultural innovations like the horse collar, horseshoe and crop rotation. It achieved great advances in the use of wood and glass, masonry, and paper (the latter including the printing press). The agricultural advances of the early second millennium were further built on by the innovations of market gardeners in the sixteenth and seventeenth centuries—like, for example, raised bed horticulture, composting and intensive soil development, and the hotbeds and greenhouses made possible by advances in cheap production of glass.

In mechanics, in particular, its greatest achievements were clockwork machinery and the intensive application of water and wind power. The first and most important prerequisite of machine production was the transmission of power and control of movement by use of meshed gears. Clockwork, Mumford argued, was “the key-machine of the modern industrial age.” It was

a new kind of power-machine, in which the source of power and the transmission were of such a nature as to ensure the even flow of energy throughout the works and to make possible regular production and a standardized product. In its relationship to determinable quantities of energy, to standardization, to automatic action, and finally to its own special product, accurate timing, the clock has been the foremost machine in modern technics.... The clock, moreover, served as a model for many other kinds of mechanical works, and the analysis of motion that accompanied the perfection of the clock, with the various types of gearing and transmission that were elaborated, contributed to the success of quite different kinds of machine. [6] If power machinery be a criterion, the modern industrial revolution began in the twelfth century and was in full swing by the fifteenth. [7]

With this first and largest hurdle cleared, Renaissance tinkerers like da Vinci quickly turned to the application of clockwork machinery to specific processes. [8] Given the existence of clockwork, the development of machine processes for every imaginable specific task was inevitable. Regardless of the prime mover at one end, or the specific process at the other, clockwork transmission of power was the defining feature of automatic machinery.

In solving the problems of transmitting and regulating motion, the makers of clockwork helped the general development of fine mechanisms. To quote Usher once more: “The primary development of the fundamental principles of applied mechanics was ... largely based upon the problems of the clock.” Clockmakers, along with blacksmiths and locksmiths, were among the first machinists: Nicholas Forq, the Frenchman who invented the planer in 1751, was a clockmaker; Arkwright, in 1768, had the help of a Warrington clockmaker; it was Huntsman, another clockmaker, desirous of a more finely tempered steel for the watchspring, who invented the process of producing crucible steel: these are only a few of the more outstanding names. In sum, the clock was the most influential of machines, mechanically as well as socially; and by the middle of the eighteenth century it had become the most perfect: indeed, its inception and its perfection pretty well delimit the eotechnic phase. To this day, it is the pattern of fine automatism. [9]

With the use of clockwork to harness the power of prime movers and transmit it to machine production processes, eotechnic industry proliferated wherever wind or running water was abundant. The heartland of eotechnic industry was the river country of the Rhineland and northern Italy, and the windy areas of the North and Baltic seas. [10]

Grinding grain and pumping water were not the only operations for which the water-mill was used: it furnished power for pulping rags for paper (Ravensburg: 1290): it ran the hammering and cutting machines of an ironworks (near Dobrilugk, Lausitz, 1320): it sawed wood (Augsburg: 1322): it beat hides in the tannery, it furnished power for spinning silk, it was used in fulling-mills to work up the felts, and it turned the grinding machines of the armorers. The wire-pulling machine invented by Rudolph of Nürnberg in 1400 was worked by water-power. In the mining and metal working operations Dr. Georg Bauer described the great convenience of water-power for pumping purposes in the mine, and suggested that if it could be utilized conveniently, it should be used instead of horses or man-power to turn the underground machinery. As early as the fifteenth century, water-mills were used for crushing ore. The importance of water-power in relation to the iron industries cannot be over-estimated: for by utilizing this power it was possible to make more powerful bellows, attain higher heats, use larger furnaces, and therefore increase the production of iron. The extent of all these operations, compared with those undertaken today in Essen or Gary, was naturally small: but so was the society. The diffusion of power was an aid to the diffusion of population: as long as industrial power was represented directly by the utilization of energy, rather than by financial investment, the balance between the various regions of Europe and between town and country within a region was pretty evenly maintained. It was only with the swift concentration of financial and political power in the sixteenth and seventeenth centuries, that the excessive growth of Antwerp, London, Amsterdam, Paris, Rome, Lyons, Naples, took place. [11]

With the “excessive growth of Antwerp, London, Amsterdam, Paris, Rome, Lyons, Naples,” came the triumph of a new form of industry associated with the concentrated power of those cities. The eotechnic phase was supplanted or crowded out in the early modern period by the paleotechnic—or what is referred to, wrongly, in most conventional histories simply as “the Industrial Revolution.”

The paleotechnic had its origins in the new centralized state and the industries closely associated with it (most notably mining and armaments), and centered on mining, iron, coal, and steam power. To give some indication of the loci of the paleotechnic institutional complex, the steam engine was first introduced for pumping water out of mines, and its need for fuel in turn reinforced the significance of the coal industry; [12] the first appearance of large-scale factory production was in the armaments industry. [13] The paleotechnic culminated in the “dark satanic mills” of the nineteenth century and the giant corporations of the late nineteenth and early twentieth.

The so-called “Industrial Revolution,” in conventional parlance, conflates two distinct phenomena: the development of mechanized processes for specific kinds of production (spinning and weaving, in particular), and the harnessing of the steam engine as a prime mover. The former was a direct outgrowth of the mechanical science of the eotechnic phase, and would have been fully compatible with production in the small shop if not for the practical issues raised by steam power. The imperative to concentrate machine production in large factories resulted, not from the requirements of machine production as such, but from the need to economize on steam power.

Although the paleotechnic incorporated some contributions from the eotechnic period, it was a fundamental departure in direction, and involved the abandonment of a rival path of development. Technology was developed in the interests of the new royal absolutists, mercantilist industry and the factory system that grew out of it, and the new capitalist agriculturists (especially the Whig oligarchy of England); it incorporated only those eotechnic contributions that were compatible with the new tyrannies, and abandoned the rest.

But its successor, the neotechnic, is what concerns us here.

B. The Neotechnic Phase

Much of the centralization of paleotechnic industry resulted not only from the authoritarian institutional culture associated with its origins, but also from the need (which we saw above) to economize on power.

….the steam engine tended toward monopoly and concentration.... Twenty-four hour operations, which characterized the mine and the blast furnace, now came into other industries which had heretofore respected the limitations of day and night. Moved by a desire to earn every possible sum on their investments, the textile manufacturers lengthened the working day.... The steam engine was pacemaker. Since the steam engine requires constant care on the part of the stoker and engineer, steam power was more efficient in large units than in small ones: instead of a score of small units, working when required, one large engine was kept in constant motion. Thus steam power fostered the tendency toward large industrial plants already present in the subdivision of the manufacturing process. Great size, forced by the nature of the steam engine, became in turn a symbol of efficiency. The industrial leaders not only accepted concentration and magnitude as a fact of operation, conditioned by the steam engine: they came to believe in it by itself, as a mark of progress. With the big steam engine, the big factory, the big bonanza farm, the big blast furnace, efficiency was supposed to exist in direct ratio to size. Bigger was another way of saying better. [Gigantism] was... abetted by the difficulties of economic power production with small steam engines: so the engineers tended to crowd as many productive units as possible on the same shaft, or within the range of steam pressure through pipes limited enough to avoid excessive condensation losses. The driving of the individual machines in the plant from a single shaft made it necessary to spot the machines along the shafting, without close adjustment to the topographical needs of the work itself.... [14]

Steam power meant that machinery had to be concentrated in one place, in order to get the maximum use out of a single prime mover. The typical paleotechnic factory, through the early 20th century, had machines lined up in long rows, “a forest of leather belts one arising from each machine, looping around a long metal shaft running the length of the shop,” all dependent on the factory’s central power plant. [15]

The neotechnic revolution of the late nineteenth century put an end to all these imperatives.

If the paleotechnic was a “coal-and-iron complex,” in Mumford’s terminology, the neotechnic was an “electricity-and-alloy complex.” [16] The defining features of the neotechnic were the decentralized production made possible by electricity, and the light weight and ephemeralization (to borrow a term from Buckminster Fuller) made possible by the light metals.

The beginning of the neotechnic period was associated, most importantly, with the invention of the prerequisites for electrical power—the dynamo, the alternator, the storage cell, the electric motor—and the resulting possibility of scaling electrically powered production machinery to the small shop, or even scaling power tools to household production.

Electricity made possible the use of virtually any form of energy, indirectly, as a prime mover for production: combustibles of all kinds, sun, wind, water, even temperature differentials. [17] As it became possible to run free-standing machines with small electric motors, the central rationale for the factory system disappeared. “In general,” as Paul Goodman wrote, “the change from coal and steam to electricity and oil has relaxed one of the greatest causes for concentration of machinery around a single driving shaft.” [18]

The decentralizing potential of small-scale, electrically powered machinery was a common theme among many writers from the late 19th century on. That, and the merging of town and village it made possible, were the central themes of Kropotkin’s Fields, Factories and Workshops. With electricity “distributed in the houses for bringing into motion small motors of from one-quarter to twelve horse-power,” it was possible to produce in small workshops and even homes. Freeing machinery up from a single prime mover ended all limits on the location of machine production. The primary basis for economy of scale, as it existed in the nineteenth century, was the need to economize on horsepower—a justification that vanished when the distribution of electrical power eliminated reliance on a single source of power. [19]

William Morris seems to have made some Kropotkinian technological assumptions in his depiction of a future libertarian communist society in News From Nowhere:

“What building is that?” said I, eagerly; for it was a pleasure to see something a little like what I was used to: “it seems to be a factory.” “Yes,” he said, “I think I know what you mean, and that’s what it is; but we don’t call them factories now, but Banded-workshops; that is, places where people collect who want to work together.” “I suppose,” said I, “power of some sort is used there?” “No, no,” said he. “Why should people collect together to use power, when they can have it at the places where they live or hard by, any two or three of them, or any one, for the matter of that?...” [20]

The introduction of electrical power, in short, put small-scale machine production on an equal footing with machine production in the factory. The introduction of the electric motor worked a transformation within the plant itself.

For the electric motor created flexibility in the design of the factory: not merely could individual units be placed where they were wanted, and not merely could they be designed for the particular work needed: but the direct drive, which increased the efficiency of the motor, also made it possible to alter the layout of the plant itself as needed. The installation of motors removed the belts which cut off light and lowered efficiency, and opened the way for the rearrangement of machines in functional units without regard for the shafts and aisles of the old-fashioned factory: each unit could work at its own rate of speed, and start and stop to suit its own needs, without power losses through the operation of the plant as a whole.

...[T]he efficiency of small units worked by electric motors utilizing current either from local turbines or from a central power plant has given small-scale industry a new lease on life: on a purely technical basis it can, for the first time since the introduction of the steam engine, compete on even terms with the larger unit. Even domestic production has become possible again through the use of electricity: for if the domestic grain grinder is less efficient, from a purely mechanical standpoint, than the huge flour mills of Minneapolis, it permits a nicer timing of production to need, so that it is no longer necessary to consume bolted white flours because whole wheat flours deteriorate more quickly and spoil if they are ground too long before they are sold and used. To be efficient, the small plant need not remain in continuous operation nor need it produce gigantic quantities of foodstuffs and goods for a distant market: it can respond to local demand and supply; it can operate on an irregular basis, since the overhead for permanent staff and equipment is proportionately smaller; it can take advantage of smaller wastes of time and energy in transportation, and by face to face contact it can cut out the inevitable red-tape of even efficient large organizations. [21]

Mumford’s comments on flour milling also anticipated the significance of small-scale powered machinery in making possible what later became known as “lean production”; its central principle is that overall flow is more important to cost-cutting than maximizing the efficiency of any particular stage in isolation. The modest increases in unit production cost at each separate stage are offset not only by greatly reduced transportation costs, but by avoiding the large eddies in overall production flow (buffer stocks of goods-in-process, warehouses full of goods “sold” to inventory without any orders, etc.) that result when production is not geared to demand. [22]
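To put rough numbers on the point (the figures below are entirely hypothetical, a minimal sketch rather than anyone’s actual cost data), a production stage can have the lowest cost per unit at the machine and still the highest cost per unit actually sold, once buffer stocks, warehousing, and long-distance shipping are counted:

```python
# Illustrative back-of-the-envelope comparison (invented numbers):
# per-unit cost at the machine vs. total cost once buffer stocks,
# warehousing, and long-distance shipping are counted.

def total_unit_cost(machine_cost, units_made, units_sold,
                    carrying_cost_per_unit, shipping_per_unit):
    """Total cost per unit actually sold, counting unsold inventory."""
    production = machine_cost * units_made
    carrying = carrying_cost_per_unit * (units_made - units_sold)
    shipping = shipping_per_unit * units_sold
    return (production + carrying + shipping) / units_sold

# Mass production: cheaper at the machine, but produces ahead of demand
# and ships to a distant national market.
mass = total_unit_cost(machine_cost=1.00, units_made=1200, units_sold=1000,
                       carrying_cost_per_unit=0.50, shipping_per_unit=0.40)

# Flexible local production: dearer at the machine, but geared to demand
# (no buffer stock) and sold close to home.
local = total_unit_cost(machine_cost=1.30, units_made=1000, units_sold=1000,
                        carrying_cost_per_unit=0.50, shipping_per_unit=0.05)

print(f"mass production: ${mass:.2f}/unit sold")   # $1.70
print(f"local flexible:  ${local:.2f}/unit sold")  # $1.35
```

The particular figures are invented; the point is only that the eddies in the flow, the inventory carried and the goods shipped across the country, end up in the price of whatever finally reaches a buyer.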

Neotechnic methods, which could be reproduced anywhere, made possible a society where “the advantages of modern industry [would] be spread, not by transport—as in the nineteenth century—but by local development.” The spread of technical knowledge and standardized methods would make transportation far less important. [23]

Mumford also described, in quite Kropotkinian terms, the “marriage of town and country, of industry and agriculture,” that could result from the application of further refined eotechnic horticultural techniques and the decentralization of manufacturing in the neotechnic age. [24]

Mumford saw the neotechnic phase as a continuation of the principles of the eotechnic, with industrial organization taking the form it would have done if allowed to develop directly from the eotechnic without interruption.

The neotechnic, in a sense, is a resumption of the lines of development of the original eotechnic revolution, following the paleotechnic interruption. The neotechnic differs from the paleotechnic phase almost as white differs from black. But on the other hand, it bears the same relation to the eotechnic phase as the adult form does to the baby.... The first hasty sketches of the fifteenth century were now turned into working drawings: the first guesses were now re-enforced with a technique of verification: the first crude machines were at last carried to perfection in the exquisite mechanical technology of the new age, which gave to motors and turbines properties that had but a century earlier belonged almost exclusively to the clock. [25]

Or as Ralph Borsodi put it, “[t]he steam engine put the water-wheel out of business. But now the gasoline engine and the electric motor have been developed to a point where they are putting the steam engine out of business.”

The modern factory came in with steam. Steam is a source of power that almost necessitates factory production. But electricity does not. It would be poetic justice if electricity drawn from the myriads of long neglected small streams of the country should provide the power for an industrial counter-revolution. [26]

Mumford suggested that, absent the abrupt break created by the new centralized states and their state capitalist clients, the eotechnic might have evolved directly into the neotechnic. Had the eotechnic not been aborted by the paleotechnic, a full-scale modern industrial revolution would still almost certainly have come about, even “had not a ton of coal been dug in England, and had not a new iron mine been opened.” [27]

The amount of work accomplished by wind and water power compared quite favorably with that of the steam-powered industrial revolution. Indeed, the great advances in textile output of the eighteenth century were made with water-powered factories; steam power was adopted only later. The Fourneyron water-turbine, perfected in 1832, was the first prime-mover to exceed the poor 5% or 10% efficiencies of the early steam engine; a logical development of earlier water-power technology, it would likely have arrived much sooner had not the development of water-power been sidetracked by the paleotechnic revolution. [28]

Had the spoonwheel of the seventeenth century developed more rapidly into Fourneyron’s efficient water-turbine, water might have remained the backbone of the power system until electricity had developed sufficiently to give it a wider area of use. [29]

The eotechnic phase survived longest in America, according to Mumford. Had it survived a bit longer, it might have passed directly into the neotechnic. In The City in History, he mentioned abortive applications of eotechnic means to decentralized organization, unfortunately forestalled by the paleotechnic revolution, and speculated at greater length on the Kropotkinian direction social evolution might have taken had the eotechnic passed directly into the neotechnic. Of the societies of seventeenth century New England and New Netherlands, he wrote:

This eotechnic culture was incorporated in a multitude of small towns and villages, connected by a network of canals and dirt roads, supplemented after the middle of the nineteenth century by short line railroads, not yet connected up into a few trunk systems meant only to augment the power of the big cities. With wind and water power for local production needs, this was a balanced economy; and had its balance been maintained, had balance indeed been consciously sought, a new general pattern of urban development might have emerged.... In Technics and Civilization, I pointed out how the earlier invention of more efficient prime movers, Fourneyron’s water turbine and the turbine windmill, could perhaps have provided the coal mine and the iron mine with serious technical competitors that might have kept this decentralized regime long enough in existence to take advantage of the discovery of electricity and the production of the light metals. With the coordinate development of science, this might have led directly into the more humane integration of ‘Fields, Factories, and Workshops’ that Peter Kropotkin was to outline, once more, in the eighteen-nineties. [30]

Borsodi speculated, along lines similar to Mumford’s, on the different direction things might have taken had the eotechnic phase been developed to its full potential without being aborted by the paleotechnic:

It is impossible to form a sound conclusion as to the value to mankind of this institution which the Arkwrights, the Watts, and the Stephensons had brought into being if we confine ourselves to a comparison of the efficiency of the factory system of production with the efficiency of the processes of production which prevailed before the factory appeared. A very different comparison must be made. We must suppose that the inventive and scientific discoveries of the past two centuries had not been used to destroy the methods of production which prevailed before the factory. We must suppose that an amount of thought and ingenuity precisely equal to that used in developing the factory had been devoted to the development of domestic, custom, and guild production. We must suppose that the primitive domestic spinning wheel had been gradually developed into more and more efficient domestic machines; that primitive looms, churns, cheese presses, candle molds, and primitive productive apparatus of all kinds had been perfected step by step without sacrifice of the characteristic “domesticity” which they possessed. In short, we must suppose that science and invention had devoted itself to making domestic and handicraft production efficient and economical, instead of devoting itself almost exclusively to the development of factory machines and factory production. The factory-dominated civilization of today would never have developed. Factories would not have invaded those fields of manufacture where other methods of production could be utilized. Only the essential factory would have been developed. Instead of great cities, lined with factories and tenements, we should have innumerable small towns filled with the homes and workshops of neighborhood craftsmen. Cities would be political, commercial, educational, and entertainment centers.... Efficient domestic implements and machines developed by centuries of scientific improvement would have eliminated drudgery from the home and the farm. [31]

And, we might add, the home production machinery itself would have been manufactured, not in Sloanist mass-production factories, but mainly in small factories and shops integrating power machinery into craft production.

C. A Funny Thing Happened on the Way to the Neotechnic Revolution

The natural course of things, according to Borsodi, was that the “process of shifting production from the home and neighborhood to the distantly located factory” would have peaked with “the perfection of the reciprocating steam-engine,” and then leveled off until the invention of the electric motor reversed the process and enabled families and local producers to utilize the powered machinery previously restricted to the factory. [32] But it didn’t happen that way. Instead, electricity was incorporated into manufacturing in an utterly perverse way.

Michael Piore and Charles Sabel described a fork in the road, based on which of two possible alternative ways were chosen for incorporating electrical power into manufacturing. The first, more in keeping with the unique potential of the new technology, was to integrate electrically powered machinery into small-scale craft production: “a combination of craft skill and flexible equipment,” or “mechanized craft production.”

Its foundation was the idea that machines and processes could augment the craftsman’s skill, allowing the worker to embody his or her knowledge in ever more varied products: the more flexible the machine, the more widely applicable the process, the more it expanded the craftsman’s capacity for productive expression.

The other was to adapt electrical machinery to the preexisting framework of paleotechnic industrial organization—in other words, what was to become twentieth century mass-production industry. This latter alternative entailed breaking the production process down into its separate steps, and then substituting extremely expensive and specialized machinery for human skill. “The more specialized the machine—the faster it worked and the less specialized its operator needed to be—the greater its contribution to cutting production costs.” [33]

The first path, unfortunately, was for the most part the one not taken; it has been followed only in isolated enclaves, particularly in assorted industrial districts in Europe. The most famous current example is Italy’s Emilia-Romagna region, which we will examine in a later chapter.

The second, mass-production model became the dominant form of industrial organization. Neotechnic advances like electrically powered machinery, which offered the potential for decentralized production and were ideally suited to a fundamentally different kind of society, have so far been integrated into the framework of mass production industry.

Mumford argued that the neotechnic advances, rather than being used to their full potential as the basis for a new kind of economy, were instead incorporated into a paleotechnic framework. The neotechnic had not “displaced the older regime” with “speed and decisiveness,” and had not yet “developed its own form and organization.”

Emerging from the paleotechnic order, the neotechnic institutions have nevertheless in many cases compromised with it, given way before it, lost their identity by reason of the weight of vested interests that continued to support the obsolete instruments and the anti-social aims of the middle industrial era. Paleotechnic ideals still largely dominate the industry and the politics of the Western World.... To the extent that neotechnic industry has failed to transform the coal-and-iron complex, to the extent that it has failed to secure an adequate foundation for its humaner technology in the community as a whole, to the extent that it has lent its heightened powers to the miner, the financier, the militarist, the possibilities of disruption and chaos have increased. [34]

True: the industrial world produced during the nineteenth century is either technologically obsolete or socially dead. But unfortunately, its maggoty corpse has produced organisms which in turn may debilitate or possibly kill the new order that should take its place: perhaps leave it a hopeless cripple. [35]

The new machines followed, not their own pattern, but the pattern laid down by previous economic and technical structures. [36]

The fact is that in the great industrial areas of Western Europe and America..., the paleotechnic phase is still intact and all its essential characteristics are uppermost, even though many of the machines it uses are neotechnic ones or have been made over—as in the electrification of railroad systems—by neotechnic methods. In this persistence of paleotechnics... we continue to worship the twin deities, Mammon and Moloch.... [37]

We have merely used our new machines and energies to further processes which were begun under the auspices of capitalist and military enterprise: we have not yet utilized them to conquer these forms of enterprise and subdue them to more vital and humane purposes.... [38]

Not alone have the older forms of technics served to constrain the development of the neotechnic economy: but the new inventions and devices have been frequently used to maintain, renew, stabilize the structure of the old social order.... [39]

The present pseudomorph is, socially and technically, third-rate. It has only a fraction of the efficiency that the neotechnic civilization as a whole may possess, provided it finally produces its own institutional forms and controls and directions and patterns. At present, instead of finding these forms, we have applied our skill and invention in such a manner as to give a fresh lease of life to many of the obsolete capitalist and militarist institutions of the older period. Paleotechnic purposes with neotechnic means: that is the most obvious characteristic of the present order. [40]

Mumford used Spengler’s idea of the “cultural pseudomorph” to illustrate the process: “...in geology... a rock may retain its structure after certain elements have been leached out of it and been replaced by an entirely different kind of material. Since the apparent structure of the old rock remains, the new product is called a pseudomorph.”

A similar metamorphosis is possible in culture: new forces, activities, institutions, instead of crystallizing independently into their own appropriate forms, may creep into the structure of an existing civilization.... As a civilization, we have not yet entered the neotechnic phase.... [W]e are still living, in Matthew Arnold’s words, between two worlds, one dead, the other powerless to be born. [41]

For Mumford, Soviet Russia was a mirror image of the capitalist West in shoehorning neotechnic technology into a paleotechnic institutional framework. Despite the neotechnic promise of Lenin’s “electrification plus Soviet power,” the Soviet aesthetic ideal was that of the Western mass-production factory: “the worship of size and crude mechanical power, and the introduction of a militarist technique in both government and industry....” [42] That Lenin’s vision of “communism” entailed a wholesale borrowing of the mass-production model, under state ownership, is suggested by his infatuation with Taylorism and his suppression of worker self-management in the factories. The Stalinist fetish for gigantism, with its boasts of having the biggest factory, power plant, etc. in the world, followed as a matter of course.

How were existing institutional interests able to thwart the revolutionary potential of electrical power, and divert neotechnic technologies into paleotechnic channels? The answer is that the state tipped the balance.

The state played a central role in the triumph of mass-production industry in the United States.

First and most important were the state’s subsidies to long-distance transportation. There never would have been large manufacturing firms producing for a national market, had not the federal government first created a national market with the national railroad network. A high-volume national transportation system was an indispensable prerequisite for big business.

We quoted Mumford’s observation above, that the neotechnic revolution offered to substitute industrialization by local economic development for reliance on long-distance transport. State policies, however, tipped the balance in the other direction: they artificially shifted the competitive advantage toward industrial concentration and long-distance distribution.

Alfred Chandler, the chief apostle of the large mass-production corporation, himself admitted as much: all the advantages he claimed for mass production presupposed a high-volume, high-speed, high-turnover distribution system on a national scale, without regard to whether the costs of the latter exceeded the alleged benefits of the former.

...[M]odern business enterprise appeared for the first time in history when the volume of economic activities reached a level that made administrative coordination more efficient and more profitable than market coordination. [43] ...[The rise of administrative coordination first] occurred in only a few sectors or industries where technological innovation and market growth created high-speed and high-volume throughput. [44]

William Lazonick, a disciple of Chandler, described the process as obtaining “a large market share in order to transform the high fixed costs into low unit costs....” [45]

The railroad and telegraph, “so essential to high-volume production and distribution,” were in Chandler’s view what made possible this steady flow of goods through the distribution pipeline. [46]

The primacy of such state-subsidized infrastructure is indicated by the very structure of Chandler’s book. He begins with the railroads and telegraph system, themselves the first modern, multi-unit enterprises. [47] And in subsequent chapters, he recounts the successive evolution of a national wholesale network piggybacking on the centralized transportation system, followed by a national retail system, and only then by large-scale manufacturing for the national market. A national long-distance transportation system led to mass distribution, which in turn led to mass production.

The revolution in the processes of distribution and production rested in large part on the new transportation and communications infrastructure. Modern mass production and mass distribution depend on the speed, volume, and regularity in the movement of goods and messages made possible by the coming of the railroad, telegraph and steamship. [48]

The coming of mass distribution and the rise of the modern mass marketers represented an organizational revolution made possible by the new speed and regularity of transportation and communication. [49]

...The new methods of transportation and communication, by permitting a large and steady flow of raw materials into and finished products out of a factory, made possible unprecedented levels of production. The realization of this potential required, however, the invention of new machinery and processes. [50]

In other words, the so-called “internal economies of scale” in manufacturing could come about only when the offsetting external diseconomies of long-distance distribution were artificially nullified by corporate welfare. Such “economies” can only occur given an artificial set of circumstances which permit the reduced unit costs of expensive, product-specific machinery to be considered in isolation, because the indirect costs entailed are all externalized on society. And if the real costs of long-distance shipping, high-pressure marketing, etc., do in fact exceed the savings from faster and more specialized machinery, then the “efficiency” is a false one.

It’s an example of what Ivan Illich called “counterproductivity”: the adoption of a technology beyond the point, not only of diminishing returns, but of negative returns. Illich also used the term “second watershed” to describe the same concept: e.g., in the case of medicine, the first watershed included such basic things as public sanitation, the extermination of rats, water purification, and the adoption of antibiotics; the second watershed was the adoption of skill- and capital-intensive methods to the point that iatrogenic (hospital- or doctor-induced) illness exceeded the health benefits. In other areas, the introduction of motorized transportation, beyond a certain point, produces artificial distance between things and generates congestion faster than it can be relieved. [51]

Where Illich went wrong was in seeing counterproductivity as inevitable unless the adoption of technologies was restrained by regulation. In fact, when all costs and benefits of a technology are internalized by the adopter, adoption beyond the point of counterproductivity will not occur. Adoption past that point is profitable only when the costs are externalized on society or on the taxpayer, and the benefits are appropriated by a privileged class.
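The logic can be made explicit with a stylized model; the functional forms and figures below are my assumptions, not Illich’s. An adopter who bears the full social cost stops near the point where marginal returns vanish, while an adopter who can shift most costs onto third parties keeps going long after total social returns have turned negative:

```python
import math

# Stylized model (assumed functional forms, not Illich's):
# benefit grows with diminishing returns, total social cost grows
# linearly, and part of the cost can be shifted onto third parties.

def benefit(x):          # diminishing returns to adoption level x
    return 100 * math.log(1 + x)

def cost(x):             # total social cost of adoption level x
    return 12 * x

def private_cost(x, externalized_share):
    return cost(x) * (1 - externalized_share)

xs = [x / 10 for x in range(1, 400)]

# With all costs internalized, the adopter stops near the social optimum:
# net(x) = benefit(x) - cost(x) peaks where marginal benefit = marginal cost.
social_opt = max(xs, key=lambda x: benefit(x) - cost(x))

# With most costs externalized, private net benefit keeps rising long
# after social net benefit has turned negative (counterproductivity).
private_opt = max(xs, key=lambda x: benefit(x) - private_cost(x, 0.75))

print(f"adoption with costs internalized: {social_opt:.1f}")
print(f"adoption with 75% of costs shifted: {private_opt:.1f}")
print(f"social net benefit at that level: "
      f"{benefit(private_opt) - cost(private_opt):.1f}")   # negative
```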

As Chandler himself admitted, the greater “efficiency” of national wholesale organizations lay in their “even more effective exploitation of the existing railroad and telegraph systems.” [52] That is, they were more efficient parasites. But the “efficiencies” of a parasite are usually of a zero-sum nature.

Chandler also admitted, perhaps inadvertently, that the “more efficient” new production methods were adopted almost as an afterthought, given the artificially large market areas and subsidized distribution:

...the nature of the market was more important than the methods of production in determining the size and defining the activities of the modern industrial corporation. [53]

And finally, Chandler admitted that the new mass-production industry was not more efficient at producing in response to autonomous market demand. He himself helpfully pointed out, as we shall see in the next chapter, that the first large industrialists only integrated mass-production with mass-distribution because they were forced to: “They did so because existing marketers were unable to sell and distribute products in the volume they were produced.” [54]

Despite all this, Chandler—astonishingly—minimized the role of the state in creating the system he so admired:

The rise of modern business enterprise in American industry between the 1880s and World War I was little affected by public policy, capital markets, or entrepreneurial talents because it was part of a more fundamental economic development. Modern business enterprise... was the organizational response to fundamental changes in processes of production and distribution made possible by the availability of new sources of energy and by the increasing application of scientific knowledge to industrial technology. The coming of the railroad and telegraph and the perfection of new high-volume processes... made possible a historically unprecedented volume of production. [55]

“The coming of the railroad”? In Chandler’s language, the railroads seem to be an inevitable force of nature rather than the result of deliberate actions by policy makers.

We can’t let Chandler get by without challenging his implicit assumption (shared by many technocratic liberals) that paleotechnic industry was more efficient than the decentralized, small-scale production methods of Kropotkin and Borsodi. The possibility never occurred to him that massive state intervention, at the same time as it enabled the revolutions in corporate size and capital-intensiveness, might also have tipped the balance between alternative forms of production technology.

The national railroad system simply never would have come into existence on such a scale, with a centralized network of trunk lines of such capacity, had not the state rammed the project through.

Piore and Sabel describe the enormous capital outlays, and the enormous transaction costs to be overcome, in creating a national railroad system. Not only the startup costs of actual physical capital, but those of securing rights of way, were “huge”:

It is unlikely that railroads would have been built as quickly and extensively as they were but for the availability of massive government subsidies.

Other transaction costs overcome by government, in creating the railroad system, included the revision of tort and contract law (e.g., to exempt common carriers from liability for many kinds of physical damage caused by their operation). [56]

According to Matthew Josephson, for ten years or more before 1861, “the railroads, especially in the West, were ‘land companies’ which acquired their principal raw material through pure grants in return for their promise to build, and whose directors... did a rushing land business in farm lands and town sites at rising prices.” For example, under the terms of the Pacific Railroad bill, the Union Pacific (which built from the Mississippi westward) was granted twelve million acres of land and $27 million worth of thirty-year government bonds. The Central Pacific (built from the West Coast eastward) received nine million acres and $24 million worth of bonds. [57]

The federal railroad land grants, according to Murray Rothbard, included fifteen-mile tracts of land on either side of the actual right of way. As the railroads were completed, this land skyrocketed in value. And as new towns were built along the railroad routes, every house and business was built on land sold by the railroads. The tracts included valuable timber land, as well. [58]

Theodore Judah, chief engineer for what became the Central Pacific, assured potential investors “that it could be done—if government aid were obtained. For the cost would be terrible.” Collis Huntington, the leading promoter for the project, engaged in a sordid combination of strategically placed bribes and appeals to communities’ fears of being bypassed, in order to extort grants of “rights of way, terminal and harbor sites, and... stock or bond subscriptions ranging from $150,000 to $1,000,000” from a long string of local governments that included San Francisco, Stockton, and Sacramento. [59]

Absent the land grants and government purchases of railroad bonds, the railroads would likely have developed instead along the initial lines described by Mumford: many local rail networks linking communities into local industrial economies. The regional and national interlinkages of local networks, when they did occur, would have been far fewer and far smaller in capacity. The comparative costs of local and national distribution, accordingly, would have been quite different. In a nation of hundreds of local industrial economies, with long-distance rail transport much more costly than at present, the natural pattern of industrialization would have been to integrate small-scale power machinery into flexible manufacturing for local markets.

Instead, the state artificially aggregated the demand for manufactured goods into a single national market, and artificially lowered the costs of distribution for those serving that market. In effect, it created an artificial ecosystem to which large-scale, mass-production industry was best “adapted.”

The first organisms to adapt themselves to this artificial ecosystem, as recounted by Chandler, were the national wholesale and retail networks, with their dependence on high volume and reliable turnover. Then, piggybacked on them, came the large manufacturers serving the national market. But they were “more efficient” only at exploiting an artificial environment characterized by the concealment and externalization of costs. Had all the concealed and externalized costs been fully subsumed into the price of mass-produced goods, rather than shifted onto society or the taxpayer, the overall cost of goods produced flexibly on general-purpose machinery for local markets would likely have been less than that of mass-produced goods.

Besides almost single-handedly creating the artificially unified and cheap national market without which national manufacturers could not have existed, the railroad companies also actively promoted the concentration of industry through their rate policies. Piore and Sabel argue that “the railroads’ policy of favoring their largest customers, through rebates,” was a central factor in the rise of the large corporation. Once in place, the railroads—being a high fixed-cost industry—had

a tremendous incentive to use their capacity in a continuous, stable way. This incentive meant, in turn, that they had an interest in stabilizing the output of their principal customers—an interest that extended to protecting their customers from competitors who were served by other railroads. It is therefore not surprising that the railroads promoted merger schemes that had this effect, nor that they favored the resulting corporations or trusts with rebates.

“Indeed, seen in this light, the rise of the American corporation can be interpreted more as the result of complex alliances among Gilded Age robber barons than as a first solution to the problem of market stabilization faced by a mass-production economy.” [60] According to Josephson,

while the tillers of the soil felt themselves subject to extortion, they saw also that certain interests among those who handled the grain or cattle they produced, the elevators, millers and stockyards, or those from whom they purchased their necessities, the refiners of oil, the great merchant-houses, were encouraged by the railroads to combine against the consumer. In the hearings before the Hepburn Committee in 1879 it was revealed that the New York Central, like railways all over the country, had some 6,000 secret rebate agreements, such as it had made with the South Improvement Company.... [61]

...[T]he secret tactics of the rebate gave certain producing groups (as in petroleum, beef, steel) those advantages which permitted them to outstrip competitors and soon to conduct their business upon as large a scale as the railways themselves. [62]

...Upon the refined oil [Rockefeller] shipped from Cleveland he received a rebate of 50 cents a barrel, giving him an advantage of 25 per cent over his competitors. [63]

In the meantime the political representatives whom the disabused settlers sent forth to Washington or to the state legislatures seemed not only helpless to aid them, but were seen after a time riding about the country wherever they listed by virtue of free passes generously distributed to them. [64]

The railroads also captured the state legislatures and railroad commissions. [65]

Among certain Objectivists and vulgar libertarians of the Right, this is commonly transformed into a morality play in which men of innovative genius built large businesses through sheer effort and entrepreneurship, and the power of superior efficiency. These heroic John Galts then charged rates based on the new railroad’s benefits to customers, and were forced into political lobbying only as a matter of self-defense against government extortion. This is a lie.

What happened was nothing to do with a free market, unless one belongs to the right-wing strain of libertarianism for which “free market” equates to “beneficial to big business.” It was, rather, a case of the government intervening to create an industry almost from scratch, and by the same act placing it on the commanding heights from which it could extort monopoly profits from the public. The closest modern analogy is the drug companies, which use unlimited patent monopolies granted by the state to charge extortionate prices for drugs developed entirely or almost entirely with government research funds. But then the Randroids and vulgar libertarians are also fond of Big Pharma.

Of course, the railroads were only the first of many centralizing infrastructure projects. The process continued through the twentieth century, with the development of the subsidized highway system and the civil aviation system. But unlike the railroads, whose chief significance was their role in creating the national market in the first place, civil aviation and the automobile-industrial complex were arguably most important as sinks for surplus capital and output. They will be treated in the next chapter, accordingly, as examples of a phenomenon described by Paul Baran and Paul Sweezy in Monopoly Capital: government creation of new industries to absorb the surplus resulting from corporate capitalism’s chronic tendencies toward overinvestment and overproduction.

Second, the American legal framework was transformed in the mid-nineteenth century in ways that made a more hospitable environment for large corporations operating on a national scale. Among the changes were the rise of a general federal commercial law, general incorporation laws, and the status of the corporation as a person under the Fourteenth Amendment. The functional significance of these changes on a national scale was analogous to the later effect, on a global scale, of the Bretton Woods agencies and the GATT process: a centralized legal order was created, coextensive with the market areas of large corporations and a prerequisite for their stable functioning.

The federalization of the legal regime is associated, in particular, with the recognition of a general body of federal commercial law in Swift v. Tyson (1842), and with the application of the Fourteenth Amendment to corporate persons in Santa Clara County v. Southern Pacific Railroad Company (1886).

The Santa Clara decision was followed by an era of federal judicial activism, in which state laws were overturned on the basis of “substantive due process.” The role of the federal courts in the national economy was similar to the global role of the contemporary World Trade Organization, with higher tribunals empowered to override the laws of local jurisdictions which were injurious to corporate interests.

In the federal courts, the “due process” and “equal protection” rights of corporations as “juristic persons” have been made the basis of protections against legal action aimed at defending the older common law rights of flesh and blood persons. For example, local ordinances to protect groundwater and local populations against toxic pollution and contagion from hog farms, or to protect property owners from undermining and land subsidence caused by coal extraction—surely indistinguishable in practice from the tort liability provisions of any just market anarchy’s libertarian law code—have been overturned as violations of the “equal protection” rights of hog factory farms and mining companies.

Still another component of the corporate legal revolution was the increased ease, under general incorporation laws, of forming limited liability corporations with permanent entity status apart from the shareholders (severally or collectively).

Arguably, as Robert Hessen and others have argued, corporate entity status and limited liability against creditors could be achieved entirely through private contract. Whether or not that is so, the government has tilted the playing field decisively toward the corporate form by providing a ready-made and automatic procedure for incorporation. In so doing, it has made the corporation the standard or default form of organization, reduced the transaction costs of establishing it relative to what would prevail were it negotiated entirely from scratch, and thereby reduced the bargaining power of other parties in negotiating the terms on which it operates.

Third, not only did the government indirectly promote the concentration and cartelization of industry through the railroads it had created, but it did so directly through patent law. As we shall see in the next chapter, mass-production requires large business organizations capable of exercising sufficient power over their external environment to guarantee the consumption of their output. Patents promoted the stable control of markets by oligopoly firms through the control, exchange and pooling of patents.

According to David Noble, two essentially new science-based industries (those that “grew out of the soil of scientific rather than traditional craft knowledge”) emerged in the late 19th century: the electrical and chemical industries. [66]

In the electrical industry, General Electric had its origins first in a merger between Edison Electric (which controlled all of Edison’s electrical patents) and the Sprague Electric Railway and Motor Company, and then in an 1892 merger between Edison General Electric and Thomson-Houston—both of them motivated primarily by patent considerations. In the latter case, in particular, Edison General Electric and Thomson-Houston each needed patents owned by the other, and could not “develop lighting, railway or power equipment without fear of infringement suits and injunctions.” [67] From the 1890s on, the electrical industry was dominated by two large firms, GE and Westinghouse, both of which owed their market shares largely to patent control. In addition to the patents which they originally owned, they acquired control over patents (and hence over much of the electrical manufacturing market) through “acquisition of the patent rights of individual inventors, acquisition of competing firms, mergers with competitors, and the systematic and strategic development of their own patentable inventions.” As GE and Westinghouse together secured a deadlock on the electrical industry through patent acquisition, competition between them became increasingly intense and disruptive. By 1896 the litigation cost from some three hundred pending patent suits was enormous, and the two companies agreed to form a joint Board of Patent Control. General Electric and Westinghouse pooled their patents, with GE handling 62.5% of the combined business. [68]

The structure of the telephone industry had similar origins, with the Bell Patent Association forming “the nucleus of the first Bell industrial organization” (and eventually of AT&T). The National Bell Telephone Company, from the 1880s on, fought vigorously to “occupy the field” (in the words of general manager Theodore N. Vail) through patent control. As Vail described the process, the company surrounded itself

with everything that would protect the business, that is the knowledge of the business, all the auxiliary apparatus; a thousand and one little patents and inventions with which to do the business which was necessary, that is what we wanted to control and get possession of.

To achieve this, the company early on established an engineering department

whose business it was to study the patents, study the development and study these devices that either were originated by our own people or came in to us from the outside. Then early in 1879 we started our patent department, whose business was entirely to study the question of patents that came out with a view to acquiring them, because... we recognized that if we did not control these devices, somebody else would. [69]

This approach strengthened the company’s position of control over the market not only during the seventeen-year period of the main patents, but (as Frederick Fish put it in an address to the American Institute of Electrical Engineers) during the subsequent seventeen years of

each and every one of the patents taken out on subsidiary methods and devices invented during the progress of commercial development. [Therefore] one of the first steps taken was to organize a corps of inventive engineers to perfect and improve the telephone system in all directions ...that by securing accessory inventions, possession of the field might be retained as far as possible and for as long a time as possible. [70]

This method, preemptive occupation of the market through strategic patent acquisition and control, was also used by GE and Westinghouse.

Even with the intensified competition resulting from the expiration of the original Bell patents in 1894, and before government favoritism in the grants of rights-of-way and regulated monopoly status, the legacy effect of AT&T’s control of the secondary patents was sufficient to secure it half the telephone market thirteen years later, in 1907. [71] AT&T, anticipating the expiration of its original patents, had (to quote Vail again) “surrounded the business with all the auxiliary protection that was possible.” For example, the company in 1900 purchased Michael Pupin’s patent on loading coils and in 1907 acquired exclusive domestic rights for Cooper-Hewitt’s patents on the mercury-arc repeater—essential technologies underlying AT&T’s monopoly on long-distance telephony. [72]

By the time the FCC was formed in 1935, the Bell System had acquired patents to “some of the most important inventions in telephony and radio,” and “through various radio-patent pool agreements in the 1920s... had effectively consolidated its position relative to the other giants in the industry.” In so doing, according to an FCC investigation, AT&T had gained control of “the exploitation of potentially competitive and emerging forms of communication” and “pre-empt[ed] for itself new frontiers of technology for exploitation in the future....” [73]

The radio-patent pools included AT&T, GE and Westinghouse, RCA (itself formed as a subsidiary of GE after the latter acquired American Marconi), and American Marconi. [74] Alfred Chandler’s history of the origins of the consumer electronics industry is little more than an extended account of which patents were held, and subsequently acquired, by which companies. [75] This should give us some indication, by the way, of what he meant by “organizational capability,” a term of his that will come under more scrutiny in the next chapter. In an age where the required capital outlays for actual physical plant and equipment are rapidly diminishing in many forms of manufacturing, one of the chief functions of “intellectual property” is to create artificial “comparative advantage” by giving a particular firm a monopoly on technologies and techniques, and prevent their diffusion throughout the market.

The American chemical industry, in its modern form, was made possible by the Justice Department’s seizure of German chemical patents in WWI. Until the war, some 98% of patent applications in the chemical industry came from German firms, and were never worked in the U.S. As a result, the American chemical industry was technically second-rate, largely limited to final processing of intermediate goods imported from Germany. Attorney General A. Mitchell Palmer, as “Alien Property Custodian” during the war, held the patents in trust and licensed 735 of them to American firms; Du Pont alone received three hundred. [76]

More generally, patents are an effective tool for cartelizing markets in industry at large. They were used in the automobile and steel industries among others, according to Noble. [77] In a 1906 article, mechanical engineer and patent lawyer Edwin Prindle described patents as “the best and most effective means of controlling competition.”

Patents are the only legal form of absolute monopoly. In a recent court decision the court said, “within his domain, the patentee is czar.... cries of restraint of trade and impairment of the freedom of sales are unavailing, because for the promotion of the useful arts the constitution and statutes authorize this very monopoly.” The power which a patentee has to dictate the conditions under which his monopoly may be exercised has been used to form trade agreements throughout practically entire industries, and if the purpose of the combination is primarily to secure benefit from the patent monopoly, the combination is legitimate. Under such combinations there can be effective agreements as to prices to be maintained...; the output for each member of the combination can be specified and enforced... and many other benefits which were sought to be secured by trade combinations made by simple agreements can be added. Such trade combinations under patents are the only valid and enforceable trade combinations that can be made in the United States. [78]

And unlike purely private cartels, which tend toward defection and instability, patent control cartels—being based on a state-granted privilege—carry a credible and effective punishment for defection.

Through their “Napoleonic concept of industrial warfare, with inventions and patents as the soldiers of fortune,” and through “the research arm of the ‘patent offensive,’” manufacturing corporations were able to secure stable control of markets in their respective industries. [79]

These were the conditions present at the outset of the mass production revolution, in which the development of the corporate industrial economy began. In the absence of these necessary preconditions, there simply would not have been a single national market or large industrial corporations serving it. Rather than being adopted into the framework of the paleotechnic factory system, the introduction of electrical machinery would likely have followed its natural course and lived up to its unique potential: powered machinery would have been incorporated into small-scale production for local markets, and the national economy would have developed as “a hundred Emilia-Romagnas.”

But these were only the necessary conditions at the outset. As we shall see in the next chapter, the growth of big government continued to parallel that of big business, introducing newer and larger-scale forms of political intervention to address the corporate economy’s increasing tendencies toward destabilization, and to insulate the giant corporation from the market forces that would otherwise have destroyed it.

Chapter Two: Moloch: The Sloanist Mass Production Model

Introduction

The mass-production model carried some strong imperatives: first, it required large-batch production, running the enormously expensive product-specific machinery at full capacity, to minimize unit costs (in Amory Lovins’ words, “ever-faster once-through flow of materials from depletion to pollution” [80]); and second, it required social control and predictability to ensure that the output would be consumed, lest growing inventories and glutted markets cause the wheels of industry to stop turning. Utilize capacity, utilize capacity: that is Moses and the prophets. Here’s Lewis Mumford on the principle:

As mechanical methods have become more productive, the notion has grown up that consumption should become more voracious. In back of this lies an anxiety lest the productivity of the machine create a glut in the market....

This threat is overcome by “the devices of competitive waste, of shoddy workmanship, and of fashion...” [81]

As described by Michael Piore and Charles Sabel, the problem was that product-specific resources could not be reallocated when the market shifted; under such conditions, the cost of market unpredictability was unacceptably high. Markets for the output of mass-production industry had to be guaranteed because highly specialized machinery could not be reallocated to other uses with changes in demand. “A piece of modern machinery dedicated to the production of a single part cannot be turned to another use, no matter how low the price of that part falls, or how high the price of other goods rises.” [82]

Mass production required large investments in highly specialized equipment and narrowly trained workers. In the language of manufacturing, these resources were “dedicated”: suited to the manufacture of a particular product—often, in fact, to just one make or model. When the market for that particular product declined, the resources had no place to go. Mass production was therefore profitable only with markets that were large enough to absorb an enormous output of a single, standardized commodity, and stable enough to keep the resources involved in the production of that commodity continuously employed. Markets of this kind... did not occur naturally. They had to be created. [83]

...It became necessary for firms to organize the market so as to avoid fluctuations in demand and create a stable atmosphere for profitable, long-term investment. [84]

...[There were] two consequences of the Americans’ discovery that the profitability of investment in mass-production equipment depends on the stabilization of markets. The first of these consequences was the construction, from the 1870s to the 1920s, of giant corporations, which could balance demand and supply within their industries. The second consequence was the creation, two decades later, of a Keynesian system for matching production and consumption in the national economy as a whole. [85]
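Piore and Sabel’s trade-off can be restated in toy numerical form. In the sketch below (all parameters invented for illustration), dedicated machinery out-earns flexible machinery when demand is guaranteed at full capacity, but falls behind when demand shifts unpredictably between products. That is precisely why, on their argument, such markets “had to be created”:

```python
import random
random.seed(0)

# Toy comparison (invented parameters): dedicated machinery has a low
# unit cost but only makes one product; flexible machinery costs more
# per unit but can switch to whatever is in demand.

def dedicated_profit(demand_a):
    # Sunk product-specific capacity of 1000 units of product A.
    sold = min(1000, demand_a)
    return sold * (2.00 - 1.00) - 1000 * 0.40   # margin minus fixed cost

def flexible_profit(demand_a, demand_b):
    # Same capacity, reallocatable between products A and B.
    sold = min(1000, demand_a + demand_b)
    return sold * (2.00 - 1.30) - 1000 * 0.25   # thinner margin, lower fixed cost

trials = 10_000
ded = flex = 0.0
for _ in range(trials):
    # Demand shifts unpredictably between the two products.
    a = random.randint(0, 1000)
    b = 1000 - a
    ded += dedicated_profit(a)
    flex += flexible_profit(a, b)

print(f"dedicated, average profit: {ded / trials:8.1f}")   # about 100
print(f"flexible,  average profit: {flex / trials:8.1f}")  # exactly 450

# With demand for A guaranteed at full capacity (a = 1000 every period),
# dedicated machinery would instead earn 600 vs. 450, which is why the
# stabilization of markets was worth so much to its owners.
```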

Ralph Borsodi argued that “[w]ith serial production, … man has ventured into a topsy-turvy world in which

goods that wear out rapidly or that go out of style before they have a chance to be worn out seem more desirable than goods which are durable and endurable. Goods now have to be consumed quickly or discarded quickly so that the buying of goods to take their place will keep the factory busy. By the old system production was merely the means to an end. By the new system production itself has become the end. [86]

With continuous operation of [the factory’s] machinery, much larger quantities of its products must be sold to the public. The public buys normally only as fast as it consumes the product. The factory is therefore confronted by a dilemma; if it makes things well, its products will be consumed but slowly, while if it makes them poorly, its products will be consumed rapidly. It naturally makes its products as poorly as it dares. It encourages premature depreciation. [87]

(In a free market, of course, firms that made stuff well would have a competitive advantage. But in our unfree market, the state’s subsidies to inefficiency costs, “intellectual property” laws, and other restraints on competition insulate firms from the full competitive disadvantage of offering inferior products.)

Because of the imperative for overcapitalized industry to operate at full capacity, on round-the-clock shifts, in order to spread the cost of its expensive machinery over the greatest possible number of units of output, the imperative of guaranteeing consumption of the output was equally great. As Benjamin Barber puts it, capitalism manufactures needs for the goods it’s producing rather than producing goods in response to needs. [88]

This is not just a caricature by the enemies of Sloanist mass-production. It has been a constant theme of the model’s most enthusiastic advocates and defenders. They disagree with economic decentralists, not on the systemic requirements of the mass-production model, but only on whether or not it has on the whole been a good thing, and whether there is any viable alternative.

In The New Industrial State, Galbraith wrote about the connection between capital intensiveness and the “technostructure’s” need for predictability and control:

...[Machines and sophisticated technology] require... heavy investment of capital. They are designed and guided by technically sophisticated men. They involve, also, a greatly increased lapse of time between any decision to produce and the emergence of a salable product. From these changes come the need and the opportunity for the large organization. It alone can deploy the requisite capital; it alone can mobilize the requisite skills.... The large commitment of capital and organization well in advance of result requires that there be foresight and also that all feasible steps be taken to insure that what is foreseen will transpire. [89]

...From the time and capital that must be committed, the inflexibility of this commitment, the needs of large organization and the problems of market performance under conditions of advanced technology, comes the necessity for planning. [90]

The need for planning... arises from the long period of time that elapses during the production process, the high investment that is involved and the inflexible commitment of that investment to the particular task. [91]

Planning exists because [the market] process has ceased to be reliable. Technology, with its companion commitment of time and capital, means that the needs of the consumer must be anticipated--by months or years.... [I]n addition to deciding what the consumer will want and will pay, the firm must make every feasible step to see that what it decides to produce is wanted by the consumer at a remunerative price.... It must exercise control over what is sold.... It must replace the market with planning. [92]

...The need to control consumer behavior is a requirement of planning. Planning, in turn, is made necessary by extensive use of advanced technology and capital and by the relative scale and complexity of organization. These produce goods efficiently; the result is a very large volume of production. As a further consequence, goods that are related only to elementary physical sensation--that merely prevent hunger, protect against cold, provide shelter, suppress pain--have come to comprise a small and diminishing part of all production. Most goods serve needs that are discovered to the individual not by the palpable discomfort that accompanies deprivation, but by some psychic response to their possession.... [93]

For Galbraith, the “accepted sequence” of consumer sovereignty (what Mises called “dollar democracy”), in which consumer demand determines what is produced, was replaced by a “revised sequence” in which oligopoly corporations determine what is produced and then dispose of it by managing consumer behavior. In contemporary terms, the demand-pull economy is replaced by a supply-push model.

Alfred Chandler, like Galbraith, was thoroughly sold on the greater efficiencies of the large corporation. He argued that the modern multi-unit enterprise arose when administrative coordination “permitted” greater efficiencies. [94]

By linking the administration of producing units with buying and distributing units, costs for information on markets and sources of supply were reduced. Of much greater significance, the internalization of many units permitted the flow of goods from one unit to another to be administratively coordinated. More effective scheduling of flows achieved a more intensive use of facilities and personnel employed in the processes of production and so increased productivity and reduced costs. [95] Organizationally, output was expanded through improved design of manufacturing or processing plants and by innovations in managerial practices and procedures required to synchronize flows and supervise the work force. Increases in productivity also depend on the skills and abilities of the managers and the workers and the continuing improvement of their skills over time. Each of these factors or any combination of them helped to increase the speed and volume of the flow, or what some processors call the “throughput,” of materials within a single plant or works.... [96] Integration of mass production with mass distribution afforded an opportunity for manufacturers to lower costs and increase productivity through more effective administration of the processes of production and distribution and coordination of the flow of goods through them. Yet the first industrialists to integrate the two basic sets of processes did not do so to exploit such economies. They did so because existing marketers were unable to sell and distribute products in the volume they were produced. [97]

The mass-production factory achieved “economies of speed” from “greatly increasing the daily use of equipment and personnel.” [98] (Of course, Chandler starts by assuming the greater inherent efficiency of capital-intensive modes of production, which then require “economies of speed” to spread the cost of the expensive capital assets over more units of output.)

What Chandler meant by “economies of speed” was entirely different from lean production’s understanding of flow. Chandler’s meaning is suggested by his celebration of the new corporate managers who “developed techniques to purchase, store, and move huge stocks of raw and semifinished materials. In order to maintain a more certain flow of goods, they often operated fleets of railroad cars and transportation equipment.” [99] In other words, Chandler’s “flow” encompasses both the standard Sloanist model of enormous buffer stocks of unfinished goods and warehouses full of finished goods awaiting orders, and the faux “lean” model in which inventory is merely swept under the rug and moved into warehouses on wheels and container-ships.

(The reader may be puzzled or even annoyed by my constant use of the term “Sloanism.” I got it from the insightful commentary of Eric Husman at GrimReader blog, in which he treats the production and accounting methods of General Motors as paradigmatic of 20th-century American mass-production industry, and contrasts them with the lean methods popularly identified with Taiichi Ohno’s Toyota production system.)

“Sloanism” refers, in particular, to the management accounting system identified with General Motors. It was first developed by Donaldson Brown at DuPont, and brought to GM when DuPont acquired a controlling share of the company and put Alfred Sloan in charge. Brown’s management accounting system, whose perverse incentives are dissected in detail by William Waddell and Norman Bodek in Rebirth of American Industry, became the prevailing standard throughout American corporate management.

In Sloanist management accounting, inventory is counted as an asset “with the same liquidity as cash.” Regardless of whether current output is needed to fill an order, the producing department sends it to inventory and is credited for it. Under the practice of “overhead absorption,” all production costs are fully incorporated into the price of goods “sold” to inventory, at which point they count as an asset on the balance sheet.

With inventory declared to be an asset with the same liquidity as cash, it did not really matter whether the next ‘cost center,’ department, plant, or division actually needed the output right away in order to consummate one of these paper sales. The producing department put the output into inventory and took credit. [100] ...Expenses go down..., while inventory goes up, simply by moving a skid full of material a few operations down the stream. In fact, expenses can go down and ROI can improve even when the plant pays an overtime premium to work on material that is not needed; or if the plant uses defective material in production and a large percentage of the output from production must be scrapped. [101]
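
A minimal numeric sketch may make the perversity concrete. The figures below are invented for illustration (they are not Waddell and Bodek’s own example): under absorption costing, fixed overhead is spread over all units produced, so a plant that doubles production without selling a single extra unit reports higher profit, while the unsold units carry their share of overhead onto the balance sheet as inventory.

    # Sketch of "overhead absorption" under Sloanist accounting.
    # All figures are invented for illustration.
    fixed_overhead = 100_000   # this period's fixed overhead
    units_sold = 1_000
    price = 500                # selling price per unit
    variable_cost = 300        # direct labor and materials per unit

    def reported_profit(units_produced):
        # Overhead per unit falls as production rises, whether or not
        # the extra units are ever sold.
        overhead_per_unit = fixed_overhead / units_produced
        cogs = units_sold * (variable_cost + overhead_per_unit)
        return units_sold * price - cogs

    print(reported_profit(1_000))  # produce only what sells: -> 100000.0
    print(reported_profit(2_000))  # double the run rate:     -> 150000.0

The second plant spent 300,000 more in cash on output no customer ordered, yet its reported profit is 50,000 higher, because that much overhead rode into the inventory “asset” instead of hitting the income statement.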

In other words, by the Sloanist accounting principles predominant in American industry, the expenditure of money on inputs is by definition the creation of value. As Waddell described it at his blog,

companies can make a bunch of stuff, assign huge buckets of fixed overhead to it and move those overheads over to the balance sheet, making themselves look more profitable.

In other words, “they accept cost as a fait accompli....” Paul Goodman’s idea of the culture of cost-plus (about which more below) sums it up perfectly. And as Waddell points out, the GDP as a metric depends on the same assumptions as the management accounting system used by American industry: it counts expenditure on inputs, by definition, as the creation of wealth. [102]

American factories frequently have warehouses filled with millions of dollars’ worth of obsolete inventory, which is still there “to avoid having to reduce profits this quarter by writing it off.” When the corporation finally does have to adjust to reality, the result is costly write-downs of inventory.

It did not take much of a mathematician to figure out that, if all you really care about is the cost of performing one operation to a part, and you were allowed to make money by doing that single operation as cheaply as possible and then calling the partially complete product an asset, it would be cheaper to make them a bunch at a time. It stood to reason that spreading set-up costs over many parts was cheaper than having to set up for just a few even if it meant making more parts than you needed for a long time. It also made sense, if you could make enough parts all at once, to just make them cheaply, and then sort out the bad ones later. Across the board, batches became the norm because the direct cost of batches was cheap and they could be immediately turned into money—at least as far as Mr. DuPont was concerned—by classifying them as work-in-process inventory. [103]
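
The batch logic Waddell and Bodek describe can be formalized with the textbook economic-order-quantity (EOQ) trade-off—not their notation, but a standard way of stating it: the cost-minimizing batch grows with the setup cost and shrinks with the cost of carrying inventory. An accounting system that treats inventory as an asset “with the same liquidity as cash” effectively prices the carrying cost near zero, and the formula then ratifies ever larger batches. A sketch with invented figures:

    # EOQ sketch: Q* = sqrt(2 * D * S / H), where D = annual demand,
    # S = cost per setup, and H = annual cost of carrying one unit.
    # The figures are invented for illustration.
    from math import sqrt

    def eoq(demand, setup_cost, carrying_cost):
        return sqrt(2 * demand * setup_cost / carrying_cost)

    D, S = 10_000, 400
    print(round(eoq(D, S, carrying_cost=8.0)))  # honest carrying cost: 1000 per batch
    print(round(eoq(D, S, carrying_cost=0.1)))  # inventory "as good as cash": 8944 per batch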

And the effect of these inventories on cost is enormous. In the garment industry, making to forecast rather than to order, and maintaining large enough inventory to avoid idle machines, is estimated to account for some 25% of retail price. [104] In other words, if a quarter of the price covers forecast-driven inventory, the remaining 75 cents of each retail dollar would buy the same garment without it; 25/75 means your clothes cost about a third more because of the “efficiencies” of Sloanist mass production.

Under the Sloan system, if a machine can be run at a certain speed, it must be run at that speed to maximize efficiency. And the only way to increase efficiency is to increase the speed at which individual machines can be run. [105] The Sloan system focuses, exclusively, on labor savings “perceived to be attainable only through faster machines. Never mind that faster machines build inventory faster, as well.” [106]

The incredible bureaucratic inefficiencies resulting from these inventories are suggested by GM’s “brilliant innovation” of MRP software in the 1960s—a central planning system that surely would have made the folks at Gosplan green with envy. Of course, as Toyota Production System father Taiichi Ohno pointed out, MRP would be useless to a company operating on zero lead time and lot sizes of one. [107] The point of MRP is that it “allows each cost center to operate at its individual optimum without regard to the performance of the other cost centers.”

If the machining department is having a good week, that supervisor can claim credit for his production—perhaps even exceeding the schedule. It does not affect him at all that the next department upstream—assembly, for example—is having major problems and will not come close to making schedule.... ...[MRP’s] core is the logic and a set of algorithms to enable each component of a product to be produced at different volumes and speeds; and, in fact, the same components of a product going through different operations to be produced at different volumes and speeds, in order to achieve optimum efficiency at each operation. It is based on the assumption that manufacturing is best performed in such a disjointed manner, and it assures adequate inventory to buffer all of this unbalanced production. [108]
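
As a toy reconstruction of what MRP-style logic does (illustrative only; this is not GM’s actual software, and the part names, lot sizes, and lead times are invented), the sketch below explodes a master schedule through a bill of materials, offsets each item by its lead time, and rounds each component up to its own work center’s lot size. Every operation gets to run at its local “optimum,” and the rounding error accumulates as buffer inventory:

    # Toy MRP explosion with fixed lot sizes (invented data).
    import math

    bom = {"engine": [("block", 1), ("piston", 8)]}  # component, qty per parent
    lot_size = {"engine": 50, "block": 200, "piston": 1000}
    lead_time = {"engine": 1, "block": 2, "piston": 3}  # weeks

    def plan(item, qty_needed, due_week, orders):
        lots = math.ceil(qty_needed / lot_size[item])
        qty = lots * lot_size[item]           # round up to the work center's lot
        release = due_week - lead_time[item]  # start early enough to make the date
        orders.append((release, item, qty))
        for child, per_parent in bom.get(item, []):
            plan(child, qty * per_parent, release, orders)
        return orders

    for week, item, qty in plan("engine", 130, due_week=10, orders=[]):
        print(f"week {week}: release {qty} x {item}")
    # week 9: release 150 x engine   (20 more than ordered)
    # week 7: release 200 x block    (50 more than needed)
    # week 6: release 2000 x piston  (800 more than needed)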

The lean approach has its own “economies of speed,” but they are the direct opposite of the Sloanist approach. The Sloanist approach focuses on maximizing economies of speed in terms of the unit cost of a particular machine, without regard to the inventories of unfinished goods that must accumulate as buffer stocks as a result, and all the other enormous eddies in the flow of production. As the authors of Natural Capitalism put it, it attempts to optimize each step of the production process in isolation, “thereby pessimizing the entire system.” A machine can reduce the labor cost of one step by running at enormous speeds, and yet be out of sync with the overall process. [109] Waddell and Bodek give the example of Ernie Breech, sent from GM to “save” Ford, demanding a plant manager tell him the cost of manufacturing the steering wheel so he could calculate ROI for that step of the process. The plant manager was at a loss trying to figure out what Breech wanted: did he think steering wheel production was a bottleneck in production flow, or what? But for Breech, if the unit cost of that machine and the direct cost of the labor working it were low enough compared to the “value” of the steering wheels “sold” to inventory, that was all that mattered. Under the Sloan accounting system, producing a steering wheel—even in isolation, and regardless of what was done with it or whether there was an order for the car it was a part of—was a money-making proposition. “Credit for that work—it looks like a payment on the manufacturing budget—is given for performing that simple task because it moves money from expenses to assets.” [110]

“Selling to inventory,” under standard management accounting rules, is equivalent to the incentive systems for production under a Five-Year Plan: there is no incentive to produce goods that will actually work or be consumed. Hence the carloads of refrigerators, for which Soviet factories were credited toward their 5YP quotas, thrown off trains with no regard to whether they were damaged beyond repair in the process.

The lean approach, in contrast, gears production flow to orders, and then sizes individual machines and steps in the production process to the volume of overall flow. Under lean thinking, it’s better to have a less specialized machine with a lower rate of output, in order to avoid an individual step out of proportion to the overall production flow. This is what the Toyota Production System calls takt: pacing the output of each stage of production to meet the needs of the next stage, and pacing the overall flow of all the stages in accordance with current orders. [111] In a Sloan factory, the management would select machinery to produce the entire production run “as fast as they humanly could, then sort out the pieces and put things together later.” [112]
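
Takt is conventionally computed as available working time divided by the number of units the customer is actually ordering; the figures here are invented for illustration:

    \text{takt time} = \frac{\text{available production time}}{\text{units demanded}}
                     = \frac{27{,}600 \text{ s per shift}}{460 \text{ units per shift}}
                     = 60 \text{ s per unit}

Each stage is then sized to deliver one unit every 60 seconds, and no faster, so that no step runs ahead of the others and builds inventory.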

To quote the authors of Natural Capitalism again: “The essence of the lean approach is that in almost all modern manufacturing,

the combined and often synergistic benefits of the lower capital investment, greater flexibility, often higher reliability, lower inventory cost, and lower shipping cost of much smaller and more localized production equipment will far outweigh any modest decreases in its narrowly defined “efficiency” per process step. It’s more efficient overall, in resources and time and money, to scale production properly, using flexible machines that can quickly shift between products. By doing so, all the different processing steps can be carried out immediately adjacent to one another with the product kept in continuous flow. The goal is to have no stops, no delays, no backflows, no inventories, no expediting, no bottlenecks, no buffer stocks, and no muda [waste]. [113]

The contrast is illustrated by a couple of examples from Natural Capitalism: an overly “efficient” grinding machine at Pratt & Whitney, and a cola bottling machine likewise oversized in relation to its task:

The world’s largest maker of jet engines for aircraft had paid $80 million for a “monument”--state-of-the-art German robotic grinders to make turbine blades. The grinders were wonderfully fast, but their complex computer controls required about as many technicians as the old manual production system had required machinists. Moreover, the fast grinders required supporting processes that were costly and polluting. Since the fast grinders were meant to produce big, uniform batches of product, but Pratt & Whitney needed agile production of small, diverse batches, the twelve fancy grinders were replaced with eight simple ones costing one-fourth as much. Grinding time increased from 3 to 75 minutes, but the throughput time for the entire process decreased from 10 days to 75 minutes because the nasty supporting processes were eliminated. Viewed from the whole-system perspective of the complete production process, not just the grinding step, the big machines had been so fast that they slowed down the process too much, and so automated that they required too many workers. The revised production system, using a high-wage traditional workforce and simple machines, produced $1 billion of annual value in a single room easily surveyable from a doorway. It cost half as much, worked 100 times faster, cut changeover time from 8 hours to 100 seconds, and would have repaid its conversion costs in a year even if the sophisticated grinders were simply scrapped. [114]

In the cola industry, the problem is “the mismatch between a very small-scale operation—drinking a can of cola—and a very large-scale one, producing it.” The most “efficient” large-scale bottling machine creates enormous batches that are out of scale with the distribution system, and result in higher unit costs overall than would modest-sized local machines that could immediately scale production to demand-pull. The reason is the excess inventories that glut the system, and the “pervasive costs and losses of handling, transport, and storage between all the elephantine parts of the production process.” As a result, “the giant cola-canning machine may well cost more per delivered can than a small, slow, unsophisticated machine that produces the cans of cola locally and immediately on receiving an order from the retailer.” [115]

As Womack and Jones put it in Lean Thinking, “machines rapidly making unwanted parts during one hundred percent of their available hours and employees earnestly performing unneeded tasks during every available minute are only producing muda.” [116] Lovins et al. sum it up more broadly:

Their basic conclusion, from scores of practical case studies, is that specialized, large-scale, high-speed, highly productive departments and equipment are the key to inefficiency and uncompetitiveness, and that maximizing the utilization of productive capacity, the pride of MBAs, is nearly always a mistake. [117]

Rather, it’s better to scale productive capacity to demand.

In a genuine lean factory, managers are hounded in daily meetings about meeting the numbers for inventory reduction and reduction of cycle time, in the same way that they’re hounded on a daily basis to reduce direct labor hours and increase ROI in a Sloanist factory (including the American experiments with “lean production” in firms still governed by Donaldson Brown’s accounting principles). James Womack et al., in The Machine That Changed the World, recount an amusing anecdote about a delegation of lean production students from Corporate America touring a Toyota plant. Reading a question on their survey form as to how many days of inventory were in the plant, the Toyota manager politely asked whether the translator could have meant minutes of inventory. [118]

As Mumford put it, “Measured by effective work, that is, human effort transformed into direct subsistence or into durable works of art and technics, the relative gains of the new industry were pitifully small.” [119] The amount of wasted resources and crystallized labor embodied in the enormous warehouses of Sloanist factories and the enormous stocks of goods in process, the mushrooming cost of marketing, the “warehouses on wheels,” and the mountains of discarded goods in the landfills that could have been repaired for a tiny fraction of the cost of replacing them, easily outweigh the savings in unit costs from mass production itself. As Michael Parenti put it, the essence of corporate capitalism is “the transformation of living nature into mountains of commodities and commodities into heaps of dead capital.” [120] The cost savings from mass production are more than offset by the costs of mass distribution.

Chandler’s model of production resulted in the adoption of increasingly specialized, asset-specific production machinery:

The large industrial enterprise continued to flourish when it used capital-intensive, energy-consuming, continuous or large-batch production technology to produce for mass markets. [121] The ratio of capital to labor, materials to labor, energy to labor, and managers to labor for each unit of output became higher. Such high-volume industries soon became capital-intensive, energy-intensive, and manager-intensive. [122]

Of course this view is fundamentally wrong-headed. To regard a particular machine as “more efficient” based on its unit costs taken in isolation is sheer idiocy. If the costs of idle capacity are so great as to elevate unit costs above those of less specialized machinery, at the levels of spontaneous demand occurring without push marketing, and if the market area required for full utilization of capacity results in distribution costs greater than the unit cost savings from specialized machinery, then the expensive product-specific machinery is, in fact, less efficient. The basic principle was stated by F. M. Scherer:

Ball bearing manufacturing provides a good illustration of several product-specific economies. If only a few bearings are to be custom-made, the ring machining will be done on general-purpose lathes by a skilled operator who hand-positions the stock and tools and makes measurements for each cut. With this method, machining a single ring requires from five minutes to more than an hour, depending on the part’s size and complexity and the operator’s skill. If a sizable batch is to be produced, a more specialized automatic screw machine will be used instead. Once it is loaded with a steel tube, it automatically feeds the tube, sets the tools and adjusts its speed to make the necessary cuts, and spits out machined parts into a hopper at a rate of from eighty to one hundred forty parts per hour. A substantial saving of machine running and operator attendance time per unit is achieved, but setting up the screw machine to perform these operations takes about eight hours. If only one hundred bearing rings are to be made, setup time greatly exceeds total running time, and it may be cheaper to do the job on an ordinary lathe. [123]
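
Scherer’s figures imply a simple break-even batch size. The sketch below picks specific machining times from within his stated ranges (the exact values are illustrative assumptions, not Scherer’s own):

    # Break-even batch between a general-purpose lathe and an automatic
    # screw machine, using values from within Scherer's ranges.
    manual_min_per_part = 5.0             # skilled operator, simple ring
    auto_parts_per_hour = 110.0           # midpoint of "eighty to one hundred forty"
    auto_min_per_part = 60.0 / auto_parts_per_hour  # about 0.55 minutes
    setup_min = 8 * 60                    # eight hours of setup

    break_even = setup_min / (manual_min_per_part - auto_min_per_part)
    print(round(break_even))              # about 108 parts

Below roughly a hundred rings the ordinary lathe wins, just as Scherer says; far above it, the screw machine’s unit cost looks unbeatable—provided someone can be made to buy the whole batch.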

The Sloanist approach is to choose the specialized automatic machine and find a way to make people buy more bearing rings.

Galbraith and Chandler write as though the adoption of the machinery were enough to automatically increase efficiency, in and of itself, regardless of how much money had to be spent elsewhere to “save” that money.

But if we approach things from the opposite direction, we can see that flexible manufacturing with easily redeployable assets makes it feasible to shift quickly from product to product in the face of changing demand, and thus eliminates the imperative of controlling the market. As Barry Stein said,

if firms could respond to local conditions, they would not need to control them. If they must control markets, then it is a reflection of their lack of ability to be adequately responsive. [124] ...Consumer needs, if they are to be supplied efficiently, call increasingly for organizations that are more flexibly arranged and in more direct contact with those customers. The essence of planning, under conditions of increasing uncertainty, is to seek better ways for those who have the needs to influence or control the productive apparatus more effectively, not less. Under conditions of rapid environmental change, implementing such planning is possible only if the “distance” between those supplied and the locus of decision-making on the part of those producing is reduced.... But it can be shown easily in information theory that the feedback—information linking the environment and the organization attempting to service that environment—necessarily becomes less accurate or less complete as the rate of change of data increases, or as the number of steps in the information transfer process continues.

Stein suggested that Galbraith’s solution was to suppress the turbulence: “to control the changes, in kind and extent, that the society will undergo.” [125] But far better, he argues, would be “a value shift that integrates the organization and the environment it serves.”

This problem is to be solved not by the hope of better planning on a large scale..., but by the better integration of productive enterprises with the elements of society needing that production. Under conditions of rapid change in an affluent and complex society, the only means available for meeting differentiated and fluid needs is an array of producing units small enough to be in close contact with their customers, flexible enough to produce for their demands, and able to do so in a relatively short time.... It is a contradiction in terms to speak of the necessity for units large enough to control their environment, but producing products which in fact no one may want! [126] As to the problem of planning—large firms are said to be needed here because the requirements of sophisticated technology and increasingly specialized knowledge call for long lead times to develop, design, and produce products. Firms must therefore have enough control over the market to assure that the demand needed to justify that time-consuming and costly investment will exist. This argument rests on a foundation of sand; first, because the needs of society should precede, not follow, decisions about what to produce, and second, because the data do not substantiate the need for large production organizations except in rare and unusual instances, like space flight. On the contrary, planning for social needs requires organizations and decision-making capabilities in which the feedback and interplay between productive enterprises and the market in question is accurate and timely—conditions more consistent with smaller organizations than large ones. [127]

A. Institutional Forms to Provide Stability

In keeping with the need for stability and control Galbraith described above, the technostructure resorted to organizational expedients within the corporate enterprise to guarantee reliable outlets for production and provide long-term predictability in the availability and price of inputs. These expedients can be summed up as replacing the market price mechanism with planning.

A firm cannot usefully foresee and schedule future action or prepare for contingencies if it does not know what its prices will be, what its sales will be, what its costs including labor and capital costs will be and what will be available at these costs.... Much of what the firm regards as planning consists in minimizing or getting rid of market influences. [128]

There’s a reason for twentieth-century liberalism’s strong affinity for mass-production industry (e.g. Michael Moore’s nostalgia for the consensus capitalism of the ‘50s, when the predominant mode of employment was a factory job with lifetime security). Twentieth-century liberalism had its origins as the ideology of the managerial and professional classes, particularly the managers and engineers who ran the giant manufacturing corporations. And the centerpiece of their ideology was to extend to society outside the corporation the same planning and control, the same government by disinterested experts, that prevailed inside it. And this ideological affinity for social planning dovetailed exactly with mass-production industry’s need to reshape society as a whole to guarantee consumption of its output. [129]

Galbraith describes three institutional expedients taken by the technostructure to control the uncertainties of the market and permit long-term predictability: vertical integration, the use of market power to control suppliers and outlets, and long-term contractual arrangements with suppliers and outlets. [130]

In vertical integration, “[t]he planning unit takes over the source of supply or the outlet; a transaction that is subject to bargaining over prices and amounts is thus replaced with a transfer within the planning unit.” [131]

One of the most important forms of “vertical integration” is the choice to “make” rather than “buy” credit—replacing the external credit markets with internal finance through retained earnings. [132] The theory that management is controlled by outside capital markets assumes a high degree of dependence on outside finance. But in fact management’s first line of defense, in maintaining its autonomy from shareholders and other outside interests, is to minimize its reliance on outside finance. Management tends to finance new investments as much as possible with retained earnings, followed by debt, with new issues of shares only as a last resort. [133] Issues of stock are important sources of investment capital only for startups and small firms undertaking major expansions. [134] Most corporations finance a majority of their new investment from retained earnings, and tend to limit investment to the highest priorities when retained earnings are scarce. [135] As Doug Henwood says, in the long run “almost all corporate capital expenditures are internally financed, through profits and depreciation allowances.” Between 1952 and 1995, almost 90% of investment was funded from retained earnings. [136]

The prevailing reliance on internal financing tends to promote concentration. Internally generated funds that exceed internal requirements are used to expand or diversify internal operations, or for horizontal and vertical integration, rather than “lending it or making other kinds of arm’s-length investments.” [137] Martin Hellwig, in his discussion of the primacy of finance by retained earnings, makes one especially intriguing observation. He denies that reliance primarily on retained earnings necessarily leads to a “rationing” of investment, in the sense of underinvestment; internal financing, he says, can just as easily result in overinvestment, if the amount of retained earnings exceeds available opportunities for rational capital investment. [138] This confirms Schumpeter’s argument that double taxation of corporate profits promoted excessive size and centralization, by encouraging reinvestment in preference to the issue of dividends. Of course it may result in structural misallocations and irrationality, to the extent that retention of earnings prevents dividends from returning to the household sector to be invested in other firms, so that overaccumulation in the sectors with excessive retained earnings comes at the expense of a capital shortage in other sectors. [139] Doug Henwood contrasts the glut of retained earnings under the control of corporate bureaucracies with a shortage of investment opportunities, to the constraints the capital markets place on small, innovative firms that need capital the most. [140]

Market control “consists in reducing or eliminating the independence of action of those to whom the planning unit sells or from whom it buys,” while preserving “the outward form of the market.” Market power follows from large size in relation to the market. A decision to buy or not to buy, as in the case of General Motors and its suppliers, can determine the life or death of a firm. What’s more, large manufacturers always have the option of vertical integration—making a part themselves instead of buying it—to discipline suppliers. “The option of eliminating a market is an important source of power for controlling it.” [141]

Long-term contracting can reduce uncertainty by “specifying prices and amounts to be provided or bought for substantial periods of time.” Each large firm creates a “matrix of contracts” in which market uncertainty is eliminated. [142]

Piore and Sabel mention Edison Electric as an example of using long-term contracts to guarantee stability,

inducing its customers to sign long-term “future delivery” contracts, under which they had to buy specified quantities of Edison products at regular intervals over ten years. By assuring the demand for output, these contracts enabled the company to invest in large plants.... As one Edison executive explained: It is essential in order to make lamps at a minimum cost that the factory should be run constantly at as uniform an output as possible. Our future delivery plan in lamps has been very successful [in this regard].... It is very expensive work changing from one rate of production to another in factories.... The benefit of the future delivery plan is apparent since we can manufacture to stock knowing that all the stock is to be taken within a certain time. [143]

Unlike lean, demand-pull production, which minimizes inventory costs by producing only in response to orders, mass production requires supply-push distribution (guaranteeing a market before production takes place).

The use of contracts to stabilize input availability and price is exemplified, in particular, by the organizational expedients to stabilize wages and reduce labor turnover. After mixed success with a variety of experiments with company unions, the “American Plan,” and other forms of welfare capitalism, employers finally turned to the official organized labor regime under the Wagner Act to establish long-term predictability in the supply and price of labor inputs, and to secure management’s control of production. Under the terms of “consensus capitalism,” the comparatively small profile of labor costs in the total cost package of capital-intensive industry meant that management was willing to pay comparatively high wages and benefits (up to the point of gearing wages to productivity), to provide more or less neutral grievance procedures, etc., so long as management’s prerogatives were recognized for directing production. But the same had been true in many cases of the American Plan: it allowed for formalized grievance procedures and progressive discipline, and in some cases negotiation over rates of pay. The common goal of all these various attempts, however much they disagreed in their particulars, was “by stabilizing wages and employment, to insulate the cost of a major element of production from the flux of a market economy.” [144] From management’s perspective, the sort of bureaucratized industrial union established under Wagner had the primary purposes of enforcing contracts on the rank and file and suppressing wildcat strikes. The corporate liberal managers who were most open to industrial unionism in the 1930s were, in many cases, the same people who had previously relied on company unions and works councils. Their motivation, in both cases, was the same. For example, GE’s Gerard Swope, one of the most “progressive” of corporate liberals and the living personification of the kinds of corporate interests that backed FDR, had attempted in 1926 to get the AFL’s William Green to run GE’s works council system. [145]

Another institutional expedient of Galbraith’s technostructure is to regulate the pace of technical change, with the oligopoly firms in an industry colluding to introduce innovation at a rate that maximizes returns. Baran and Sweezy described the regulation of technical change, as it occurs in oligopoly markets under corporate capitalism:

Here innovations are typically introduced (or soon taken over) by giant corporations which act not under the compulsion of competitive pressures but in accordance with careful calculations of the profit-maximizing course. Whereas in the competitive case no one, not even the innovating firms themselves, can control the rate at which new technologies are generally adopted, this ceases to be true in the monopolistic case. It is clear that the giant corporation will be guided not by the profitability of the new method considered in isolation, but by the net effect of the new method on the overall profitability of the firm. And this means that in general there will be a slower rate of introduction of innovation than under competitive criteria. [146]

Or as Paul Goodman put it, a handful of manufacturers control the market, “competing with fixed prices and slowly spooned-out improvements.” [147]

Besides these microeconomic structures created by the nominally private corporation to provide stability, the state engaged in the policies described by Gabriel Kolko as “political capitalism.”

Political capitalism is the utilization of political outlets to attain conditions of stability, predictability, and security—to attain rationalization—in the economy. Stability is the elimination of internecine competition and erratic fluctuations in the economy. Predictability is the ability, on the basis of politically stabilized and secured means, to plan future economic action on the basis of fairly calculable expectations. By security I mean protection from the political attacks latent in any formally democratic political structure. I do not give to rationalization its frequent definition as the improvement of efficiency, output, or internal organization of a company; I mean by the term, rather, the organization of the economy and the larger political and social spheres in a manner that will allow corporations to function in a predictable and secure environment permitting reasonable profits over the long run. [148]

The state played a major role in cartelizing the economy, to protect the large corporation from the destructive effects of price competition. At first the effort was mainly private, reflected in the trust movement at the turn of the 20th century. Chandler celebrated the first, private efforts toward consolidation of markets as a step toward rationality:

American manufacturers began in the 1870s to take the initial step to growth by way of merger—that is, to set up nationwide associations to control price and production. They did so primarily as a response to the continuing price decline, which became increasingly impressive after the panic of 1873 ushered in a prolonged economic depression. [149]

The process was further accelerated by the Depression of the 1890s, with mergers and trusts being formed through the beginning of the next century in order to control price and output: “the motive for merger changed. Many more were created to replace the association of small manufacturing firms as the instrument to maintain price and production schedules.” [150]

From the turn of the twentieth century on, there was a series of attempts by J.P. Morgan and other promoters to create some institutional structure for the corporate economy by which price competition could be regulated and their respective market shares stabilized. “It was then,” Paul Sweezy wrote,

that U.S. businessmen learned the self-defeating nature of price-cutting as a competitive weapon and started the process of banning it through a complex network of laws (corporate and regulatory), institutions (e.g., trade associations), and conventions (e.g., price leadership) from normal business practice. [151]

Chandler’s celebratory account of the trust movement, as a progressive force, ignores one central fact: the trusts were less efficient than their smaller competitors. They immediately began losing market share to less leveraged firms outside the trusts. The trust movement was an unqualified failure, as big business quickly recognized. Subsequent attempts to cartelize the economy, therefore, enlisted the state. As recounted by Gabriel Kolko, [152] the main force behind the Progressive Era regulatory agenda was big business itself, the goal being to restrict price and quality competition and to reestablish the trusts under the aegis of government. His thesis was that, “contrary to the consensus of historians, it was not the existence of monopoly that caused the federal government to intervene in the economy, but the lack of it.”

Merely private attempts at cartelization (i.e., collusive price stabilization) before the Progressive Era—namely the so-called “trusts”—were miserable failures, according to Kolko. The dominant trend at the turn of the century—despite the effects of tariffs, patents, railroad subsidies, and other existing forms of statism—was competition. The trust movement was an attempt to cartelize the economy through such voluntary and private means as mergers, acquisitions, and price collusion. But the over-leveraged and over-capitalized trusts were even less efficient than before, and steadily lost market share to their smaller, more efficient competitors. Standard Oil and U.S. Steel, immediately after their formation, began to lose market share.

In the face of this resounding failure, big business acted through the state to cartelize itself—hence, the Progressive regulatory agenda.

“Ironically, contrary to the consensus of historians, it was not the existence of monopoly that caused the federal government to intervene in the economy, but the lack of it.” [153] “If economic rationalization could not be attained by mergers and voluntary economic methods, a growing number of important businessmen reasoned, perhaps political means might succeed.” [154]

The rationale of the Progressive Era regulatory state was stated in 1908 by George Perkins, whom Kolko described as “the functional architect... of political capitalism during Roosevelt’s presidency....” The modern corporation

must welcome federal supervision, administered by practical businessmen, that “should say to stockholders and the public from time to time that the management’s reports and methods of business are correct.” With federal regulation, which would free business from the many states, industrial cooperation could replace competition. [155]

Kolko provided considerable evidence that the main force behind the Progressive Era legislative agenda was big business. The Meat Inspection Act, for instance, was passed primarily at the behest of the big meat packers. [156] This pattern was repeated, in its essential form, in virtually every component of the “Progressive” regulatory agenda.

The various safety and quality regulations introduced during this period also worked to cartelize the market. They served essentially the same purpose as attempts in the Wilson war economy to reduce the variety of styles and features available in product lines, in the name of “efficiency.” Any action by the state to impose a uniform standard of quality (e.g. safety), across the board, necessarily eliminates that feature as a competitive issue between firms. As Butler Shaffer put it, the purpose of “wage, working condition, or product standards” is to “universalize cost factors and thus restrict price competition.” [157] Thus, the industry is partially cartelized, to the very same extent that would have happened had all the firms in it adopted a uniform quality standard, and agreed to stop competing in that area. A regulation, in essence, is a state-enforced cartel in which the members agree to cease competing in a particular area of quality or safety, and instead agree on a uniform standard which they establish through the state. And unlike private cartels, which are unstable, no member can seek an advantage by defecting.

Although theoretically the regulations might simply put a floor on quality competition and leave firms free to compete by exceeding the standard, in practice corporations often take a harsh view of competitors that exceed regulatory safety or quality requirements. A good example is Monsanto’s (often successful) attempts to secure regulatory suppression of commercial speech by competitors who label their milk rBGH-free; more generally, the frankenfoods industry relies on FDA regulations to prohibit the labeling of food as GMO-free. Another example is the beef industry’s success at getting the government to prohibit competitors from voluntarily testing their cattle for mad cow disease more frequently than required by law. [158] So the regulatory floor frequently becomes a ceiling.

More importantly, the FTC and Clayton Acts reversed the long trend toward competition and loss of market share and made stability possible.

The provisions of the new laws attacking unfair competitors and price discrimination meant that the government would now make it possible for many trade associations to stabilize, for the first time, prices within their industries, and to make effective oligopoly a new phase of the economy. [159]

The Federal Trade Commission created a hospitable atmosphere for trade associations and their efforts to prevent price cutting. [160] Butler Shaffer, in In Restraint of Trade, provides a detailed account of the functioning of these trade associations, and their attempts to stabilize prices and restrict “predatory price cutting,” through assorted codes of ethics. [161] Specifically, the trade associations established codes of ethics directly under FTC auspices that had the force of law: “[A]s early as 1919 the FTC began inviting members of specific industries to participate in conferences designed to identify trade practices that were felt by ‘the practically unanimous opinion’ of industry members to be unfair.” The standard procedure, through the 1920s, was for the FTC to invite members of a particular industry to a conference, and solicit their opinions on trade practice problems and recommended solutions.

The rules that came out of the conferences and were approved by the FTC fell into two categories: Group I rules and Group II rules. Group I rules were considered by the commission as expressions of the prevailing law for the industry developing them, and a violation of such rules by any member of that industry—whether that member had agreed to the rules or not—would subject the offender to prosecution under Section 5 of the Federal Trade Commission Act as an “unfair method of competition.”... Contained within Group I were rules that dealt with practices considered by most business organizations to be the more “disruptive” of stable economic conditions. Generally included were prohibitions against inducing “breach of contract; ...commercial bribery; ...price discrimination by secret rebates, excessive adjustments, or unearned discounts; ...selling of goods below cost or below published list of prices for purpose of injuring competitor; misrepresentation of goods; ... use of inferior materials or deviation from standards; [and] falsification of weights, tests, or certificates of manufacture [emphasis added].” [162]

The two pieces of legislation accomplished what the trusts had been unable to: they enabled a handful of firms in each industry to stabilize their market share and to maintain an oligopoly structure between them.

It was during the war that effective, working oligopoly and price and market agreements became operational in the dominant sectors of the American economy. The rapid diffusion of power in the economy and relatively easy entry virtually ceased. Despite the cessation of important new legislative enactments, the unity of business and the federal government continued throughout the 1920s and thereafter, using the foundations laid in the Progressive Era to stabilize and consolidate conditions within various industries. And, on the same progressive foundations and exploiting the experience with the war agencies, Herbert Hoover and Franklin Roosevelt later formulated programs for saving American capitalism. The principle of utilizing the federal government to stabilize the economy, established in the context of modern industrialism during the Progressive Era, became the basis of political capitalism in its many later ramifications. [163]

The regulatory state provided “rationality” in two other ways: first, by the use of federal regulation to preempt potentially harsher action by populist governments at the state and local level; and second, by preempting and overriding older common law standards of liability, replacing the potentially harsh damages imposed by local juries with a least common denominator of regulatory standards based on “sound science” (as determined by industry, of course). Regarding the first, whatever view one takes of the validity of the local regulations in and of themselves, it is hardly legitimate for a centralized state to act on behalf of corporate interests, in suppressing unfriendly local regulations and overcoming the transaction costs of operating in a large number of conflicting jurisdictions, all at taxpayer expense. “Free trade” simply means the state does not hinder those under its own jurisdiction from trading with anyone else on whatever terms they can obtain on their own—not that the state actually opens up markets. Regarding the second, it is interesting that so many self-described “libertarians” support what they call “tort reform,” when civil liability for damages is in fact the libertarian alternative to the regulatory state. Much of such “tort reform” amounts to indemnifying business firms from liability for reckless fraud, pollution, and other externalities imposed on the public.

There is also the regulatory state’s function, which we will examine below in more depth, of imposing mandatory minimum overhead costs and thus erecting barriers to competition from low-overhead producers.

State spending serves to cartelize the economy in much the same way as regulation. Just as regulation removes significant areas of quality and safety as issues in cost competition, the socialization of operating costs on the state (e.g. R&D subsidies, government-funded technical education, etc.) allows monopoly capital to remove them as components of price in cost competition between firms, and places them in the realm of guaranteed income to all firms in a market alike. Transportation subsidies reduce the competitive advantage of locating close to one’s market. Farm price support subsidies turn idle land into an extremely lucrative real estate investment. Whether through regulations or direct state subsidies to various forms of accumulation, the corporations act through the state to carry out some activities jointly, and to restrict competition to selected areas.

An ever-growing portion of the functions of the capitalist economy have been carried out through the state. According to James O’Connor, state expenditures under monopoly capitalism can be divided into “social capital” and “social expenses.”

Social capital is expenditures required for profitable private accumulation; it is indirectly productive (in Marxist terms, social capital indirectly expands surplus value). There are two kinds of social capital: social investment and social consumption (in Marxist terms, social constant capital and social variable capital).... Social investment consists of projects and services that increase the productivity of a given amount of laborpower and, other factors being equal, increase the rate of profit.... Social consumption consists of projects and services that lower the reproduction costs of labor and, other factors being equal, increase the rate of profit. An example of this is social insurance, which expands the productive powers of the work force while simultaneously lowering labor costs. The second category, social expenses, consists of projects and services which are required to maintain social harmony—to fulfill the state’s “legitimization” function.... The best example is the welfare system, which is designed chiefly to keep social peace among unemployed workers. [164]

According to O’Connor, such state expenditures counteract the falling direct rate of profit that Marx predicted in volume 3 of Capital . Monopoly capital is able to externalize many of its operating expenses on the state; and since the state’s expenditures indirectly increase the productivity of labor and capital at taxpayer expense, the apparent rate of profit is increased. “In short, monopoly capital socializes more and more costs of production.” [165]

(In fact, O’Connor makes the unwarranted assumption that the subsidized increase in capital intensity actually increases productivity, rather than merely subsidizing a higher ratio of capital to output despite the inefficiency of more capital-intensive methods. The subsidized capital-intensive production methods are, in fact, as sure a means of destroying surplus capital as sinking it in the ocean would be.)

O’Connor listed several ways in which monopoly capital externalizes its operating costs on the political system:

Capitalist production has become more interdependent—more dependent on science and technology, labor functions more specialized, and the division of labor more extensive. Consequently, the monopoly sector (and to a much lesser degree the competitive sector) requires increasing numbers of technical and administrative workers. It also requires increasing amounts of infrastructure (physical overhead capital)—transportation, communication, R&D, education, and other facilities. In short, the monopoly sector requires more and more social investment in relation to private capital.... The costs of social investment (or social constant capital) are not borne by monopoly capital but rather are socialized and fall on the state. [166]

The general effect of the state’s intervention in the economy, then, is to remove ever increasing spheres of economic activity from the realm of competition in price or quality, and to organize them collectively through organized capital as a whole.

B. Mass Consumption and Push Distribution to Absorb Surplus

As we have already seen, the use of expensive product-specific machinery requires large-batch production to achieve high throughput and thus spread production costs out over as many units as possible. And doing this, in turn, requires enormous exercises of power to ensure that a market exists for this output.

First of all, it required the prior forms of intervention described in the last chapter and in the previous section of this chapter: state intervention to create a unified national market and transportation system, and state intervention to promote the formation of stable oligopoly cartels.

But despite all the state intervention up front to make the centralized corporate economy possible, state intervention is required afterward as well as before in order to keep the system running. Large, mass-production industry is unable to survive without the government guaranteeing an outlet for its overproduction, and insulating it from a considerable amount of market competition. As Paul Baran and Paul Sweezy put it, monopoly capitalism

tends to generate ever more surplus, yet it fails to provide the consumption and investment outlets required for the absorption of a rising surplus and hence for the smooth working of the system. Since surplus which cannot be absorbed will not be produced, it follows that the normal state of the monopoly capitalist economy is stagnation. With a given stock of capital and a given cost and price structure, the system’s operating rate cannot rise above the point at which the amount of surplus produced can find the necessary outlets. And this means chronic underutilization of available human and material resources. Or, to put the point in slightly different terms, the system must operate at a point low enough on its profitability schedule not to generate more surplus than can be absorbed. Since the profitability schedule is always moving upward, there is a corresponding downdrift of the “equilibrium” operating rate. Left to itself—that is to say, in the absence of counteracting forces which are no part of what may be called the “elementary logic” of the system—monopoly capitalism would sink deeper and deeper into a bog of chronic depression. [167]

Mass production divorces production from consumption. The rate of production is driven by the imperative of keeping the machines running at full capacity so as to minimize unit costs, rather than by customer orders. So in addition to contractual control of inputs, mass-production industry faces the imperative of guaranteeing consumption of its output by managing the consumer. It does this through push distribution, high-pressure marketing, planned obsolescence, and consumer credit.

Mass advertising serves as a tool for managing aggregate demand. According to Baran and Sweezy, the main function of advertising is “waging, on behalf of the producers and sellers of consumer goods, a relentless war against saving and in favor of consumption.” And that function is integrally related to planned obsolescence:

The strategy of the advertiser is to hammer into the heads of people the unquestioned desirability, indeed the imperative necessity, of owning the newest product that comes on the market. For this strategy to work, however, producers have to pour on the market a steady stream of “new” products, with none daring to lag behind for fear his customers will turn to his rivals for newness. Genuinely new or different products, however, are not easy to come by, even in our age of rapid scientific and technological advance. Hence much of the newness with which the consumer is systematically bombarded is either fraudulent or related trivially and in many cases even negatively to the function and serviceability of the product. [168] ....In a society with a large stock of consumer durable goods like the United States, an important component of the total demand for goods and services rests on the need to replace a part of this stock as it wears out or is discarded. Built-in obsolescence increases the rate of wearing out, and frequent style changes increase the rate of discarding.... The net result is a stepping up in the rate of replacement demand and a general boost to income and employment. In this respect, as in others, the sales effort turns out to be a powerful antidote to monopoly capitalism’s tendency to sink into a state of chronic depression. [169]

Although somewhat less state-dependent than the expedients discussed later in this chapter, mass advertising had a large state component. For one thing, the founders of the mass advertising and public relations industries were, in large part, also the founders of the science of “manufacturing consent” used to manipulate Anglo-American populations into support for St. Woodrow’s crusade. Edward Bernays and Harold Lasswell, who played a central role in the Creel Commission and other formative prowar propaganda efforts in WWI, went on to play similarly prominent roles in the development of public relations and mass consumer advertising.

For another, the state’s own organs of propaganda (through the USDA, school home economics classes, etc.) reinforced the message of advertising, placing great emphasis on discrediting “old-fashioned” atavisms like home-baked bread and home-grown and -canned vegetables, and promoting in their place the “up-to-date” housewifely practice of heating stuff up out of cans from the market. [170] Jeffrey Kaplan described this, in a recent article, as the “gospel of consumption”:

[Industrialists] feared that the frugal habits maintained by most American families would be difficult to break. Perhaps even more threatening was the fact that the industrial capacity for turning out goods seemed to be increasing at a pace greater than people’s sense that they needed them. It was this latter concern that led Charles Kettering, director of General Motors Research, to write a 1929 magazine article called “Keep the Consumer Dissatisfied.”... Along with many of his corporate cohorts, he was defining a strategic shift for American industry—from fulfilling basic human needs to creating new ones. In a 1927 interview with the magazine Nation’s Business, Secretary of Labor James J. Davis provided some numbers to illustrate a problem that the New York Times called “need saturation.” Davis noted that “the textile mills of this country can produce all the cloth needed in six months’ operation each year” and that 14 percent of the American shoe factories could produce a year’s supply of footwear. The magazine went on to suggest, “It may be that the world’s needs ultimately will be produced by three days’ work a week.” Business leaders were less than enthusiastic about the prospect of a society no longer centered on the production of goods. For them, the new “labor-saving” machinery presented not a vision of liberation but a threat to their position at the center of power. John E. Edgerton, president of the National Association of Manufacturers, typified their response when he declared “Nothing... breeds radicalism more than unhappiness unless it is leisure.” By the late 1920s, America’s business and political elite had found a way to defuse the dual threat of stagnating economic growth and a radicalized working class in what one industrial consultant called “the gospel of consumption”—the notion that people could be convinced that however much they have, it isn’t enough. President Herbert Hoover’s 1929 Committee on Recent Economic Changes observed in glowing terms the results: “By advertising and other promotional devices ... a measurable pull on production has been created which releases capital otherwise tied up.” They celebrated the conceptual breakthrough: “Economically we have a boundless field before us; that there are new wants which will make way endlessly for newer wants, as fast as they are satisfied.” [171]

Right-wing libertarians like Murray Rothbard answer critiques of mass advertising by saying they downplay the role of the audience as an active moral agent in deciding what to accept and what to reject, and fail to recognize that information has a cost and that there’s such a thing as “rational ignorance.” Interestingly, however, many of Rothbard’s followers at Mises.Org and Lew Rockwell.Com show no hesitancy whatsoever in attributing a cumulative sleeper effect to statist propaganda in the public schools and state-allied media. No doubt they would argue that, in the latter case, both the volume and the content of the propaganda are artificially shifted in the direction of a certain message, thus artificially raising the cost of defending against the propaganda message. But that is exactly my point concerning mass advertising. The state capitalist system makes mass-production industry for the national market artificially prevalent, and makes its need to dispose of surplus output artificially urgent, thus subjecting the consumer to a barrage of pro-consumption propaganda far greater in volume than would be experienced in a decentralized, free market society of small-scale local commodity production.

Chandler’s model of “high-speed, high-throughput, turning high fixed costs into low unit costs,” and Galbraith’s “technostructure,” presuppose a “push” model of distribution. The push paradigm is characterized by the following assumptions:

There’s not enough to go around

Elites do the deciding

Organizations must be hierarchical

People must be molded

Bigger is better

Demand can be forecast

Resources can be allocated centrally

Demand can be met [172]

Here’s how push distribution was described by Paul and Percival Goodman not long after World War II:

... in recent decades... the center of economic concern has gradually shifted from either providing goods for the consumer or gaining wealth for the enterpriser, to keeping the capital machines at work and running at full capacity; for the social arrangements have become so complicated that, unless the machines are running at full capacity, all wealth and subsistence are jeopardized, investment is withdrawn, men are unemployed. That is, when the system depends on all the machines running, unless every kind of good is produced and sold, it is also impossible to produce bread. [173]

The same imperative was at the root of the hypnopaedic socialization in Huxley’s Brave New World: “ending is better than mending”; “the more stitches, the less riches.” Or as GM designer Harley Earl said in the 1950s,

My job is to hasten obsolescence. I’ve got it down to two years; now when I get it down to one year, I’ll have a perfect score. [174]

Along the same lines, Baran and Sweezy cite a New York investment banker on the disaster that would befall capitalism without planned obsolescence or branding: “Clothing would be purchased for its utility value; food would be bought on the basis of economy and nutritional value; automobiles would be stripped to essentials and held by the same owners for the full ten to fifteen years of their useful lives; homes would be built and maintained for their characteristics of shelter....” [175]

The older economy that the “push” distribution system replaced was one in which most foods and drugs were what we would today call “generic.” Flour, cereal, and similar products were commonly sold in bulk and weighed and packaged by the grocer (although in the twenty years before Borsodi wrote in 1927, the trade had shifted from roughly 95% bulk goods to 75% package goods); the producers geared production to the level of demand that was relayed to them by the retailers’ orders. Drugs, likewise, were typically compounded by the druggist on-premises to the physician’s specifications, from generic components. [176] Production was driven by orders from the grocer, as customers used up his stock of bulk goods.

Under the new “push” system, the producers appealed directly to the consumer through brand-name advertising, and relied on pressure on the grocer to create demand for what they chose to produce. Brand loyalty helps to stabilize demand for a particular manufacturer’s product, and eliminate the fluctuation of demand that accompanies price competition in pure commodities.

It is possible to roughly classify a manufacturer as belonging either to those who “make” products to meet requirements of the market, or as belonging to those who “distribute” brands which they decide to make. The manufacturer in the first class relies upon the natural demand for his product to absorb his output. He relies upon competition among wholesalers and retailers in maintaining attractive stocks to absorb his production. The manufacturer in the second class creates a demand for his brand and forces wholesalers and retailers to buy and “stock” it. In order to market what he has decided to manufacture, he figuratively has to make water run uphill. [177]

The problem was that the consumer, under the new regime of Efficiency, paid about four times as much for trademarked flour, sugar, etc., as he had paid for bulk goods under the old “inefficient” system. [178] Under the old regime, the grocer was a purchasing agent for the customer; under the new, he was a marketing agent for the producer.

Distribution costs are increased still further by the fact that larger-scale production and greater levels of capital intensiveness increase the unit costs resulting from idle capacity, and thereby (as we saw in the last chapter) greatly increase the resources devoted to high-pressure, “push” forms of marketing.

Borsodi’s book The Distribution Age was an elaboration of the fact that, as he stated in the Preface, production costs fell by perhaps a fifth between 1870 and 1920, even as the cost of marketing and distribution nearly tripled. [179] The modest reduction in unit production cost was more than offset by the increased costs of distribution and high-pressure marketing. “[E]very part of our economic structure,” he wrote, was “being strained by the strenuous effort to market profitably what modern industry can produce.” [180]
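Borsodi’s arithmetic is worth making explicit. With purely illustrative numbers (not his actual figures), suppose a good cost $1.00 to produce and $0.50 to distribute at the start of the period he surveys:

    # Illustrative figures only, not Borsodi's data: a fifth off
    # production cost is swamped by a tripling of distribution cost.
    production_before, distribution_before = 1.00, 0.50
    production_after = production_before * (1 - 1/5)         # 0.80
    distribution_after = distribution_before * 3             # 1.50
    total_before = production_before + distribution_before   # 1.50
    total_after = production_after + distribution_after      # 2.30
    print(total_after / total_before)  # ~1.53: total cost up ~53%

On these assumptions, the total cost of getting the good into the consumer’s hands rises by half, even as the factory itself grows more “efficient.”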

Distribution costs are far lower under a demand-pull regime, in which production is geared to demand. As Borsodi argued,

...[I]t is still a fact... that the factory which sells only in its natural field because that is where it can serve best, meets little sales-resistance in marketing through the normal channels of distribution. The consumers of such a factory are so “close” to the manufacturer, their relations are so intimate, that buying from that factory has the force of tradition. Such a factory can make shipment promptly; it can adjust its production to the peculiarities of its territory, and it can make adjustments with its customers more intelligently than factories which are situated at a great distance. High pressure methods of distribution do not seem tempting to such a factory. They do not tempt it for the very good reason that such a factory has no problem to which high pressure distribution offers a solution. It is the factory which has decided to produce trade-marked, uniform, packaged, individualized, and nationally advertised products, and which has to establish itself in the national market by persuading distributors to pay a higher than normal price for its brand, which has had to turn to high pressure distribution. Such a factory has a selling problem of a very different nature from that of factories which are content to sell only where and to whom they can sell most efficiently. [181]

For those whose low overhead permits them to produce in response to consumer demand, marketing is relatively cheap. Rather than expending enormous effort to make people buy their product, they can just fill the orders that come in. When demand for the product must be created, the effort (to repeat Borsodi’s metaphor) is comparable to that of making water run uphill. Mass advertising is only a small part of it. Even more costly is direct mail advertising and door-to-door canvassing by salesmen to pressure grocers in a new market to stock one’s goods, and canvassing of grocers themselves by sales reps. [182] The costs of advertising, packaging, brand differentiation, etc., are all costs of overcoming sales resistance that only exist because production is divorced from demand rather than driven by it.

And this increased marginal cost of distribution for output above the natural level of demand results, in accordance with Ricardo’s law of rent, in a higher average price for all goods. This means that in the market as it exists now, the price of generic and store-brand goods is not governed by production cost, as it would be if they competed in a pure commodity market; it is governed by the bare amount by which they need to be marked down to compete with brand-name goods. [183]
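The Ricardian logic can be sketched with hypothetical numbers: just as the price of corn is set by the cost of production on the worst land under cultivation, the price of the marketed good gravitates toward the cost of the hardest-to-sell marginal unit.

    # Hypothetical tranches of output, not empirical data. Distribution
    # cost per unit rises as output is pushed past natural demand; the
    # price of all units gravitates toward the cost of the marginal one.
    production_cost = 1.00
    distribution_cost_by_tranche = [0.10, 0.30, 0.60]  # last is marginal
    price = production_cost + distribution_cost_by_tranche[-1]  # 1.60
    # A generic competitor costing 1.10 all-in is then priced just under
    # 1.60: governed by the markdown needed against the brand, not by
    # its own cost of production.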

For those who can flexibly respond to demand, also, predictability of consumer demand doesn’t matter that much. Of the grocer, for example, Borsodi pointed out that the customer would always have to eat, and would continue to do so without a single penny of high pressure marketing. It was therefore a matter of indifference to the grocer whether the customer ate some particular product or brand name; he would stock whatever goods the customer preferred, as his existing stocks were used up, and change his orders in keeping with changes in customer preference. To the manufacturer, on the other hand, it is of vital importance that the customer buy (say) mayonnaise in particular—and not just mayonnaise, but his particular brand of mayonnaise. [184]

And the proliferation of brand names with loyal followings raises the cost of distribution considerably: rather than stocking generic cornflakes in bulk commodity form, and replacing the stock as it is depleted, the grocer must maintain large enough stocks of all the (almost identical) popular brands to ensure against running out, which means slower turnover and more wasted shelf space. In other words, push distribution results in the costly disruption of flow by stagnant eddies and pools, in the form of ubiquitous inventories. [185]

The advantage of brand specification, from the perspective of the producer, is that it “lifts a product out of competition”: [186] “the prevalence of brand specification has all but destroyed the normal basis upon which true competitive prices can be established.” [187] As Barry Stein described it, branding “convert[s] true commodities to apparent tailored goods, so as to avoid direct price competition in the marketplace.”

The distinctions introduced—elaborate packaging, exhortative advertising and promotion that asserts the presence of unmeasurable values, and irrelevant physical modification (colored toothpaste)—do not, in fact, render these competing products “different” in any substantive sense, but to the extent that consumers are convinced by these distinctions and treat them as if they were different, product loyalty is generated. [188]

Under the old regime, competition between identifiable producers of bulk goods enabled grocers to select the highest quality bulk goods, while providing them to customers at the lowest price. Brand specification, on the other hand, relieves the grocer of the responsibility for standing behind his merchandise and turns him into a mere stocker of shelves with the most-demanded brands.

The change, naturally, did not go unremarked by those profiting from it. For example, here’s a bit of commentary from an advertising trade paper in 1925:

In the statement to its stockholders issued recently by The American Sugar Refining Company, we find this statement: “Formerly, as is well known, household sugar was largely sold in bulk. We have developed the sale of package sugar and table syrup under the trade names of ‘Domino’ and ‘Franklin’ with such success that the volume of trade-mark packages now constitutes roundly one-half of our production that goes into households....” These facts should be of vital interest to any executive who faces the problem of marketing a staple product that is hard to control because it is sold in bulk. Twenty years ago the sale of sugar in cardboard cartons under a brand name would have been unthinkable. Ten years hence this kind of history will have repeated itself in connection with many other staple commodities now sold in bulk.... [189]

The process went on, just as the paper predicted, until—decades later—the very idea of a return to price competition in the production of goods, instead of brand-name competition for market share, would strike manufacturers with horror. What Borsodi proposed, making “[c]ompetition... descend from the cloudy heights of sales appeals and braggadocio generally, to just one factor—price,” [190] is the worst nightmare of the oligopoly manufacturer and the advertising industry:

At the annual meeting of the U.S. Association of National Advertisers in 1988, Graham H. Phillips, the U.S. Chairman of Ogilvy & Mather, berated the assembled executives for stooping to participate in a “commodity marketplace” rather than an image-based one. “I doubt that many of you would welcome a commodity marketplace in which one competed solely on price, promotion and trade deals, all of which can be easily duplicated by competition, leading to ever-decreasing profits, decay, and eventual bankruptcy.” Others spoke of the importance of maintaining “conceptual value-added,” which in effect means adding nothing but marketing. Stooping to compete on the basis of real value, the agencies ominously warned, would speed not just the death of the brand, but corporate death as well. [191]

It’s telling that Chandler, the apostle of the great “efficiencies” of this entire system, frankly admitted all of these things. In fact, far from regarding it as an “admission,” he treated it as a feature of the system. He explicitly equated “prosperity” to the rate of flow of material through the system and the speed of production and distribution—without any regard to whether the rate of “flow” was twice as fast because people were throwing stuff in the landfills twice as fast to keep the pipelines from clogging up.

The new middle managers did more than devise ways to coordinate the high-volume flow from suppliers of raw materials to consumers. They invented and perfected ways to expand markets and to speed up the processes of production and distribution. Those at American Tobacco, Armour, and other mass producers of low-priced packaged products perfected techniques of product differentiation through advertising and brand names that had been initially developed by mass marketers, advertising agencies, and patent medicine makers. The middle managers at Singer were the first to systematize personal selling by means of door-to-door canvassing; those at McCormick among the first to have franchised dealers using comparable methods. Both companies innovated in installment buying and other techniques of consumer credit. [192]

In other words, the Sloanist system Chandler idealized was more “efficient” because it was better at persuading people to throw stuff away so they could buy more, and better at producing substandard shit that would have to be thrown away in a few years. Only a man of the mid-20th century, writing at the height of consensus capitalism, from the standpoint of an establishment liberalism as yet utterly untainted by the thinnest veneer of greenwash, could write such a thing from the standpoint of an enthusiast.

Increased unit costs from idle capacity, given the high overhead of large-scale production, are the chief motive behind the push distribution model. Even so, the restrained competition of an oligopoly market limits the competitive disadvantage resulting from idle capacity—so long as the leading firms in an industry are running at roughly comparable percentages of capacity, and can pass their overhead costs onto the customer. The oligopoly mark-up included in consumer price reflects the high costs of excess capacity.

It is difficult to estimate how large a part of the nation’s production facilities are normally in use. One particularly able observer of economic tendencies, Colonel Leonard P. Ayres, uses the number of blast furnaces in operation as a barometer of business conditions. When blast furnaces are in 60 per cent. operation, conditions are normal.... It is obvious, if 60 per cent. represents normality, that consumers of such a basic commodity as pig iron must pay dividends upon an investment capable of producing two-thirds more pig iron than the country uses in normal times.

Borsodi also found that flour mills, steel plants, shoe factories, copper smelters, lumber mills, automobile plants, and rayon manufacturers were running at similar or lower percentages of total capacity. [193] Either way, it is the consumer who pays for overaccumulation: both for the high marketing costs of distributing overproduced goods when industry runs at full capacity, and for the high overhead when the firms in an oligopoly market all run at low capacity and pass their unit costs on through administered pricing.
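The overhead arithmetic at work here is simple to sketch (again with made-up figures):

    # Made-up figures: idle capacity inflates unit cost because fixed
    # overhead is spread over fewer units.
    fixed_overhead = 1_000_000.0  # annual plant, debt service, salaried staff
    variable_cost = 2.0           # per-unit materials and labor
    capacity = 100_000            # units per year at full utilization

    def unit_cost(utilization: float) -> float:
        units = capacity * utilization
        return variable_cost + fixed_overhead / units

    print(unit_cost(1.0))   # 12.00 at full capacity
    print(unit_cost(0.6))   # ~18.67 at the 60% Borsodi describes

At 60 percent utilization, the consumer of this hypothetical firm’s output pays roughly half again the full-capacity unit cost: the “dividends upon an investment” Borsodi complains of.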

So cartelization and high costs from idle capacity, alongside push distribution and planned obsolescence, together constitute the twin pathologies of monopoly capitalism. Both are expedients for dealing with the enormous capital outlays and overproduction entailed in mass-production industry, and both require that outside society be subordinated to the needs of the corporation and subjected to its control.

The worst-case scenario, from our standpoint, is that big business will attempt an end-run around the problem of excess capacity and underconsumption through measures like the abortive National Industrial Recovery Act of the New Deal era: cartelizing an industry under government auspices, so all its firms can operate at a fraction of full capacity indefinitely and use monopoly pricing to pass the cost of idle capacity on to the consumer on a cost-plus basis. Anyone tempted to see this as a solution should bear in mind that it removes all incentive to control costs or to promote efficiency. For a picture of the kind of society that would result from such an arrangement, one need only watch the movie Brazil.

The overall system, in short, was a “solution” in search of a problem. State subsidies and mercantilism gave rise to centralized, overcapitalized industry, which led to overproduction, which led to the need to find a way of creating demand for lots of crap that nobody wanted.

C. State Action to Absorb Surplus: Imperialism

The roots of the corporate state in the U.S., more than anything else, lie in the crisis of overproduction as perceived by corporate and state elites—especially the traumatic Depression of the 1890s—and the requirement, also as perceived by them, for state intervention to absorb surplus output or otherwise deal with the problems of overproduction, underconsumption, and overaccumulation. According to William Appleman Williams, “the Crisis of the 1890s raised in many sections of American society the specter of chaos and revolution.” [194] Economic elites saw it as the result of overproduction and surplus capital, and believed it could be resolved only through access to a “new frontier.” Without state-guaranteed access to foreign markets, output would fall below capacity, unit costs would go up, and unemployment would reach dangerous levels.

Accordingly, the centerpiece of American foreign policy to the present day has been what Williams called “Open Door Imperialism” [195]: securing American access to foreign markets on equal terms to the European colonial powers, and opposing attempts by those powers to divide up or close markets in their spheres of influence.

Open Door Imperialism consisted of using U.S. political power to guarantee access to foreign markets and resources on terms favorable to American corporate interests, without relying on direct political rule. Its central goal was to obtain for U.S. merchandise, in each national market, treatment equal to that afforded any other industrial nation. Most importantly, this entailed active engagement by the U.S. government in breaking down the imperial powers’ existing spheres of economic influence or preference. The result, in most cases, was to treat as hostile to U.S. security interests any large-scale attempt at autarky, or any other policy whose effect was to withdraw major areas of the world from the disposal of the U.S. corporate economy. When the power attempting such policies was an equal, like the British Empire, the U.S. reaction was merely one of measured coolness. When it was perceived as an inferior, like Japan, the U.S. resorted to more forceful measures, as events of the late 1930s indicate. And whatever the degree of equality between advanced nations in their access to Third World markets, it was clear that Third World nations were still to be subordinated to the industrialized West in a collective sense.

In the late 1930s, the American political leadership feared that Fortress Europe and the Greater East Asia Co-Prosperity Sphere would deprive the American corporate economy of vitally needed raw materials, not to mention outlets for its surplus output and capital; that’s what motivated FDR to maneuver the country into another world war. The State Department’s internal studies at the time estimated that the American economy required, at a minimum, the resources and markets of a “Grand Area” consisting of Latin America, East Asia, and the British Empire. Japan, meanwhile, was conquering most of China (home of the original Open Door) and the tin and rubber of Indochina, and threatening to capture the oil of the Dutch East Indies as well. In Europe, the worst-case scenario was the fall of Britain, followed by the German capture of some considerable portion of the Royal Navy and subsequently of the Empire. War with the Axis would have followed from these perceived threats as a matter of course, even had FDR not successfully maneuvered Japan into firing the first shot. [196]

World War II, incidentally, also went a long way toward postponing America’s crises of overproduction and overaccumulation for a generation, by blowing up most of the capital in the world outside the United States and creating a permanent war economy to absorb surplus output.

The American policy that emerged from the war was to secure control over the markets and resources of the global “Grand Area” through institutions of global economic governance, as created by the postwar Bretton Woods system, and to make preventing “defection from within” by autarkic powers the centerpiece of national security policy.

The problem of access to foreign markets and resources was central to U.S. postwar planning. Given the structural imperatives of “export dependent monopoly capitalism,” [197] the threat of a postwar depression was very real. The original drive toward foreign expansion at the end of the nineteenth century reflected the fact that industry, with state capitalist encouragement, had expanded far beyond the ability of the domestic market to consume its output. Even before World War II, the state capitalist economy had serious trouble operating at the level of output needed for full utilization of capacity and cost control. Military-industrial policy during the war exacerbated the problem of over-accumulation, greatly increasing the value of plant and equipment at taxpayer expense. The end of the war, if followed by the traditional pattern of demobilization, would have resulted in a drastic reduction in orders to that same overbuilt industry just as over ten million workers were being dumped back into the civilian labor force.

A central facet of postwar economic policy, as reflected in the Bretton Woods agencies, was state intervention to guarantee markets for the full output of U.S. industry and profitable outlets for surplus capital. The World Bank was designed to subsidize the export of capital to the Third World, by financing the infrastructure without which Western-owned production facilities could not be established there. According to Gabriel Kolko’s 1988 estimate, almost two thirds of the World Bank’s loans since its inception had gone to transportation and power infrastructure. [198] A laudatory Treasury Department report referred to such infrastructure projects (comprising some 48% of lending in FY 1980) as “externalities” to business, and spoke glowingly of the benefits of such projects in promoting the expansion of business into large market areas and the consolidation and commercialization of agriculture. [199] The Volta River power project, for example, was built with American loans (at high interest) to provide Kaiser aluminum with electricity at very low rates. [200]

D. State Action to Absorb Surplus: State Capitalism

Government also intervened directly to alleviate the problem of overproduction, by its increasing practice of purchasing the corporate economy’s surplus output—through Keynesian fiscal policy, massive highway and civil aviation programs, the military-industrial complex, the prison-industrial complex, foreign aid, and so forth. Baran and Sweezy point to the government’s rising share of GDP as “an approximate index of the extent to which government’s role as a creator of effective demand and absorber of surplus has grown during the monopoly capitalist era.” [201]

If the depressive effects of growing monopoly had operated unchecked, the United States economy would have entered a period of stagnation long before the end of the nineteenth century, and it is unlikely that capitalism could have survived into the second half of the twentieth century. What, then, were the powerful external stimuli which offset these depressive effects and enabled the economy to grow fairly rapidly during the later decades of the nineteenth century and, with significant interruptions, during the first two thirds of the twentieth century? In our judgment, they are of two kinds which we classify as (1) epoch-making innovations, and (2) wars and their aftermaths.

By “epoch-making innovations,” Baran and Sweezy refer to “those innovations which shake up the entire pattern of the economy and hence create vast investment outlets in addition to the capital which they directly absorb.” [202]

As for wars, Emmanuel Goldstein described their function quite well. “Even when weapons of war are not actually destroyed, their manufacture is still a convenient way of expending labor power without producing anything that can be consumed.” War is a way of “shattering to pieces, or pouring into the stratosphere, or sinking in the depths of the sea,” excess output and capital. [203]

Earlier, we quoted Robin Marris on the tendency of corporate bureaucracies to emphasize, not the character of goods produced, but the skills with which their production was organized. This is paralleled at a societal level. The imperative to destroy surplus is reflected in the GDP, which measures not the utility of goods and services to the consumer but the materials consumed in producing them. The more of Bastiat’s “broken windows,” the more inputs consumed to produce a given output, the higher the GDP.

As we said in the last chapter, the highway-automobile complex and the civil aviation system were continuations of the process begun with the railroads and other “internal improvements” of the nineteenth century: i.e., government subsidy to market centralization and large firm size. But as we pointed out then, they also have special significance as examples of the phenomenon Paul Baran and Paul Sweezy described in Monopoly Capital: government’s creation of entire new industries to soak up the surplus generated by corporate capitalism’s chronic tendencies toward overinvestment and overproduction.

Of the automobile-highway complex, Baran and Sweezy wrote, “[t]his complex of private interests clustering around one product has no equal elsewhere in the economy—or in the world. And the whole complex, of course, is completely dependent on the public provision of roads and highways.” [204] Not to mention the role of U.S. foreign policy in guaranteeing access to “cheap and abundant” petroleum.

One of the major barriers to the fledgling automobile industry at the turn of the century was the poor state of the roads. One of the first highway lobbying groups was the League of American Wheelmen, which founded “good roads” associations around the country and, in 1891, began lobbying state legislatures....

The Federal Aid Roads Act of 1916 encouraged coast-to-coast construction of paved roads, usually financed by gasoline taxes (a symbiotic relationship if ever there was one). By 1930, the annual budget for federal road projects was $750 million. After 1939, with a push from President Franklin Roosevelt, limited-access interstates began to make rural areas accessible. [205]

It was this last, in the 1930s, that signified the most revolutionary change. From its beginning, the movement for a national superhighway network was identified, first of all, with the fascist industrial policy of Hitler, and second with the American automotive industry.

The “most powerful pressure group in Washington” began in June, 1932, when GM President, Alfred P. Sloan, created the National Highway Users Conference, inviting oil and rubber firms to help GM bankroll a propaganda and lobbying effort that continues to this day. [206]

One of the earliest depictions of the modern superhighway in America was the Futurama exhibit at the 1939 World’s Fair in New York, sponsored by (who else?) GM.

The exhibit... provided a nation emerging from its darkest decade since the Civil War a mesmerizing glimpse of the future—a future that involved lots and lots of roads. Big roads. Fourteen-lane superhighways on which cars would travel at 100 mph. Roads on which, a recorded narrator promised, Americans would eventually be able to cross the nation in a day. [207]

The Interstate’s association with General Motors didn’t end there, of course. Its actual construction took place under the supervision of DOD Secretary Charles Wilson, formerly the company’s CEO. During his 1953 confirmation hearings, when asked whether “he could make a decision in the country’s interest that was contrary to GM’s interest,”

Wilson shot back with his famous comment, “I cannot conceive of one because for years I thought what was good for our country was good for General Motors, and vice versa. The difference did not exist. Our company is too big.” [208]

Wilson’s role in the Interstate program was hardly that of a mere disinterested technocrat. From the time of his appointment to DOD, he “pushed relentlessly” for it. And the chief administrator of the program was “Francis DuPont, whose family owned the largest share of GM stock....” [209]

Corporate propaganda, as so often in the twentieth century, played an active role in attempts to reshape the popular culture.

Helping to keep the driving spirit alive, Dow Chemical, producer of asphalt, entered the PR campaign with a film featuring a staged testimonial from a grade school teacher standing up to her anti-highway neighbors with quiet indignation. “Can’t you see this highway means a whole new way of life for the children?” [210]

Whatever the political motivation behind it, the economic effect of the Interstate system should hardly be controversial. Virtually 100% of the roadbed damage to highways is caused by heavy trucks. And despite repeated liberalization of maximum weight restrictions, far beyond the heaviest conceivable weight the Interstate roadbeds were originally designed to support,

fuel taxes fail miserably at capturing from big-rig operators the cost of exponential pavement damage caused by higher axle loads. Only weight-distance user charges are efficient, but truckers have been successful at scrapping them in all but a few western states where the push for repeal continues. [211]

So only about half the revenue of the highway trust fund comes from fees or fuel taxes on the trucking industry, and the rest is externalized on private automobiles. Even David S. Lawyer, a skeptic on the general issue of highway subsidies, only questions whether highways receive a net subsidy from general revenues over and above total user fees on both trucks and cars; he effectively concedes the subsidy of heavy trucking by the gasoline tax. [212]
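The disproportion here is usually stated as the fourth-power rule of thumb derived from the AASHO road tests: pavement damage rises with roughly the fourth power of axle load. A back-of-the-envelope illustration (the axle weights below are assumptions, not data from the sources cited above):

    # Commonly cited fourth-power rule of thumb (AASHO road tests):
    # pavement damage scales roughly with (axle load) ** 4.
    # Axle weights are illustrative assumptions.
    car_axle_tons = 1.0
    truck_axle_tons = 9.0   # a heavily loaded single axle
    relative_damage = (truck_axle_tons / car_axle_tons) ** 4
    print(relative_damage)  # 6561: one truck axle pass does thousands
                            # of times the pavement damage of a car's

A flat per-gallon fuel tax cannot track a cost curve that steep, which is why the damage caused by heavy trucking ends up externalized on motorists.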

As for the civil aviation system, from the beginning it was a creature of the state. The whole physical infrastructure was built, in its early decades, with tax money.

Since 1946, the federal government has poured billions of dollars into airport development. In 1992, Prof. Stephen Paul Dempsey of the University of Denver estimated the current replacement value of the U.S. commercial airport system—virtually all of it developed with federal grants and tax-free municipal bonds—at $1 trillion. Not until 1971 did the federal government begin collecting user fees from airline passengers and freight shippers to recoup this investment. In 1988 the Congressional Budget Office found that in spite of user fees paid into the Airport and Airways Trust Fund, the taxpayers still had to transfer $3 billion in subsidies per year to the FAA to maintain its network of more than 400 control towers, 22 air traffic control centers, 1,000 radar-navigation aids, 250 long-range and terminal radar systems and its staff of 55,000 traffic controllers, technicians and bureaucrats. [213]

(And even aside from the inadequacy of user fees, eminent domain remains central to the building of new airports and expansion of existing ones.)

Subsidies to the airport and air traffic control infrastructure of the civil aviation system are only part of the picture. Equally important was the direct role of the state in creating the heavy aircraft industry, whose heavy cargo and passenger jets revolutionized civil aviation after WWII. The civil aviation system is, many times over, a creature of the state.

In Harry Truman and the War Scare of 1948, Frank Kofsky described the aircraft industry as spiraling into red ink after the end of the war, and on the verge of bankruptcy when it was rescued by Truman’s new bout of Cold War spending on heavy bombers. [214] David Noble pointed out that civilian jumbo jets would never have existed without the government’s heavy bomber contracts. The production runs for the civilian market alone were too small to pay for the complex and expensive machinery. The 747 is essentially a spinoff of military production. [215]

The permanent war economy associated with the Cold War prevented the U.S. from relapsing into depression after demobilization. The Cold War restored the corporate economy’s heavy reliance on the state as a source of guaranteed sales. Charles Nathanson argued that “one conclusion is inescapable: major firms with huge aggregations of corporate capital owe their survival after World War II to the Cold War....” [216] According to David Noble, employment in the aircraft industry grew more than tenfold between 1939 and 1954. Whereas military aircraft amounted to only a third of industry output in 1939, by 1953, military airframe weight production was 93% of total output. [217] “The advances in aerodynamics, metallurgy, electronics, and aircraft engine design which made supersonic flight a reality by October 1947 were underwritten almost entirely by the military.” [218]

As Marx pointed out in Volume Three of Capital, the rise of major new forms of industry could absorb surplus capital and counteract the falling direct rate of profit. Baran and Sweezy, likewise, considered “epoch-making innovations” as partial counterbalances to the ever-increasing surplus. Their chief example was the rise of the automobile industry in the 1920s, which (along with the highway program) was to define the American economy for most of the mid-20th century. [219] The high tech boom of the 1990s was a similarly revolutionary event. It is revealing to consider the extent to which both the automobile and computer industries, far more than most industries, were direct products of state capitalism.

Besides civilian jumbo jets, many other entirely new industries were also created almost entirely as a byproduct of military spending. Through the military-industrial complex, the state has socialized a major share—probably the majority—of the cost of “private” business’s research and development. If anything, the role of the state as purchaser of surplus economic output is eclipsed by its role as subsidizer of research cost, as Charles Nathanson pointed out. Research and development was heavily militarized by the Cold War “military-R&D complex.” Military R&D often results in basic, general-use technologies with broad civilian applications. Technologies originally developed for the Pentagon have often become the basis for entire categories of consumer goods. [220] The general effect has been to “substantially [eliminate] the major risk area of capitalism: the development of and experimentation with new processes of production and new products.” [221]

This is the case in electronics especially, where many products originally developed by military R&D “have become the new commercial growth areas of the economy.” [222] Transistors and other miniaturized circuitry were developed primarily with Pentagon research money. The federal government was the primary market for large mainframe computers in the early days of the industry; without government contracts, the industry might never have had production runs large enough to adopt mass production and bring unit costs down far enough to enter the private market.

Overall, Nathanson estimated, industry depended on military funding for around 60% of its research and development spending; but this figure is considerably understated by the fact that a significant part of nominally civilian R&D spending is aimed at developing civilian applications for military technology. [223] It is also understated by the fact that military R&D is often used for developing production technologies that become the basis for production methods throughout the civilian sector.

In particular, as described by Noble in Forces of Production, industrial automation, cybernetics and miniaturized electronics all emerged directly from the military-funded R&D of WWII and the early Cold War. The aircraft, electronics and machine tools industries were transformed beyond recognition by the military economy. [224]

“The modern electronics industry,” Noble writes, “was largely a military creation.” Before the war, the industry consisted largely of radio. [225] Miniaturized electronics and cybernetics were almost entirely the result of military R&D.

Miniaturization of electrical circuits, the precursor of modern microelectronics, was promoted by the military for proximity fuses for bombs.... Perhaps the most significant innovation was the electronic digital computer, created primarily for ballistics calculations but used as well for atomic bomb analysis. After the war, the electronics industry continued to grow, stimulated primarily by military demands for aircraft and missile guidance systems, communications and control instruments, industrial control devices, high-speed electronic computers for air defense command and control networks..., and transistors for all of these devices.... In 1964, two-thirds of the research and development costs in the electrical equipment industry (e.g., those of GE, Westinghouse, RCA, Raytheon, AT&T, Philco, IBM, Sperry Rand) were still paid for by the government. [226]

The transistor, “the outgrowth of wartime work on semi-conductors,” came out of Bell Labs in 1947. Despite obstacles like high cost and poor reliability, and resistance resulting from path dependency in the tube-based electronics industry, the transistor won out

through the large-scale and sustained sponsorship of the military, which needed the device for aircraft and missile control, guidance, and communications systems, and for the digital command-and-control computers that formed the core of their defense networks. [227]

In cybernetics, likewise, the electronic digital computer was developed largely in response to military needs. ENIAC, developed for the Army at the University of Pennsylvania’s Moore School of Electrical Engineering, was used for ballistics calculations and for calculations in the atomic bomb project. [228] Despite the reduced cost and increased reliability of hardware, and advances in computer language software systems, “in the 1950s the main users remained government agencies and, in particular, the military. The Air Force SAGE air defense system alone, for example, employed the bulk of the country’s programmers...”

SAGE produced, among other things, “a digital computer that was fast enough to function as part of a continuous feedback control system of enormous complexity,” which could therefore “be used continuously to monitor and control a vast array of automatic equipment in ‘real time’....” These capabilities were key to later advances in industrial automation. [229]

The same pattern prevailed in the machine tool industry, the primary focus of Forces of Production. The share of total machine tools in use that were under ten years old rose from 28% in 1940 to 62% in 1945. At the end of the war, three hundred thousand machine tools were declared surplus and dumped on the commercial market at fire-sale prices. Although this caused the industry to contract (and consolidate), the Cold War resulted in a revival of the machine tools industry. R&D expenditures in machine tools expanded eightfold from 1951 to 1957, thanks to military needs. In the process, the machine tool industry became dominated by the “cost plus” culture of military industry, with its guaranteed profit. [230]

The specific technologies used in automated control systems for machine tools all came out of the military economy:

...[T]he effort to develop radar-directed gunfire control systems, centered at MIT’s Servomechanisms Laboratory, resulted in a range of remote control devices for position measurement and precision control of motion; the drive to develop proximity fuses for mortar shells produced miniaturized transceivers, early integrated circuits, and reliable, rugged, and standardized components. Finally, by the end of the war, experimentation at the National Bureau of Standards, as well as in Germany, had produced magnetic tape, recording heads (tape readers), and tape recorders for sound movies and radio, as well as information storage and programmable machine control. [231]

In particular, World War II R&D for radar-directed gunfire control systems was the primary impetus behind the development of servomechanisms and automatic control,

pulse generators, to convey precisely electrical information; transducers, for converting information about distance, heat, speed, and the like into electrical signals; and a whole range of associated actuating, control and sensing devices. [232]

Industrial automation was introduced in private industry by the same people who had developed the technology for the military economy. The first analog computer-controlled industrial operations were in the electrical power and petroleum refining industries in the 1950s. In 1959, Texaco’s Port Arthur refinery placed production under full digital computer control, and was followed in 1960 by Monsanto’s Louisiana ammonia plant and B. F. Goodrich’s vinyl plant in Calvert, Kentucky. From there the revolution quickly spread to steel rolling mills, blast furnaces, and chemical processing plants. By the 1960s, computerized control evolved from open-loop to closed-loop feedback systems, with computers making adjustments automatically based on sensor feedback. [233]
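The open-loop/closed-loop distinction the passage turns on is easy to sketch as a toy proportional controller (a generic illustration, not a model of any historical system):

    # Toy proportional controller: closed-loop control corrects the
    # actuator from sensor feedback; open-loop control fixes it in
    # advance from a model and never corrects for drift.
    setpoint = 100.0   # desired process value, e.g. temperature
    gain = 0.5         # proportional gain, assumed for illustration

    def closed_loop_step(measured: float, actuator: float) -> float:
        error = setpoint - measured
        return actuator + gain * error  # nudge actuator toward setpoint

    # The open-loop equivalent computes the actuator setting once from
    # a model, with no sensor in the loop and no correction afterward.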

Numerically controlled machine tools, in particular, were first developed with Air Force money, and first introduced (both with Air Force funding and under Air Force pressure) in the aircraft and the aircraft engines and parts industries, and in USAF contractors in the machine tool industry. [234]

So, the military economy and other state-created industries were an enormous sponge for surplus capital and surplus output. The heavy industrial and high tech sectors were given a virtually guaranteed outlet, not only by U.S. military procurement, but by grants and loan guarantees for foreign military sales under the Military Assistance Program.

Although apologists for the military-industrial complex have tried to stress the relatively small fraction of total production represented by military goods, it makes more sense to compare the volume of military procurement to the amount of idle capacity. Military production runs amounting to a minor percentage of total production might absorb a major part of total idle production capacity, and have a huge effect on reducing unit costs. Besides, the rate of profit on military contracts tends to be quite a bit higher, given the fact that military goods have no “standard” market price, and the fact that prices are set by political means (as periodic Pentagon budget scandals should tell us). [235] So military contracts, small though they might be as a portion of a firm’s total output, might well make the difference between profit and loss.
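A toy calculation (hypothetical figures) shows why the idle-capacity comparison matters more than the share of total output:

    # Hypothetical figures: a military order that is only 10% of total
    # capacity absorbs a quarter of the firm's idle capacity.
    capacity = 100_000
    civilian_demand = 60_000   # firm runs at 60% without contracts
    military_order = 10_000    # 10% of capacity
    idle = capacity - civilian_demand   # 40,000 units of slack
    print(military_order / idle)        # 0.25 of idle capacity absorbed
    # With the unit-cost function sketched earlier, utilization rising
    # from 60% to 70% cuts unit cost from about 18.67 to about 16.29,
    # a saving spread over every unit sold, civilian and military alike.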

Seymour Melman described the “permanent war economy” as a privately-owned, centrally-planned economy that included most heavy manufacturing and high tech industry. This “state-controlled economy” was based on the principles of “maximization of costs and of government subsidies.” [236]

It can draw on the federal budget for virtually unlimited capital. It operates in an insulated, monopoly market that makes the state-capitalist firms, singly and jointly, impervious to inflation, to poor productivity performance, to poor product design and poor production managing. The subsidy pattern has made the state-capitalist firms failure-proof. That is the state-capitalist replacement for the classic self-correcting mechanisms of the competitive, cost-minimizing, profit-maximizing firm. [237]

A great deal of what is called “progress” amounts, not to an increase in the volume of consumption per unit of labor, but to an increase in the inputs consumed per unit of consumption—namely, the increased cost and technical sophistication entailed in a given unit of output, with no real increase in efficiency.

The chief virtue of the military economy is its utter unproductivity. That is, it does not compete with private industry to supply any good for which there is consumer demand. But military production is not the only such area of unproductive government spending. Neo-Marxist Paul Mattick elaborated on the theme in a 1956 article. The overbuilt corporate economy, he wrote, ran up against the problem that “[p]rivate capital formation... finds its limitation in diminishing market-demand.” The State had to absorb part of the surplus output; but it had to do so without competing with corporations in the private market. Instead, “[g]overnment-induced production is channeled into non-market fields—the production of non-competitive public-works, armaments, superfluities and waste.” [238] As a necessary result of this state of affairs,

so long as the principle of competitive capital production prevails, steadily growing production will in increasing measure be a “production for the sake of production,” benefiting neither private capital nor the population at large. This process is somewhat obscured, it is true, by the apparent profitability of capital and the lack of large-scale unemployment. Like the state of prosperity, profitability, too, is now largely government manipulated. Government spending and taxation are managed so as to strengthen big business at the expense of the economy as a whole.... In order to increase the scale of production and to accummulate [sic] capital, government creates “demand” by ordering the production of non-marketable goods, financed by government borrowings. This means that the government avails itself of productive resources belonging to private capital which would otherwise be idle. [239]

Such consumption of output, while not always directly profitable to private industry, serves a function analogous to foreign “dumping” below cost, in enabling industry to operate at full capacity despite the insufficiency of private demand to absorb the entire product at the cost of production.

It’s interesting to consider how many segments of the economy have a guaranteed market for their output, or a “conscript clientele” in place of willing consumers. The “military-industrial complex” is well known. But how about the state’s education and penal systems? How about the automobile-trucking-highway complex, or the civil aviation complex? Foreign surplus disposal (“export dependent monopoly capitalism”) and domestic surplus disposal (government purchases) are different forms of the same phenomenon.

E. Mene, Mene, Tekel, Upharsin (a Critique of Sloanism’s Defenders)

Although Galbraith and Chandler commonly justified the corporation’s power over the market in terms of its social benefits, they had things exactly backward. The “technostructure” can survive because it is enabled to be less responsive to consumer demand. An oligopoly firm in a cartelized industry, in which massive, inefficient bureaucratic corporations share the same bureaucratic culture, is protected from competition. The “innovations” Chandler so prized are made by a leadership completely out of touch with reality. These “innovations” succeed because they are determined by the organization for its own purposes, and the organization has the power to impose top-down “change” on a cartelized market, with little regard to consumer preferences, instead of responding flexibly to them. “Innovative strategies” are based, not on finding out what people want and providing it, but on inventing ever-bigger hammers and then forcing us to be nails. The large corporate organization is not more efficient at accomplishing goals received from outside; it is more efficient at accomplishing goals it sets for itself for its own purposes, and then using its power to adapt the rest of society to those goals.

So to turn to our original point, the apostles of mass production have all, at least tacitly, identified the superior efficiency of the large corporation with its control over the external environment. Sloanist mass production subordinates the consumer, and the rest of outside society, to the institutional needs of the corporation.

Chandler himself admitted as much, in discussing what he called a strategy of “productive expansion.” Big business added new outlets that permitted it to make “more complete use” of its “centralized services and facilities.” [240] In other words, “efficiency” is defined by the existence of “centralized facilities,” as such; efficiency is then promoted by finding ways to make people buy the stuff the centralized facilities can produce running at full capacity.

The authoritarianism implicit in such thinking is borne out by Chandler disciple William Lazonick’s circular understanding of “organizational success,” as he discusses it in his survey of “innovative organizations” in Part III of Business Organization and the Myth of the Market Economy. [241] The centralized, managerialist technostructure is the best vehicle for “organizational success”—defined as what best suits the interests of the centralized, managerialist technostructure. And of course, such “organizational success” has little or nothing to do with what society outside that organization might decide, on its own initiative, that it wants. Indeed (as Galbraith argued), “organizational success” requires institutional mechanisms to prevent outside society from doing what it wants, in order to provide the levels of stability and predictable demand that the technostructure needs for its long planning horizons. These theories amount, in practice, to a circular argument that oligopoly capitalism is “successful” because it is most efficient at achieving the ends of oligopoly capitalism.

Lazonick’s model of “successful capitalist development” raises the question “successful” for whom? His “innovative organization” is no doubt “successful” for the people who make money off it—but not for those at whose expense they make money. It is only “success” if one posits the goals and values of the organization as those of society, and acquiesces in whatever organizational supports are necessary to impose those values on the rest of society.

His use of the expression “value-creating capabilities” seems to have very little to do with the ordinary understanding of the word “value” as finding out what people want and then producing it more efficiently than anyone else. According to his (and Chandler’s and Galbraith’s) version of value, rather, the organization decides what it wants to produce based on the interests of its hierarchy, and then uses its organizational power to secure the stability and control it needs to carry out its self-determined goals without interference from the people who actually buy the stuff.

This parallels Chandler’s view of “organizational capabilities,” which he seemed to identify with an organization’s power over the external environment. A telling example, as we saw in Chapter One, is Chandler’s book on the tech industry. [242] For Chandler, “organizational capabilities” in the consumer electronics industry amounted to the artificial property rights by which the firm was able to exercise ownership rights over technology and over the skill and situational knowledge of its employees, and to prevent the transfer of technology and skill across corporate boundaries. Thus, his chapter on the history of the consumer electronics industry through the mid-20th century is largely an account of what patents were held by which companies, and who subsequently bought them.

The “innovation” Chandler and Lazonick lionize means, in practice, 1) developing processes so capital-intensive and high-tech that, if all costs were fully internalized in the price of the goods produced, consumers would prefer simpler and cheaper models; or 2) developing products so complex and prone to breakdown that, if cartelized industry weren’t able to protect its shared culture from outside competition, the consumer would prefer a more durable and user-friendly model. Cartelized, over-built industry deals with overproduction through planned obsolescence, and through engineering a mass-consumer culture, and succeeds because cartelization restricts the range of consumer choice.

The “innovative products” that emerge from Chandler’s industrial model, all too often, are what engineers call “gold-plated turds”: horribly designed products with proliferating features piled one atop another with no regard to the user’s needs, ease of use, dependability or reparability. For a good example, compare the acceptable Word 2003 to the utterly godawful Word 2007. [243]

Chandler’s version of “successful development” is a roaring success indeed, if we start with the assumption that society should be reengineered to desire what the technostructure wants to produce.

Robin Marris described this approach quite well. The bureaucratic culture of the corporation, he wrote,

is likely to divert emphasis from the character of the goods and services produced to the skill with which these activities are organized.... The concept of consumer need disappears, and the only question of interest... is whether a sufficient number of consumers, irrespective of their “real need” can be persuaded to buy [a proposed new product]. [244]

As the satirist John Gall put it, the large organization tends to redefine the consumption of inputs as outputs.

A giant program to conquer cancer is begun. At the end of five years, cancer has not been conquered, but one thousand research papers have been published. In addition, one million copies of a pamphlet entitled “You and the War Against Cancer” have been distributed. These publications will absolutely be regarded as Output rather than Input. [245]

The marketing “innovations” Chandler trumpeted in Scale and Scope—in foods, for example, the techniques for “refining, distilling, milling, and processing” [246]—were actually expedients for ameliorating the inefficiencies imposed by large-scale production and long-distance distribution of refined white flour, inferior in taste and nutrition to fresh-milled local flour, but which would keep for long-term storage; gas-ripened rubber tomatoes and other vegetables grown for transportability rather than taste; etc. The standard American diet of refined white flour, hydrogenated oils, and high fructose corn syrup is in large part a tribute to Chandler.

F. The Pathologies of Sloanism

Not only are the large and capital-intensive manufacturing corporations themselves characterized by high overhead and bureaucratic style; their organizational culture contaminates the entire system, becoming a hegemonic norm copied even by small organizations, labor-intensive firms, cooperatives and non-profits. In virtually every field of endeavor, as Goodman put it, there is a “need for amounts of capital out of proportion to the nature of the enterprise.” Every aspect of social life becomes dominated by the high overhead organization.

Goodman sorts organizations into a simple schema. Categories A and B, respectively, are “enterprises extrinsically motivated and interlocked with the other centralized systems,” and “enterprises intrinsically motivated and tailored to the concrete product or service.” The two categories are each subdivided, roughly, into profit and nonprofit classes.

The interesting thing is that the large institutional nonprofits (Red Cross, Peace Corps, public schools, universities, etc.) are not counterweights to for-profit culture. Rather, they share the same institutional culture: “status salaries and expense accounts are equally prevalent, excessive administration and overhead are often more prevalent, and there is less pressure to trim costs.”

Rather than the state and large nonprofits acting as a “countervailing power” against large for-profit enterprise, as in Galbraith’s schema, what happens more often is a coalition of the large for-profit and the large nonprofit:

...the military-industrial complex, the alliance of promoters, contractors, and government in Urban Renewal; the alliance of universities, corporations, and government in research and development. This is the great domain of cost-plus. [247]

Goodman contrasts the bureaucratic organization with the small, libertarian organization. “What swell the costs in enterprises carried on in the interlocking centralized systems of society, whether commercial, official, or non-profit institutional,”

are all the factors of organization, procedure, and motivation that are not directly determined by the function and by the desire to perform it. These are patents and rents, fixed prices, union scales, featherbedding, fringe benefits, status salaries, expense accounts, proliferating administration, paper work, permanent overhead, public relations and promotion, waste of time and skill by departmentalizing task-roles, bureaucratic thinking that is penny-wise and pound-foolish, inflexible procedure and tight scheduling that exaggerate contingencies and overtime. But when enterprises can be carried on autonomously by professionals, artists, and workmen intrinsically committed to the job, there are economies all along the line. People make do on means. They spend on value, not convention. They flexibly improvise procedures as opportunity presents and they step in in emergencies. They do not watch the clock. The available skills of each person are put to use. They eschew status and in a pinch accept subsistence wages. Administration and overhead are ad hoc. The task is likely to be seen in its essence rather than abstractly.

Instead of expensive capital outlays, the ad hoc organization uses the spare capacity of small-scale capital goods its members already own, along with recycled or vernacular building materials. The staff of a small self-managed organization are free to use their own judgment and ingenuity in formulating solutions to unforeseen problems, cutting costs, and so forth. And because the staff are often the source of the capital investments, they are likely to be quite creative in finding ways to save money.

A couple of things come to mind here. First, Friedrich Hayek’s treatment of distributed knowledge: those directly engaged in a task are usually the best source of ideas for improving its efficiency. And second, Milton Friedman’s ranking of relative efficiencies, in ascending order: 1) people spending other people’s money on other people; 2) people spending other people’s money on themselves; 3) people spending their own money on other people; and 4) people spending their own money on themselves.

The staff of a small, self-directed undertaking can afford to throw themselves into maximizing their effectiveness, because they know the efficiency gains they produce won’t be appropriated by absentee owners or senior management who simply use the higher productivity to skim more profit off the top or to lay off some of the staff. Most of the features of Weberian bureaucracy and hierarchical systems of control—job descriptions, tracking forms and controls, standard procedures, and the like—result from the fact that the workforce has absolutely no rational interest in expending effort or working effectively, beyond the bare minimum required to keep the employer in business and to avoid getting fired.

Goodman’s chapter on “Comparative Costs” in People or Personnel is a long series of case studies contrasting the costs of bureaucratic and ad hoc organizations. [248] He refers, for example, to the practices at a large corporate TV station (“the usual featherbedding of stagehands to provide two chairs,” or paying technicians “twice $45 to work the needle on a phonograph”)—jobs that would be done by the small permanent staff at a nonprofit station run out of City College of New York. [249] The American Friends’ Voluntary International Service Assignments carried almost no administrative costs, compared to the Peace Corps’ enormous cost of thousands of dollars per volunteer. [250]

The Housing Board’s conventional Urban Renewal proposal in Greenwich Village would have bulldozed a neighborhood containing many useful buildings, to be replaced by “the usual bureaucratically designed tall buildings,” at a cost of $30 million and a net increase of 300 dwelling units. The neighborhood offered a counter-proposal that ruled out demolishing anything salvageable or relocating anyone against their wishes; it would have provided a net increase of 475 new units at a cost of $8.5 million. Guess which one was chosen? [251]

Most of the per-pupil cost of conventional urban public schools, as opposed to alternative or experimental schools, results from administrative overhead and the immense cost of buildings and other materials built to a special set of specifications, at some central location, on some of the most expensive real estate in town. His hypothetical cooperative prep school cost about a third as much per pupil as the typical high school. [252] This is a thought experiment I’d repeatedly conducted for myself long before ever reading Goodman: figuring the cost for twenty or so parents to set up their own schooling cooperative, renting a house for classroom space and hiring a few part-time instructors, and then trying to imagine how one could possibly waste enough money to come up with the $8,000 or more per pupil that the public schools typically spend.
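For what it’s worth, here is one way the arithmetic might run; the round numbers below are purely illustrative guesses of mine, not figures from Goodman or from any actual school:

\[
20 \text{ pupils} \times \$8{,}000 = \$160{,}000 \text{ per year (the public-school budget for the same children)}
\]
\[
\underbrace{\$15{,}000}_{\text{rented house}} + \underbrace{3 \times \$25{,}000}_{\text{part-time instructors}} + \underbrace{\$10{,}000}_{\text{books and supplies}} = \$100{,}000 \approx \$5{,}000 \text{ per pupil}
\]

Even padding every line generously, it is hard to see where the remaining $60,000, let alone more, could go except into overhead.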

In the nearby town of Siloam Springs, Ark., not long after voters rejected a millage increase for the schools, the administration announced the cancellation of its planned purchase of new computers and its decision instead to upgrade existing ones. The cost of adding RAM, it was said, would be a small fraction of replacement—and yet it would result in nearly the same performance improvement. But it’s a safe guess the administration would never have considered such a thing if it hadn’t been forced to.

Another similar case is Goodman’s contrast of the tuition costs of the typical large institutional college with those of an “alternative” school like Black Mountain College (run by the faculty, on the same “scholars’ guild” model as the medieval universities). Much of the physical plant of the latter was the work of faculty and staff, and indeed for its first eight years (1933–1941) the “campus” consisted of buildings rented from a YMCA. Without any endowment or contributions, the tuition was still far lower than that of a conventional college. [253]

A more contemporary example might be the enormous cost of conventional Web 2.0 firms compared to that of their free culture counterparts. The Pirate Bay’s file-sharing operations, for example, cost only $3,000 a month—compared to estimated daily operating costs for YouTube ranging from $130,000 to $1 million! [254]
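To put the two figures on a common basis (taking the numbers in the text at face value):

\[
\$130{,}000/\text{day} \times 30 \approx \$3.9 \text{ million/month}; \qquad \frac{\$3.9 \text{ million}}{\$3{,}000} \approx 1{,}300
\]

Even at the lowest estimate, the conventional firm’s operating costs run about three orders of magnitude higher than its free culture counterpart’s.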

The contrasting styles of the ad hoc, self-managed organization and the bureaucratic, institutional organization were brought home to me in my personal experience with two libraries.

At the University of Arkansas (Fayetteville), until a few years ago, non-students were discouraged from applying for library cards by an application form that asked whether their needs could not be met instead by, among other things, relying on Interlibrary Loan services. Then the policy changed so that a library card (with a $40 annual fee) was required to use Interlibrary Loan. Never mind that a library official professed unawareness (while hardly bothering to conceal her disbelief), in her best “Oceania has always been at war with Eastasia” manner, that the library had ever promoted Interlibrary Loan as an alternative to a library card. The interesting thing was that she justified the new card purchase requirement on grounds of equity: it cost, she claimed, some $25 to process every Interlibrary Loan request. I was utterly dumbfounded. If this were true, you’d think the ILL bureaucracy would be ashamed to admit it. How does Amazon.com or AbeBooks manage to stay in business, when buying a used book and shipping it cross-country usually costs me less than that, shipping and handling included? The only answer must be that the library has far higher levels of bureaucratic overhead, for performing an analogous function, than even a large corporation.

At the Springdale, Ark. public library, I submitted a written complaint to their Technology Coordinator regarding the abysmally poor performance of their new desktop software after the recent “upgrade,” compared to what they had had before.

Comment: Please don’t automatically upgrade the desktops to the latest version of Windows and other MS accessories. In general, if you already have something from Microsoft that works in a minimally acceptable manner, you should quit while you’re ahead; if Bill Gates offers you something “new” and “better,” run in the opposite direction as fast as you can. Since you “upgraded” the computers, if you can call it that, usability has suffered a nosedive. I used to have no problem emailing myself attachments and opening them up here to work on. Now if I want to print something out, I have to open it as a Google Document and paste it into a new Word file. What’s more, I can’t edit the file here and save it to the desktop so I can email it to myself again. Any time I attempt to save a textfile on your computers I’m blocked from doing so. In addition, if you compare Word 2007 to the Word 2003 you previously had on the desktop menu, the former is a classic example of what engineers call a “gold-plated turd.” It’s got so many proliferating “features” that the editing dashboard has to be tabbed to fit them all in. To summarize: your computers worked just fine for all my purposes before the so-called “upgrade,” and now they’re godawful. Please save yourselves money in future and stick with what works instead of being taken in by Microsoft’s latest poorly designed crap.

The Coordinator, C.M., replied (rather lamely in my opinion) that “the recent upgrade to MicroSoft Office 2007 on both the Library’s public and staff computers is in line with what other libraries and companies across the country currently offer/use as office productivity software.” And the refusal to save files to desktop, which the previous software had done without a problem, was “a standard security feature.”

Now, this would be perfectly understandable from a grandma, who uses the computer mainly to read email from her grandkids, and buys her granddaughter a PC with Vista and Word 2007 installed because “I heard it’s the latest thing.” But this was an IT officer—someone who’s supposed to be at least vaguely aware of what’s going on.

So I told her the software was a piece of crap that didn’t work, and Ms. C. M. (although I’m sure it wasn’t her intention) told me why it was a piece of crap that didn’t work: Springdale’s library adopted it because it was what all the other libraries and corporations use. I replied, probably a little too testily:

...I’m afraid the fact that an upgrade “in line with what other libraries and companies across the country currently offer/use” actually made things worse reflects unflatteringly on the institutional culture that predominates in organizations across the country, and in my opinion suggests the folly of being governed by the institutional culture of an industry rather than bottom-up feedback from one’s own community of users. I’ve worked in more than one job where company policy reflected the common institutional culture of the industry, and whatever “best practice” du jour the other CEOs solemnly assured our CEO was working like gangbusters. Had there been less communication between the people at the tops of the pyramids, and more communication between the top of each pyramid with those below, the people in direct contact with the situation might have cut through the... official happy talk and told them what a total clusterf**** their policies had resulted in.

For some reason, I never heard back.

The state and its affiliated corporate system, by mandating minimum levels of overhead for supplying all human wants, creates what Ivan Illich called “radical monopolies.”

I speak about radical monopoly when one industrial production process exercises an exclusive control over the satisfaction of a pressing need, and excludes nonindustrial activities from competition.... Radical monopoly exists where a major tool rules out natural competence. Radical monopoly imposes compulsory consumption and thereby restricts personal autonomy. It constitutes a special kind of social control because it is enforced by means of the imposed consumption of a standard product that only large institutions can provide. [255] Radical monopoly is first established by a rearrangement of society for the benefit of those who have access to the larger quanta; then it is enforced by compelling all to consume the minimum quantum in which the output is currently produced.... [256]

The goods supplied by a radical monopoly can be obtained only at comparatively high expense, requiring the sale of wage labor to pay for them rather than the direct use of one’s own labor to supply one’s own needs. The effect of radical monopoly is that capital-, credential- and tech-intensive ways of doing things crowd out cheaper, more user-friendly, more libertarian and decentralist technologies. The individual becomes increasingly dependent on credentialed professionals, and on unnecessarily complex and expensive gadgets, for all the needs of daily life. He experiences an increased cost of subsistence, owing to the barriers that mandatory credentialing erects against transforming one’s labor directly into use-value (Illich’s “convivial” production), and to the increasing tolls levied by the licensing cartels and other gatekeeper groups.

People have a native capacity for healing, consoling, moving, learning, building their houses, and burying their dead. Each of these capacities meets a need. The means for the satisfaction of these needs are abundant so long as they depend on what people can do for themselves, with only marginal dependence on commodities.... These basic satisfactions become scarce when the social environment is transformed in such a manner that basic needs can no longer be met by abundant competence. The establishment of a radical monopoly happens when people give up their native ability to do what they can do for themselves and each other, in exchange for something “better” that can be done for them only by a major tool. Radical monopoly reflects the industrial institutionalization of values.... It introduces new classes of scarcity and a new device to classify people according to the level of their consumption. This redefinition raises the unit cost of valuable services, differentially rations privileges, restricts access to resources, and makes people dependent. [257]

The overall process is characterized by “the replacement of general competence and satisfying subsistence activities by the use and consumption of commodities;”

the monopoly of wage-labor over all kinds of work; redefinition of needs in terms of goods and services mass-produced according to expert design; finally, the arrangement of the environment... [to] favor production and consumption while they degrade or paralyze use-value oriented activities that satisfy needs directly. [258]

Leopold Kohr observed that “what has actually risen under the impact of the enormously increased production of our time is not so much the standard of living as the level of subsistence.” [259] Or as Paul Goodman put it, “decent poverty is almost impossible.” [260]

For example: subsidized fuel, freeways, and automobiles generate distance between things, so that “[a] city built around wheels becomes inappropriate for feet.” [261] The car becomes an expensive necessity; feet and bicycle are rendered virtually useless, and the working poor are forced to earn the additional wages to own and maintain a car just to be able to work at all.

Radical monopoly has a built-in tendency to perpetuate itself and expand. First of all, those running large hierarchical organizations tend to solve the problems of bureaucracy by adding more of it. In the hospital where I work, this means that problems resulting from understaffing are “solved” by new tracking forms that further reduce nurses’ available time for patient care—when routine care already frequently goes undone, and nurses stay over two or three hours past the end of a twelve-hour shift to finish paperwork.

They solve problems, in general, with a “more of the same” approach. In Illich’s excellent phrase, it’s an attempt to “solve a crisis by escalation.” [262] It’s what Einstein referred to as trying to solve problems “at the same level of thinking we were at when we created them.” Or as E. F. Schumacher said of intellectuals, technocrats likewise “always tend to try and cure a disease by intensifying its causes.” [263]

The way the process works, in Paul Goodman’s words, is that “[a] system destroys its competitors by pre-empting the means and channels and then proves that it is the only conceivable mode of operating.” [264]

The effect is to make subsistence goods available only through institutional providers, in return for money earned by wages, at enormous markup. As Goodman put it, it makes decent poverty impossible. To take the neoliberals’ statistical gushing over increased GDP and stand it on its head, “[p]eople who were poor and had food now cannot subsist on ten or fifty times the income.” [265] “Everywhere one turns... there seems to be a markup of 300 and 400 percent, to do anything or make anything.” [266] And paradoxically, the more “efficiently” an organization is run, “the more expensive it is per unit of net value, if we take into account the total social labor involved, both the overt and the covert overhead.” [267]

Goodman points to countries where the official GDP is one fourth that of the U.S., and yet “these unaffluent people do not seem four times ‘worse off’ than we, or hardly worse off at all.” [268] The cause lies in the increasing portion of GDP that goes to support and overhead, rather than direct consumption. Most of the costs do not follow from the technical requirements of producing direct consumption goods themselves, but from the mandated institutional structures for producing and consuming them.

It is important to notice how much the various expensive products and services of corporations and government make people subject to repairmen, fees, commuting, queues, unnecessary work, dressing just for the job; and these things often prevent satisfaction altogether. [269]

A related phenomenon is what Kenneth Boulding called the “non-proportional change” principle of structural development: the larger an institution grows, the larger the proportion of resources that must be devoted to secondary, infrastructure and support functions rather than the actual primary function of the institution. “As any structure grows, the proportions of the parts and of its significant variables cannot remain constant.... This is because a uniform increase in the linear dimensions of a structure will increase all its areas as the square, and its volume as the cube, of the increase in the linear dimension....” [270]

Leopold Kohr gave the example of a skyscraper: the taller the building, the larger the percentage of floorspace that must be taken up with elevator shafts and stairwells, heating and cooling ducts, and so forth. Eventually, the building reaches the point where the space on the last floor added will be cancelled out by the increased space required for support structures. This is hardly theoretical: Kohr gave the example in the 1960s of a $25 billion increase in GNP, $18 billion (or 72%) of which went to administrative and support costs of various sorts. [271]
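Kohr’s skyscraper can be put into a toy model (my own deliberately oversimplified sketch, not Kohr’s arithmetic). Suppose each of $n$ floors has gross area $G$, and that serving $n$ floors requires shafts and service cores occupying area $c\,n$ on every floor. Then total usable area is

\[
U(n) = n\,(G - c\,n) = nG - c\,n^2, \qquad \frac{dU}{dn} = G - 2c\,n,
\]

which is maximized at $n^* = G/2c$. Beyond that height, each added floor subtracts more space from the floors below than it contributes, which is exactly the point at which Kohr says the last floor is cancelled out.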

G. Mandatory High Overhead

As a pathology, this phenomenon deserves a separate section of its own. It is a pathology not only of the Sloanist mass-production economy, but also of local economies under the distorting effects of zoning, licensing, “safety” and “health” codes, and other regulations whose primary effect is to put a floor under overhead costs. Social regulations and commercial prohibitions, as Thomas Hodgskin said, “compel us to employ more labour than is necessary to obtain the prohibited commodity,” or “to give a greater quantity of labour to obtain it than nature requires,” and put the difference into the pockets of privileged classes. [272]

Such artificial property rights enable the privileged to appropriate productivity gains for themselves, rather than allowing their benefits to be socialized through market competition.

But they do more than that: they make it possible to collect tribute for the “service” of not obstructing production. As John R. Commons observed, the alleged “service” performed by the holder of artificial property rights, in “contributing” some “factor” to production, is defined entirely by his ability to obstruct access to it. As I wrote in Studies in Mutualist Political Economy, marginalist economics

treated the existing structure of property rights over “factors” as a given, and proceeded to show how the product would be distributed among these “factors” according to their marginal contribution. By this method, if slavery were still extant, a marginalist might with a straight face write of the marginal contribution of the slave to the product (imputed, of course, to the slave-owner), and of the “opportunity cost” involved in committing the slave to one or another use. [273]

Such privileges, Maurice Dobb argued, were analogous to a state grant of authority to collect tolls (much like the medieval robber barons who obstructed commerce between their petty principalities):

Suppose that toll-gates were a general institution, rooted in custom or ancient legal right. Could it reasonably be denied that there would be an important sense in which the income of the toll-owning class represented “an appropriation of goods produced by others” and not payment for an “activity directed to the production or transformation of economic goods?” Yet toll-charges would be fixed in competition with alternative roadways, and hence would, presumably, represent prices fixed “in an open market....” Would not the opening and shutting of toll-gates become an essential factor of production, according to most current definitions of a factor of production, with as much reason at any rate as many of the functions of the capitalist entrepreneur are so classed to-day? This factor, like others, could then be said to have a “marginal productivity” and its price be regarded as the measure and equivalent of the service it rendered. At any rate, where is a logical line to be drawn between toll-gates and property-rights over scarce resources in general? [274]

Thorstein Veblen made a similar distinction between property as capitalized serviceability, versus capitalized disserviceability. The latter consisted of power advantages over rivals and the public which enabled owners to obstruct production. [275]

At the level of the national corporate economy, a central function of government is to artificially inflate the levels of capital outlay and overhead needed to undertake production.

The single biggest barrier to modular design for common platforms is probably “intellectual property.” If it were abolished, there would be no legal barrier against many small companies producing competing modular components or accessories for the same platform, or even against big companies producing modular components designed for interoperability with other companies’ products.

What’s more, with the barrier to such competition removed, there would be a great deal of competitive advantage in designing one’s product to be conducive to the production of modular components by other companies. In a market where the consumer preferred the highest possible degree of interoperability and cross-compatibility, whether to maximize his freedom to mix ‘n’ match components or to extend the lifetime of the product, a product designed with such behavior in mind would have a leg up on competing products designed to be incompatible with other companies’ accessories and modules. In other words, products designed to be easily used with other people’s stuff would sell better. Imagine if

Ford could produce engine blocks that were compatible with GM chassis, and vice versa;

a whole range of small manufacturers could produce competing spare parts and modular accessories for Ford or GM vehicles;

such small companies, individually or in networks, could produce entire competing car designs around the GM or Ford engine block;

or many small assembly plants sprang up to put together automobiles from engine blocks ordered from Ford or GM, combined with other components produced by themselves or a wide variety of other small companies on the Emilia-Romagna networked model.

Under those circumstances, there would be no legal barrier to other companies producing entire modularization-friendly design platforms around Ford or GM products, and Ford and GM would find it to their competitive advantage to facilitate compatibility with such designs.
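The logic of interface compatibility can be made concrete with a minimal software sketch. This is purely illustrative, with every class and method name invented, and it is an analogy rather than anything from the automotive world. The point it encodes is that when a component interface is published and open, compatibility is a checkable property rather than a legal privilege:

# Illustrative sketch only: a software analogy for the open modular
# platform described above. All names here are invented.
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class MountSpec:
    """A published, openly licensed interface: bolt pattern, torque limit."""
    bolt_pattern: str
    max_torque_nm: int


class EngineBlock(Protocol):
    """Any vendor's block satisfying the open spec is interchangeable."""
    mount: MountSpec

    def output_torque_nm(self) -> int: ...


@dataclass
class BigThreeBlock:
    """A major manufacturer's engine block (hypothetical)."""
    mount: MountSpec = MountSpec("standard-4x100", 400)

    def output_torque_nm(self) -> int:
        return 350


@dataclass
class SmallShopBlock:
    """A small competitor's block, interoperable because the spec is open."""
    mount: MountSpec = MountSpec("standard-4x100", 380)

    def output_torque_nm(self) -> int:
        return 300


def fits(chassis_pattern: str, engine: EngineBlock) -> bool:
    # With no patent gatekeeping in this model, any assembler can combine
    # components from any vendor, so long as the published interface matches.
    return engine.mount.bolt_pattern == chassis_pattern


if __name__ == "__main__":
    for block in (BigThreeBlock(), SmallShopBlock()):
        print(type(block).__name__, fits("standard-4x100", block))

Under an “intellectual property” regime, by contrast, the MountSpec equivalent is secret or patent-encumbered, and the fits() check is replaced by a licensing negotiation.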

In keeping with Sloanism’s emphasis on planned obsolescence to generate artificially high levels of product turnover, products are deliberately designed to discourage or impede repair by the user.

... [A]n engineering culture has developed in recent years in which the object is to “hide the works,” rendering the artifacts we use unintelligible to direct inspection.... This creeping concealedness takes various forms. The fasteners holding small appliances together now often require esoteric screwdrivers not commonly available, apparently to prevent the curious or the angry from interrogating the innards. By way of contrast, older readers will recall that until recent decades, Sears catalogues included blown-up parts diagrams and conceptual schematics for all appliances and many other mechanical goods. It was simply taken for granted that such information would be demanded by the consumer. [276]

Julian Sanchez gives the specific example of Apple’s iPhone. The scenario, as he describes it, starts when

1) Some minor physical problem afflicts my portable device—the kind of thing that just happens sooner or later when you’re carting around something meant to be used on the go. In this case, the top button on my iPhone had gotten jammed in, rendering it nonfunctional and making the phone refuse to boot normally unless plugged in.

2) I make a pro forma trip to the putative “Genius Bar” at an Apple Store out in Virginia. Naturally, they inform me that since this doesn’t appear to be the result of an internal defect, it’s not covered. But they’ll be only too happy to service/replace it for something like $250, at which price I might as well just buy a new one....

3) I ask the guy if he has any tips if I’m going to do it myself—any advice on opening it, that sort of thing. He’s got no idea....

4) Pulling out a couple of tiny screwdrivers, I start in on the satanic puzzlebox casing Apple locks around all its hardware. I futz with it for at least 15 minutes before cracking the top enough to get at the inner works.

5) Once this is done, it takes approximately five seconds to execute the necessary repair by unwedging the jammed button.

I have two main problems with this. First, you’ve got what’s obviously a simple physical problem that can very probably be repaired in all of a minute flat with the right set of tools. But instead of letting their vaunted support guys give this a shot, they’re encouraging customers—many of whom presumably don’t know any better—to shell out a ludicrous amount of money to replace it and send the old one in. I appreciate that it’s not always obvious that a problem can be this easily remedied on site, but in the instance, it really seems like a case of exploiting consumer ignorance.

Second, the iPhone itself is pointlessly designed to deter self service. Sure, the large majority of users are never going to want to crack their phone open. Then again, most users probably don’t want to crack their desktops or laptops open, but we don’t expect manufacturers to go out of their way to make it difficult to do. [277]

The iPhone is a textbook example of a “blobject,” the product of industrial design geared toward the cheap injection-molding of streamlined plastic artifacts. Eric Hunting writes:

Blobjects are also often deliberately irreparable and un-upgradeable—sometimes to the point where they are engineered to be unopenable without being destroyed in the process. This further facilitates planned obsolescence while also imposing limits on the consumer’s own use of a product as a way to protect market share and technology propriety. Generally, repairability of consumer goods is now impractical, as labor costs have made repair frequently more expensive than replacement, where it isn’t already impossible by design. In the 90s car companies actually toyed with the notion of welding the hoods of new cars shut, on the premise that the engineering of components had reached the state where nothing in the engine compartment needed to be serviceable over a presumed “typical” lifetime for a car (a couple of years). This, of course, would have vastly increased the whole replacement rate for cars and allowed companies to hide a lot of dirty little secrets under that welded hood. [278]

“Intellectual property” in onboard computer software and diagnostic equipment has essentially the same effect.

As cars become vastly more complicated than models made just a few years ago, [independent mechanic David] Baur is often turning down jobs and referring customers to auto dealer shops. Like many other independent mechanics, he does not have the thousands of dollars to purchase the online manuals and specialized tools needed to fix the computer-controlled machines....

Access to repair information is at the heart of a debate over a congressional bill called the Right to Repair Act. Supporters of the proposal say automakers are trying to monopolize the parts and repair industry by only sharing crucial tools and data with their dealership shops. The bill, which has been sent to the House Committee on Energy and Commerce, would require automakers to provide all information to diagnose and service vehicles.

Automakers say they spend millions in research and development and aren’t willing to give away their intellectual property. They say the auto parts and repair industry wants the bill passed so it can get patented information to make its own parts and sell them for less....

Many new vehicles come equipped with multiple computers controlling everything from the brakes to steering wheel, and automakers hold the key to diagnosing a vehicle’s problem. In many instances, replacing a part requires reprogramming the computers—a difficult task without the software codes or diagrams of the vehicle’s electrical wires....

Dealership shops may be reaping profits from the technological advancements. A study released in March by the Automotive Aftermarket Industry Association found vehicle repairs cost an average of 34 percent more at new car dealerships than at independent repair shops, resulting in $11.7 billion in additional costs for consumers annually. The association, whose members include Autozone, Jiffy Lube and other companies that provide replacement parts and accessories, contend automakers want the bill rejected so they can continue charging consumers more money.

“You pay all this money for your car, you should be able to decide where to get it repaired,” said Aaron Lowe, the association’s vice president of government affairs. Opponents of the bill counter that the information and tools to repair the vehicles are available to those willing to buy them. [279]

As Mike Masnick sums it up:

Basically, as cars become more sophisticated and computerized, automakers are locking up access to those computers, and claiming that access is protected by copyrights. Mechanics are told they can only access the necessary diagnostics if they pay huge sums — meaning that many mechanics simply can’t repair certain cars, and car owners are forced to go to dealers, who charge significantly higher fees. [280]

One of Masnick’s readers at Techdirt pointed out that a primary effect of “intellectual property” law in this case is to give manufacturers “an incentive to build crappy cars.” If automakers have “an exclusive right to fix their own products,” they will turn repair operations into a “cash cow.” (Of course, that’s exactly the same business model currently followed by companies that sell cheap platforms and make money off proprietary accessories and spare parts.) “Suddenly, the money made from repairing automobiles would outweigh the cost of selling them.”

In a free market, of course, it wouldn’t be necessary to pay for the information, or to pay proprietary prices for the tools, because software hacks and generic versions of the tools would be freely available without any legal impediment. That Congress is considering legislation to mandate the sharing of information protected by “intellectual property” law is a typical example of government’s Rube Goldberg nature: all that’s really needed is to eliminate the “intellectual property” in the first place.

One effect of the shift in importance from tangible to intangible assets is that a growing portion of product prices consists of embedded rents on “intellectual property” and other artificial property rights rather than the material costs of production. Tom Peters cited former 3M strategic planner George Hegg on the increasing portion of product “value” made up of “intellect” (i.e., the amount of final price consisting of tribute to the owners of “intellectual property”): “We are trying to sell more and more intellect and less and less materials.” Peters produces a long string of such examples:

...My new Minolta 9xi is a lumpy object, but I suspect I paid about $10 for its plastic casing, another $50 for the fine-ground optical glass, and the rest, about $640, for its intellect... [281]

It is a soft world.... Nike contracts for the production of its spiffy footwear in factories around the globe, but it creates the enormous stock value via superb design and, above all, marketing skills. Tom Silverman, founder of upstart Tommy Boy Records, says Nike was the first company to understand that it was in the lifestyle business.... Shoes? Lumps? Forget it! Lifestyle. Image. Speed. Value via intellect and pizazz. [282]

“Microsoft’s only factory asset is the human imagination,” observed The New York Times Magazine writer Fred Moody. In seminars I’ve used the slide on which those words appear at least a hundred times, yet every time that simple sentence comes into view on the screen I feel the hairs on the back of my neck bristle. [283]

A few years back, Philip Morris purchased Kraft for $12.9 billion, a fair price in view of its subsequent performance. When the accountants finished their work, it turned out that Philip Morris had bought $1.3 billion worth of “stuff” (tangible assets) and $11.6 billion of “Other.” What’s the other, the 116/129?... Call it intangibles, good-will (the U.S. accountants’ term), brand equity, or the ideas in the heads of thousands of Kraft employees around the world. [284]

Regarding Peters’ Minolta example, as Benkler points out, the marginal cost of reproducing “its intellect” is virtually zero. So about 90% of the price of that new Minolta comes from tolls to corporate gatekeepers, who have been granted control of that “intellect.”
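Working out Peters’ own numbers:

\[
\frac{\$640}{\$10 + \$50 + \$640} = \frac{640}{700} \approx 0.91,
\]

which is where the “about 90%” figure comes from: nine-tenths of the camera’s price is a toll on the zero-marginal-cost “intellect.”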

The same goes for Nike’s sneakers. I suspect the amortization cost of the physical capital used to manufacture the shoes in those Asian sweatshops, plus the cost of the sweatshop labor, is less than 10% of the price of the shoes. The wages of the workers could be tripled or quadrupled with negligible impact on the retail price.

In an economy where software and product design were the product of peer networks, unrestricted by the “intellectual property” of old corporate dinosaurs, 90% of the product’s price would evaporate overnight. To quote Michael Perelman,

the so-called weightless economy has more to do with the legislated powers of intellectual property that the government granted to powerful corporations. For example, companies such as Nike, Microsoft, and Pfizer sell stuff that has high value relative to its weight only because their intellectual property rights insulate them from competition. [285]

“Intellectual property” plays exactly the same protectionist role for global corporations that tariffs did for the old national industrial economies. Patents and copyrights are barriers, not to the movement of physical goods, but to the diffusion of technique and technology. The one, as much as the other, constitutes a monopoly of productive capability. “Intellectual property” enables the transnational corporation to benefit from the moral equivalent of tariff barriers, regardless of where it is situated. In so doing, it breaks the old link between geography and protectionism. “Intellectual property,” exactly like tariffs, serves the primary function of legally restricting who can produce a given thing for a given market. With an American tariff on a particular kind of good, the corporations producing that good have a monopoly on it only within the American market. With the “tariff” provided by a patent on the industrial technique for producing that good, the same corporations have an identical monopoly in every single country in the world that adheres to the international patent regime.

How many extra hours does the average person work each week to pay tribute to the owners of the “human imagination”?

The Consumer Product Safety Improvement Act (CPSIA) is a good illustration of how regulations put a floor under overhead. To put it in perspective, first consider how the small apparel manufacturer operates. According to Eric Husman, an engineer who blogs on lean manufacturing and whose wife is in the apparel industry, a small apparel manufacturer comes up with a lot of designs, and then produces whatever designs sell, switching back and forth between products as the orders come in. Now consider the effect the CPSIA has on this model. Its most onerous provision is its mandate of third party testing and certification, not of materials, but of every component of each separate product.

The testing and certification requires that finished products be tested, not materials, and that every component of every item must be tested separately. A price quote from a CPSIA-authorized testing facility says that testing Learning Resources’ product Let’s Tackle Kindergarten, a tackle box filled with learning tools—flash cards, shapes, counters and letters—will cost $6,144. Items made from materials known not to contain lead, or items tested to other comparable standards, must still be tested. A certified organic cotton baby blanket appliquéd with four fabrics must be tested for lead at $75 per component material. Award-winning German toy company Selecta Spielzeug—whose sustainably harvested wood toys are colored with nontoxic paints, sealed with beeswax, and compliant with European testing standards—pulled out of the United States market at the end of 2008, stating that complying with the CPSIA would require them to increase their retail prices by at least 50 percent. Other European companies are expected to follow suit.

The total cost of testing can range from $100 to thousands of dollars per product. With this level of mandated overhead per product, obviously, the only way to amortize such an enormous capital outlay is large batch production. So producing on a just-in-time basis, with low overhead, using small-scale capital goods, is for all intents and purposes criminalized. [286]
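The amortization arithmetic behind that conclusion is simple: a fixed testing cost $T$ per product design adds $T/N$ of overhead to each unit in a batch of $N$. Taking the $6,144 quote above as $T$ (the batch sizes are my own illustrative choices):

\[
\frac{\$6{,}144}{50} \approx \$123 \text{ per unit}, \qquad \frac{\$6{,}144}{10{,}000} \approx \$0.61 \text{ per unit.}
\]

The identical mandate is a rounding error for a mass producer and a prohibitive markup for a craft producer running small batches.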

The Design Piracy Prohibition Act, which Sen. Charles Schumer recently introduced for the fourth time, would have a similar effect on fashion. Essentially a DMCA for the fashion industry, it would require thousands of dollars in legal fees to secure CYA documentation of the originality of each design. Not only would it impose such fees on apparel producers of any scale, no matter how small, who produce their own designs, but—because it fails to indemnify apparel manufacturers or retailers—it would deter small producers and retailers from producing or selling the designs of small independent designers who had not paid for such a legal investigation. [287]

The National Animal Identification System (NAIS), which requires small family farms to ID-chip their livestock at their own expense, operates on the same principle.

At the local level, one of the central functions of so-called “health” and “safety” codes and of occupational licensing is to prevent people from using the idle capacity (or “spare cycles”) of what they already own anyway, and from thereby transforming it into capital goods for productive use. Such regulations mandate minimum levels of overhead (for example, by outlawing a restaurant run out of one’s own home and requiring the use of industrial-sized ovens, refrigerators, dishwashers, etc.), so that the only way to service the overhead and remain in business is to engage in large batch production.

You can’t do just a few thousand dollars worth of business a year, because the state mandates capital equipment on the scale required for a large-scale business if you engage in the business at all. Consider all the overhead costs imposed on this chef, who wanted to open a restaurant on the first floor of a hotel:

That’s when the fun began. I sketched some plans and had them drawn up by an architect ($1000). I submitted them for review to the County Building Dept. ($300). Everything was OK, except for the bathrooms. They were not ADA compliant. Newly built bathrooms must have a 5’ radius turning space for a wheelchair. No problem. I tried every configuration I could think of to accommodate the larger bathroom space without losing seating, which would mean losing revenue. No luck. I would have to eat into my storage space and replace it with a separate exterior walk-in cooler ($5,000). I would also have to reduce the dining room space slightly, so I had to plan on banquettes along the exterior wall to retain the same number of seats (banquettes vs. separate stand-alone tables: $5,000). Revised plans ($150). Re-review ($100).

Next came the Utility Dept. It seems the water main was insufficient even for the current use, a 24-suite hotel, and would need to be replaced ($10,000).

Along comes the Historical Preservation Society, a purely advisory group of starched collar, pince nez wearing fuddy-duddies (well, not literally) to offer their “better take it or else” advice, or maybe lose the Historic Status tax break for the hotel. It seems that the mushroom for the kitchen exhaust fan would be visible from the street, so could I please relocate it to the rear of the building? Pretty please? Extra ducting and more powerful fan ($5,000).

Hello Fire Dept! My plans showed a 40 seat dining room, 2 restrooms, a microscopic office, and a kitchen. My full staffing during tourist season was 4 servers, 1 dishwasher and 1 seasonal cook—total occupancy 47, myself included. The Fire Inspector said the space could accommodate 59. “But I only have 40 seats. I want luxurious space around the tables,” I pleaded. “No. It goes by square footage. 48 seats, 4 servers, 3 cooks, one dishwasher, 1 person in the office and 2 people in the restrooms.” “Why would I need 4 cooks for 40 seats when I am capable of doing that alone? And if the cooks are cooking, the servers are serving, the officer is officing, the diners are dining, then who the H#$% is in the bathrooms?” “Square footage. Code!” And therefore it went from Class B to Class A, requiring a sprinkler system for the dining room and a third exit ($10,000) in addition to the existing front door and the back kitchen door. It would have to be punched through the side wall and have a lit EXIT sign. Could it be behind the screen shielding the patrons from viewing the inside of the bathrooms every time the door opened? Oh, no! It might not be visible. The door would have to be located where 4 guests at the banquette plus their opposite companions were seated—loss of 20% of seating unless I squeezed them into smaller tables, destroying the whole planned luxurious ambience.

Pro Forma: $250K sales. $75K Food and Beverage purchases. $75K Labor cost. $75K Expenses. $25K net before taxes. Result of above experience = Fugget Aboud It!!! Loss to community: $100K income plus tips + $20K sales tax. Another “Gifte Shoppe” went into the space and closed a month after the end of tourist season. When we left town 2 years later to go sailing the Caribbean, the space was still vacant.

I might add that I had advice in all this from a retired executive who volunteered his time (small donation to Toys 4 Tots gratefully accepted) through a group that connected us. He said that in his opinion my project, budgeted at $200K, would cost upward of $1 million in NYC and perhaps SF due to higher permits and fees. [288]

At the smaller end of the spectrum, consider restrictions on informal, unlicensed daycare centers operated out of people’s homes.

MIDDLEVILLE, Mich. (WZZM) — A West Michigan woman says the state is threatening her with fines and possibly jail time for babysitting her neighbors’ children. Lisa Snyder of Middleville says her neighborhood school bus stop is right in front of her home. It arrives after her neighbors need to be at work, so she watches three of their children for 15–40 minutes until the bus comes. The Department of Human Services received a complaint that Snyder was operating an illegal child care home. DHS contacted Snyder and told her to get licensed, stop watching her neighbors’ kids, or face the consequences. “It’s ridiculous.” says Snyder. “We are friends helping friends!” She added that she accepts no money for babysitting. Mindy Rose, who leaves her 5-year-old with Snyder, agrees. “She’s a friend... I trust her.” State Representative Brian Calley is drafting legislation that would exempt people who agree to care for non-dependent children from daycare rules as long as they’re not engaged in a business. “We have babysitting police running around this state violating people, threatening to put them in jail or fine them $1,000 for helping their neighbor (that) is truly outrageous” says Rep. Calley. A DHS spokesperson would not comment on the specifics of the case but says they have no choice but to comply with state law, which is designed to protect Michigan children. [289]

Another good example is the medallion system of licensing taxicabs, where a license to operate a cab costs into the hundreds of thousands of dollars. The effect of the medallion system is to criminalize the countless operators of gypsy cab services. For the unemployed person or unskilled laborer, driving carless retirees around on their errands for an hourly fee seems like an ideal way to transform one’s labor directly into a source of income, without doing obeisance to the functionaries of some corporate Human Resources department.

The primary purpose of the medallion system is not to ensure safety. That could be accomplished just as easily by mandating an annual vehicle safety inspection, a criminal background check, and a driving record check (probably all that the licensed taxi firms do anyway, and with questionable results, judging by my casual observation of both vehicles and drivers). And it would probably cost under a hundred bucks rather than three hundred thousand. No, the primary purpose of the medallion system is to allow the owners of licenses to screw both the consumer and the driver.

Local building codes amount to a near-as-dammit lock-in of conventional techniques, pacing innovation in building techniques according to the consensus preferences of established contracting firms. As a result, building contractors are protected against vigorous competition from cheap, vernacular local materials, and from modular or prefab designs that are amenable to self-building.

In the case of occupational licensing, a good example is the entry barrier to employment as a surveyor today, compared to George Washington’s day. As Vin Suprynowicz points out, Washington had no formal schooling until he was eleven, and only two years of it thereafter, yet was still able to learn enough geometry, trigonometry and surveying to get a job paying $100,000 annually in today’s terms.

How much government-run schooling would a youth of today be told he needs before he could contemplate making $100,000 a year as a surveyor—a job which has not changed except to get substantially easier, what with hand-held computers, GPS scanners and laser range-finders? Sixteen years, at least—18, more likely. [290]

The licensing of retailers protects conventional retail establishments against competition from buying clubs and other low-overhead establishments run out of people’s homes, by restricting their ability to sell to the general public. For example, a family-run food-buying co-op in LaGrange, Ohio, whose purpose was to put local farmers into direct contact with local consumers, was raided by sheriff’s deputies for allegedly operating as an unlicensed retail establishment.

A spokeswoman at the Department of Agriculture said its officers were at the scene in an advisory role. A spokeswoman at the county health agency refused to comment except to explain it was a “licensing” issue regarding the family’s Manna Storehouse. [291]

Never mind the illegitimacy of the legal distinction between a private bulk food-buying club and a public retail establishment, or the licensing requirement for selling to the general public. The raid was a textbook entrapment operation, in which an undercover agent had persistently badgered the family to sell him eggs. Apparently the family had gotten on the bad side of local authorities by responding in an inadequately deferential manner to peremptory accusations that they were running a store.

The confrontation began developing several years ago when local health officials demanded the family hold a retail food license in order to run their co-op. Thompson said the family wrote a letter questioning that requirement and asking for evidence that would suggest they were operating a food store and how their private co-op was similar to a WalMart. The Stowers family members simply “take orders from (co-op) members … then divide up the food,” Thompson explained.

“The health inspector didn’t like the tone of the letter,” Thompson said, and the result was that law enforcement officials planned, staged and carried out the Dec. 1 SWAT-style raid on the family’s home. Thompson said he discussed the developments of the case with the health inspector personally. “He didn’t think the tone of that letter was appropriate,” Thompson said. “I’ve seen the letter. There’s not anything there that’s belligerent.”

Thompson explained the genesis of the raid was a series of visits to the family by an undercover agent for the state agriculture agency. “He showed up (at the Stowers’ residence) unannounced one day,” Thompson explained, and “pretended” to be interested in purchasing food. The family explained the co-op was private and they couldn’t provide service to the stranger. The agent then returned another day, stayed for two hours, and explained how he thought his sick mother would be helped by eggs from range-fed chickens to which the Stowers had access. The family responded that they didn’t sell food and couldn’t help. When he refused to leave, the family gave him a dozen eggs to hasten his departure, Thompson explained. Despite protests from the family, the agent left some money on a counter and departed. On the basis of that transaction, the Stowers were accused of engaging in the retail sale of food, Thompson said.... He said the state agency came from “nowhere” and then worked to get the family involved “in something that might require a license.”...

Pete Kennedy of the Farm-to-Consumer Legal Defense Fund said the case was government “overreaching” and was designed more to intimidate and “frighten people into believing that they cannot provide food for themselves.” “This is an example where, once again, the government is trying to deny people their inalienable, fundamental right to produce and consume the foods of their choice,” said Gary Cox, general counsel for the FTCLDF. “The purpose of our complaint is to correct that wrong.” [292]

As much as I love the local brew pub I visit on a weekly basis, I was taken aback by the manager’s complaint about street hot dog vendors being allowed to operate during street festivals. It was unfair for the city to allow it, he said, because an established indoor business with all its associated overhead costs couldn’t compete.

The system is effectively rigged to ensure that nobody can start a small business without being rich. Everyone else can get by on wage labor and like it (and of course that works out pretty well for the people trying to hire wage labor on the most advantageous terms, don’t you think?). Roderick Long asks,

In the absence of licensure, zoning, and other regulations, how many people would start a restaurant today if all they needed was their living room and their kitchen? How many people would start a beauty salon today if all they needed was a chair and some scissors, combs, gels, and so on? How many people would start a taxi service today if all they needed was a car and a cell phone? How many people would start a day care service today if a bunch of working parents could simply get together and pool their resources to pay a few of their number to take care of the children of the rest? These are not the sorts of small businesses that receive SBIR awards; they are the sorts of small businesses that get hammered down by the full strength of the state whenever they dare to make an appearance without threading the lengthy and costly maze of the state’s permission process. [293]

Chapter Three: Babylon is Fallen

If you watch the mainstream cable news networks and Sunday morning interview shows, you’ve no doubt seen, many times, talking head commentators rolling their eyes at any proposal for reform that’s too radically different from the existing institutional structure of society. That much of a departure would be completely unrealistic, they imply, because it is an imposition on all of the common sense people who prefer things the way they are, and because “the way things are” is a natural state of affairs that came about by being recognized, through a sort of tacit referendum of society at large, as self-evidently the most efficient way of doing things.

But, in fact, the present system is, itself, radical. The corporate economy was created in a few short decades as a radical departure from what prevailed before. And it did not come about by natural evolutionary means, or “just happen”; it’s not just “the way things are.” It was imposed from above (as we saw in Chapter One) by a conscious, deliberate, radical social engineering effort, with virtually no meaningful democratic input from below. The state-imposed corporatization of the economy in the late nineteenth century could be compared in scope and severity, without much exaggeration, to Stalin’s collectivization of agriculture and the first Five Year Plan. Although the period is sometimes called the Gilded Age or the Great Barbecue, John Curl prefers to call it the Great Betrayal. [294] In the Tilden-Hayes dispute, Republicans ended military Reconstruction and handed the southern states back over to the planter class and segregation, in return for a free hand in imposing corporate rule at the national level.

All social systems include social reproduction apparatuses, whose purpose is to produce a populace schooled to accept “the way things are” as the only possible world, and the only natural and inevitable way of doing things. So the present system, once established, included a cultural, ideological and educational apparatus (lower and higher education, the media, etc.) run by people with exactly the same ideology and the same managerial class background as those running the large corporations and government agencies.

All proposals for “reform” within the present system are designed to be implemented within existing institutional structures, by the sorts of people currently running the dominant institutions. Anything that fundamentally weakened or altered the present pattern of corporate-state domination, or required eliminating the power of the elites running the dominant institutions, would be—by definition—“too radical.”

The system of power, consequently, can only be undermined by forces beyond its control. Fortunately, it faces a mutually reinforcing and snowballing series of terminal crises which render it unsustainable.

The present system’s enculturation apparatus functions automatically to present it as inevitable, and to suppress any consciousness that “other worlds are possible.” But not only are other worlds possible; under the conditions of Sloanist mass production described in Chapter Two, the terminal crises of the present system mean that this world, increasingly, is becoming impossible.

A. Resumption of the Crisis of Overaccumulation

State capitalism, with industry organized along mass-production lines, has a chronic tendency to overaccumulation: in other words, its overbuilt plant and equipment are unable to dispose of their full output when running at capacity, and the system tends to generate a surplus that only worsens the crisis over time.

Paul Baran and Paul Sweezy, founders of the neo-Marxist Monthly Review, described the Great Depression as “the normal outcome of the workings of the American economic system.” It was the culmination of the “stagnationist tendencies inherent in monopoly capitalism,” and far from being a deviation from economic normality was “the realization in practice of the theoretical norm toward which the system is always tending.” [295]

Fortunately for corporate capitalism, World War II postponed the crisis for a generation or so, by blowing up most of the plant and equipment in the world outside the United States. William Waddell and Norman Bodek, in The Rebirth of American Industry, describe the wide-open field left for the American mass-production model:

General Motors, Ford, General Electric and the rest converted to war production and were kept busy, if not prosperous, for the next four years. When the war ended, they had vast, fully functional factories filled with machine tools. They also had plenty of cash, or at least a pocket full of government IOUs. More important, they also had the entire world market to themselves. The other emerging automobile makers, electric product innovators, consumer product companies, and machine tool builders of Europe and Asia were in ruins. [296]

Harry Magdoff and Paul Sweezy of the Monthly Review group described it, in similar terms, as a virtual rebirth of American capitalism.

The Great Depression was ended, not by a spontaneous resurgence of the accumulation process but by the Second World War. And... the war itself brought about vast changes in almost every aspect of the world capitalist system. Much capital was destroyed; the diversion of production to wartime needs left a huge backlog of unfilled consumer demand; both producers and consumers were able to pay off debts and build up unprecedented reserves of cash and borrowing power; important new industries (e.g., jet planes) grew from military technologies; drastically changed power relations between and among victorious and defeated nations gave rise to new patterns of trade and capital flows. In a real sense, world capitalism was reborn on new foundations and entered a period in important respects similar to that of its early childhood. [297]

Even so, the underlying tendency toward stagnation persisted even during the early postwar “Golden Age.” In the period after WWII, “actual GNP has equaled or exceeded potential” in only ten years—and eight of those were during the Korean and Vietnam conflicts. The only two peacetime years in which the economy reached its potential, 1956 and 1973, had notably worse levels of employment than 1929. [298]

The tendency postwar, as before it, was for the productive capacity of the economy to far outstrip the ability of normal consumption to absorb it. The difference:

Whereas in the earlier period this tendency worked itself out in a catastrophic collapse of production—during the 1930s as a whole, unemployment and utilization of productive capacity averaged 18 percent and 63 percent respectively—in the postwar period economic energies, instead of lying dormant, have increasingly been channelled into a variety of wasteful, parasitic, and generally unproductive uses.... [T]he point to be emphasized here is that far from having eliminated the stagnationist tendencies inherent in today’s mature monopoly capitalist economy, this process has forced these tendencies to take on new forms and disguises. [299]

The destruction of capital in World War II postponed the crisis of overaccumulation until around 1970, when the industrial capacity of Europe and Japan had been rebuilt. By that time, according to Piore and Sabel, American domestic markets for industrial goods had become saturated. [300]

This saturation was simply a resumption of the normal process described by Marx in the third volume of Capital, which World War II had only temporarily set back.

Leaving aside more recent issues of technological development tunneling through the cost floor and reducing the capital outlays needed for manufacturing by one or more orders of magnitude (about which more below), it is still natural for investment opportunities to decline in mature capitalism. According to Magdoff and Sweezy, domestic opportunities for the extensive expansion of capitalist investment became increasingly scarce as the domestic noncapitalist environment shrank in relative size and the service sectors were increasingly industrialized. And quantitative needs for investment in producer goods decline steadily as industrialization proceeds:

...[T]he demand for investment capital to build up Department I, a factor that bulked large in the later nineteenth and early twentieth centuries, is of relatively minor importance today in the advanced capitalist countries. They all have highly developed capital-goods industries which, even in prosperous times, normally operate with a comfortable margin of excess capacity. The upkeep and modernization of these industries—and also of course of existing industries in Department II (consumer goods)—is provided for by depreciation reserves and generates no new net demand for investment capital. [301] ...[T]he need for new investment, relative to the size of the system as a whole, had steadily declined and has now reached an historic low. The reproduction of the system is largely self-financing (through depreciation reserves), and existing industries are for the most part operating at low levels of capacity utilization. New industries, on the other hand, are not of the heavy capital-using type and generate a relatively minor demand for additional capital investment. [302]

“Upkeep and modernization” of existing industry is funded almost entirely by retained earnings, and those retained earnings are in fact often far in excess of investment needs. Corporate management generally finances capital expansion as much as possible through retained earnings, and resorts to bond issues or new stock only as a last resort. And as Martin Hellwig points out, this does not by any means necessarily operate as a constraint on management resources, or force management to ration investment. If anything, the glut of retained earnings is more likely to leave management at a loss as to what to spend it all on. [303]

And as we saw in Chapter Two, the traditional investment model in oligopoly industry is tacit collusion between cartelized firms, spooning out investment in new capital assets only as fast as the old ones wear out. Schumpeter’s “creative destruction,” in a free market, would lead to the constant scrapping and replacement of functional capital assets. But cartelized firms are freed from competitive pressure to scrap obsolete machinery and replace it before it wears out. What’s more, as we shall see in the next chapter, in the economically uncertain conditions of the past thirty years, established industry has increasingly shifted new investment from expensive product-specific machinery in the mass-production core to far less expensive general-purpose craft machinery in flexible manufacturing supplier networks.

If anything, Magdoff and Sweezy’s remarks on the reduced capital outlays required by new industries were radically understated, given developments of the subsequent twenty years. Newly emerging forms of manufacturing, as we shall see in Chapter Five, require far less capital to undertake production. The desktop revolution has reduced the capital outlays required for music, publishing and software by two orders of magnitude; and the newest open-source designs for computerized machine tools are being produced by hardware hackers for a few hundred dollars.

The result, according to Magdoff and Sweezy, is that “a developed capitalist system such as that of the United States today has the capacity to meet the needs of reproduction and consumption with little or no net investment.” [304] From the early days of the industrial revolution, when “the demand for investment capital seemed virtually unlimited, [and] the supply was narrowly restricted,” mature capitalism has evolved to the point where the opposite is true: the overabundant supply of investment capital is confronted by a dearth of investment opportunities. [305]

Marx, in the third volume of Capital, outlined a series of tendencies that might absorb surplus investment capital and thereby offset the general trend toward a falling direct rate of profit in mature capitalism. And these offsetting tendencies theorized by Marx coincide to a large extent with the expedients actually adopted under developed capitalism. According to Walden Bello, the capitalist state, after the crisis resumed in the 1970s, attempted to address the renewed crisis of overproduction with a long series of expedients—including a combination of neoliberal restructuring, globalization, the creation of the tech sector, the housing bubble and intensified suburbanization, and the expansion of the FIRE economy (finance, insurance and real estate)—as successive attempts to soak up surplus capital. [306]

Unfortunately for the state capitalists, the neoliberal model based on offshoring capital has reached its limit; China itself has become saturated with industrial capital. [307] The export-oriented industrialization model in Asia is hitting the walls of both Peak Oil and capital saturation.

The choice of export-oriented industrialization reflected a deliberate calculation by Asian governments, based on the realization that

import substitution industrialization could continue only if domestic purchasing power were increased via significant redistribution of income and wealth, and this was simply out of the question for the region’s elites. Export markets, especially the relatively open US market, appeared to be a painless substitute.

Today, however, as “goods pile up in wharves from Bangkok to Shanghai, and workers are laid off in record numbers, people in East Asia are beginning to realize they aren’t only experiencing an economic downturn but living through the end of an era.” The clear lesson is that the export-oriented industrial model is extremely vulnerable to both increased shipping costs and decreases in Western purchasing power—a lesson that has “banished all talk of decoupling” a growing Asian economy from the stagnating West. Asia’s manufacturing sector is “linked to debt-financed, middle-class spending in the United States, which has collapsed.” [308] The Asian export economy, as a result, has fallen through the floor.

Worldwide, industrial production has ground to a halt. Goods are stacking up, but nobody’s buying; the Washington Post reports that “the world is suddenly awash in almost everything: flat-panel televisions, bulldozers, Barbie dolls, strip malls, Burberry stores.” A Hong Kong-based shipping broker told The Telegraph that his firm had “seen trade activity fall off a cliff. Asia-Europe is an unmitigated disaster.” The Economist noted that one can now ship a container from China to Europe for free—you only need to pick up the fuel and handling costs—but half-empty freighters are the norm along the world’s busiest shipping routes. Global airfreight dropped by almost a quarter in December alone; Giovanni Bisignani, head of the International Air Transport Association, called the “free fall” in global cargo “unprecedented and shocking.” [309]

If genuine decoupling is to take place, it will require a reversal of the strategic assessments and policy decisions which led to the choice of export-oriented industrialization over import substitution in the first place. It will require, in particular, rethinking the unthinkable: putting the issues of local income distribution and purchasing power back on the table. That means, in concrete terms, that Asian manufacturers currently engaged in the Nike (“outsource everything”) model of distributed manufacturing must treat the Western corporate headquarters as nodes to be bypassed, repudiate their branding and other “intellectual property,” and reorient production to the domestic market with prices that reflect something like the actual cost of production without brand-name markup. It also requires that Asian governments cease their modern-day reenactment of the “primitive accumulation” of eighteenth-century Britain, restore genuine village control of communal lands, and otherwise end their obsessive focus on attracting foreign investment through policies that suppress the bargaining power of labor and drive people into the factories like wild beasts. In other words, those Nike sneakers piling up on the wharves need to be marketed to the local population minus the swoosh, at an 80% markdown. At the same time, agriculture needs to shift from cash crop production for the urban and export market to a primary focus on subsistence production and production for the domestic market.

Bello points out that 75% of China’s manufacturers were already complaining of excess capacity and demand stagnation, even before the bubble of debt-fueled demand collapsed. Interestingly, he also notes that the Chinese government is trying to bolster rural demand as an alternative to collapsing demand in the export market, although he’s quite skeptical of the policy’s prospects for success. The efforts to promote rural purchasing power, he argues, are too little and too late—merely chipping at the edges of a 25-year policy of promoting export-oriented industrialization “on the back of the peasant.” China’s initial steps toward market liberalization in the 1970s were centered on the prosperity of peasant smallholders. In the ‘80s, the policy shifted toward subsidizing industry for the export market, with a large increase in the rural tax burden and as many as three hundred million peasants evicted from their land in favor of industrial use. But any hope at all for China’s industrial economy depends on restoring the prosperity of the agricultural sector as a domestic source of demand. [310]

Suburbanization, thanks to Peak Oil and the collapse of the housing bubble, has also ceased to be a viable outlet for surplus capital.

The stagnation of the economy from the 1970s on—every decade since the postwar peak of economic growth in the 1960s has seen lower average rates of annual growth in real GDP compared to the previous decade, right up to the flat growth of the present decade—was associated with a long-term trend in which demand was stimulated mainly by asset bubbles. [311] In 1988, a year after the 1987 stock market crash and on the eve of the penultimate asset bubble (the dotcom bubble of the ‘90s), Sweezy and Magdoff summed up the previous course of financialization in language that actually seems understated in light of subsequent asset bubbles.

Among the forces counteracting the tendency to stagnation, none has been more important or less understood by economic analysts than the growth, beginning in the 1960s and rapidly gaining momentum after the severe recession of the mid-1970s, of the country’s debt structure... at a pace far exceeding the sluggish expansion of the underlying “real” economy. The result has been the emergence of an unprecedentedly huge and fragile financial superstructure subject to stresses and strains that increasingly threaten the stability of the economy as a whole. Between the 1960s and 1987, the debt-to-GNP ratio rose from 1.5 to 2.25. [312]

But it was only after the collapse of the tech bubble that financialization—the use of derivatives and securitization of debt as surplus capital sponges to soak up investment capital for which no outlet existed in productive industry—really came into its own. As Joshua Holland noted, in most recessions the financial sector contracted along with the rest of the economy; but after the 2000 tech bust it just kept growing, ballooning up to ten percent of the economy. [313] We’re seeing now how that worked out.

Financialization was a way of dealing with a surplus of productive capacity, whose output the population lacked sufficient purchasing power to absorb—a problem exacerbated by the fact that almost all increases in productivity had gone to increasing the wealth of the upper class. Financialization enabled the upper class to lend its increased wealth to the rest of the population, at interest, so they could buy the surplus output.

Conventional analysts and editorialists frequently suggest, to the point of cliche, that the shift from productive investment to speculation in the finance sector is the main cause of our economic ills. But as Magdoff and Sweezy point out, it’s the other way around. The expansion of investment capital against the backdrop of a sluggish economy led to a shift in investment to financial assets, given the lack of demand for further investment in productive capital assets.

It should be obvious that capitalists will not invest in additional capacity when their factories and mines are already able to produce more than the market can absorb. Excess capacity emerged in one industry after another long before the extraordinary surge of speculation and finance in the 1970s, and this was true not only in the United States but throughout the advanced capitalist world. The shift in emphasis from industrial to pecuniary pursuits is equally international in scope. [314]

In any case, the housing bubble collapsed, government is unable to reinflate housing and other asset values even with trillion-dollar taxpayer bailouts, and an alarming portion of the population is no longer able to service the debts accumulated in “good times.” Not only are there no inflated asset values to borrow against to fuel demand, but many former participants in the Ditech spending spree are now becoming unemployed or homeless in the Great Deleveraging. [315]

Besides, the problem with debt-inflated consumer demand was that there was barely enough demand to keep the wheels running and absorb the full product of overbuilt industry even when everyone maxed out their credit cards and tapped into their home equity to replace everything they owned every five years. And we’ll never see that kind of demand again. So there’s no getting around the fact that a major portion of existing plant and equipment will be rust in a few years.

State capitalism seems to be running out of safety valves. Barry Eichengreen and Kevin O’Rourke suggest that, given the scale of the decline in industrial output and global trade, the term “Great Recession” may well be over-optimistic. Graphing the rate of collapse in global industrial output and trade from spring 2008 to spring 2009, they found the current rate of decline has actually been steeper than that of 1929–1930. From appearances in early 2009, it was “a Depression-sized event,” with the world “currently undergoing an economic shock every bit as big as the Great Depression shock of 1929–30.” [316]

Left-Keynesian Paul Krugman speculated that the economy had narrowly escaped another Great Depression in early 2009.

A few months ago the possibility of falling into the abyss seemed all too real. The financial panic of late 2008 was as severe, in some ways, as the banking panic of the early 1930s, and for a while key economic indicators — world trade, world industrial production, even stock prices — were falling as fast as or faster than they did in 1929–30. But in the 1930s the trend lines just kept heading down. This time, the plunge appears to be ending after just one terrible year. So what saved us from a full replay of the Great Depression? The answer, almost surely, lies in the very different role played by government. Probably the most important aspect of the government’s role in this crisis isn’t what it has done, but what it hasn’t done: unlike the private sector, the federal government hasn’t slashed spending as its income has fallen. [317]

This is not to suggest that the Keynesian state is a desirable model. Rather, it is made necessary by state capitalism. But make no mistake: so long as we have state capitalism, with state promotion of overaccumulation and the maldistribution of purchasing power that results from privilege, state intervention to manage aggregate demand is necessary to avert depression. Given state capitalism, we have only two alternatives: 1) eliminate the privileges and subsidies to overaccumulation that result in chronic crisis tendencies; or 2) resort to Keynesian stabilizing measures. Frankly, I can’t work up much enthusiasm for the mobs of teabaggers demanding an end to the Keynesian stabilizing measures, when those mobs reflect an astroturf organizing effort funded by the very people who benefit from the privileges and subsidies that contribute to chronic crisis tendencies.

And we should bear in mind that it’s far from clear the worst has, in fact, been averted. Karl Denninger argues that the main reason GDP fell only 1% in the second quarter of 2009, as opposed to 6% in the first, was increased government spending. As he points out, the fall of investment slowed in the second quarter; but given that it was already cut almost in half, there wasn’t much further it could fall. Exports fell “only” 7% and imports 15.1%; but considering they had already fallen 29.9% and 36.4%, respectively, in the first quarter, this simply means that exports and imports have “collapsed.” Consumer spending fell in the second quarter more than in the first, with a second quarter increase in the rate of “savings” (or rather, of paying down debt). If the rate of collapse is slowing, it’s because there’s so much less distance to fall. Denninger’s take: “The recession is not ‘easing’, it is DEEPENING.” [318]

The reduction in global trade is especially severe: even the very modest uptick in summer 2009 left world trade far further below its pre-recession baseline than it was at a comparable point in the Great Depression. As of late summer 2009, world trade was some 20% below the pre-recession baseline, compared to only 8% the same number of months into the Depression. Bear in mind that the collapse of world trade in the Depression is widely regarded as the catastrophic result of the Smoot-Hawley tariff, and as a major exacerbating factor in the continuing progression of the economic decline in the early ‘30s. The current reduction in the volume of world trade, far greater than that of the Great Depression, has occurred without Smoot-Hawley! [319]

Stoneleigh, a former writer for The Oil Drum Canada, argues that the asset deflation has barely begun:

Banks hold extremely large amounts of illiquid ‘assets’ which are currently marked-to-make-believe. So long as large-scale price discovery events can be avoided, this fiction can continue. Unfortunately, a large-scale loss of confidence is exactly the kind of circumstance that is likely to result in a fire-sale of distressed assets.... A large-scale mark-to-market event of banks illiquid ‘assets’ would reprice entire asset classes across the board, probably at pennies on the dollar. This would amount to a very rapid destruction of staggering amounts of putative value. This is the essence of deflation.... [320]

The currently celebrated “green shoots,” which she calls “gangrenous,” are comparable to the suckers’ stock market rally of 1930.

In any case, if Keynesianism is necessary for the survival of state capitalism, we’re reaching a point at which it is no longer sufficient. If pessimists like Denninger are wrong, and Keynesian policies have indeed turned the free fall into a slow-motion collapse, the fact remains that they are insufficient to restore “normalcy”—because normalcy is no longer an option. Keynesianism was sufficient during the postwar “Consensus Capitalism” period only because of the worldwide destruction of plant and equipment in WWII, which postponed the crisis of overaccumulation for a generation or so.

Bello makes the very good point that Keynesianism is not a long-term solution to the present economic difficulties because it ceased to be a solution the first time around.

The Keynesian-inspired activist capitalist state that emerged in the post-World War II period seemed, for a time, to surmount the crisis of overproduction with its regime of relatively high wages and technocratic management of capital-labor relations. However, with the addition of massive new capacity from Japan, Germany, and the newly industrializing countries in the 1960s and 1970s, its ability to do this began to falter. The resulting stagflation — the coincidence of stagnation and inflation — swept throughout the industrialized world in the late 1970s. [321]

Conventional left-Keynesian economists are at a loss to imagine some basis on which a post-bubble economy can ever be reestablished with anything like current levels of output and employment. This is especially unfortunate, given the focus of both the Bush and Obama administrations’ banking policies on restoring asset prices to something approaching their pre-collapse value, and the focus of their economic policies on at least partially reinflating the bubble economy as a source of purchasing power, so that—as James Kunstler so eloquently puts it—

the US public could resume a revolving credit way-of-life within an economy dedicated to building more suburban houses and selling all the needed accessories from supersized “family” cars to cappuccino machines. This would keep everyone employed at the jobs they were qualified for—finish carpenters, realtors, pool installers, mortgage brokers, advertising account executives, Williams-Sonoma product demonstrators, showroom sales agents, doctors of liposuction, and so on. [322]

Both the Paulson and Geithner TARP plans involve the same kind of Hamiltonian skullduggery: borrowing money, to be repaid by taxpayers with interest, to purchase bad assets from banks at something much closer to face value than current market value in order to increase the liquidity of banks to the point that they might lend money back to the public—should they deign to do so—at interest. Or as Michael Hudson put it, TARP “aims at putting in place enough new bank-lending capacity to start inflating prices on credit all over again.” [323]

Charles Hugh Smith describes the parallel between Japan’s “Lost Decade” and the current economic crisis:

Ushinawareta junen is the Japanese phrase for “Lost Decade.” The term describes the 1991–2000 no-growth decade in which Japan attempted to defeat debt-liquidation deflationary forces with massive government borrowing and spending, and a concurrent bailout of “zombie” (insolvent) banks with government funds. The central bank’s reflation failed. By any measure, the Lost Decade is now the Lost Decades. Japan’s economy enjoyed a brief spurt from America’s real estate bubble and China’s need for Japanese factory equipment and machine tools. But now that those two sources of demand have ebbed, Japan is returning to its deflationary malaise.... ...It seems the key parallel is this: an asset bubble inflated with highly leveraged debt pops and the value of real estate and stocks declines. But the high levels of debt taken on to speculate in stocks and housing remain. Rather than let the private-sector which accepted the high risks and took the enormous profits take staggering losses and writedowns, the government and central bank shift the losses from the private sector to the public balance sheet via bailouts and outright purchases of toxic/impaired private debt. [324]

The problem is that pre-collapse levels of output can only be absorbed by debt-financed and bubble-inflated purchasing power, and that another bubble on the scale of the tech and real estate booms just ain’t happening.

Keynesianism might be viable as a long-term strategy if deficit stimulus spending were merely a way of bridging the demand shortfall until consumer spending could be restored to normal levels, after which it would use tax revenues in good times to pay down the public debt. But if normal levels of consumer spending won’t come back, it amounts to the U.S. government borrowing $2 trillion this year to shore up consumer spending for this year—with consumer spending falling back to Depression levels next year if another $2 trillion isn’t spent.

We estimate that absent all the forms of government stimulus in the second quarter, real GDP would have contracted at a decidedly brown-shooty 6% annual rate as opposed to the posted 1% decline. And, while consensus forecasts are centered around 3.0–3.5% for current quarter growth, again the pace of economic activity would be flat-to-negative absent Cash-for-Clunkers, government auto purchases, and first-time homebuyer subsidies, not to mention the FHA’s best efforts to recreate the housing and credit bubble.... [325]

So capitalism might be sustainable, in terms of the demand shortfall taken in isolation—if the state is prepared to run a deficit of $1 or $2 trillion a year, every single year, indefinitely. But there will never again be a tax base capable of paying for these outlays, because the implosion of production costs from digital production and small-scale manufacturing technology is destroying the tax base. What we call “normal” levels of demand are a thing of the past. As Paul Krugman points out, as of late fall 2009 stimulus spending is starting to run its course, with no sign of sufficient self-sustaining demand to support increased industrial production; the increasingly likely result is a double dip recession with Part Two in late 2010 or 2011. [326]

So the crisis of overaccumulation exacerbates the fiscal crisis of the state (about which more below).

It might be possible to sustain such spending on a permanent basis via something like the “Social Credit” proposals of Major Douglas some eighty years ago (simply creating the money out of thin air instead of borrowing it or funding it with taxes, and depositing so much additional purchasing power in every citizen’s checking account each month). But that would undermine the basic logic of capitalism, removing the incentive to accept wage labor on the terms offered, and freeing millions of people to retire on a subsistence income from the state while participating in the non-monetized gift or peer economy. Even worse, it would create the economic basis for continuing subsidized waste and planned obsolescence until the ecosystem reached a breaking point—a state of affairs analogous to the possibility, contemplated with horror by theologians, that Adam and Eve in their fallen state might have attained immortality from the Tree of Life.

Those who combine some degree of “green” sympathy with their Keynesianism have a hard time reconciling the fundamental contradiction involved in the two sides of modern “Progressivism.” You can’t have all the good Michael Moore stuff about full employment and lifetime job security, without the bad stuff about planned obsolescence and vulgar consumerism. Krugman is a good case in point:

I’m fairly optimistic about 2010. But what comes after that? Right now everyone is talking about, say, two years of economic stimulus — which makes sense as a planning horizon. Too much of the economic commentary I’ve been reading seems to assume, however, that that’s really all we’ll need — that once a burst of deficit spending turns the economy around we can quickly go back to business as usual. In fact, however, things can’t just go back to the way they were before the current crisis. And I hope the Obama people understand that. The prosperity of a few years ago, such as it was — profits were terrific, wages not so much — depended on a huge bubble in housing, which replaced an earlier huge bubble in stocks. And since the housing bubble isn’t coming back, the spending that sustained the economy in the pre-crisis years isn’t coming back either. To be more specific: the severe housing slump we’re experiencing now will end eventually, but the immense Bush-era housing boom won’t be repeated. Consumers will eventually regain some of their confidence, but they won’t spend the way they did in 2005–2007, when many people were using their houses as ATMs, and the savings rate dropped nearly to zero. So what will support the economy if cautious consumers and humbled homebuilders aren’t up to the job? [327]

(I would add that, whatever new standard of post-bubble “normalcy” prevails, in the age of Peak Oil and absent previous pathological levels of consumer credit, it’s unlikely the U.S. will ever see a return to automobile sales of 18 million a year. If anything, the current output of ca. ten million cars is probably enormously inflated.) [328]

And Krugman himself, it seems, is not entirely immune to the delusion that a sufficient Keynesian stimulus will restore the levels of consumer demand associated with something like “normalcy.”

Krugman first compares the longer duration and greater severity of depressions without countercyclical government policy to those with, and then cites Keynes as an authority in estimating the length of the current Great Recession without countercyclical stimulus spending: “a recession would have to go on until ‘the shortage of capital through use, decay and obsolescence causes a sufficiently obvious scarcity to increase the marginal efficiency.’” [329]

But, as he himself suggested in his earlier column, the post-stimulus economy may have much lower “normal” levels of demand than the pre-recession economy, in which case the only effect of the stimulus will be to pump up artificial levels of demand so long as the money is still being spent. In that case, as John Robb argues, the economy will eventually have to settle into a new equilibrium with levels of demand set at much lower levels.

The assumption is that new homes will eventually need to be built to accommodate population growth and new cars will be sold to replace old stock. However, what if there is a surge in multi-generational housing (there is) or people start to drive much less (they are) or keep their cars until they drop (most people I know are planning this). If that occurs, you have to revise the replacement level assumption to a far lower level than before the start of the downturn. What’s that level? I suspect it is well below current sales levels, which means that there is much more downside movement possible. [330]

The truth of the matter is, the present economic crisis is not cyclical, but structural. There is excess industrial capacity that will be rust in a few years because we are entering a period of permanently low consumer demand and frugality. As Peter Kirwan at Wired puts it, the mainstream talking heads are mistaking for a cyclical downturn what is really “permanent structural change” and “industrial collapse.” [331]

Both the bailout and stimulus policies, under the late Bush and Obama administrations, have amounted to standing in the path of these permanent structural changes and yelling “Stop!” The goal of U.S. economic policy is to prevent the deflation of asset bubbles, and restore sufficient demand to utilize the idle capacity of mass-production industry. But this only delays the inevitable structural changes that must take place, as Richard Florida points out:

The bailouts and stimulus, while they may help at the margins, also pose an enormous opportunity costs [sic]. On the one hand, they impede necessary and long-deferred economic adjustments. The auto and auto-related industries suffer from massive over-capacity and must shrink. The housing bubble not only helped spur the financial crisis, it also produced an enormous mis-allocation of resources. Housing prices must come a lot further down before we can reset the economy—and consumer demand—for a new round of growth. The financial and banking sector grew massively bloated—in terms of employment, share of GDP and wages, as the detailed research of NYU’s Thomas Phillipon has shown—and likewise have to come back to earth. [332]

The new frugality, to the extent that it entails more common-sense consumer behavior, threatens the prevailing Nike model of outsourcing production and charging a price consisting almost entirely of brand-name markup. A Wall Street Journal article cites a Ms. Ball: “After years of spending $17 on bottles of Matrix shampoo and conditioner, 28-year-old Ms. Ball recently bought $5 Pantene instead.... ‘I don’t know that you can even tell the difference.’” Procter & Gamble has been forced to scale back its prices considerably, and offer cheaper and less elaborate versions of many of its products. William Waddell comments:

Guess what P&G—Ms. Ball and millions like her will not come back to your hollow brands once the economy comes back now that she knows the $5 stuff is exactly the same as the $17 stuff. [333]

A permanent, mass shift from brand-name goods to almost identical generic and store-brand goods would destroy the basis of push-distribution capitalism. We already saw, in the previous chapter, quotes from advertising industry representatives stating in the most alarmist terms what would happen if their name-brand goods had to engage in direct price competition like commodities. The mini-revolt against brand-name goods during the downturn of the early ‘90s—the so-called “Marlboro Friday”—was a dress rehearsal for just such an eventuality.

On April 2, 1993, advertising itself was called into question by the very brands the industry had been building, in some cases, for over two centuries. That day is known in marketing circles as “Marlboro Friday,” and it refers to a sudden announcement from Philip Morris that it would slash the price of Marlboro cigarettes by 20 percent in an attempt to compete with bargain brands that were eating into its market. The pundits went nuts, announcing in frenzied unison that not only was Marlboro dead, all brand names were dead. The reasoning was that if a “prestige” brand like Marlboro, whose image had been carefully groomed, preened and enhanced with more than a billion advertising dollars, was desperate enough to compete with no-names, then clearly the whole concept of branding had lost its currency. The public had seen the advertising, and the public didn’t care....

The implication that Americans were suddenly thinking for themselves en masse reverberated through Wall Street. The same day Philip Morris announced its price cut, stock prices nose-dived for all the household brands: Heinz, Quaker Oats, Coca-Cola, PepsiCo, Procter and Gamble and RJR Nabisco. Philip Morris’s own stock took the worst beating. Bob Stanojev, national director of consumer products marketing for Ernst and Young, explained the logic behind Wall Street’s panic: “If one or two powerhouse consumer products companies start to cut prices for good, there’s going to be an avalanche. Welcome to the value generation.”

As Klein went on to write, the Marlboro Man eventually recovered from his setback, and brand names didn’t exactly become obsolete in the ensuing age of Nike and The Gap. But even if the panic was an “overstated instant consensus,” it was nevertheless “not entirely without cause.”

The panic of Marlboro Friday was not a reaction to a single incident. Rather, it was the culmination of years of escalating anxiety in the face of some rather dramatic shifts in consumer habits that were seen to be eroding the market share of household-name brands, from Tide to Kraft. Bargain-conscious shoppers, hit hard by the recession, were starting to pay more attention to price than to the prestige bestowed on their products by the yuppie ad campaigns of the 1980s. The public was suffering from a bad case of what is known in the industry as “brand blindness.”

Study after study showed that baby boomers, blind to the alluring images of advertising and deaf to the empty promises of celebrity spokespersons, were breaking their lifelong brand loyalties and choosing to feed their families with private-label brands from the supermarket—claiming, heretically, that they couldn’t tell the difference... It appeared to be a return to the proverbial shopkeeper dishing out generic goods from the barrel in a prebranded era.

The bargain craze of the early nineties shook the name brands to their core. Suddenly it seemed smarter to put resources into price reductions and other incentives than into fabulously expensive ad campaigns. This ambivalence began to be reflected in the amounts companies were willing to pay for so-called brand-enhanced advertising. Then, in 1991, it happened: overall advertising spending actually went down by 5.5 percent for the top 100 brands. It was the first interruption in the steady increase of U.S. ad expenditures since a tiny dip of 0.6 percent in 1970, and the largest drop in four decades.

It’s not that top corporations weren’t flogging their products, it’s just that to attract those suddenly fickle customers, many decided to put their money into promotions such as giveaways, contests, in-store displays and (like Marlboro) price reductions. In 1983, American brands spent 70 percent of their total marketing budgets on advertising, and 30 percent on these other forms of promotion. By 1993, the ratio had flipped: only 25 percent went to ads, with the remaining 75 percent going to promotions. [334]

And Ms. Ball, mentioned above, may prefigure a more permanent shift to the same sort of behavior in the longer and deeper Great Recession of the 21st century.

While Krugman lamely fiddles around with things like a reduction of the U.S. trade deficit as a possible solution to the demand shortfall, liberal blogger Matthew Yglesias has a more realistic idea of what a sustainable post-bubble economy might actually entail.

I would say that part of the answer may well involve taking a larger share of our productivity gains as increased leisure rather than increased production and incomes.... A structural shift to less-work, less-output dynamic could be catastrophic if that means a structural shift to a very high rate of unemployment. But if it means a structural shift toward six-week vacations and fewer 60 hour weeks then that could be a good thing. [335]

Exactly. But a better way of stating it would be “a structural shift toward a less-work, less-output, less-planned-obsolescence, and less-embedded-rents-on-IP-and-ephemera dynamic, with no reduction in material standard of living.” A structural shift toward working fewer hours to produce less stuff, because it lasts longer instead of going to the landfill after a brief detour in our living rooms, would indeed be a good thing.

Michel Bauwens ventures a somewhat parallel analysis from a different perspective, that of Kondratiev’s long-wave theory and neo-Marxist theories of the social structure of accumulation (particularly the idea of a new social structure of accumulation as necessary to resolve the crises of the previous structure [336]). According to Bauwens, 1929 was the sudden systemic shock of the last system, and from it emerged the present system, based on Fordist mass-production and the New Deal/organized labor social contract, the automobile, cheap fossil fuels—you know the drill. The system’s golden age lasted from WWII to the early 1970s, when its own series of systemic shocks began: the oil embargo, the saturation of world industrial capital, and all the other systemic crises we’re considering in this chapter. According to Bauwens, each long wave is characterized by a new energy source, a handful of technological innovations (what the neo-Marxists would call “epoch-making industries”), a new mode of financial system, and a new social contract. Especially interesting is that each long wave presents “a new ‘hyperproductive’ way to ‘exploit the territory,’” which parallels his analysis (which we will examine in later chapters) of the manorial economy as a path of intensive development when the slave economy reached its limits of expansion, and of netarchical capitalism as a way to extract value intensively when extensive addition of capital inputs is no longer feasible.

According to Bauwens, the emerging long wave will be characterized by renewable energy and green technology, crowdsourced credit and microlending, relocalized networked manufacturing, a version of small-scale organic agriculture that applies the latest findings of biological science, and a mode of economic organization centered on civil society and peer networks. [337]

However, to the extent that the capture of value through “intellectual property” is no longer feasible (see below), it seems unlikely that any such new paradigm can function on anything resembling the current corporate capitalist model.

It’s a fairly safe bet we’re in for a period of prolonged economic stagnation and decline, measured in conventional terms. The imploding capital outlays required for manufacturing, thanks to current technological developments, mean that the need for investment capital falls short of available investment funds by at least an order of magnitude. The increasing unenforceability of “intellectual property” means that attempts to put a floor under mandated capital outlays, overhead, or commodity price, as solutions to the crisis, will fail. Established industry will essentially cut off all net new investment in capital equipment and begin a prolonged process of decay, with employment levels suffering accordingly.

Those who see this as leading to a sudden, catastrophic increase in technological unemployment are probably exaggerating the rate of progression of the crisis. What we’re more likely to see is what Alan Greenspan called a Great Malaise, gradually intensifying over the next couple of decades. Given the toolkit of anti-deflationary measures available to the central bankers, he argued in 1980, the collapse of asset bubbles would never again be allowed to follow its natural course—a “cascading set of bankruptcies” leading to a chain reaction of debt deflation. The central banks, he continued, would “flood the world’s economies with paper claims at the first sign of a problem,” so that a “full-fledged credit deflation” on the pattern of the early 1930s could not happen. And, indeed, Sweezy and Magdoff argue, had the government not intervened following the stock market crash of 1987, it’s quite likely the aftermath would have been a deflationary collapse like that of the Depression.

Greenspan’s successor Ben “Helicopter” Bernanke, whose nickname comes from his stated willingness to airdrop cash to maintain liquidity, made good on such guarantees in the financial crisis of fall 2008. The federal government also moved far more quickly than in the 1930s, as we saw above, to use deficit spending to make up a significant part of the demand shortfall.

The upshot of this is that the crisis of overaccumulation and underconsumption is likely to be reflected, not in a sudden deflationary catastrophe, but—in Greenspan’s words—a Great Malaise.

Thus in today’s political and institutional environment, a replay of the Great Depression is the Great Malaise. It would not be a period of falling prices and double-digit unemployment, but rather an economy racked with inflation, excessive unemployment (8 to 9 percent), falling productivity, and little hope for a more benevolent future. [338]

That kind of stagnation is essentially what happened in the late ‘30s, after FDR succeeded in pulling the economy back from the cliff of full-scale Depression, but failed to restore anywhere near normal levels of output. From 1936 or so until the beginning of WWII, the economy seemed destined for long-term stagnation with unemployment fluctuating around 15%. In today’s Great Malaise, likewise, we can expect long-term unemployment from 10% to 15%, and utilization of industrial capacity in the 60% range, with a simultaneous upward creeping of part-time work and underemployment, and the concealment of real unemployment levels as more people stop looking for work and drop from the unemployment rolls.

Joshua Cooper Ramo notes that employment has fallen much more rapidly in the Great Recession than Okun’s Law (which states the normal ratio of GDP decline to job losses) would have predicted. Instead of the 8.5% unemployment predicted by Okun’s Law, we’re at almost 10%.
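Okun’s Law is usually stated as a rule of thumb relating the change in the unemployment rate to the gap between real GDP growth and its long-run trend. A common textbook form (the coefficient and the 3% trend figure are conventional values, not drawn from Ramo’s article) is

\[
\Delta u \;\approx\; -\tfrac{1}{2}\,\bigl(g - \bar{g}\bigr), \qquad \bar{g} \approx 3\%,
\]

where \(u\) is the unemployment rate and \(g\) is annual real GDP growth. On rough numbers (pre-recession unemployment near 5%, and output contracting about 4% against the 3% trend), the rule predicts a rise of about 0.5 × 7 = 3.5 points, to roughly the 8.5% cited above; the overshoot to nearly 10% is the anomaly Summers confesses no one fully understands. Ramo goes on: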

Something new and possibly strange seems to be happening in this recession. Something unpredicted by the experts. “I don’t think,” Summers told the Peterson Institute crowd — deviating again from his text — “that anyone fully understands this phenomenon.” And that raises some worrying questions. Will creating jobs be that much slower too? Will double-digit unemployment persist even after we emerge from this recession? Has the idea of full employment rather suddenly become antiquated?... When compiling the “worst case” for stress-testing American banks last winter, policymakers figured the most chilling scenario for unemployment in 2009 was 8.9%—a figure we breezed past in May. From December 2007 to August 2009, the economy jettisoned nearly 7 million jobs, according to the Bureau of Labor Statistics. That’s a 5% decrease in the total number of jobs, a drop that hasn’t occurred since the end of World War II. The number of long-term unemployed, people who have been out of work for more than 27 weeks, was the highest since the BLS began recording the number in 1948.... America now faces the direst employment landscape since the Depression. It’s troubling not simply for its sheer scale but also because the labor market, shaped by globalization and technology and financial meltdown, may be fundamentally different from anything we’ve seen before. And if the result is that we’re stuck with persistent 9%-to-11% unemployment for a while... we may be looking at a problem that will define the first term of Barack Obama’s presidency.... The total number of nonfarm jobs in the U.S. economy is about the same now—roughly 131 million—as it was in 1999. And the Federal Reserve is predicting moderate growth at best. That means more than a decade without real employment expansion. [339]

To put things in perspective, the employment-to-population ratio—since its peak of 64.7% in 2000—has fallen to 58.8%. [340] That means the total share of the population which is employed has fallen by about a tenth over the past nine years. And the employment-to-population ratio is a statistic that’s a lot harder to bullshit than the commonly used official unemployment figures. The severity of the latter is generally concealed by discouraged job-seekers dropping off the unemployment rolls; the official unemployment figure is consistently understated because of shrinkage of the job market, and counts only those who are still bothering to look for work. The reason unemployment rose only to 9.8% in September 2009, instead of 10%, is that 571,000 discouraged workers dropped out of the job market that month. Another statistic, the hours-worked index, has also displayed a record decline (8.6% from the prerecession peak, compared to only 5.8% in the 1980–82 recession). [341]
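(The “about a tenth” figure checks out; taking the two ratios as given,

\[
\frac{64.7 - 58.8}{64.7} \approx 0.091,
\]

a relative decline of roughly 9% from the 2000 peak.)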

A much larger portion of the total unemployed in this recession are long-term unemployed: 53% (or eight million) of the unemployed in August were not on temporary layoff, and of those, five million had sought work unsuccessfully for six months or more—both record highs. [342] Although total unemployment levels as of November 2009 have yet to equal their previous postwar peak in 1983, the percentage of the population who have been seeking jobs for six months or more is now 2.3%—compared to only 1.6% in 1983. [343] The Bureau of Labor Statistics announced in January 2010 that the rate of long-term unemployment was the highest since 1948, when it began measuring it; those who had been out of work for six months or longer comprised 40% of all unemployed. [344]

And we face the likely prospect that the economy will continue to shed jobs even after the resumption of growth in GDP; in other words not just a “jobless recovery,” but a recovery with job losses. [345] As J. Bradford DeLong points out, the economy is shedding jobs despite an increase in demand for domestically manufactured goods.

Real spending on American-made products is rising at a rate of about 3.5% per year right now and has been since May. The point is that even though spending on American products is rising, employment in America is still falling. [346]

Three quarters after recovery began from the 1981 recession, employment was up 1.5%. Three quarters into this recovery, it’s down 0.6%. The recent surge in employment, despite enthusiastic celebration in the press, is hardly enough to keep pace with population growth and prevent unemployment from worsening. [347] And according to Neil Irwin, the massive debt deleveraging which is yet to come means there will be insufficient demand to put the unemployed back to work.

American households are trying to reduce debt to stabilize finances. But they are doing so slowly, with total household debt at 94 percent of gross domestic product in the fourth quarter down just slightly from 96 percent when the recession began in late 2007. By contrast, that ratio of household debt to economic output was 70 percent in 2000. To get back to that level, Americans would need to pay down $3.4 trillion in debt—and if they do, that money wouldn’t be available to spend on goods and services. [348]
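Irwin’s $3.4 trillion figure follows from simple arithmetic. Assuming nominal GDP of roughly $14.2 trillion at the time (the approximate published figure for late 2009, not stated in the excerpt), the gap between the two debt ratios works out to

\[
(0.94 - 0.70) \times \$14.2\ \text{trillion} \;\approx\; \$3.4\ \text{trillion},
\]

every dollar of which would be diverted from spending on goods and services for as long as the deleveraging continues.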

In such a period of stagnation, capital goods investment is likely to lag far behind even the demand for consumer goods; in economic downturns, investment in plant and equipment generally falls much more steeply than capacity utilization in the consumer goods industries, and is much slower to rebound in the recovery. In the 1930s, investment in plant and equipment was cut by 70% to 80%. Machine tool builders shut down production for prolonged periods, and depreciated industrial capital stock was not replaced for years. In 1939, despite consumer demand 12% over its peak in the 1920s, investment in plant and equipment was at less than 60% of the 1929 level. [349] Investment in plant and equipment only began to come back with heavy government Lend-Lease spending (the machinery industry expanded output 30% in 1940). [350] In the coming period, as we shall see below, we can expect a virtual freeze of investment in the old mass-production industrial core.

Charles Hugh Smith expects “a decades-long period of structural unemployment in which there will not be enough jobs for tens of millions of citizens”: the employment rolls will gradually shrink from their present level of 137 million to 100 million or so, and then stagnate at that level indefinitely. [351] Economist Mark Zandi of Moody’s Economy.com predicts “the unemployment rate will be permanently higher, or at least for the foreseeable future.” [352] Of course, it’s quite plausible that the harm will be mitigated to some extent by a greater shift to job-sharing, part-time work by all but one member of a household, or even a reduction of the standard work week to 32 hours.

The hope—my hope—is that these increasing levels of underemployment and unemployment will be offset by increased ease of meeting subsistence needs outside the official economy, by the imploding cost of goods manufactured in the informal sector, and by the rise of barter networks providing an increasing share of consumption needs through direct production for exchange between producers in the informal sector. As larger and larger shares of total production disappear as sources of conventional wage employment, and cease to show up in the GDP figures, the hours of wage labor needed to meet the needs still satisfied in the official economy will also steadily decline, and the remaining levels of part-time employment will be sufficient to maintain a positive real material standard of living for the majority of the population.

B. Resource crises (Peak Oil)

In recent decades, the centerpiece of both the energy policy and a major part of the national security policy of the U.S. government has been to guarantee “cheap, safe and abundant energy” to the corporate economy. It was perhaps exemplified most forcefully in the Carter Doctrine of 1980: “An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America, and such an assault will be repelled by any means necessary, including military force.” [353]

This is no longer possible: the basic idea of Peak Oil is that the rate of extraction of petroleum has peaked, or is about to peak. On the downside of the peak, the supply of oil will gradually contract year by year. Although the total amount of oil remaining in the ground may be roughly comparable to the amount extracted to date, it will be poorer in quality and more expensive to extract, in terms of both dollars and energy.

All the panaceas commonly put forth for Peak Oil—oil shale, tar sands, offshore drilling, algae—turn out to be pipe dreams. The issue isn’t the absolute amount of oil in offshore reserves or tar sands, but the cost of extracting them and the maximum feasible rate of extraction. In terms of the net energy surplus left over after the energy cost of extraction (Energy Return on Energy Investment, or EROEI), all the “drill baby drill” gimmicks are far more costly—cost far more BTUs per net BTU of energy produced—than did petroleum in the “good old days.” The maximum rate of extraction from all the newly discovered offshore oil bonanzas the press reports, and from unconventional sources like tar sands, doesn’t begin to compensate for the daily output of old wells in places like the Persian Gulf that will go offline in the next few years. And the oil from such sources is far more costly to extract, with much less net energy surplus. [354]
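The EROEI logic is simple arithmetic: the higher the energy cost of extraction, the smaller the net surplus left from each gross unit extracted. Here is a minimal sketch; the EROEI values are hypothetical round numbers for illustration, not estimates for any particular resource:

```python
# Net energy surplus as a function of EROEI (energy returned on energy invested).
def net_energy(gross_btus: float, eroei: float) -> float:
    """BTUs left over after subtracting the energy spent on extraction."""
    return gross_btus * (1 - 1 / eroei)

# Hypothetical illustrative values, not real-world estimates.
for label, eroei in [("legacy conventional crude", 30.0),
                     ("deepwater offshore", 10.0),
                     ("tar sands", 3.0)]:
    surplus = net_energy(100.0, eroei)   # surplus per 100 gross BTUs
    print(f"{label}: EROEI {eroei:g} -> {surplus:.0f} net BTUs per 100 gross")
```

The point is that even large gross discoveries at low EROEI add far less usable energy than the same gross quantity did when conventional fields returned thirty or more units per unit invested.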

The list of false panaceas includes coal, by the way. It’s sometimes argued that Peak Coal is some time away, and that increased coal output (e.g., China’s much-vaunted policy of building another coal-fired generator every week) will compensate for decreased oil output in the intermediate term. But estimates of coal reserves have been revised radically downward in the last two decades—by some 55%, as a matter of fact. In virtually every country where coal reserves have been reestimated since the 1990s, the estimate has been revised downward. Poland, the largest coal producer in the EU, had its reserve estimates downgraded by 50%, and Germany by 90%. UK reserve estimates were revised from 45 billion tons to 0.22 billion tons. And interestingly, the countries with some of the highest estimated coal reserves (e.g., China) are also the countries whose estimates are the oldest and most out of date. The most recent figures for China, an estimated 55 years’ reserves, date all the way back to 1992; Chinese production since then has amounted to some 20% of those total reserves.

The Energy Watch Group report gives projected production profiles showing that China is likely to experience peak coal production in the next 10–15 years, followed by a steep decline. It should also be noted that these production profiles do not take into account uncontrolled coal fires which – according to satellite based estimates – add around 5–10% to regular consumption. Since China’s production dwarfs that of any other country (being almost double that of the second largest producer, the USA) the global coal production peak will be heavily influenced by China’s production profile. [355]

The Energy Watch Group’s estimate for peak coal energy is 2025. [356] And even assuming increased coal output for another decade or more, Richard Heinberg forecasts total fossil fuel energy production peaking around 2010 or so. [357]

Peak Oil skeptics frequently argue that a price spike like the one in 2008 is caused, not by Peak Oil, but “instead” by some special circumstance like a specific supply disruption or speculative bubble. But that misses the point.

The very fact that supply has reached its peak, and that price is entirely determined by the amount of demand bidding for a fixed supply, means that the price of oil is governed by the same speculative boom-bust cycle Henry George observed in land. Given the prospect of a fixed supply of land or oil, rational self-interest will lead the oil industry, like real estate speculators, to hold greater or lesser quantities off the market, or dump them on the market, based on its estimate of the future movement of prices. Hence the inconvenient fact, during the “drill here drill now” fever of the McCain-Palin campaign, that the oil companies were already sitting on large offshore oil reserves that they were failing to develop in anticipation of higher prices.

The oil companies already have access to some 34 billion barrels of offshore oil they haven’t even developed yet, but ending the federal moratorium on offshore drilling would probably add only another 8 billion barrels (assuming California still blocks drilling off its coast). Who thinks adding under 100,000 barrels a day in supply sometime after 2020 — some one-thousandth of total supply — would be more than the proverbial drop in the ocean? Remember the Saudis couldn’t stop prices from rising now by announcing that they will add 500,000 barrels of oil a day by the end of this year! Here is the key data from EIA: Look closely. As of 2003, oil companies had available for leasing and development 40.92 billion barrels of offshore oil in the Gulf of Mexico. I asked the EIA analyst how much of that (estimated) available oil had been discovered in the last five years. She went to her computer and said “about 7 billion barrels have been found.” That leaves about 34 billion still to find and develop. The federal moratorium only blocks another 18 billion barrels of oil from being developed. [358]

And given the prospect of fixed supplies of oil, the greater the anticipated future scarcity value of oil, the greater the rational incentive for terrorists to leverage their power by disrupting supply. The infrastructure for extracting and distributing oil is unprecedentedly fragile, precisely because of the decline in excess production capacity. Between 1985 and 2001, OPEC’s excess production capacity fell from 25% of global demand to 2%. In 2003, the International Energy Agency estimated available excess capacity was at its lowest level in thirty years. [359]

According to Jeff Vail, speculative hoarding of petroleum and terrorist actions against oil pipelines are not alternative explanations in place of Peak Oil, but the results of a positive feedback process created by Peak Oil itself.

It is quite common to hear “experts” explain that the current tight oil markets are due to “above-ground factors,” and not a result of a global peaking in oil production. It seems more likely that it is geological peaking that is driving the geopolitical events that constitute the most significant “above-ground factors” such as the chaos in Iraq and Nigeria, the nationalization in Venezuela and Bolivia, etc. Geological peaking spawns positive feedback loops within the geopolitical system. Critically, these loops are not separable from the geological events—they are part of the broader “system” of Peak Oil. Existing peaking models are based on the logistics curves demonstrated by past peaking in individual fields or oil producing regions. Global peaking is an entirely different phenomenon—the geology behind the logistics curves is the same, but global peaking will create far greater geopolitical side-effects, even in regions with stable or rising oil production. As a result, these geopolitical side-effects of peaking global production will accelerate the rate of production decline, as well as increase the impact of that production decline by simultaneously increasing marginal demand pressures. The result: the right side of the global oil production curve will not look like the left…whatever logistics curve is fit to the left side of the curve (where historical production increased), actual declines in the future will be sharper than that curve would predict. Here are five geopolitical processes, each a positive-feedback loop, and each an accelerant of declining oil production:

Return on Investment: Increased scarcity of energy, as well as increased prices, increase the return on investment for attacks that target energy infrastructure....

Mercantilism: To avoid the dawning “bidding cycles” between crude oil price increases and demand destruction, Nation-States are increasingly returning to a mercantilist paradigm on energy. This is the attitude of “there isn’t enough of it to go around, and we can’t afford to pay the market price, so we need to lock up our own supply....

“Export-Land” Model: Jeffrey Brown, a commentator at The Oil Drum, has proposed a geopolitical feedback loop that he calls the “export-land” model. In a regime of high or rising prices, a state’s existing oil exports brings in great revenues, which trickles into the state’s economy, and leads to increasing domestic oil consumption. This is exactly what is happening in most oil exporting states. The result, however, is that growth in domestic consumption reduces oil available for export....

Nationalism: Because our Westphalian system is fundamentally broken, the territories of nations and states are rarely contiguous. As a result, it is often the case that a nation is cut out of the benefits from its host state’s oil exports.... As a result, nations or sectarian groups within states will increasingly agitate for a larger share of the pie.... This process will develop local variants on the tactics of infrastructure disruption, as well as desensitize energy firms to ever greater rents for the security of their facilities and personnel—both of which will drive the next loop….

Privateering: Nationalist insurgencies and economies ruined by the downslide of the “export-land” effect will leave huge populations with no conventional economic prospects. High oil prices, and the willingness to make high protection payments, will drive those people to become energy privateers. We are seeing exactly this effect in Nigeria, where a substantial portion of the infrastructure disruption is no longer carried out by politically-motivated insurgents, but by profit-motivated gangs.... [360]

Mercantilism, in particular, probably goes a long way toward explaining America’s invasion of Iraq and the Russian-American “Great Game” in Central Asia in recent years. The United States’ post-9/11 drive for basing rights in the former Central Asian republics of the old USSR, and the rise of the Shanghai Cooperation Organization as a counterweight to American power, are clearly more meaningful in the light of the Caspian Sea basin oil reserves.

And the evidence is clear that price really is governed entirely by the fluctuation of demand, and that supply—at least on the upward side—is extremely inelastic. Just compare the movement of oil supplies after the price shock of the late ’70s and early ’80s with that of the past few years. As “transition town” movement founder Rob Hopkins points out, the supply of oil has increased little if at all since 2005—fluctuating between 84 and 87 mbd—despite record price levels. [361]

Peak Oil is likely to throw a monkey-wrench into the gears of the Chinese model of state-sponsored capitalism. China heavily subsidizes energy and transportation inputs, pricing them at artificially low levels to domestic industrial consumers, just as did the USSR. This accounting gimmick won’t work externally—the Saudis want cash on the barrel head, at the price they set for crude petroleum—and the increased demand for subsidized energy inputs by wasteful domestic Chinese producers will just cause China to bankrupt itself buying oil abroad.

Overall, the effect of Peak Oil is likely to be a radical shortening of corporate supply and distribution chains, a resurrection of small-scale local manufacturing in the United States, and a reorientation of existing manufacturing facilities in China and other offshore havens toward production for their own domestic markets.

The same is true of relocalized agriculture. The lion’s share of in-season produce is apt to shift back to local sourcing, and out-of-season produce to become an expensive luxury. As Jeff Rubin describes it,

As soaring transport costs take New Zealand lamb and California blueberries off Toronto menus and grocery-store shelves, the price of locally grown lamb and blueberries will rise. The higher they rise, the more they will encourage people to raise sheep and grow blueberries. Ultimately, the price will rise so high that now unsaleable real estate in the outer suburbs will be converted back into farmland. That new farmland will then help stock the grocery shelves in my supermarket, just like it did thirty or forty years ago. [362]

This was a common theme during the oil shocks of the 1970s, and has been revived in the past few years. In the late ’70s Warren Johnson, in Muddling Toward Frugality, predicted that rising energy prices would lead to a radical shortening of industrial supply chains, and the relocalization of manufacturing and agriculture. [363] Although he jumped the gun by thirty years, his analysis is essentially sound in the context of today’s Peak Oil concerns. The most pessimistic (not to say catastrophic) Peak Oil scenario is that of James Kunstler, outlined not only in The Long Emergency but fictionally in World Made by Hand. [364] Kunstler’s depiction of a world of candles and horse-drawn wagons, in my opinion, greatly underestimates the resilience of market economies in adjusting to energy shocks. Brian Kaller’s “return to Mayberry” scenario is much less alarmist.

In fact, peak oil will probably not be a crash, a moment when everything falls apart, but a series of small breakdowns, price hikes, and local crises.... Take one of the more pessimistic projections of the future, from the Association for the Study of Peak Oil, and assume that by 2030 the world will have only two-thirds as much energy per person. Little breakdowns can feed on each other, so crudely double that estimate. Say that, for some reason, solar power, wind turbines, nuclear plants, tidal power, hydroelectric dams, bio-fuels, and new technologies never take off. Say that Americans make only a third as much money, or their money is worth only a third as much, and there is only a third as much driving. Assume that extended families have to move in together to conserve resources and that we must cut our flying by 98 percent. Many would consider that a fairly clear picture of collapse. But we have been there before, and recently. Those are the statistics of the 1950s — not remembered as a big time for cannibalism. [365]

Like Kaller, Jeff Rubin presents the world after Peak Oil as largely “a return to the past ... in terms of the re-emergence of local economies.” [366]

But despite the differences in relative optimism or pessimism among these various Peak Oil thinkers, their analyses all have a common thread running through them: the radical shortening of industrial supply and distribution chains, and an end to globalization based on the export of industry to low-wage sweatshop havens like China.

In a May 2008 article, written two months before oil prices peaked, Rubin argued that rising transportation costs had more than offset the Chinese wage differential. The cost of shipping a standard 40-ft container, he wrote, had tripled since 2000, and could be expected to double again as oil prices approached $200/barrel. [367] What’s more, “the explosion in global transport costs has effectively offset all the trade liberalization efforts of the last three decades.” A rise in oil prices from $20 to $150/barrel has the same effect on international trade as an increase in tariffs from 3% to 11%—i.e., to their average level in the 1970s. [368] (A toy arithmetic version of this tariff equivalence is sketched below, following the quotation.) According to Richard Milne,

Manufacturers are abandoning global supply chains for regional ones in a big shift brought about by the financial crisis and climate change concerns, according to executives and analysts. Companies are increasingly looking closer to home for their components, meaning that for their US or European operations they are more likely to use Mexico and eastern Europe than China, as previously. [369]
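Rubin’s tariff-equivalence figures are consistent with a simple decomposition of freight costs into a fixed component and a fuel-linked one. The sketch below is my own toy reconstruction fitted to the two data points quoted above, not Rubin’s actual methodology:

```python
# Toy reconstruction of the tariff-equivalence figures quoted above: treat the
# freight cost share of a good's landed value as a fixed component plus a
# fuel-linked component, fitted to Rubin's two data points
# (3% at $20/bbl, 11% at $150/bbl). An assumed linear model, not his method.
fuel_coeff = (0.11 - 0.03) / (150 - 20)   # added share per dollar of oil
fixed_share = 0.03 - 20 * fuel_coeff      # non-fuel freight costs, ~1.8%

def tariff_equivalent(oil_price: float) -> float:
    """Freight as a share of goods value; acts like an ad valorem tariff."""
    return fixed_share + fuel_coeff * oil_price

for p in (20, 150, 200):
    print(f"oil at ${p}/bbl -> freight acts like a {tariff_equivalent(p):.0%} tariff")
```

On these assumptions, oil at $200/barrel would act like a tariff of roughly 14%, which is one way of making sense of Rubin’s claim that sustained triple-digit oil prices undo decades of trade liberalization.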

Domestically, sustained oil prices at or above mid-2008 levels will cause a radical contraction in the trucking and airline industries. Estimates were widespread in the summer of 2008 that airlines would shut down 20% of their routes in the near term if oil prices of $140/barrel or more persisted, and long-haul truckers were under comparable pressure. Joseph Romm, an energy analyst, argues that the airline industry is “barely viable” at $150/barrel. Sustained oil prices of $200/barrel will cause air travel to become a luxury good (as in the days when those who could afford it were referred to as the “jet set”). [370]

C. Fiscal Crisis of the State

The origins of corporate capitalism and the mass-production economy are associated with massive government subsidies; since then the tendency of corporate capital to socialize its operating costs has never abated. As a matter of basic economics, whenever you subsidize something and make it available to the user for less than its real cost, demand for it will increase. American capitalism, as a result, has followed a pattern of expansion skewed toward extensive additions of subsidized inputs, rather than more intensive use of existing ones. As James O’Connor describes the process,

Transportation costs and hence the fiscal burden on the state are not only high but also continuously rising. It has become a standard complaint that the expansion of road transport facilities intensifies traffic congestion. The basic reason is that motor vehicle use is subsidized and thus the growth of the freeway and highway systems leads to an increase in the demand for their use. [371]

There is another reason to expect transportation needs (and budgets) to expand. The development of rapid transport and the modernization of the railroads, together with the extension of the railroad systems, will push the suburbs out even further from urban centers, putting still more distance between places of work, residence, and recreation. Far from contributing to an environment that will free suburbanites from congestion and pollution, rapid transit will, no doubt, extend the traffic jams and air pollution to the present perimeters of the suburbs, thus requiring still more freeway construction, which will boost automobile sales. [372]

And the tendency of monopoly capitalism to generate surplus capital and output also increases the amount of money that the state must spend to absorb the surplus.

Monopoly capitalism, according to O’Connor, is therefore plagued by a “fiscal crisis of the state.” “...[T]he socialization of the costs of social investment and social consumption capital increases over time and increasingly is needed for profitable accumulation by monopoly capital.” [373]

...[A]lthough the state has socialized more and more capital costs, the social surplus (including profits) continues to be appropriated privately.... The socialization of costs and the private appropriation of profits creates a fiscal crisis, or “structural gap,” between state expenditures and state revenues. The result is a tendency for state expenditures to increase more rapidly than the means of financing them. [374]

In short, the state is bankrupting itself providing subsidized inputs to big business, while big business’s demand for those subsidized inputs increases faster than the state can provide them. As Ivan Illich put it,

queues will sooner or later stop the operation of any system that produces needs faster than the corresponding commodity.... [375] ...[I]nstitutions create needs faster than they can create satisfaction, and in the process of trying to meet the needs they generate, they consume the Earth. [376]

The distortion of the price system, which in a free market would tie quantity demanded to quantity supplied, leads to ever-increasing demands on state services. Normally price functions as a form of feedback, a homeostatic mechanism much like a thermostat. Putting a candle under a thermostat will result in an ice-cold house. When certain hormonal feedback loops are distorted in an organism, you get gigantism; the victim dies crushed by his own weight. Likewise, when the consumption of some factor is subsidized by the state, the consumer is protected from the real cost of providing it, and unable to make a rational decision about how much to use. So the state capitalist sector tends to add factor inputs extensively, rather than intensively; that is, it uses the factors in larger amounts, rather than using existing amounts more efficiently. The state capitalist system generates demands for new inputs from the state geometrically, while the state’s ability to provide new inputs increases only arithmetically. The result is a process of snowballing irrationality, in which the state’s interventions further destabilize the system, requiring yet further state intervention, until the system’s requirements for stabilizing inputs finally exceed the state’s resources. At that point, the state capitalist system reaches a breaking point.
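The geometric-versus-arithmetic dynamic can be made concrete with a toy simulation. The growth rates below are arbitrary assumptions, chosen only to illustrate that any compounding demand must eventually overtake any linear increase in supply:

```python
# Minimal sketch: compounding (geometric) demand for subsidized inputs versus
# a fixed annual (arithmetic) increment in the state's capacity to supply them.
# All numbers are arbitrary illustrative assumptions.
demand, capacity = 100.0, 150.0
demand_growth = 1.07     # demand compounds at 7% per year (assumed)
capacity_step = 5.0      # capacity grows by a fixed 5 units per year (assumed)

year = 0
while demand <= capacity:
    year += 1
    demand *= demand_growth
    capacity += capacity_step

print(f"Year {year}: demand {demand:.0f} exceeds capacity {capacity:.0f}")
```

Whatever the particular numbers, the crossing point is only a matter of time; that is the breaking point described above.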

Eventually, therefore, state capitalism hits a wall at which the state is no longer able to increase the supply of subsidized inputs. States approach the condition described by John Robb’s term “hollow state”:

The hollow state has the trappings of a modern nation-state (“leaders”, membership in international organizations, regulations, laws, and a bureaucracy) but it lacks any of the legitimacy, services, and control of its historical counter-part. It is merely a shell that has some influence over the spoils of the economy. [377]

...A hollow state is different from a failed state in that it continues to exist on the international stage. It has all the standard edifices of governance although most are heavily corrupted and in thrall to global corporate/monied elites. It continues to deliver political goods (albeit to a vastly diminished group, usually around the capital) and maintains a military. Further, in sections of the country, there is an appearance of normal life. However, despite this facade, the hollow state has abdicated (either explicitly as in Lebanon’s case or de facto as in Mexico’s) vast sections of its territory to networked tribes (global guerrillas). Often, these groups maintain a semblance of order, as in rules of Sao Paulo’s militias or the Taliban’s application of sharia. Despite the fact that these group [sic] control/manipulate explicit economic activity and dominate the use/application of violence at the local level, these groups often grow the local economy. How? By directly connecting it to global supply chains of illegal goods — from people smuggling to drugs to arms to copytheft to money laundering. The longer this state of affairs persists, the more difficult it is to eradicate. The slate of alternative political goods delivered by these non-state groups, in contrast to the ineffectiveness of the central government, sets the stage for a shift in legitimacy. Loyalties shift. Either explicitly through membership in tribal networks, or acknowledgement of the primacy of these networks over daily life. [378]

The entente between American and Iraqi government military forces, on the one hand, and the Sunni militias in Al Anbar province, on the other, is a recent example of a hollowed state coming to terms with “Fourth Generation Warfare” networks as de facto local governments. An early example was the Roman imperial state of the fifth century, delegating de facto territorial control to German tribal entities in return for de jure fealty to Rome.

And of course, in Robb’s preferred scenario—as we will see in Chapter Six—loyalties shift from the state to resilient communities.

Even if the state does not become completely hollowed out by Robb’s criteria, it is nevertheless forced to retreat from an ever-increasing share of its former functions owing to its shrinking resources: a collapse in the value of the official currency, combined with a catastrophic decline in tax revenues. The state delegates more and more functions to private entities nominally operating pursuant to state policy but primarily in the interest of self-aggrandizement, becomes prey to kleptocrats, leaves unenforced more and more laws that are technically on the books, and abandons ever-increasing portions of its territory to the black market and organized criminal gangs.

In many ways, this is a positive development. Local sheriffs may decide that evicting mortgage defaulters and squatters, enforcing regulatory codes against household microenterprises, and busting drug users fall very low on their list of priorities, compared to dealing with murder and robbery. Governments may find themselves without the means of financing corporate welfare.

Something like this happened in Poland in the 1980s, with Gen. Jaruzelski—in a classic example of joining ‘em when you can’t beat ‘em—finally deciding to legalize banned groups and hold open elections because Poland had become “ungovernable.” Solidarity activist Wiktor Kulerski, in what should be an extremely suggestive passage for those of us who dream of an unenforceable regime of patent and copyright, zoning and licensing laws, wrote of his vision for a hollow state in Poland:

This movement should create a situation in which authorities will control empty stores, but not the market; the employment of workers, but not their livelihood; the official media, but not the circulation of information; printing plants, but not the publishing movement; the mail and telephones, but not communications; and the school system, but not education. [379]

But to the extent that the current economic structure is heavily dependent on government activity, and adjustment to the withdrawal of subsidized infrastructure and services may take time, an abrupt retreat of state activity may result in a catastrophic period of adjustment.

The fiscal crisis dovetails with Peak Oil and other resource crises, in a mutually reinforcing manner. The imperative of securing strategic access to foreign oil reserves, and keeping the sea lanes open, results in costly wars. The increased cost of asphalt intensifies the already existing tendency of demand for subsidized transportation infrastructure to outstrip the state’s ability to supply it. As the gap expands, the period between deterioration of roads and the appropriation of money to repair them lengthens. The number of miles of high-volume highway the state is able to keep in a reasonable state of repair falls from one year to the next, and the state is continually forced to retreat and regroup and relegate an ever-larger share of highways to second-tier status. As James Kunstler points out, a highway is either kept in repair, or it quickly deteriorates.

Another consequence of the debt problem is that we won’t be able to maintain the network of gold-plated highways and lesser roads that was as necessary as the cars themselves to make the motoring system work. The trouble is you have to keep gold-plating it, year after year. Traffic engineers refer to this as “level-of-service.” They’ve learned that if the level-of-service is less than immaculate, the highways quickly enter a spiral of disintegration. In fact, the American Society of Civil Engineers reported several years ago that the condition of many highway bridges and tunnels was at the “D-minus” level, so we had already fallen far behind on a highway system that had simply grown too large to fix even when we thought we were wealthy enough to keep up. [380] It doesn’t take many years of neglect before deterioration and axle-breaking potholes render a highway unusable to heavy trucks, so that a growing share of the highway network will for all intents and purposes be abandoned. [381]

So each input crisis feeds the other, and we have a perfect storm of terminal crises. As described by Illich,

The total collapse of the industrial monopoly on production will be the result of synergy in the failure of multiple systems that fed its expansion. This expansion is maintained by the illusion that careful systems engineering can stabilize and harmonize present growth, while in fact it pushes all institutions simultaneously toward their second watershed. [382]

D. Decay of the Cultural Pseudomorph

What Mumford called the “cultural pseudomorph,” as we saw in Chapter One, was actually only the first stage. It has since decayed into a second, much weaker stage, unforeseen by Mumford, and shows signs of its final downfall. In the first stage, as Mumford observed, neotechnic methods (i.e., electrically powered machinery) were integrated into a mass-production framework fundamentally opposed to the technology’s real potential. But this stage reached its limit by the 1970s.

In the second stage, mass production on the Sloan model is being replaced by flexible, networked production with general-purpose machinery, with the production process organized along lines much closer to the original neotechnic ideal.

Piore and Sabel describe the “lean” revolution of recent decades as the discovery, after a long interlude of mass production, of the proper way of organizing an industrial economy. “[T]he mass-production paradigm had unforeseen consequences: it took almost a century (from about 1870 to 1960) to discover how to organize an economy to reap the benefits of the new technology.” [383]

According to those authors, the shift to lean production in America from the 1980s on was in large part a response to the environment of increasing macroeconomic uncertainty that prevailed after the resumption of the crisis of overaccumulation and the oil shocks of the ’70s. Mass-production industry is extremely brittle—i.e., it “does not adjust easily to major changes in its environment.” The question is not just how industry will react to resource depletion, but how it will react to wildly fluctuating prices and erratic supplies. [384] Economic volatility and uncertainty mean mass-production industry will be hesitant to invest in specialized production machinery that may be unpredictably rendered superfluous by “changes in raw materials prices, interest rates, and so on.” [385] As we saw in Chapter Two, long-term capital investment in costly technologies requires predictability; and the environment associated with Peak Oil and other input and cyclical crises is just about the opposite of what conduces to the stability of mass-production industry.

Conversely, though, the system prevailing in industrial districts like Emilia-Romagna is called “flexible manufacturing” for a reason. It is able to reallocate dedicated capital goods and shift contractual relationships, and do so quite rapidly, in response to sudden changes in the environment.

Although craft production has always tended to expand relative to mass-production industry during economic downturns, it was only in the prolonged stagnation of the 1970s and ‘80s that it began permanently to break out of its peripheral status.

From the second industrial revolution at the end of the nineteenth century to the present, economic downturns have periodically enlarged the craft periphery with respect to the mass-production core—but without altering their relationship. Slowdowns in growth cast doubt on subsequent expansion; in an uncertain environment, firms either defer mass-production investments or else switch to craft-production techniques, which allow rapid entry into whatever markets open up. The most straightforward example is the drift toward an industrial-subsistence, or -repair, economy: as markets stagnate, the interval between replacements of sold goods lengthens. This lengthened interval increases the demand for spare parts and maintenance services, which are supplied only by flexibly organized firms, using general-purpose equipment. The 1930s craftsman with a tool kit going door to door in search of odd jobs symbolizes the decreased division of labor that accompanies economic retrocession: the return to craft methods. But what is distinctive about the current crisis is that the shift toward greater flexibility is provoking technological sophistication—rather than regression to simple techniques. As firms have faced the need to redesign products and methods to address rising costs and growing competition, they have found new ways to cut the costs of customized production.... In short, craft has challenged mass production as the paradigm. [386]

In the case of small Japanese metalworking firms, American minimills and the Pratese textile industry, the same pattern prevailed. Small subcontractors of larger manufacturing firms “felt the increasing volatility of their clients’ markets; in response, they adopted techniques that reduced the time and money involved in shifting from product to product, and that also increased the sophistication and quality of the output.” [387]

In the Third Italy in particular, large mass-production firms outsourced an increasing share of components to networks of small, flexible manufacturers. The small firms, initially, were heavily dependent on the large ones as outlets. But new techniques and machine designs made production increasingly efficient in the small firms.

In some cases... the larger equipment is miniaturized. In other cases, however, artisan-like techniques of smelting, enameling, weaving, cutting, or casting metal are designed into new machines, some of which are controlled by sophisticated microprocessors.

At the same time, small firms which previously limited themselves to supplying components to a large manufacturer’s blueprints instead began marketing products of their own. [388]

While small manufacturers in the late 1960s were still dependent on a few or even one large client, there was a wholesale shift in the 1970s.

To understand how this dependence was broken in the course of the 1970s, and a new system of production created, imagine a small factory producing transmissions for a large manufacturer of tractors. Ambition, the joy of invention, or fear that he and his clients will be devastated by an economic downturn lead the artisan who owns the shop to modify the design of the tractor transmission to suit the need of a small manufacturer of high-quality seeders.... But once the new transmission is designed, he discovers that to make it he needs precision parts not easily available on the market. If he cannot modify his own machines to make these parts, he turns to a friend with a special lathe, who like himself fears being too closely tied to a few large manufacturers of a single product. Soon more and more artisans with different machines and skills are collaborating to make more and more diverse products. [389]

So a shift has taken place: work formerly done by vertically integrated firms is being outsourced to flexible manufacturing networks, and an ever-smaller share of essential functions remains that can be performed only by the core mass-production firm. As Eric Hunting observed:

In the year 2000, our civilization reached an important but largely unnoticed milestone. For the first time the volume of consumer goods produced in ‘job shop’ facilities—mostly in Asia—exceeded the volume produce[d] in traditional Industrial Age factories. This marks a long emerging trend of demassification of production capability driven by the trends in machine-tool evolution (smaller, smarter, cheaper) that is producing a corresponding demassification of capital and a homogenization of labor values around the globe. Globalization has generally sought profit through geographical spot-market value differences in resources and labor. But now those differences are disappearing faster the more they’re exploited and capital has to travel ever faster and farther in search of shrinking margins. [390]

The organization of physical production, in both the Toyota Production System and in the Emilia-Romagna model of local manufacturing networks, is beginning—after a long mass-production interlude—to resemble the original neotechnic promise of integrating power machinery into craft production.

But the neotechnic, even though it has finally begun to emerge as the basis of a new, coherent production model governed by its own laws, is still distorted by the pseudomorph in a weaker form: the new form of production still takes place within a persistent corporate framework of marketing, finance and “intellectual property.”

Andy Robinson, a member of the P2P Research email list, argued that “given recent studies showing equal productivity in factories in North and South,”

the central mechanism of core-periphery exploitation has moved from technological inequality (high vs low value added) to rent extraction on IP. Since the loss of IP would make large companies irrelevant, they fight tooth and nail to preserve it, even beyond strict competitiveness, and behave in otherwise quite “irrational” ways to prevent their own irrelevance (e.g. the MPAA and RIAA’s alienating of customers). [391]

And despite the admitted control of distributed manufacturing within a corporate framework, based on corporate ownership of “intellectual property,” Robinson suggests that the growing difficulty of enforcing IP will cause that framework to erode in the near future:

...[I]t may be more productive to look at the continuing applicability or enforceability of IP, rather than whether businesses will continue to use it. While this is very visible in the virtual and informational sphere (“pirating” and free duplication of games, software, console systems, music, film, TV, news, books, etc), it is also increasingly the case in terms of technological hardware. Growing Southern economies—China being especially notorious—tend to have either limited IP regimes or lax enforcement, meaning that everything that a MNC produces there, will also be copied or counterfeited at the same quality for the local market, and in some cases traded internationally. I have my suspicions that Southern regimes are very aware of the centrality of IP to core-periphery exploitation and their laxity is quite deliberate. But, in part it also reflects the limits of the Southern state in terms of capacity to dominate society, and the growing sophistication of transnational networks (e.g. organised crime networks), which can evade, penetrate and fight the state very effectively. [392]

Elsewhere, Robinson brilliantly drew the parallels between the decay of the pseudomorph in the industrial and political realms:

I think part of the crisis of the 70s has to do with networks and hierarchies. The “old” system was highly hierarchical, but was suffering problems from certain kinds of structural weaknesses in relation to networks—the American defeat in Vietnam being especially important.... And ever since the 70s the system has been trying to find hybrids of network and hierarchy which will harness and capture the power of networks without leading to “chaos” or system-breakdown. We see this across a range of fields: just-in-time production, outsourcing and downsizing, use of local subsidiaries, contracting-out, Revolution in Military Affairs, full spectrum dominance, indirect rule through multinational agencies, the Nixon Doctrine, joined-up governance, the growing importance of groups such as the G8 and G20, business networks, lifelong learning, global cities, and of course the development of new technologies such as the Internet.... In the medium term, the loss of power to networks is probably irreversible, and capital and the state will either go down fighting or create more-or-less stable intermediary forms which allow them to persist for a time. We are already seeing the beginnings of the latter, but the former is more predominant. The way I see the crisis deepening is that large areas will drift outside state and capitalist control, integrated marginally or not at all (this is already happening at sites such as Afghanistan, NWFP, the Andes, Somalia, etc., and in a local way in shanty-towns and autonomous centres). I also expect the deterritorialised areas to spread, as a result of the concentration of resources in global cities, the ecological effects of extraction, the neoliberal closing of mediations which formerly integrated, and the growing stratum of people excluded either because of the small number of jobs available or the growing set of requirements for conformity. Eventually these marginal spaces will become sites of a proliferation of new forms of living, and a pole of attraction compared to the homogeneous, commandist, coercive core. [393]

So long as the state successfully manages to prop up the centralized corporate economic order, libertarian and decentralist technologies and organizational forms will be incorporated into the old centralized, hierarchical framework. As the system approaches its limits of sustainability, those elements become increasingly destabilizing forces within the present system, and prefigure the successor system. When the system finally reaches those limits, those elements will (to paraphrase Marx) break out of their state capitalist integument and become the building blocks of a fundamentally different society. We are, in short, building the foundations of the new society within the shell of the old.

And the second stage of the pseudomorph is weakening. For example, although the Nike model of “outsourcing everything” and retaining corporate control of an archipelago of small manufacturing shops still prevails to a considerable extent among U.S.-based firms, small subcontractors elsewhere have increasingly rebelled against the hegemony of their large corporate clients. In Italy and Japan, the subcontractors have federated among themselves to create flexible manufacturing networks and reduce their dependence on any one outlet for their products. [394] The result is that the corporate headquarters, increasingly, is becoming a redundant node in a network—a redundant node that can be bypassed.

Indeed, the Nike model is itself extremely vulnerable to such bypassing. As David Pollard observes:

In their famous treatise explaining the Internet phenomenon, Doc Searls, Dave Weinberger et al. said that what made the Internet so powerful and so resilient was that it had no control ‘centre’ and no hierarchy: All the value was added, by millions of people, at the ‘ends’. And if someone tried to disrupt it, these millions of users would simply work around the disruption. There is growing evidence that the same phenomenon is happening in businesses, which have long suffered from diseconomies of scale and bureaucracy that stifle innovation and responsiveness. Think of this as a kind of ‘outsourcing of everything’.... Already companies like Levi Strauss make nothing at all—they simply add their label to stuff made by other companies, and distribute it (largely through independent companies they don’t own either). [395]

If the people actually producing and distributing the stuff ever decide they have the right to market an identical product, Levi Strauss’s ownership of the label notwithstanding, Levi’s is screwed.

As a general phenomenon, the shift from physical to human capital as the primary source of productive capacity in so many industries, along with the imploding price and widespread dispersion of ownership of capital equipment in so many industries, means that corporate employers are increasingly hollowed out and only maintain control over the physical production process through legal fictions. When so much of actual physical production is outsourced to the small sweatshop or the home shop, the corporation becomes a redundant “node” that can be bypassed; the worker can simply switch to independent production, cut out the middleman, and deal directly with suppliers and outlets.

A good example of the weakness of the second stage of the pseudomorph is the relationship of the big automakers with parts suppliers today, compared to when Galbraith wrote forty years ago. As portrayed in The New Industrial State, the relationship between large manufacturers and their suppliers was one of unilateral market control. Today, Toyota’s American factories share about two-thirds of their auto parts suppliers with the Detroit Three. [396] According to Don Tapscott and Anthony Williams, more than half of a vehicle’s value already consists of electrical systems, electronics and software rather than the products of mechanical engineering, and by 2015 suppliers will conduct most R&D and production. [397]

Taking into account only the technical capabilities of the suppliers, it’s quite feasible for parts suppliers to produce generic replacement parts in competition with the auto giants, to produce competing modular components designed for a GM or Toyota platform, or even to network to produce entirely new car designs piggybacked on a GM or Toyota chassis and engine block. The only thing stopping them is trademark and patent law.

And, in fact, supplier networks are beginning to carry out design functions among themselves, albeit on contract to large corporate patrons. For example, Boeing’s designers used to do all the work of developing detailed specs for each separate part, with suppliers just filling the order to the letter; Boeing then assembled the parts in its own plant. But now, according to Don Tapscott and Anthony Williams, “suppliers codesign airplanes from scratch and deliver complete subassemblies to Boeing’s factories....” Rather than retaining control of all R&D in-house, Boeing is now “handing significant responsibility for innovation over to suppliers....” [398]

An early indication that things may be reaching a tipping point is the rise of China’s quasi-underground “shanzhai” enterprises, which, despite being commonly dismissed as mere producers of knockoffs, are in fact extremely innovative not only in technical design but in supply chain efficiency and speed of reaction to change. The shanzhai economy resembles the flexible manufacturing networks of the Third Italy. Significantly, supplier networks for transnational corporations have begun to operate underground to supply components for shanzhai enterprises.

Tapping into the supply chains of big brands is easy, producers say. “It’s really common for factories to do a night shift for other companies,” says Zhang Haizhen, who recently ran a shanzhai company here. “No one will refuse an order if it is over 5,000 mobile phones.” [399]

The Chinese motorcycle industry is a good illustration of these trends. Many of its major designs are reverse-engineered from Japanese products, and the industry’s R&D model is based on networked collaborative design efforts between many small, independent actors. And the reverse-engineered bikes are not simple copies of the original Japanese designs in all their major details; they build on the original designs, and in many ways improve on them. “Rather than copy Japanese models precisely, suppliers take advantage of the loosely defined specifications to amend and improve the performance of their components, often in collaboration with other suppliers.” [400]

And recently, according to Bunnie Huang, there have been indications that native Chinese auto firms have been producing an unauthorized version of the Corolla. Huang spotted what appeared to be a Toyota Corolla bearing the logo of the Chinese BYD auto company.

So when I saw this, I wasn’t sure if it was a stock Corolla to which a local enthusiast attached a BYD badge, or if it was a BYD copycat of our familiar brand-name Toyota car. Or, by some bizarre twist, perhaps Toyota is now using BYD to OEM their cars in China through a legitimized business relationship. I don’t know which is true, but according to the rumors I heard from people who saw this photo, this is actually a copycat Toyota made using plans purchased on the black market that were stolen from Toyota. Allegedly, someone in China who studies the automobile industry has taken one of these apart and noted that the welds are done by hand. In the original design, the welds were intended to be done by machine. Since the hand-welds are less consistent and of lower quality than the robotic welds, the car no longer has adequate crash safety. There are also other deviations, such as the use of cheap plastic lenses for the headlights. But, I could see that making a copycat Corolla is probably an effective exercise for giving local engineers a crash-course in world-class car manufacture. [401]

Generally speaking, the corporate headquarters’ control over the supplier is growing increasingly tenuous. A decade ago, Naomi Klein pointed out that the “competing labels... are often produced side by side in the same factories, glued by the very same workers, stitched and soldered on the very same machines.” [402]

E. Failure to Counteract Limits to Capture of Value by Enclosure of the Digital Commons

As Michel Bauwens describes it, it is becoming increasingly impossible to capture value from the ownership of ideas, designs, and technique—all the “ephemera” and “intellect” that Tom Peters writes about as a component of the price of manufactured goods—leading to a crisis of sustainability for capitalism. “Cognitive capitalism” is capital’s attempt to adjust to the shift from physical to human capital, and to capture value from the immaterial realm. Bauwens cites McKenzie Wark’s theory that a new “vectoralist” class “has arisen which controls the vectors of information, i.e. the means through which information and creative products have to pass, for them to realize their exchange value.” This describes “the processes of the last 40 years, say, the post-1968 period, which saw a furious competition through knowledge-based competition and for the acquisition of knowledge assets, which led to the extraordinary weakening of the scientific and technical commons.” [403]

Cognitive capitalism arose as a solution to the unsustainability of the older pattern of capitalist growth, based on extensive addition of physical inputs and expansion into new geographical areas. Bauwens uses the analogy of the ancient slave economy, which became untenable when avenues of extensive development (i.e. expansion into new territory, and acquisition of new slaves) were closed off. When the slave system reached its limits of external expansion, it turned to intensive development via the feudal manor system, transforming the slave into a peasant who had an incentive to work the land more efficiently.

The alternative to extensive development is intensive development, as happened in the transition from slavery to feudalism. But notice that to do this, the system had to change, the core logic was no longer the same. The dream of our current economy is therefore one of intensive development, to grow in the immaterial field, and this is basically what the experience economy means. The hope that it expresses is that business can simply continue to grow in the immaterial field of experience. [404]

And the state, as enforcer of the total surveillance society and copyright lockdown, is central to this business model. Johann Soderberg relates the crisis of realization under state capitalism to capital’s growing dependence on the state to capture value from social production and redistribute it to private corporate owners. This takes the form both of “intellectual property” law, as well as direct subsidies from the taxpayer to the corporate economy. He compares, specifically, the way photocopiers were monitored in the old USSR to protect the power of elites in that country, to the way the means of digital reproduction are monitored in this country to protect corporate power. [405] The situation is especially ironic, Cory Doctorow notes, when you consider the pressure the U.S. has put on the post-Soviet regime to enforce the global digital copyright regime: “post-Soviet Russia forgoes its hard-won freedom of the press to protect Disney and Universal!” [406] That’s doubly ironic, considering the use of the term “Samizdat pirate” under the Soviet regime.

James O’Connor’s theme, of the ever-expanding portion of the operating expenses of capital which come from the state, is also relevant here, considering the extent to which the technical prerequisites of the digital revolution were developed with state financing.

The ability to capture value from efficiency increases, through artificial scarcity and artificial property rights, is central to the New Growth Theory of Paul Romer. Consider his remarks in an interview with Reason’s Ron Bailey:

reason: Yet there is a mechanism in the market called patents and copyright, for quasi-property rights in ideas.

Romer: That’s central to the theory. To the extent that you’re using the market system to refine and bring ideas into practical application, we have to create some kind of control over the idea. That could be through patents. It could be through copyright. It might even be through secrecy. A firm can keep secret a lot of what it knows how to do.... So for relying on the market—and we do have to rely on the market to develop a lot of ideas—you have to have some mechanisms of control and some opportunities for people to make a profit developing those ideas.

* * *

Romer: There was an old, simplistic notion that monopoly was always bad. It was based on the realm of objects—if you only have objects and you see somebody whose cost is significantly lower than their price, it would be a good idea to break up the monopoly and get competition to reign freely. So in the realm of things, of physical objects, there is a theoretical justification for why you should never tolerate monopoly. But in the realm of ideas, you have to have some degree of monopoly power. There are some very important benefits from monopoly, and there are some potential costs as well. What you have to do is weigh the costs against the benefits. Unfortunately, that kind of balancing test is sensitive to the specifics, so we don’t have general rules. Compare the costs and benefits of copyrighting books versus the costs and benefits of patenting the human genome. They’re just very different, so we have to create institutions that can respond differentially in those cases.

Although Romer contrasts the realm of “science” with the realm of “the market,” and argues that there should be some happy medium between their respective open and proprietary cultures, it’s interesting that he identifies “intellectual property” as an institution of “the market.”

And Romer makes it clear that what he means by “growth” is economic growth, in the sense of monetized exchange value:

Romer: ....Now, what do I mean when I say growth can continue? I don’t mean growth in the number of people. I don’t even mean growth in the number of physical objects, because you clearly can’t get exponential growth in the amount of mass that each person controls. We’ve got the same mass here on Earth that we had 100,000 years ago and we’re never going to get any more of it. What I mean is growth in value, and the way you create value is by taking that fixed quantity of mass and rearranging it from a form that isn’t worth very much into a form that’s worth much more. [407]

Romer’s thought is another version of Daniel Bell’s post-industrialism thesis. As summarized by Manuel Castells, that thesis held that:

(1) The source of productivity and growth lies in the generation of knowledge, extended to all realms of economic activity through information processing. (2) Economic activity would shift from goods production to services delivery.... (3) The new economy would increase the importance of occupations with a high informational and knowledge content in their activity. Managerial, professional, and technical occupations would grow faster than any other occupational position and would constitute the core of the new social structure. [408]

The problem is that post-industrialism is self-liquidating: technological progress destroys the conditions necessary for capturing value from technological progress.

By their nature, technological innovation and increased efficiency destroy growth. Anything that lowers the cost of the inputs needed to produce a given output will, in a free market with competition unfettered by entry barriers, reduce exchange value (i.e., price). And since GDP is an accounting mechanism that measures the total value of inputs consumed, increased efficiency will reduce the size of "the economy."
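To make the arithmetic concrete, here is a minimal sketch, with invented numbers and a deliberately stylized model of competition, of how a cost-reducing innovation shrinks measured exchange value even as physical output and use-value stay constant:

```python
# Stylized model (invented numbers): a process innovation halves unit
# costs in a competitive market with fixed physical demand.

units_demanded = 1000

unit_cost_before = 10.0   # labor + material inputs per unit
unit_cost_after = 5.0     # after the innovation

# With competition unfettered by entry barriers, price is bid down
# toward unit cost, so the market's exchange value is cost * quantity.
market_value_before = unit_cost_before * units_demanded   # 10,000.0
market_value_after = unit_cost_after * units_demanded     #  5,000.0

# Same physical output, same use-value to consumers, but half the
# measured exchange value, and so a smaller "economy."
print(market_value_before, market_value_after)
```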

Romer's model is essentially Schumpeterian. Recouping outlays for innovation requires prices that reflect average cost rather than marginal cost. Hence Romer's Schumpeterian schema precludes price-taking behavior in a competitive market; rather, it presupposes some form of market power ("monopolistic competition") by which firms can set prices to cover costs. A firm that invested significant sums in innovation, but sold only at marginal cost, could not survive as a price-taker. It is necessary, therefore, that the benefits of innovation—even though non-rival by their nature—be at least partially excludable through "intellectual property" law. [409]
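A toy calculation, with numbers of my own invention rather than anything from Romer, makes the average-cost versus marginal-cost point concrete:

```python
# Toy numbers (mine, not Romer's) showing why an innovator cannot
# survive as a price-taker selling at marginal cost.

rd_outlay = 1_000_000        # fixed, up-front cost of the innovation
marginal_cost = 2.00         # cost of producing each additional unit
units_sold = 400_000

price_taking = marginal_cost                              # competitive price
average_cost = marginal_cost + rd_outlay / units_sold     # 4.50 per unit

profit_at_marginal = (price_taking - average_cost) * units_sold
print(profit_at_marginal)   # -1,000,000: the R&D outlay is never recovered

# Selling at marginal cost leaves the fixed outlay uncovered; some
# pricing power (or excludability) is needed to price at average cost.
```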

Some right-wing libertarians mock big government liberals for a focus on “jobs” as an end in themselves, rather than as a means to an end. But Romer’s focus on “growth” and “increased income,” rather than on the amount of labor required to obtain a consumption good, is an example of the very same fallacy (and Bailey cheers him on, of course).

Jeff Jarvis sparked a long chain of discussions by arguing that innovation, by increasing efficiency, results in “shrinkage” rather than growth. The money left in customers’ pockets, to the extent that it is reinvested in more productive venues, may affect the small business sector and not even show up in econometric statistics. [410]

Anton Steinpilz, riffing off Jarvis, suggested that the reduced capital expenditures might not reappear as increased spending anywhere, but might (essentially a two-sided coin) be pocketed by the consumer in the form of increased leisure and/or forced on the worker in the form of technological unemployment. [411] And Eric Reasons, writing about the same time, argued that innovation was being passed on to consumers, resulting in "massive deflation" and "less money involved" overall. [412]

Reasons built on this idea, massive deflation resulting from increased efficiency, in a subsequent blog post. The problem, Reasons argued, was that while the deflation of prices in the old proprietary content industries benefited consumers by leaving dollars in their pockets, many of those consumers were employees of industries made obsolete by the new business models.

Effectively, the restrictions that held supply in check for IP are slowly falling away. As effective supply rises, price plummets. Don’t believe me? You probably spend less money now on music than you did 15 years ago, and your collection is larger and more varied than ever. You probably spend less time watching TV news, and less money on newspapers than you did 10 years ago, and are better informed. I won’t go so far as to say that the knowledge economy is going to be no economy at all, but it is a shrinking one in terms of money, both in terms of cost to the consumer, and in terms of the jobs produced in it. [413]

And the issue is clearly shrinkage, not just a shift of superfluous capital and purchasing power to new objects. Craigslist employs fewer people than the industries it destroyed, for example. The ideal, Reasons argued, is for unproductive activity to be eliminated, but for falling work hours to be offset by lower prices, so that workers experience the deflation as a reduction in the ratio of effort to consumption:

Given the amount of current consumption of intellectual property (copyrighted material like music, software, and newsprint; patented goods like just about everything else), couldn’t we take advantage of this deflation to help cushion the blow of falling wages? How much of our income is dedicated to intellectual property, and its derived products? If wages decrease at the same time as cost-of-living decreases, are we really that bad off? Deflation moves in both directions, as it were.... Every bit of economic policy coming out of Washington is based on trying to maintain a status quo that can not be maintained in a global marketplace. This can temporarily inflate some sectors of our economy, but ultimately will leave us with nothing but companies that make the wrong things, and people who perform the wrong jobs. You know what they say: “As GM goes, so goes the country.” [414]

Contrary to "Free" optimists like Chris Anderson and Kevin Kelly, Reasons suspects that reduced rents on proprietary content cannot be replaced by monetization in other areas. The shrinkage of proprietary content industries will not be replaced by growth elsewhere, or the reduced prices offset by a shift of demand elsewhere, on a one-to-one basis. [415]

Mike Masnick, of Techdirt, praised Reasons' analysis, but suggested—from a fairly conventional standpoint—that it was incomplete:

So this is a great way to think about the threat side of things. Unfortunately, I don't think Eric takes it all the way to the next side (the opportunity side), which we tried to highlight in that first link up top, here. Eric claims that this "deflation" makes the sector shrink, but I don't believe that's right. It makes companies who rely on business models of artificial scarcity to shrink, but it doesn't make the overall sector shrink if you define the market properly. Economic efficiency may make certain segments of the market shrink (or disappear), but it expands the overall market. Why? Because efficiency gives you more output for the same input (bigger market!). The tricky part is that it may move around where that output occurs. And, when you're dealing with what I've been calling "infinite goods" you can have a multiplicative impact on the market. That's because a large part of the "output" is now infinitely reproduceable at no cost. For those who stop thinking of these as "goods that are being copied against our will" and start realizing that they're "inputs into a wider market where we don't have to pay for any of the distribution or promotion!" there are much greater opportunities. It's just that they don't come from artificial scarcity any more. They come from abundance. [416]

Reasons responded, in a comment below Masnick's post (aptly titled "The glass is twice the size it needs to be..."), that "this efficiency will make the economic markets they affect 'shrink' in terms of economy and capital. It doesn't mean that the number or variation of the products available will shrink, just the capital involved." [417]

He stated this assessment in even sharper terms in a comment under Michel Bauwens’s blog post on the exchange:

While I certainly wouldn’t want to go toe-to-toe with Mike Masnick on the subject, I did try to clarify in comments that it isn’t that I don’t see the opportunity in the “knowledge economy”, but simply that value can be created where capital can’t be captured from it. The trick is to reap that value, and distinguish where capital can and where it cannot add value. Of course there’s money to be made in the knowledge economy—ask Google or Craigslist—but by introducing such profound efficiencies, they deflate the markets they touch at a rate far faster than the human capital can redeploy itself in other markets. Since so much capital is dependent upon consumerism generated by that idled human capital, deflation follows. [418]

Neoclassical economists would no doubt dismiss Reasons' argument, and other theories of technological unemployment, as variations on the "lump of labor fallacy." But their dismissal of it, under that trite label, itself makes an implicit assumption that's hardly self-evident: that demand is infinitely upwardly elastic.

That assumption is stated, in the most vulgar of terms, from an Austrian standpoint by a writer at LewRockwell.com:

You know, properly speaking, the “correct” level of unemployment is zero. Theoretically, the demand for goods and services is infinite. My own desire for goods and services has no limit, and neither does anyone else’s. So even if everyone worked 24/7, they could never satisfy all the potential demand. It’s just a matter of allowing people to work at wages that others are willing and able to pay. [419]

Aside from the fact that this implicitly contradicts Austrian arguments that increased labor productivity from capital investment is responsible for reduced working hours (see, e.g., George Reisman, quoted elsewhere in this chapter), this is almost cartoonish nonsense. If the demand for goods and services is unconstrained by the disutility of labor, then it follows that absent a minimum wage people would be working at least every possible waking hour—even if not "24/7." On the other hand, if there is a tradeoff between infinite demand and the disutility of labor, then demand is not infinitely upwardly elastic. Some productivity increases will be lost through "leakages" in the form of increased leisure, rather than consumption of increased output of goods. That means that the demand for labor, even if somewhat elastic, will not grow as quickly as labor productivity.

Tom Walker (aka Sandwichman), an economist who has devoted most of his career to unmasking the "lump of labor" caricature as a crude strawman, confesses a degree of puzzlement as to why orthodox economists are so strident on the issue. After all, what they denounce as the "lump of labor fallacy" is the same reasoning that, "[w]hen economists do it, ...is arcane and learned ceteris paribus hokus pokus." [420] Given existing levels of demand for consumer goods, any increase in labor productivity will result in a reduction in total work hours available.
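Walker's ceteris paribus point reduces to a single division. A minimal sketch, with invented figures, holding demand for goods constant while productivity rises:

```python
# Ceteris paribus sketch of the Sandwichman's point (numbers invented):
# hold demand for consumer goods constant and raise labor productivity.

annual_output_demanded = 1_000_000   # units of consumer goods per year
output_per_hour = 10.0               # labor productivity

hours_before = annual_output_demanded / output_per_hour   # 100,000.0

output_per_hour *= 1.25              # a 25% productivity increase

hours_after = annual_output_demanded / output_per_hour    #  80,000.0

# Unless demand rises by the full 25% to offset the gain, total work
# hours available must fall.
print(hours_before, hours_after)
```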

Of course the orthodox economist will argue that ceteris is never paribus. But it is by no means self-evident that the demand freed up by reduced wage expenditures in one sector will automatically translate, on a one-to-one basis, into increased demand (and hence employment) in another sector. And an assumption held so strongly that one feels confident inventing a new "fallacy" for those who argue otherwise strikes me as a belief that belongs more in the realm of theology than of economics.

P. M. Lawrence, in a discussion sparked by Casey’s argument, expressed similar views in a private email:

I always thought that “lump” reasoning was perfectly sound in any area in analysing instantaneous responses, as there’s a lag before it changes while supply and demand respond — which means, it’s important for matters of survival until those longer runs, and also you can use it in mathematically or verbally modelling how the lump does in fact change over time.... [421]

These shortcomings of Romer's New Growth Theory apply, more particularly, to the "progressive" and "green" strands of cognitive capitalism. Bill Gates and Richard Florida are typical of this tendency. Florida specifically refers to Romer's New Growth Theory, "which assigns a central role to creativity or idea generation." But he never directly addresses the question of just how such "idea generation" can be the source of economic growth, unless it is capitalized as the source of rents through artificial property rights. He quotes, without seeming to grasp its real significance, this remark of Romer's: "We are not used to thinking of ideas as economic goods, but they are surely the most significant ones that we produce." "Economic goods" are goods with exchange value; and ideas can only have exchange value when they are subject to monopoly. Florida continues to elaborate on Romer's theory, arguing that an idea can be used over and over again, "and in fact grows in value the more it is used. It offers not diminishing returns, but increasing returns." This displays a failure to grasp the distinction between use-value and exchange value. An idea can, indeed, result in exponential increases in our standard of living the more it is used, by reducing the labor and material inputs required to produce a unit of consumption. But in so doing, it reduces exchange value and causes marginal returns to fall to zero. Innovation causes economic value to implode. [422]

Florida himself, for all his celebration of networks and free agency, assumes a great deal of continuity with the existing corporate economy.

In tracing economic shifts, I often say that our economy is moving from an older corporate-centered system defined by large companies to a more people-driven one. This view should not be confused with the unfounded and silly notion that big companies are dying off. Nor do I buy the fantasy of an economy organized around small enterprises and independent "free agents." Companies, including very big ones, obviously still exist, are still influential and probably always will be. [423]

A related myth is that the age of large corporations is over—that they have outlived their usefulness, their power has been broken, and they will eventually fade away along with other big organizational forms. The classic metaphor is the lumbering dinosaur made obsolete and usurped by small, nimble mammals—the usurpers in this case being small, nimble startup companies.... But big companies are by no means going away. Microsoft and Intel continue to control much of the so-called information economy, along with Oracle, Cisco, IBM and AOL Time Warner. Big industrial concerns, from General Motors to General Electric, General Dynamics and General Foods, still turn out most of the nation's goods. Our money is managed not by upstarts but by large financial institutions. The resources that power our economy are similarly managed and controlled by giant corporations.... The economy, like nature, is a dynamic system. New companies form and help us to propel it forward, with some dying out while others carry on to grow quite large themselves, like Microsoft and Intel. An economy composed only of small, short-lived entities would be no more sustainable than an ecosystem composed only of insects. [424]

Florida fails to explain just why large organizations are necessary. Large, hierarchical organizations originally came into existence as a result of the enormous capital outlays required for production, and the need to manage and control those capital assets. When physical capital outlays collapse by one or two orders of magnitude for most kinds of production, what further need is there for the large organizations? The large size of Microsoft and Intel results, in most cases (aside from the enormous capital outlay required for a microchip foundry, of course), from patents on hardware, software copyrights, and the like, that artificially increase required capital outlays, otherwise raise entry barriers, and thereby lock them into an artificial position of control.

And the purported instabilities of an economy of small firms, over which Florida raises so much alarm, are a strawman. Networked industrial ecologies of small firms achieve stability and permanence, as we shall see in Chapter Six, from modular design for common platforms. The individual producers may come and go, but the common specifications and protocols live on.

Florida's focus on individual career paths based on free agency, and on internal corporate cultures of "creativity," at the expense of genuine changes in institutional structure and size, reminds me of Charles Reich's approach in The Greening of America. The great transformation Reich envisioned amounted to little more than leaving the giant corporations and central government agencies in place, but staffing them entirely with people in beads and bell-bottoms who, you know, had their heads in the right place, man.

But this approach is now failing in the face of the increasing inability to capture value from the immaterial realm. The strategy of shifting the burden of realization onto the state is untenable. Strong encryption, coupled with the proliferation of BitTorrent and episodes like the DeCSS uprising (see later in this chapter), has shown that "intellectual property" is ultimately unenforceable. J. A. Pouwelse and his coauthors estimate that the continuing exponential advance of file-sharing technology will make copyright "impossible to enforce by 2010." [425] In particular, they mention

anonymous downloading, uploading, and injection of content using a darknet. A darknet inhibits both Internet censorship and enforcement of copyright law. The freenetproject.org has in 2000 already produced a darknet, but it was slow, difficult to use, and offered little content. Darknets struggle with the second cardinal feature of P2P platforms. Full anonymity costs both extra bandwidth and is difficult to combine with enforcement of resource contributions. By 2010 darknets should be able to offer the same performance as traditional P2P software by exploiting social networking. No effective legal or technological method currently exits [sic] to stop darknets, with the exception of banning general-purpose computing. Technologies such as secure computing and DRM are convincingly argued to be unable to stop darknets. [426]

And in fact, as reported by Ars Technica back in 2007, attempts by university administrators to ban P2P at the RIAA’s behest have caused students to migrate to darknets in droves. [427]

The rapid development of circumvention technology intersects—powerfully so—with the cultural attitudes of a generation for which industry "anti-songlifting" propaganda is as gut-bustingly hilarious as Reefer Madness. Girlintraining, commenting under a Slashdot post, had this to say of such propaganda:

I used to read stuff like this and get upset. But then I realized that my entire generation knows it’s baloney. They can’t explain it intellectually. They have no real understanding of the subtleties of the law, or arguments about artists’ rights or any of that. All they really understand is there is are large corporations charging private citizens tens, if not hundreds of thousands of dollars, for downloading a few songs here and there. And it’s intuitively obvious that it can’t possibly be worth that. An entire generation has disregarded copyright law. It doesn’t matter whether copyright is useful or not anymore. They could release attack dogs and black helicopters and it wouldn’t really change people’s attitudes. It won’t matter how many websites they shut down or how many lives they ruin, they’ve already lost the culture war because they pushed too hard and alienated people wholesale. The only thing these corporations can do now is shift the costs to the government and other corporations under color of law in a desperate bid for relevance. And that’s exactly what they’re doing. What does this mean for the average person? It means that we google and float around to an ever-changing landscape of sites. We communicate by word of mouth via e-mail, instant messaging, and social networking sites where the latest fix of free movies, music, and games are. If you don’t make enough money to participate in the artificial marketplace of entertainment goods—you don’t exclude yourself from it, you go to the grey market instead. All the technological, legal, and philosophical barriers in the world amount to nothing. There is a small core of people that understand the implications of what these interests are doing and continually search for ways to liberate their goods and services for “sale” on the grey market. It is (economically and politically) identical to the Prohibition except that instead of smuggling liquor we are smuggling digital files. Billions have been spent combating a singularily simple idea that was spawned thirty years ago by a bunch of socially-inept disaffected teenagers working out of their garages: Information wants to be free. Except information has no wants—it’s the people who want to be free. And while we can change attitudes about smoking with aggressive media campaigns, or convince them to cast their votes for a certain candidate, selling people on goods and services they don’t really need, what we cannot change is the foundations upon which a generation has built a new society out of. [428]

Cory Doctorow, not overly fond of the more ideologically driven wing of the open-source movement (or as he calls them, “patchouli-scented info-hippies”), says it isn’t about whether “information wants to be free.” Rather, the simple fact of the matter is “that computers are machines for copying bits and that once you... turn something into bits, they will get copied.... [I]f your business model is based on bits not getting copied you are screwed.” [429]

Raise your hand if you're thinking something like, "But DRM doesn't have to be proof against smart attackers, only average individuals!..." ...I don't have to be a cracker to break your DRM. I only need to know how to search Google, or Kazaa, or any of the other general-purpose search tools for the cleartext that someone smarter than me has extracted. [430]

It used to be that copy-prevention companies' strategies went like this: "We'll make it easier to buy a copy of this data than to make an unauthorized copy of it. That way, only the uber-nerds and the cash-poor/time rich classes will bother to copy instead of buy." But every time a PC is connected to the Internet and its owner is taught to use search tools like Google (or The Pirate Bay), a third option appears: you can just download a copy from the Internet.... As I write this, I am sitting in a hotel room in Shanghai, behind the Great Firewall of China. Theoretically, I can't access blogging services that carry negative accounts of Beijing's doings, like WordPress, Blogger, and LiveJournal, nor the image-sharing site Flickr, nor Wikipedia. The (theoretically) omnipotent bureaucrats at the local Minitrue have deployed their finest engineering talent to stop me. Well, these cats may be able to order political prisoners executed and their organs harvested for Party members, but they've totally failed to keep Chinese people... off the world's Internet. The WTO is rattling its sabers at China today, demanding that they figure out how to stop Chinese people from looking at Bruce Willis movies without permission—but the Chinese government can't even figure out how to stop Chinese people from looking at seditious revolutionary tracts online. [431]

File-sharing networks spring up faster than they can be shut down. As soon as Napster was shut down, the public migrated to Kazaa and Gnutella. When Kazaa was shut down, its founders went on to create Skype and Joost. Other file-sharing services also sprang up in Kazaa’s niche, like the Russian AllofMP3, which reappeared under a new name as soon as the WTO killed it. [432]

The proliferation of peer production and the open-source model, and the growing unenforceability of the "intellectual property" rules on which the capture of value depends, are creating "a vast new information commons..., which is increasingly out of the control of cognitive capitalism." [433] Capital, as a result, is incapable of realizing returns on ownership in the cognitive realm. As Bauwens explains it:

1) The creation of non-monetary value is exponential

2) The monetization of such value is linear

In other words, we have a growing discrepancy between the direct creation of use value through social relationships and collective intelligence..., [and the fact that] only a fraction of that value can actually be captured by business and money. Innovation is becoming... an emergent property of the networks rather than an internal R & D affair within corporations; capital is becoming an a posteriori intervention in the realization of innovation, rather than a condition for its occurrence.... What this announces is a crisis of value..., but also essentially a crisis of accumulation of capital. Furthermore, we lack a mechanism for the existing institutional world to re-fund what it receives from the social world. So on top of all of that, we have a crisis of social reproduction....

Thus, while markets and private ownership of physical capital will persist, "the core logic of the emerging experience economy, operating as it does in the world of non-rival exchange, is unlikely to have capitalism as its core logic." [434]
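Bauwens's two trend lines can be mocked up in a few lines of code (the growth rates are invented purely for illustration): if use-value creation compounds while monetization grows by a fixed increment, the share of socially created value that capital captures dwindles toward zero.

```python
# Invented growth rates, for illustration only: use-value created in
# networks compounds exponentially, while the monetized fraction grows
# linearly, so the share capital captures shrinks over time.

for year in range(0, 21, 5):
    social_value = 100 * (1.3 ** year)   # exponential use-value creation
    monetized = 100 + 50 * year          # linear monetization
    print(f"year {year:2d}: captured share = {monetized / social_value:.1%}")
# The captured share falls from 100% toward single digits within
# two decades under these (hypothetical) rates.
```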

A good example is the way in which digital culture, according to Douglas Rushkoff, destroyed California’s economy:

The fact is, most Internet businesses don’t require venture capital. The beauty of these technologies is that they decentralize value creation. Anyone with a PC and bandwidth can program the next Twitter or Facebook plug-in, the next iPhone app, or even the next social network. While a few thousand dollars might be nice, the hundreds of millions that venture capitalists want to—need to—invest, simply aren’t required.... The banking crisis began with the dot.com industry, because here was a business sector that did not require massive investments of capital in order to grow. (I spent an entire night on the phone with one young entrepreneur who secured $20 million of capital from a venture firm, trying to figure out how to possibly spend it. We could only come up with $2 million of possible expenditures.) What’s a bank to do when its money is no longer needed?... So they fail, the tax base decreases, companies based more on their debt structures than their production fail along with them, and we get an economic crisis. Yes, the Internet did all this. But that’s also why the current crisis should be seen as a cause for celebration as well: the Internet actually did what it was supposed to by decentralizing our ability to create and exchange value. This was the real dream, after all. Not simply to pass messages back and forth, but to dis-intermediate our exchanges. To cut out the middleman, and let people engage and transact directly. This is, quite simply, cheaper to do. There’s less money in it. Not necessarily less money for us, the people doing the exchanging, but less money for the institutions that have traditionally extracted value from our activity. If I can create an application or even a Web site like this one without borrowing a ton of cash from the bank, then I am also undermining America’s biggest industry—finance. While we rightly mourn the collapse of a state’s economy, as well as the many that are to follow, we must—at the very least—acknowledge the real culprit. For digital technology not only killed the speculative economy, but stands ready to build us a real one. [435]

The actual physical capital outlays required for digital creation are simply unable to absorb anything like the amounts of surplus capital in search of a profitable investment outlet—unless artificial property rights and artificial scarcity can be used to exclude independent production by all but the corporate owners of “intellectual property,” and mandate outlays totally unrelated to the actual physical capital requirements for production. Since such artificial property rights are, in fact, becoming increasingly unenforceable, corporate capital is unable either to combat the growing superfluity of its investment capital in the face of low-overhead production, or to capture value through artificial scarcity by suppressing low-cost competition.

If we view the transition from the perspective of innovators rather than venture capitalists, of course, it’s a much more positive development. Michel Bauwens described the collapse of the dot-com bubble and the rise of Web 2.0 as the decoupling of innovation and entrepreneurship from capital, and the shift of innovation to networked communities.

As an internet entrepreneur, I personally experienced both the manic phase, and the downturn, and the experience was life changing because of the important discovery I and others made at that time. All the pundits were predicting, then as now, that without capital, innovation would stop, and that the era of high internet growth was over for a foreseeable time. In actual fact, the reality was the very opposite, and something apparently very strange happened. In fact, almost everything we know, the Web 2.0, the emergence of social and participatory media, was born in the crucible of that downturn. In other words, innovation did not slow down, but actually increased during the downturn in investment. This showed the following new tendency at work: capitalism is increasingly being divorced from entrepreneurship, and entrepreneurship becomes a networked activity taking place through open platforms of collaboration. The reason is that internet technology fundamentally changes the relationship between innovation and capital. Before the internet, in the Schumpeterian world, innovators need capital for their research, that research is then protected through copyright and patents, and further funds create the necessary factories. In the post-schumpeterian world, creative souls congregate through the internet, create new software, or any kind of knowledge, create collaboration platforms on the cheap, and paradoxically, only need capital when they are successful, and the servers risk crashing from overload. As an example, think about Bittorrent, the most important software for exchanging multimedia content over the internet, which was created by a single programmer, surviving through a creative use of some credit cards, with zero funding. But the internet is not just for creative individual souls, but enables large communities to cooperate over platforms. Very importantly, it is not limited to knowledge and software, but to everything that knowledge and software enables, which includes manufacturing. Anything that needs to be physically produced, needs to be 'virtually designed' in the first place. This phenomena is called social innovation or social production, and is increasingly responsible for most innovation. [436]

As we will see in Chapter Five, initial capital outlay requirements for physical production are imploding in exactly the same way, which means that venture capital will lose most of its outlets in manufacturing as well.

For this reason, the Austrian dogma of von Mises, that the only way to raise real wages is to increase the amount of capital invested, is shown to rely on a false assumption: the assumption that there is some necessary link between productivity and the sheer quantity of capital invested. George Reisman displays this tendency at its crudest.

The truth, which real economists, from Adam Smith to Mises, have elaborated, is that in a market economy, the wealth of the rich—of the capitalists—is overwhelmingly invested in means of production, that is, in factories, machinery and equipment, farms, mines, stores, and the like. This wealth, this capital, produces the goods which the average person buys, and as more of it is accumulated and raises the productivity of labor higher and higher, brings about a progressively larger and ever more improved supply of goods for the average person to buy. [437]

But this assumed link between productivity and the sheer quantity of capital invested has been at the heart of most twentieth-century assumptions about economy of scale, and an unquestioned premise behind the work of liberal managerialists like Chandler and Galbraith.

For the same reason that the Austrian fixation on the quantity of capital investment as a source of productivity is obsolete, Marxist theories of the “social structure of accumulation” as an engine of growth are likewise obsolete. Technical innovation, in such theories, provides the basis for a new long-wave of investment to soak up surplus capital. The creation of some sort of new infrastructure is both a long-term sink for capital, and the foundation for new levels of productivity.

Gopal Balakrishnan, in New Left Review, correctly observes capitalism's inability, this time around, to gain a new lease on life through a new Kondratieff long-wave cycle: i.e., "a new socio-technical infrastructure, to supersede the existing fixed-capital grid." But he mistakenly sees it as the result either of an inability to bear the expense (as if productivity growth required an enormous capital outlay), or of technological stagnation. His claim of "technological stagnation," frankly, is utterly astonishing. He equates the outsourced production in job-shops, on the flexible manufacturing model that prevails in various forms in Shenzhen, Emilia-Romagna, and assorted corporate supplier networks, with a lower level of technological advancement. [438] But the shift of production from the old expensive, capital-intensive, product-specific infrastructure of mass-production industry to job-shops is in fact the result of an amazing level of technological advance: namely, the rise of cheap CNC machine tools scaled to small shops that are more productive than the old mass-production machinery. By technological stagnation, apparently, Balakrishnan simply means that less money is being invested in new generations of capital; but the crisis of capitalism results precisely from the fact that new forms of technology permit unprecedented levels of productivity with physical capital costs an order of magnitude lower. Both the Austrians and the neo-Marxists, in their equation of progress and productivity with the mass of capital invested, are stuck in the paleotechnic age.

This shows why the “cognitive capitalism” model of Gates, Romer, etc. is untenable. The natural tendency of technical innovation is not to add to GDP, but to destroy it. GDP measures, not the utility of production outputs to the consumer, but the value of inputs consumed in production. [439] So anything that reduces the total labor and material inputs required to produce a given unit of output should reduce GDP, unless artificial scarcity puts a floor under commodity price and prevents prices from falling to the new cost of production.

This is essentially what we saw Eric Reasons point out above. As Chris Anderson argues in Free, Microsoft's launch of Encarta on CD-ROM in the 1990s resulted in $100 million in sales for Encarta—while destroying some $600 million in sales for the traditional encyclopedia industry. And Wikipedia, in turn, destroyed still more sales for both traditional encyclopedias and Encarta. [440]

As Niall Cook describes it, enterprise software vendors are experiencing similar deflationary pressure.

'The design of business applications is more important than ever,' says Joe Kraus, CEO of JotSpot. 'If I'm a buyer at a manufacturing company and I'm using Google Earth to look at the plants of my competition, and the Siebel sales rep asks me to spend $2 million on glorified database software, that causes a real disconnect.' In the 1990s some enterprise software vendors were busy telling customers that even the simplest problems needed large, complex systems to solve them. Following the dot-com crash at the start of the millennium few of these vendors survived, usurped by cheap—if not free—alternatives. This trend continues unabated in the form of social software. As Peter Merholz..., president and founder of user experience firm Adaptive Path, put it, 'enterprise software is being eaten away from below'. [441]

The usual suspects proclaim that demand is endlessly upwardly elastic, so that a reduction of costs in one industry will simply free up demand for increased output elsewhere. But it's unlikely, as Reasons pointed out, that the demand freed up by falling production costs will transfer to new forms of consumer goods on a one-to-one basis, for the same reason that there's a backward-bending supply curve for labor. What economists mean by this wonkish-sounding term is that labor doesn't follow the upward-sloping supply curve of most normal commodities, with higher wages resulting in a willingness to work longer hours. Instead, part of the increased income from higher wages is likely to be taken in reduced work hours: total demand grows by less than the wage increase, and it takes fewer hours to earn the desired level of consumption. The reason is that the expenditure of labor carries disutility. For the same reason, rather than reduced production costs and prices in one industry simply freeing up demand for an equal value of new products elsewhere, it's likely that total GDP—the total expenditure of labor and material inputs—will decline.
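A crude model can illustrate the backward bend. The functional form and parameters below are my own illustration, not anything from the economists cited: utility is consumption utility minus a linear disutility of hours, and when the income effect dominates (g > 1), chosen hours fall as the wage rises.

```python
# Crude backward-bending labor supply sketch; functional form and
# parameters are illustrative assumptions, not from the cited authors.
# Utility: u(c, h) = c**(1 - g) / (1 - g) - d * h, with c = w * h.
# The first-order condition gives h* = (w**(1 - g) / d)**(1 / g);
# for g > 1 the income effect dominates and hours FALL as wages rise.

g = 2.0      # curvature of utility from consumption (g > 1)
d = 0.0005   # marginal disutility of an hour of labor

for w in (5, 10, 20, 40):
    hours = (w ** (1 - g) / d) ** (1 / g)
    print(f"wage {w:3d}/hr -> hours {hours:5.1f}, consumption {w * hours:6.1f}")
```

With these parameters, consumption rises with the wage but less than proportionally, while hours worked fall: exactly the "leakage" of productivity gains into leisure described above.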

Rushkoff's reference to the collapsing tax base is especially interesting. As we have already seen, in an economy of subsidized inputs, the demand for such inputs grows exponentially, faster than the state can meet it. The state capitalist system will soon reach a point at which, thanks to the collapse of the portion of value comprised of rents on artificial property, the base of taxable value is imploding at the very time big business most needs subsidies to stay afloat. In the words of Charles Hugh Smith,

what if the “end of paying work” will bring down the entire credit/consumption-dependent economy and the Federal government which depends on tax revenues from all that financial churn?... What if the Web, which is busily (creatively) destroying print media, the music industry, the movie business, Microsoft and many other rentier-type enterprises, ends up destroying income and profit-based tax revenues? How can the government support a status quo which requires $2 trillion in new borrowing every year just to keep from collapsing? What if that debt load is unsustainable? [442]

So the fiscal crisis of the state is accelerated not only by Peak Oil, but by the collapse of proprietary information as a source of value.

The growing importance of human capital relative to physical capital, another effect of the implosion of material outlays and overhead for production, is also creating governability problems for the standard absentee-owned, hierarchical corporate enterprise. At the same time, there is a growing inability to enforce corporate boundaries on human capital because of the unenforceability of “intellectual property.” Fifty years ago, enormous outlays on physical capital were the main structural basis for the corporation as a locus of control over physical assets. Today, for a growing number of industries, the physical capital requirements for entering the market have imploded, and “intellectual property” is the main structural support to corporate boundaries.

In this environment, the only thing standing between the old information and media dinosaurs and their total collapse is their so-called “intellectual property” rights—at least to the extent they’re still enforceable. Ownership of “intellectual property” becomes the new basis for the power of institutional hierarchies, and the primary structural bulwark for corporate boundaries. Without them, in any industry where the basic production equipment is affordable to all, and bottom-up networking renders management obsolete, it is likely that self-managed, cooperative production will replace the old managerial hierarchies. The network revolution, if its full potential is realized,

will lead to substantial redistribution of power and money from the twentieth century industrial producers of information, culture, and communications—like Hollywood, the recording industry, and perhaps the broadcasters and some of the telecommunications giants—to a combination of widely diffuse populations around the globe, and the market actors that will build the tools that make this population better able to produce its own information environment rather than buying it ready-made. [443]

The same thing is true in the physical realm, of course. As we shall see in Chapter Five, the revolution in cheap CNC machine tools (including homebrew 3-D printers, cutting/routing tables, etc., that cost a few hundred dollars in parts) is having almost as radical an effect on the capital outlays required for physical production as the desktop revolution had on immaterial production. And the approach of the old corporate dinosaurs—trying to maintain artificial scarcity and avoid having to compete with falling production costs—is exactly the same in the physical as in the immaterial realm.

F. Networked Resistance, Netwar, and Asymmetric Warfare Against Corporate Management

We already mentioned the corporate governance issues caused by the growing importance of human relative to physical capital, and the untenability of “intellectual property” as a legal support for corporate boundaries. Closely related is the vulnerability of corporate hierarchies to asymmetric warfare by networked communities of consumers and their own employees. Centralized, hierarchical institutions are increasingly vulnerable to open-source warfare.

In the early 1970s, in the aftermath of a vast upheaval in American political culture, Samuel Huntington wrote of a “crisis of democracy”; the American people, he feared, were becoming ungovernable. In The Crisis of Democracy , he argued that the system was collapsing from demand overload, because of an excess of democracy. Huntington’s analysis is illustrative of elite thinking behind the neoliberal policy agenda of the past thirty years.

For Huntington, America’s role as “hegemonic power in a system of world order” depended on a domestic system of power; this system of power, variously referred to in this work as corporate liberalism, Cold War liberalism, and the welfare-warfare state, assumed a general public willingness to stay out of government affairs. [444] And this was only possible because of a domestic structure of political authority in which the country “was governed by the president acting with the support and cooperation of key individuals and groups in the Executive office, the federal bureaucracy, Congress, and the more important businesses, banks, law firms, foundations, and media, which constitute the private establishment.” [445]

America’s position as defender of global capitalism required that its government have the ability “to mobilize its citizens for the achievement of social and political goals and to impose discipline and sacrifice upon its citizens in order to achieve these goals.” [446] Most importantly, this ability required that democracy be largely nominal, and that citizens be willing to leave major substantive decisions about the nature of American society to qualified authorities. It required, in other words, “some measure of apathy and non-involvement on the part of some individuals and groups.” [447]

Unfortunately, these requirements were being gravely undermined by “a breakdown of traditional means of social control, a delegitimation of political and other means of authority, and an overload of demands on government, exceeding its capacity to respond.” [448]

The overload of demands that caused Huntington to recoil in horror in the early 1970s must have seemed positively tame by the late 1990s. The potential for networked resistance created by the Internet exacerbated Huntington’s crisis of democracy beyond his wildest imagining.

Networked resistance is based on a principle known as stigmergy. "Stigmergy" is a term coined by biologist Pierre-Paul Grassé in the 1950s to describe the process by which termites coordinated their activity. Social insects like termites and ants coordinate their efforts through the independent responses of individuals to environmental triggers like chemical trails, without any need for a central coordinating authority. [449]

Applied by way of analogy to human society, it refers primarily to the kinds of networked organization associated with wikis, group blogs, and “leaderless” organizations organized along the lines of networked cells.

Matthew Elliott contrasts stigmergic coordination with social negotiation. Social negotiation is the traditional method of organizing collaborative group efforts, through agreements and compromise mediated by discussions between individuals. Because the number of communication channels grows combinatorially with the size of the group, there are obvious constraints on the feasible size of a collaborative group before coordination must be achieved by hierarchy and top-down authority. Stigmergy, on the other hand, permits collaboration on an unlimited scale by individuals acting independently. This distinction between social negotiation and stigmergy is illustrated, in particular, by the contrast between traditional models of co-authoring and collaboration in a wiki. [450] Individuals communicate indirectly, "via the stigmergic medium." [451]
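In rough order-of-growth terms (my own gloss, not Elliott's notation), the contrast is between quadratic and linear scaling:

```latex
% Channels to maintain under social negotiation, where any pair of
% the n collaborators may need to communicate directly:
C_{\mathrm{negotiation}}(n) = \binom{n}{2} = \frac{n(n-1)}{2} = O(n^2)

% Interactions under stigmergy, where each collaborator deals only
% with the shared artifact (the wiki page, the pheromone trail):
C_{\mathrm{stigmergy}}(n) = n = O(n)
```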

The distinction between social negotiation and stigmergic coordination parallels Elliott's distinction, elsewhere, between "discursive collaboration" and "stigmergic collaboration." The "discursive elaboration of shared representations (ideas)" is replaced by "the annotation of material and digital artefacts as embodiments of these representations." "Additionally, when stigmergic collaboration is extended by computing and digital networks, a considerable augmentation of processing capacity takes place which allows for the bridging of the spatial and temporal limitations of discursive collaboration, while subtly shifting points of negotiation and interaction away from the social and towards the cultural." [452]

There is a wide body of literature on the emergence of networked modes of resistance in the 1990s, beginning with the RAND studies on netwar by David Ronfeldt, John Arquilla, and other writers. In their 1996 paper "The Advent of Netwar," Arquilla and Ronfeldt wrote that technological evolution was working to the advantage of networks and the detriment of hierarchies. Although their focus was on the military aspect (what has since been called "Fourth Generation Warfare"), they also mentioned governability concerns in civil society much like those Huntington raised earlier. "Intellectual property pirates," "militant single-issue groups," and "transnational social activists," in particular, were "developing netwar-like attributes."

Now... the new information technologies and related organizational innovations increasingly enable civil-society actors to reduce their isolation, build far-flung networks within and across national boundaries, and connect and coordinate for collective action as never before. As this trend deepens and spreads, it will strengthen the power of civil-society actors relative to state and market actors around the globe.... For years, a cutting edge of this trend could be found among left-leaning activist NGOs concerned with human-rights, environmental, peace, and other social issues at local, national, and global levels. Many of these rely on APC affiliates for communications and aim to construct a “global civil society” strong enough to counter the roles of state and market actors. In addition, the trend is spreading across the political spectrum. Activists on the right—from moderately conservative religious groups, to militant antiabortion groups—are also building national and transnational networks based in part on the use of new communications systems. [453]

In "Tribes, Institutions, Markets, Networks" (1996), Ronfeldt focused on the special significance of the network form for global civil society.

...[A]ctors in the realm of civil society are likely to be the main beneficiaries. The trend is increasingly significant in this realm, where issue–oriented multiorganizational networks of NGOs—or, as some are called, nonprofit organizations (NPOs), private voluntary organizations (PVOs), and grassroots organizations (GROs)—continue to multiply among activists and interest groups who identify with civil society. Over the long run, this realm seems likely to be strengthened more than any other realm, in relative if not also absolute terms. While examples exist across the political spectrum, the most evolved are found among progressive political advocacy and social activist NGOs—e.g., in regard to environmental, human-rights, and other prominent issues—that depend on using new information technologies like faxes, electronic mail (e-mail), and on-line conferencing systems to consult and coordinate. This nascent, yet rapidly growing phenomenon is spreading across the political spectrum into new corners and issue areas in all countries. The rise of these networks implies profound changes for the realm of civil society. In the eighteenth and nineteenth centuries, when most social theorists focused on state and market systems, liberal democracy fostered, indeed required, the emergence of this third realm of activity. Philosophers such as Adam Ferguson, Alexis de Tocqueville, and G. W. F. Hegel viewed civil society as an essential realm composed of all kinds of independent nongovernmental interest groups and associations that acted sometimes on their own, sometimes in coalitions, to mediate between state and society at large. However, civil society was also considered to be a weaker realm than the state or the market. And while theorists treated the state and the market as systems, this was generally not the case with civil society. It was not seen as having a unique form of organization equivalent to the hierarchical institution or the competitive market, although some twentieth century theorists gave such rank to the interest group. Now, the innovative NGO-based networks are setting in motion new dynamics that promise to reshape civil society and its relations with other realms at local through global levels. Civil society appears to be the home realm for the network form, the realm that will be strengthened more than any other—either that, or a new, yet-to-be-named realm will emerge from it. And while classic definitions of civil society often encompassed state- and market-related actors (e.g., political parties, businesses and labor unions), this is less the case with new and emerging definitions—the separation of “civil society” from “state” and “market” realms may be deepening. The network form seems particularly well suited to strengthening civil-society actors whose purpose is to address social issues. At its best, this form may thus result in vast collaborative networks of NGOs geared to addressing and helping resolve social equity and accountability issues that traditional tribal, state, and market actors have tended to ignore or are now unsuited to addressing well. The network form offers its best advantages where the members, as often occurs in civil society, aim to preserve their autonomy and to avoid hierarchical controls, yet have agendas that are interdependent and benefit from consultation and coordination. [454]

In The Zapatista “Social Netwar” in Mexico , [455] Arquilla, Ronfeldt et al. expressed grave concern over the possibilities of decentralized “netwar” techniques for destabilizing the political and economic order. They saw ominous signs of such a movement in the global political support network for the Zapatistas. Loose, ad hoc coalitions of affinity groups, organizing through the Internet, could throw together large demonstrations at short notice, and “swarm” the government and mainstream media with phone calls, letters, and emails far beyond their capacity to cope. Ronfeldt and Arquilla noted a parallel between such techniques and the “leaderless resistance” ideas advocated by right-wing white supremacist Louis Beam, circulating in some Constitutionalist/militia circles.

The interesting thing about the Zapatista netwar, according to Ronfeldt and Arquilla, is that to all appearances it started out as a run-of-the-mill Third World army's suppression of a run-of-the-mill local insurgency. Right up until Mexican troops entered Chiapas, there was every indication the uprising would be suppressed quickly, and that the world outside Mexico would "little note nor long remember" it. It looked that way until Subcomandante Marcos and the Zapatistas made their appeal to global civil society and became the center of a networked movement that stirred activists the world over. The Mexican government was blindsided by the global reaction. [456]

Similarly, global corporations have been caught off guard when what once would have been isolated and easily managed conflicts become global political causes.

Natural-resource companies had grown accustomed to dealing with activists who could not escape the confines of their nationhood: a pipeline or mine could spark a peasants' revolt in the Philippines or the Congo, but it would remain contained, reported only by the local media and known only to people in the area. But today, every time Shell sneezes, a report goes out on the hyperactive "shell-nigeria-action" listserve, bouncing into the in-boxes of all the far-flung organizers involved in the campaign, from Nigerian leaders living in exile to student activists around the world. And when a group of activists occupied part of Shell's U.K. headquarters in January 1999, they made sure to bring a digital camera with a cellular linkup, allowing them to broadcast their sit-in on the Web, even after Shell officials turned off the electricity and phones.... The Internet played a similar role during the McLibel Trial, catapulting London's grassroots anti-McDonald's movement into an arena as global as the one in which its multinational opponent operates. "We had so much information about McDonald's, we thought we should start a library," Dave Morris explains, and with this in mind, a group of Internet activists launched the McSpotlight Web site. The site not only has the controversial pamphlet online, it contains the complete 20,000-page transcript of the trial, and offers a debating room where McDonald's workers can exchange horror stories about McWork under the Golden Arches. The site, one of the most popular destinations on the Web, has been accessed approximately sixty-five million times. ...[This medium is] less vulnerable to libel suits than more traditional media. [McSpotlight programmer] Ben explains that while McSpotlight's server is located in the Netherlands, it has "mirror sites" in Finland, the U.S., New Zealand and Australia. That means that if a server in one country is targeted by McDonald's lawyers, the site will still be available around the world from the other mirrors. [457]

In “Swarming & the Future of Conflict,” Ronfeldt and Arquilla focused on swarming, in particular, as a technique that served the entire spectrum of networked conflict—including “civic-oriented actions.” [458] Despite the primary concern with swarming as a military phenomenon, they also gave some attention to networked global civil society—and the Zapatista support network in particular—as examples of peaceful swarming with which states were ill-equipped to deal:

A recent example of swarming can be found in Mexico, at the level of what we call activist “social netwar” (see Ronfeldt et al. 1998). Briefly, we see the Zapatista movement, begun in January 1994 and continuing today, as an effort to mobilize global civil society to exert pressure on the government of Mexico to accede to the demands of the Zapatista guerrilla army (EZLN) for land reform and more equitable treatment under the law. The EZLN has been successful in engaging the interest of hundreds of NGOs, who have repeatedly swarmed their media-oriented “fire” (i.e., sharp messages of reproach) against the government. The NGOs also swarmed in force—at least initially—by sending hundreds of activists into Chiapas to provide presence and additional pressure. The government was able to mount only a minimal counterswarming “fire” of its own, in terms of counterpropaganda. However, it did eventually succeed in curbing the movement of activists into Chiapas, and the Mexican military has engaged in the same kind of “blanketing” of force that U.S. troops employed in Haiti—with similar success. [459] At present, our best understanding of swarming—as an optimal way for myriad, small, dispersed, autonomous but internetted maneuver units to coordinate and conduct repeated pulsing attacks, by fire or force—is best exemplified in practice by the latest generation of activist NGOs, which assemble into transnational networks and use information operations to assail government actors over policy issues. These NGOs work comfortably within a context of autonomy from each other; they also take advantage of their high connectivity to interact in the fluid, flexible ways called for by swarm theory. The growing number of cases in which activists have used swarming include, in the security area, the Zapatista movement in Mexico and the International Campaign to Ban Landmines (ICBL). The former is a seminal case of “social netwar,” in which transnationally networked NGOs helped deter the Mexican government and army from attacking the Zapatistas militarily. In the latter case, a netwar-like movement, after getting most nations to sign an international antilandmine treaty, won a Nobel Peace Prize. Swarming tactics have also been used, to a lesser degree, by pro-democracy movements aiming to put a dictatorship on the defensive and/or to alter U.S. trade and other relations with that dictatorship. Burma is an example of this. Social swarming is especially on the rise among activists that oppose global trade and investment policies. Internet-based protests helped to prevent approval of the Multilateral Agreement on Investment (MAI) in Europe in 1998. Then, on July 18, 1999—a day that came to be known as J18—furious anticapitalist demonstrations took place in London, as tens of thousands of activists converged on the city, while other activists mounted parallel demonstrations in other countries. J18 was largely organized over the Internet, with no central direction or leadership. Most recently, with J18 as a partial blueprint, several tens of thousands of activists, most of them Americans but many also from Canada and Europe, swarmed into Seattle to shut down a major meeting of the World Trade Organization (WTO) on opening day, November 30, 1999—in an operation known to militant activists and anarchists as N30, whose planning began right after J18. The vigor of these three movements and the effectiveness of the activists’ obstructionism came as a surprise to the authorities. 
The violent street demonstrations in Seattle manifested all the conflict formations discussed earlier—the melee, massing, maneuver, and swarming. Moreover, the demonstrations showed that information-age networks (the NGOs) can prevail against hierarchies (the WTO and the Seattle police), at least for a while. The persistence of this “Seattle swarming” model in the April 16, 2000, demonstrations (known as A16) against the International Monetary Fund and the World Bank in Washington, D.C., suggests that it has proven effective enough to continue to be used.

From the standpoints of both theory and practice, some of the most interesting swarming was conducted by black-masked anarchists who referred to themselves collectively as the N30 Black Bloc, which consisted of anarchists from various affinity groups around the United States. After months of planning, they took to the field individually and in small groups, dispersed but internetted by two-way radios and other communications measures, with a concept of collective organization that was fluid and dynamic, but nonetheless tight. They knew exactly what corporate offices and shops they intended to damage—they had specific target lists. And by using spotters and staying constantly in motion, they largely avoided contact with the police (instead, they sometimes clashed with “peace keepers” among the protesters). While their tactics wrought physical destruction, they saw their larger philosophical and strategic goals in disruptive informational terms, as amounting to breaking the “spell” of private property, corporate hegemony, and capitalism over society.

In these social netwars—from the Zapatistas in 1994, through the N30 activists and anarchists in 1999—swarming appears not only in real-life actions but also through measures in cyberspace. Swarms of email sent to government figures are an example. But some “hacktivists” aim to be more disruptive—pursuing “electronic civil disobedience.” One notable recent effort associated with a collectivity called the Electronic Disturbance Theater is actually named SWARM. It seeks to move “digital Zapatismo” beyond the initial emphasis of its creators on their “FloodNet” computer system, which has been used to mount massive “ping” attacks on government and corporate web sites, including as part of J18. The aim of its proponents is to come up with new kinds of “electronic pulse systems” for supporting militant activism. This is clearly meant to enable swarming in cyberspace by myriad people against government, military, and corporate targets. [460]

Swarming—in particular the swarming of public pressure through letters, phone calls, emails, and public demonstrations, and the paralysis of communications networks by such swarms—is the direct descendant of the “overload of demands” Huntington wrote of in the 1970s.

Netwar, Ronfeldt and Arquilla wrote elsewhere, is characterized by “the networked organizational structure of its practitioners—with many groups actually being leaderless—and the suppleness in their ability to come together quickly in swarming attacks.” [461]

Jeff Vail discusses netwar techniques in his A Theory of Power blog, using a term of his own: “Rhizome.” Vail predicts that the political struggles of the 21st century will be defined by the structural conflict between rhizome and hierarchy.

Rhizome structures, media and asymmetric politics will not be a means to support or improve a centralized, hierarchical democracy—they will be an alternative to it. Many groups that seek change have yet to identify hierarchy itself as the root cause of their problem..., but are already beginning to realize that rhizome is the solution. [462]

Many open-source thinkers, going back to Eric Raymond in The Cathedral and the Bazaar, have pointed out the nature of open-source methods and network culture as force-multipliers. [463] Open-source design communities pick up the innovations of individual members and quickly distribute them wherever they are needed, with maximum economy. By way of analogy, recall the argument from Cory Doctorow we saw above: proprietary content owners—who still don’t “get” network culture—think that if they make DRM difficult enough for the average consumer to circumvent, the losses to the hard-core geeks who have the time and skills to get around it will be insignificant (“...DRM doesn’t have to be proof against smart attackers, only average individuals!”). But network culture makes it unnecessary to figure out a way to route around DRM obstructions more than once; as soon as the first person does it, it becomes part of the common pool of intelligence, available to anyone who can search The Pirate Bay (or whatever TPB successor exists at any given time).

Australia, in fact, was recently the location of a literal “geeks helping grandmas” story, as geeks at The Pirate Party provided technical expertise to seniors wishing to circumvent government blockage of right-to-die websites:

Exit International is an assisted suicide education group in Australia, whose average member is over 70 years old. The Exit International website will likely be blocked by the Great Firewall of Australia, so Exit International has turned to Australia’s Pirate Party and asked for help in producing a slideshow explaining firewall circumvention for seniors. It’s a pretty informative slideshow — teachers could just as readily use it for schoolkids in class in a teaching unit on getting access to legit educational materials that’s mistakenly blocked by school censorware. [464]

Open-source insurgency follows a similar development model, with each individual innovation quickly becoming part of a common pool of intelligence. John Robb writes:

The decentralized and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war? Lessons from Eric Raymond’s “The Cathedral and the Bazaar” provide a starting point for further analysis. Here are the factors that apply (from the perspective of the guerrillas):

Release early and often. Try new forms of attacks against different types of targets early and often. Don’t wait for a perfect plan.

Given a large enough pool of co-developers, any difficult problem will be seen as obvious by someone, and solved. Eventually some participant of the bazaar will find a way to disrupt a particularly difficult target. All you need to do is copy the process they used.

Your co-developers (beta-testers) are your most valuable resource. The other guerrilla networks in the bazaar are your most valuable allies. They will innovate on your plans, swarm on weaknesses you identify, and protect you by creating system noise. [465]

Tom Knapp provides a good practical example of the Bazaar in operation—the G-20 protests in Pittsburgh:

During the G-20 summit in the Pittsburgh area last week, police arrested two activists. These particular activists weren’t breaking windows. They weren’t setting cars on fire. They weren’t even parading around brandishing giant puppets and chanting anti-capitalist slogans. In fact, they were in a hotel room in Kennedy, Pennsylvania, miles away from “unsanctioned” protests in Lawrenceville … listening to the radio and availing themselves of the hotel’s Wi-Fi connection. Now they stand accused of “hindering apprehension, criminal use of a communication facility and possessing instruments of crime.” The radio they were listening to was (allegedly) a police scanner. They were (allegedly) using their Internet access to broadcast bulletins about police movements in Lawrenceville to activists at the protests, using Twitter....

Government as we know it is engaged in a battle for its very survival, and that battle, as I’ve mentioned before, looks in key respects a lot like the Recording Industry Association of America’s fight with peer-to-peer “file-sharing” networks. The RIAA can — and is — cracking down as hard as it can, in every way it can think of, but it is losing the fight and there’s simply no plausible scenario under which it can expect to emerge victorious. The recording industry as we know it will change its business model, or it will go under.

The Pittsburgh Two are wonderfully analogous to the P2P folks. Their arrest boils down, for all intents and purposes, to a public debugging session. Pittsburgh Two 2.0 will set their monitoring stations further from the action (across jurisdictional lines), use a relay system to get the information to those stations in a timely manner, then retransmit that information using offshore and anonymizing proxies. The cops won’t get within 50 miles of finding Pittsburgh Two 2.0, and anything they do to counter its efficacy will be countered in subsequent versions. [466]

Two more recent examples are the use of Twitter in Maricopa County to alert the Latino community to raids by Sheriff Joe Arpaio, and to alert drivers to sobriety checkpoints. [467]

One especially encouraging development is the stigmergic sharing of innovations in the technologies of resistance between movements around the world, aiding each other across national lines and bringing combined force to bear against common targets. The Falun Gong has played a central role in this effort:

When these dissident Iranians chatted with each other and the outside world, they likely had no idea that many of their missives were being guided and guarded by 50 Falun Gong programmers spread out across the United States. These programmers, who almost all have day jobs, have created programs called Freegate and Ultrasurf that allow users to fake out Internet censors. Freegate disguises the browsing of its users, rerouting traffic using proxy servers. To prevent the Iranian authorities from cracking their system, the programmers must constantly switch the servers, a painstaking process. The Falun Gong has proselytized its software with more fervor than its spiritual practices. It distributes its programs for free through an organization called the Global Internet Freedom Consortium (GIFC), sending a downloadable version of the software in millions of e-mails and instant messages. In July 2008, it introduced a Farsi version of its circumvention tool. While it is hardly the only group to offer such devices, the Falun Gong’s program is particularly popular thanks to its simplicity and relative speed....

For all their cleverness, [Falun Gong] members found themselves constantly outmaneuvered. They would devise a strategy that would break past China’s filtering tools, only to find their new sites quickly hacked or stymied. In 2002, though, they had their Freegate breakthrough. According to David Tian, a programmer with the GIFC and a research scientist at NASA, Freegate was unique because it not only disguised the ISP addresses, or Web destinations, but also cloaked the traffic signatures, or the ways in which the Chinese filters determined whether a Web user was sending an e-mail, navigating a website, sending an instant message, or using Skype. “In the beginning, Freegate was rudimentary, then the communists analyzed the software, they tried to figure out how we beat them. They started to block Freegate. But then, we started hiding the traffic signature,” says Mr. Tian. “They have not been able to stop it since.”....

The Falun Gong was hardly alone in developing this kind of software. In fact, there’s a Coke-Pepsi rivalry between Freegate and the other main program for skirting the censors: The Onion Router, or TOR. Although TOR was developed by the U.S. Navy—to protect Internet communication among its vessels—it has become a darling of the libertarian left. The TOR project was originally bankrolled, in part, by the Electronic Frontier Foundation (EFF), the group that first sued the U.S. government for warrantless wiretapping. Many libertarians are drawn to TOR because they see it as a way for citizens to shield themselves from the prying eyes of government. TOR uses an algorithm to route traffic randomly across three different proxy servers. This makes it slow but extremely secure—so secure that both the FBI and international criminal gangs have been known to use it. Unlike the Falun Gong, the TOR programmers have a fetish for making their code available to anyone. There’s an irony in the EFF’s embrace of TOR, since the project also receives significant funding from the government. The Voice of America has contributed money so that its broadcasts can be heard via the Internet in countries that have blocked their site, a point of envy for the GIFC.

For the past four years, the Falun Gong has also been urging the U.S. government to back Freegate financially, going so far as to enlist activists such as Michael Horowitz, a Reagan administration veteran, and Mark Palmer, a former ambassador to Hungary, to press Congress. (Neither was paid for his work.) But, when the two finally persuaded Congress to spend $15 million on anti-censorship software last year, the money was redirected to a program for training journalists. Both Palmer and Horowitz concluded that the State Department despised the idea of funding the Falun Gong. That’s a reasonable conclusion. The Chinese government views the Falun Gong almost the way the United States views Al Qaeda. As Richard Bush, a China expert at the Brookings Institution, puts it, “An effort to use U.S. government resources in support of a Falun Gong project would be read in the worst possible way by the Chinese government.” Still, there will no doubt be renewed pressure to direct money to the likes of the GIFC and TOR. In the wake of the Iran demonstrations, three bills to fund anti-censorship software are rocketing through Congress, with wide support. Tom Malinowski, the Washington director for Human Rights Watch, argues that such software “is to human rights work today what smuggling mimeograph machines was back in the 1970s, except it reaches millions more people.” [468]
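The three-hop design mentioned in that passage is easy to illustrate in miniature. What follows is a toy sketch in Python, not TOR's actual protocol: it uses the third-party cryptography package's Fernet cipher, and the relay count and message are illustrative assumptions. It shows only the layered ("onion") encryption idea, in which each relay can strip exactly one layer and so never sees more than its own hop.

    # Toy illustration of layered ("onion") encryption, the idea behind
    # TOR's three-relay circuits. NOT Tor's real protocol: just a sketch
    # using the third-party `cryptography` package (pip install cryptography).
    from cryptography.fernet import Fernet

    # Three hypothetical relays, each holding its own symmetric key.
    relays = [Fernet(Fernet.generate_key()) for _ in range(3)]

    # The sender wraps the payload once per relay: the exit relay's layer
    # goes on first (innermost), the entry relay's layer goes on last
    # (outermost), so each hop can see only one layer.
    payload = b"message for the destination"
    onion = payload
    for relay in reversed(relays):
        onion = relay.encrypt(onion)

    # Each relay strips exactly one layer and forwards the rest; the
    # plaintext emerges only after the final hop.
    for relay in relays:
        onion = relay.decrypt(onion)
    assert onion == payload

A censor watching any single relay sees only encrypted traffic entering and encrypted traffic leaving, which is why blocking the tool, rather than reading it, became Beijing's only option.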

The last three paragraphs of that quoted passage are suggestive concerning the internal contradictions of state capitalism and its IP regime. The desire of would-be hegemons to aid each other’s internal resistance often leads to the creation of virally replicable technologies of benefit to their own internal resistance; on the other hand, this danger sometimes sparks a sense of honor among thieves, in which competing hegemons refrain from supporting each other’s resistance. But overall, global interstate conflict is a source of technologies that can be exploited by non-state actors for internal resistance against the state.

Of course the conflict continues—but the resistance seems capable of developing counter-countermeasures before the state’s countermeasures are even implemented.

And, while the Falun Gong has managed to win the upper hand in its battle with the Chinese government, it has reason to be less sanguine about the future. The Chinese have returned to the cyber-nanny model that U.S. libraries have deployed. This notorious project is called the Green Dam, or, more precisely, the Green Dam Youth Escort. Under the Green Dam, every new Chinese computer is required to come with a stringent filter pre-installed and, therefore, nearly impossible to remove. As the filter collects data on users, it relies on a government database to block sites.

If anything, the Green Dam is too comprehensive. In its initial run, the software gummed up computers, crashing browsers and prohibiting virtually every Web search. In August, Beijing announced that it would delay the project indefinitely. Still, China had revealed a model that could, in theory, defeat nearly every Web-circumvention tool.

When I asked David Tian, the GIFC programmer, about Green Dam, he spoke about it with a mix of pride and horror. The pride comes from the fact that the GIFC’s successes have placed the Chinese on the defensive. “One of the reasons they started this Green Dam business and moved the filter to the computer is because they cannot stop our products with the current filters,” he said. But he conceded that Green Dam will render Freegate useless.

In the world of product development—and freedom fighting—you innovate or die. The Falun Gong is determined not to go the way of the Commodore 64 into technological irrelevance. It has released a beta version of a new piece of software to overcome the Green Dam. Without a real chance to test it, it’s hard to tell whether it will work. But it has overcome the first hurdle of product development. It has marketed its product with a name that captures the swagger of the enterprise. It is called Green Tsunami. [469]

We will examine the general principles of the Bazaar and network culture, as they relate to the superior agility and resilience of the alternative economy as a whole, in Chapter Seven.

The concept of networked resistance is especially interesting, from our standpoint, as it relates to two things: the kind of anti-corporate “culture jamming” Naomi Klein describes in No Logo, and labor struggle as a form of asymmetric warfare.

In both cases, governments and corporations, hierarchies of all kinds, are learning to their dismay that, in a networked age, it’s impossible to suppress negative publicity. As Cory Doctorow put it, “Paris Hilton, the Church of Scientology, and the King of Thailand have discovered... [that] taking a piece of information off the Internet is like getting food coloring out of a swimming pool. Good luck with that.” [470]

It’s sometimes called the Streisand effect, in honor of Barbra Streisand (whose role in its discovery—about which more below—was analogous to Sir Isaac Newton’s getting hit on the head by an apple).

One of the earliest examples of the phenomenon was the McLibel case in Britain, in which McDonald’s attempt to suppress a couple of embarrassing pamphleteers with a SLAPP lawsuit wound up bringing them far worse publicity as a direct result. The pamphleteers were indigent and represented themselves in court much of the time, and repeatedly lost appeals in the British court system throughout the nineties (eventually they won an appeal in the European Court of Human Rights). But widespread coverage of the case on the Internet, coupled with the defendants’ deliberate use of the courtroom as a bully pulpit to examine the factual issues, caused McDonald’s one of the worst embarrassments in its history. [471] (Naomi Klein called it “the corporate equivalent of a colonoscopy.”) [472]

Two important examples from 2004, the Sinclair Media boycott and the Diebold corporate emails, decisively demonstrated the impossibility of suppressing online information in an age of mirror sites. A number of left-wing websites and liberal bloggers organized a boycott of Sinclair Media after its stations aired an anti-Kerry documentary by the Swift Boat campaign.

In the ensuing boycott campaign, advertisers were deluged with more mail and phone calls than they could handle. By October 13, some sponsors were threatening litigation, viewing unsolicited boycott emails as illegal spam. Nick Davis, creator of one of the boycott sites, posted legal information explaining that anti-spam legislation applied only to commercial messages, and directed threatening sponsors to that information. At the same time, some Sinclair affiliates threatened litigation against sponsors who withdrew support in response to the boycott. Davis organized a legal support effort for those sponsors. By October 15, sponsors were pulling ads in droves. The price of Sinclair stock crashed, recovering only after Sinclair reversed its decision to air the documentary. [473]

Diebold, similarly, attempted to shut down websites which hosted leaked corporate emails questioning the security of the company’s electronic voting machines. But the data was widely distributed among student and other activist databases, and the hosting sites were mirrored in jurisdictions all over the world.

In August, someone provided a cache of thousands of Diebold internal emails to Wired magazine and to Bev Harris. Harris posted the emails on her site. Diebold threatened litigation, demanding that Harris, her ISP, and other sites reproducing the emails take them down. Although the threatened parties complied, the emails had been so widely replicated and stored in so many varied settings that Diebold was unable to suppress them. Among others, university students at numerous campuses around the U.S. stored the emails and scrutinized them for evidence. Threatened by Diebold with provisions of the DMCA that required Web-hosting companies to remove infringing materials, the universities ordered the students to remove the materials from their sites. The students responded with a campaign of civil disobedience, moving files between students’ machines, duplicating them on FreeNet (an “anti-censorship peer-to-peer publication network”) and other peer-to-peer file-sharing systems.... They remained publicly available at all times. [474]

An attempt to suppress information on the Wikileaks hosting site, in 2008, resulted in a similar disaster.

Associated Press (via the First Amendment Center) reports that “an effort at (online) damage control has snowballed into a public relations disaster for a Swiss bank seeking to crack down on Wikileaks for posting classified information about some of its wealthy clients. While Bank Julius Baer claimed it just wanted stolen and forged documents removed from the site (rather than close it down), instead of the information disappearing, it rocketed through cyberspace, landing on other Web sites and Wikileaks’ own “mirror” sites outside the U.S.... The digerati call the online phenomenon of a censorship attempt backfiring into more unwanted publicity the “Streisand effect.” Techdirt Inc. chief executive Mike Masnick coined the term on his popular technology blog after the actress Barbra Streisand’s 2003 lawsuit seeking to remove satellite photos of her Malibu house. Those photos are now easily accessible, just like the bank documents. “It’s a perfect example of the Streisand effect,” Masnick said. “This was a really small thing that no one heard about and now it’s everywhere and everyone’s talking about it.” [475]

The so-called DeCSS uprising, in which corporate attempts to suppress publication of code for cracking the DRM on DVDs failed in the face of widespread defiance, is one of the most inspiring episodes in the history of the free culture movement.

Journalist Eric Corley—better known as Emmanuel Goldstein, a nom de plume borrowed from Orwell’s 1984—posted the code for DeCSS (so called because it decrypts the Content Scrambling System that encrypts DVDs) as a part of a story he wrote in November for the well-known hacker journal 2600. The Motion Picture Association of America (MPAA) claims that Corley defied anticircumvention provisions of the Digital Millennium Copyright Act (DMCA) by posting the offending code.... The whole affair began when teenager Jon Johansen wrote DeCSS in order to view DVDs on a Linux machine. The MPAA has since brought suit against him in his native Norway as well. Johansen testified on Thursday that he announced the successful reverse engineering of a DVD on the mailing list of the Linux Video and DVD Project (LiViD), a user resource center for video- and DVD-related work for Linux.... The judge in the case, the honorable Lewis Kaplan of the US District Court in southern New York, issued a preliminary injunction against posting DeCSS. Corley duly took down the code, but did not help his defense by defiantly linking to myriad sites which post DeCSS.... True to their hacker beliefs, Corley supporters came to the trial wearing the DeCSS code on t-shirts. There are also over 300 Websites that still link to the decryption code, many beyond the reach of the MPAA. [476]

In the Usmanov case of 2007, attempts to suppress embarrassing information led to similar Internet-wide resistance. As The Register reported:

Political websites have lined up in defence of a former diplomat whose blog was deleted by hosting firm Fasthosts after threats from lawyers acting for billionaire Arsenal investor Alisher Usmanov. Four days after Fasthosts pulled the plug on the website run by former UK ambassador to Uzbekistan Craig Murray it remains offline. Several other political and freedom of speech blogs in the UK and abroad have picked up the gauntlet however, and reposted the article that originally drew the takedown demand. The complaints against Murray’s site arose after a series of allegations he made against Usmanov.... After being released from prison, and pardoned, Usmanov became one of a small group of oligarchs to make hay in the former USSR’s post-communist asset carve-up.... On his behalf, libel law firm Schillings has moved against a number of Arsenal fan sites and political bloggers repeating the allegations.... [477]

That reference to “[s]everal other political and freedom of speech blogs,” by the way, is like saying the ocean is “a bit wet.” An article at Chicken Yoghurt blog provides a list of all the venues that have republished Murray’s original allegations, recovered from Google’s caches of the sites or from the Internet Archive. It is a very, very long list [478]—so long, in fact, that Chicken Yoghurt helpfully provides the html code with URLs already embedded in the text, so it can be easily cut and pasted into a blog post. In addition, Chicken Yoghurt provided the IP addresses of Usmanov’s lawyers as a heads-up to all bloggers who might be visited by those august personages.

A badly edited photo of a waif in a Ralph Lauren ad, which made the model appear not just emaciated but deformed, was highlighted on the Photoshop Disasters website. Lauren sent the site legal notices of DMCA infringement, and got the site’s ISP to take it down. In the process, though, the photo—and story—got circulated all over the Internet. Doctorow issued his defiance at BoingBoing:

So, instead of responding to their legal threat by suppressing our criticism of their marketing images, we’re gonna mock them. Hence this post.... ...And every time you threaten to sue us over stuff like this, we will: a) Reproduce the original criticism, making damned sure that all our readers get a good, long look at it, and; b) Publish your spurious legal threat along with copious mockery, so that it becomes highly ranked in search engines where other people you threaten can find it and take heart; and c) Offer nourishing soup and sandwiches to your models. [479]

The Trafigura case probably represents a new speed record, in terms of the duration between initial thuggish attempts to silence criticism and the company lawyers’ final decision to cave. The Trafigura corporation actually secured a court injunction against The Guardian, prohibiting it from reporting a question by an MP on the floor of Parliament about the company’s alleged dumping of toxic waste in Africa. Without specifically naming either Trafigura or the MP, Guardian editor Alan Rusbridger was able to comply with the terms of the injunction and still include enough hints in his cryptic story for readers to scour the Parliamentary reports and figure it out for themselves. By the time he finished work that day, “Trafigura” was already the most-searched-for term on Twitter; by the next morning Trafigura’s criminal acts—plus their attempt at suppressing the story—had become front-page news, and by noon the lawyers had thrown in the towel. [480]

John Robb describes the technical potential for information warfare against a corporation, swarming customers, employees, and management with propaganda and disinformation (or the most potent weapon of all, I might add—the truth), and in the process demoralizing management.

As we move forward in this epochal many-to-many global conflict, and given many early examples from a wide variety of hacking attacks and conflicts, we are likely to see global guerrillas come to routinely use information warfare against corporations. These information offensives will use network leverage to isolate corporations morally, mentally, and physically.... Network leverage comes in three forms:

Highly accurate lists of targets from hacking “black” marketplaces. These lists include all corporate employee e-mail addresses and phone numbers — both at work and at home. ~<$0.25 a dossier (for accurate lists).

Low cost e-mail spam. Messages can range from informational to phishing attacks. <$0.1 a message.

Low cost phone spam. Use the same voice-text messaging systems and call centers that can blanket target lists with perpetual calls. Pennies a call....

In short, the same mechanisms that make spamming/direct marketing so easy and inexpensive to accomplish, can be used to bring the conflict directly to the employees of a target corporation or its partner companies (in the supply chain). Executives and employees that are typically divorced/removed from the full range of their corporation’s activities would find themselves immediately enmeshed in the conflict. The objective of this infowar would be to increase...:

Uncertainty. An inability to be certain about future outcomes. If they can do this, what’s next? For example: a false/troll e-mail or phone campaign from the CEO that informs employees at work and at home that it will divest from the target area or admits to heinous crimes.

Menace. An increase [sic] personal/familial risk. The very act of connecting to directly to employee [sic] generates menace. The questions it should evoke: should I stay employed here given the potential threat?

Mistrust. A mistrust of the corporation’s moral and legal status. For example: The dissemination of information on a corporation’s actions, particularly if they are morally egregious or criminal in nature, through an NGO charity fund raising drive.

With an increase in uncertainty, menace, and mistrust within the target corporation’s ranks and across the supply chain partner companies, the target’s connectivity (moral, physical, and mental) is likely to suffer a precipitous fall. This reduction in connectivity has the potential to create non-cooperative centers of gravity within the targets as cohesion fails. Some of these centers of gravity would opt to leave the problem (quit or annul contractual relationships) and some would fight internally to divest themselves of this problem. [481]
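Taken at face value, Robb's price list implies startlingly low totals. The following back-of-envelope sketch in Python multiplies his per-unit figures out for a hypothetical campaign; the target size and per-employee message volumes are my own illustrative assumptions, not figures from Robb:

    # Rough cost of an infowar campaign at Robb's quoted unit prices.
    # Target size and per-employee volumes are hypothetical assumptions.
    employees = 10_000
    dossier_cost = 0.25    # "~<$0.25 a dossier (for accurate lists)"
    email_cost = 0.10      # "<$0.1 a message"
    emails_per_employee = 20
    call_cost = 0.02       # "pennies a call"
    calls_per_employee = 10

    total = employees * (dossier_cost
                         + email_cost * emails_per_employee
                         + call_cost * calls_per_employee)
    print(f"Total campaign cost: ${total:,.0f}")  # Total campaign cost: $24,500

Even on generous assumptions, the whole offensive costs less than a single corporate PR retainer, which is precisely Robb's point about network leverage.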

More generally, hierarchical institutions are finding that the traditional means of suppressing communication, which worked as recently as twenty years ago, are now useless. Take something as simple as suppressing a school newspaper whose content violates the administrators’ sensibilities. An increasingly common response is to set up an informal student newspaper online, and if necessary to tweak the hosting arrangements to thwart attempts at further suppression. [482]

Corporations are immensely vulnerable to informational warfare, both by consumers and by workers. The last section of Naomi Klein’s No Logo discusses in depth the vulnerability of large corporations and brand name images to netwar campaigns. [483] She pays special attention to “culture jamming,” which involves riffing off of corporate logos and thereby “tapping into the vast resources spent to make [a] logo meaningful.” [484] A good example is the anti-sweatshop campaign by the National Labor Committee, headed by Charles Kernaghan.

Kernaghan’s formula is simple enough. First, select America’s most cartoonish icons, from literal ones like Mickey Mouse to virtual ones like Kathie Lee Gifford. Next, create head-on collisions between image and reality. “They live by their image,” Kernaghan says of his corporate adversaries. “That gives you a certain power over them... these companies are sitting ducks.” [485]

At the time she wrote, technological developments were creating unprecedented potential for culture jamming. Digital design and photo editing technology made it possible to make incredibly sophisticated parodies of corporate logos and advertisements. [486] Interestingly, a lot of corporate targets shied away from taking culture jammers to court for fear the public might side with the jammers against the corporate plaintiffs. The more intelligent corporate bosses understand that “legal battles... will clearly be fought less on legal than on political grounds.” In the words of one advertising executive, “No one wants to be in the limelight because they are the target of community protests or boycotts.” [487]

Klein riffed off of Saul Alinsky’s term “political jujitsu” to describe “using one part of the power structure against another part.” Culture jamming is a form of political jujitsu that uses the power of corporate symbols—symbols deliberately developed to tap into subconscious drives and channel them in directions desired by the corporation—against their corporate owners. [488]

Anticorporate activism enjoys the priceless benefits of borrowed hipness and celebrity—borrowed, ironically enough, from the brands themselves. Logos that have been burned into our brains by the finest image campaigns money can buy, ...are bathed in a glow.... ...Like a good ad bust, anticorporate campaigns draw energy from the power and mass appeal of marketing, at the same time as they hurl that energy right back at the brands that have so successfully colonized our everyday lives. You can see this jujitsu strategy in action in what has become a staple of many anticorporate campaigns: inviting a worker from a Third World country to come visit a First World superstore—with plenty of cameras rolling. Few newscasts can resist the made-for-TV moment when an Indonesian Nike worker gasps as she learns that the sneakers she churned out for $2 a day sell for $120 at San Francisco Nike Town. [489]

The effect of “sully[ing] some of the most polished logos on the brandscape,” as Klein characterized Kernaghan’s efforts, [490] is much like that of “Piss Christ.” He plays on the appeal of the dogs in 101 Dalmatians by comparing the living conditions of the animals on the set to those of the human sweatshop workers who produce the tie-in products. He shows up for public appearances with “his signature shopping bag brimming with Disney clothes, Kathie Lee Gifford pants and other logo gear,” along with pay slips and price tags used as props to illustrate the discrepancy between worker pay and retail price. In El Salvador, he pulls items out of the bag with price tags attached to show workers what their products fetch in the U.S. After a similar demonstration of Disney products in Haiti, “workers screamed with shock, disbelief, anger, and a mixture of pain and sadness, as their eyes fixed on the Pocahontas shirt”—a reaction captured in the film Mickey Mouse Goes to Haiti . [491]

Culture jamming is also an illustration of the effects of network culture. Although corporate imagery is still created by people thinking in terms of one-way broadcast communication, the culture jammers have grown up in an age where audiences can talk back to the advertisement or mock it to one another. The content of advertising becomes just another bit of raw material for mashups, as products once transmitted on a one-way conveyor belt from giant factory to giant retailer to consumer have now become raw material for hacking and reverse-engineering. [492]

The Wobbly idea of “direct action on the job” was a classic example of asymmetric warfare. And modern forms of networked resistance are ideally suited to labor struggle. In particular, network technology creates previously unimaginable possibilities for the Wobbly tactic of “open-mouth sabotage.” As described in “How to Fire Your Boss”:

Sometimes simply telling people the truth about what goes on at work can put a lot of pressure on the boss. Consumer industries like restaurants and packing plants are the most vulnerable. And again, as in the case of the Good Work Strike, you’ll be gaining the support of the public, whose patronage can make or break a business. Whistle Blowing can be as simple as a face-to-face conversation with a customer, or it can be as dramatic as the P.G.&E. engineer who revealed that the blueprints to the Diablo Canyon nuclear reactor had been reversed.... Waiters can tell their restaurant clients about the various shortcuts and substitutions that go into creating the faux-haute cuisine being served to them. Just as Work to Rule puts an end to the usual relaxation of standards, Whistle Blowing reveals it for all to know. [493]

The authors of The Cluetrain Manifesto are quite expansive on the potential for frank, unmediated conversations between employees and customers as a way of building customer relationships and circumventing the consumer’s ingrained habit of blocking out canned corporate messages. [494] They characterize the typical corporate voice as “sterile happytalk that insults the intelligence,” “the soothing, humorless monotone of the mission statement, marketing brochure, and your-call-is-important-to-us busy signal.” [495]

When employees engage customers frankly about the problems they experience with the company’s product, and offer useful information, customers usually respond positively.

What the Cluetrain authors don’t mention is the potential for disaster, from the company’s perspective, when disgruntled workers see the customer as a potential ally against a common enemy. What would happen if employees decided, not that they wanted to help their company by rescuing it from the tyranny of PR and the official line and winning over customers with a little straight talk—but that they hated the company and that its management was evil? What if, rather than simply responding to a specific problem with what the customer needed to know, they’d aired all the dirty laundry about management’s asset stripping, gutting of human capital, hollowing out of long-term productive capability, gaming of its own bonuses and stock options, self-dealing on the job, and logrolling with directors?

Corporate America, for the most part, still views the Internet as “just an extension of preceding mass media, primarily television.” Corporate websites are designed on the same model as the old broadcast media: a one-to-many, one-directional communications flow, in which the audience couldn’t talk back. But now the audience can talk back.

Imagine for a moment: millions of people sitting in their shuttered homes at night, bathed in that ghostly blue television aura. They’re passive, yeah, but more than that: they’re isolated from each other. Now imagine another magic wire strung from house to house, hooking all these poor bastards up. They’re still watching the same old crap. Then, during the touching love scene, some joker lobs an off-color aside — and everybody hears it. Whoa! What was that?... The audience is suddenly connected to itself. What was once The Show, the hypnotic focus and tee-vee advertising carrier wave, becomes... an excuse to get together.... Think of Joel and the ‘bots on Mystery Science Theater 3000. The point is not to watch the film, but to outdo each other making fun of it. And for such radically realigned purposes, some bloated corporate Web site can serve as a target every bit as well as Godzilla, King of the Monsters.... So here’s a little story problem for ya, class. If the Internet has 50 million people on it, and they’re not all as dumb as they look, but the corporations trying to make a fast buck off their asses are as dumb as they look, how long before Joe is laughing as hard as everyone else? The correct answer of course: not long at all. And as soon as he starts laughing, he’s not Joe Six-Pack anymore. He’s no longer part of some passive couch-potato target demographic. Because the Net connects people to each other, and impassions and empowers through those connections, the media dream of the Web as another acquiescent mass-consumer market is a figment and a fantasy. The Internet is inherently seditious. It undermines unthinking respect for centralized authority, whether that “authority” is the neatly homogenized voice of broadcast advertising or the smarmy rhetoric of the corporate annual report. [496]

....Look at how this already works in today’s Web conversation. You want to buy a new camera. You go to the sites of the three camera makers you’re considering. You hastily click through the brochureware the vendors paid thousands to have designed, and you finally find a page that actually gives straightforward factual information. Now you go to a Usenet discussion group, or you find an e-mail list on the topic. You read what real customers have to say. You see what questions are being asked and you’re impressed with how well other buyers—strangers from around the world—have answered them.... Compare that to the feeble sputtering of an ad. “SuperDooper Glue—Holds Anything!” says your ad. “Unless you flick it sideways—as I found out with the handle of my favorite cup,” says a little voice in the market. “BigDisk Hard Drives—Lifetime Guarantee!” says the ad. “As long as you can prove you oiled it three times a week,” says another little voice in the market. What these little voices used to say to a single friend is now accessible to the world. No number of ads will undo the words of the market. How long does it take until the market conversation punctures the exaggerations made in an ad? An hour? A day? The speed of word of mouth is now limited only by how fast people can type.... [497]

...Marketing has been training its practitioners for decades in the art of impersonating sincerity and warmth. But marketing can no longer keep up appearances. People talk. [498]

Even more important for our purposes, employees talk. It’s just as feasible for the corporation’s workers to talk directly to its customers, and for workers and customers together to engage in joint mockery of the company.

In an age when unions have virtually disappeared from the private sector workforce, and downsizings and speedups have become a normal expectation of working life, the vulnerability of the employer’s public image may be the one bit of real leverage the worker has over him—and it’s a doozy. If workers go after that image relentlessly and systematically, they’ve got the boss by the short hairs.

Web 2.0, the “writeable web,” is fundamentally different from the 1990s vision of an “information superhighway” (one-way, of course), a more complex version of the old unidirectional hub-and-spoke architecture of the broadcast era—or as Tapscott and Williams put it, “one big content-delivery mechanism—a conveyor belt for prepackaged, pay-per-use content” in which “publishers... exert control through various digital rights management systems that prevent users from repurposing or redistributing content.” [499] Most large corporations still see their websites as sales brochures, and Internet users as a passive audience. But under the Web 2.0 model, the Internet is a platform in which users are the active party.

Given the ease of setting up anonymous blogs and websites (just think of any company and then look up the URL employernamesucks.com), the potential for using comment threads and message boards, the possibility of anonymous saturation emailing of the company’s major suppliers and customers and advocacy groups concerned with that industry.... well, let’s just say the potential for “swarming” and “netwar” is corporate management’s worst nightmare.

It’s already become apparent that corporations are quite vulnerable to bad publicity from dissident shareholders and consumers. For example, Luigi Zingales writes,

shareholders’ activist Robert Monks succeeded [in 1995] in initiating some major changes at Sears, not by means of the norms of the corporate code (his proxy fight failed miserably) but through the pressure of public opinion. He paid for a full-page announcement in the Wall Street Journal where he exposed the identities of Sears’ directors, labeling them the “non-performing assets” of Sears.... The embarrassment for the directors was so great that they implemented all the changes proposed by Monks. [500]

There’s no reason to doubt that management would be equally vulnerable to embarrassment by such tactics from disgruntled production workers, in today’s networked world.

For example, although Wal-Mart workers are not represented by NLRB-certified unions in any bargaining unit in the United States, the “associates” have been quite successful at organized open-mouth sabotage through Wake Up Wal-Mart and similar activist organizations.

Consider the public relations battle over Wal-Mart’s “open availability” policy. Corporate headquarters in Bentonville quickly moved, in the face of organized public criticism, to overturn the harsher local policy announced by management in Nitro, West Virginia.

A corporate spokesperson says the company reversed the store’s decision because Wal-Mart has no policy that calls for the termination of employees who are unable to work certain shifts, the Gazette reports. “It is unfortunate that our store manager incorrectly communicated a message that was not only inaccurate but also disruptive to our associates at the store,” Dan Fogleman tells the Gazette. “We do not have any policy that mandates termination.” [501]

The Wal-Mart Workers’ Association acts as an unofficial union, and has repeatedly obtained concessions from store management teams in several publicity campaigns designed to embarrass and pressure the company. [502] As Ezra Klein noted,

This is, of course, entirely a function of the pressure unions have exerted on Wal-Mart—pressure exerted despite the unions having almost no hope of actually unionizing Wal-Mart. Organized Labor has expended tens of millions of dollars over the past few years on this campaign, and while it hasn’t increased union density one iota, it has given a hundred thousand Wal-Mart workers health insurance, spurred Wal-Mart to launch an effort to drive down prescription drug prices, drove them into the “Divided We Fail” health reform coalition, and contributed to the company’s focus on greening their stores (they needed good press to counteract all the bad). [503]

Another example is the IWW-affiliated Starbucks union, which publicly embarrassed Starbucks Chairman Howard Schultz. It organized a mass email campaign notifying the board of a co-op apartment building, which Schultz was seeking to buy into, of his union-busting activities. [504]

Charles Johnson points to the Coalition of Immokalee Workers as an example of an organizing campaign outside the Wagner framework, relying heavily on the open mouth:

They are mostly immigrants from Mexico, Central America, and the Caribbean; many of them have no legal immigration papers; they are pretty near all mestizo, Indian, or Black; they have to speak at least four different languages amongst themselves; they are often heavily in debt to coyotes or labor sharks for the cost of their travel to the U.S.; they get no benefits and no overtime; they have no fixed place of employment and get work from day to day only at the pleasure of the growers; they work at many different sites spread out anywhere from 10–100 miles from their homes; they often have to move to follow work over the course of the year; and they are extremely poor (most tomato pickers live on about $7,500–$10,000 per year, and spend months with little or no work when the harvesting season ends). But in the face of all that, and across lines of race, culture, nationality, and language, the C.I.W. have organized themselves anyway, through efforts that are nothing short of heroic, and they have done it as a wildcat union with no recognition from the federal labor bureaucracy and little outside help from the organized labor establishment. By using creative nonviolent tactics that would be completely illegal if they were subject to the bureaucratic discipline of the Taft-Hartley Act, the C.I.W. has won major victories on wages and conditions over the past two years. They have bypassed the approved channels of collective bargaining between select union reps and the boss, and gone up the supply chain to pressure the tomato buyers, because they realized that they can exercise a lot more leverage against highly visible corporations with brands to protect than they can in dealing with a cartel of government-subsidized vegetable growers that most people outside of southern Florida wouldn’t know from Adam.

The C.I.W.’s creative use of moral suasion and secondary boycott tactics have already won them agreements with Taco Bell (in 2005) and then McDonald’s (this past spring), which almost doubled the effective piece rate for tomatoes picked for these restaurants. They established a system for pass-through payments, under which participating restaurants agreed to pay a bonus of an additional penny per pound of tomatoes bought, which an independent accountant distributed to the pickers at the farm that the restaurant bought from. Each individual agreement makes a significant but relatively small increase in the worker’s effective wages...[,] but each victory won means a concrete increase in wages, and an easier road to getting the pass-through system adopted industry-wide, which would in the end nearly double tomato-pickers’ annual income.

Burger King held out for a while after this, following Taco Bell’s earlier successive strategies of ignoring, stonewalling, slick PR, slander (denouncing farm workers as “richer than most minimum-wage workers,” consumer boycotts as extortion, and C.I.W. as scam artists), and finally even an attempt at federal prosecution for racketeering. [505]

As Johnson predicted, the dirty tricks were of no avail. He followed up on this story in May 2008, when Burger King caved in. Especially entertaining, after the smear campaign and other dirty tricks carried out by the Burger King management team, was this public statement by BK CEO John Chidsey:

We are pleased to now be working together with the CIW to further the common goal of improving Florida tomato farmworkers’ wages, working conditions and lives. The CIW has been at the forefront of efforts to improve farm labor conditions, exposing abuses and driving socially responsible purchasing and work practices in the Florida tomato fields. We apologize for any negative statements about the CIW or its motives previously attributed to BKC or its employees and now realize that those statements were wrong. [506]

Of course, corporations are not entirely oblivious to these threats; the corporate world is beginning to perceive the danger of open-mouth sabotage. For example, one Pinkerton thug almost directly equates sabotage with the open mouth, to the near exclusion of all other forms of direct action. According to Darren Donovan, a vice president of Pinkerton’s eastern consulting and investigations division,

[w]ith sabotage, there’s definitely an attempt to undermine or disrupt the operation in some way or slander the company.... There’s a special nature to sabotage because of the overtness of it—and it can be violent.... Companies can replace windows and equipment, but it’s harder to replace their reputation.... I think that’s what HR execs need to be aware of because it is a crime, but it can be different from stealing or fraud. [507]

As suggested by both the interest of a Pinkerton thug and his references to “crime,” there is a major focus in the corporate world on identifying whistleblowers and leakers through surveillance technology, and on the criminalization of free speech to combat negative publicity.

And if Birmingham Wragge is any indication, there’s a market for corporations that seek to do a Big Brother on anonymous detractors.

Birmingham’s largest law firm has launched a new team to track down people who make anonymous comments about companies online. The Cyber Tracing team at Wragge & Co was set up to deal with what the law firm said was a rising problem with people making anonymous statements that defamed companies, and people sharing confidential information online. And Wragge boasted the new team would ensure there was “nowhere to hide in cyberspace”. The four-strong team at the Colmore Row firm is a combination of IT litigation and employment law specialists.

One of the members of the team said redundancies and other reorganisations caused by the recession meant the numbers of disgruntled employees looking to get their own back on employers or former employers was also on the rise. Adam Fisher said: “Organisations are suffering quite a lot from rogue employees at the moment, partly because of redundancies or general troubles. “We have had a number of problematic cases where people have chosen to put things online or have shared information on their company email access.” He said much of the job involved trying to get Internet Service Providers to give out details of customers who had made comments online....

A spokeswoman for Wragge said: “Courts can compel Internet Service Providers or telephone service providers to make information available regarding registered names, email addresses and other key account holder information.” [508]

But if corporate managers think this will actually work, they’re even stupider than I thought they were. Firms like Birmingham Wragge, and policies like RIAA lawsuits and “three strikes” cutoff of ISPs, will have only one significant effect: the rapid mainstreaming of proxy servers and encryption.
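What that mainstreaming looks like in practice is already mundane. Here is a minimal sketch, assuming a local Tor client listening on its default SOCKS port (9050) and Python's requests library installed with SOCKS support (pip install requests[socks]); the target URL is a placeholder:

    # Route an ordinary web request through a local anonymizing proxy.
    # Assumes a Tor client is already running on its default SOCKS port.
    import requests

    proxies = {
        # "socks5h" (unlike plain "socks5") resolves DNS through the
        # proxy too, so the local network never sees the destination name.
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    resp = requests.get("https://example.com/", proxies=proxies, timeout=30)
    print(resp.status_code)

Once a few lines of setup make any user's traffic indistinguishable from everyone else's, compelling ISPs to hand over “key account holder information” identifies nothing.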

In late 2004 and 2005, the phenomenon of “Doocing” (the firing of bloggers for negative commentary on their workplace, or for the expression of other non-approved opinions on their blogs) began to attract mainstream media attention, and exemplified a specialized case of the Streisand effect. Employers who fired disgruntled workers out of fear of the bad publicity their blogs might attract were blindsided by the far worse publicity—far, far worse—that resulted from news of the firing (the term “Doocing” itself comes from Dooce, the name of a blog whose owner was fired). Rather than an insular blog audience of a few hundred reading that “it sucks to work at Employer X,” or “Employer X gets away with treating its customers like shit,” it became a case of tens of millions of readers of the major newspapers of record and wire services reading that “Employer X fires blogger for revealing how bad it sucks to work at Employer X.” Again, the bosses are learning that, for the first time since the rise of the giant corporation and the broadcast culture, workers and consumers can talk back—and not only is there absolutely no way to shut us up, but we actually just keep making more and more noise the more they try to do so. [509]

There’s a direct analogy between the Zapatista netwar and asymmetric warfare by labor and other anti-corporate activists. The Zapatistas turned an obscure and low-level military confrontation within an isolated province into a global political struggle. They waged their netwar with the Mexican government mostly outside Chiapas, isolating the authorities and pitting them against the force of world opinion. Similarly, networked labor activists turn labor disputes within a corporation into a society-wide economic, political, and media struggle, isolating corporate management and exposing it to swarming from an unlimited number of directions. Netwarriors choose their own battlefield.

The problem with authoritarianism like that of the Pinkertons and Birmingham Wragge, from the standpoint of the bosses and their state, is that before you can waterboard open-mouth saboteurs at Gitmo you’ve got to catch them first. If the litigation over Diebold’s corporate files and emails teaches anything, it’s that court injunctions and similar expedients are virtually useless against guerrilla netwar. The era of the SLAPP lawsuit is over, except for those cases where the offender is considerate enough to volunteer his home address to the target. Even in the early days of the Internet, the McLibel case turned into “the most expensive and most disastrous public-relations exercise ever mounted by a multinational company.” [510] As we already noted, the easy availability of web anonymity, the “writeable web” in its various forms, the feasibility of mirroring shut-down websites, and the ability to replicate, transfer, and store huge volumes of digital information at zero marginal cost mean that it is simply impossible to shut people up. The would-be corporate information police will just wear themselves out playing whack-a-mole. They will be exhausted and destroyed in exactly the same way that the most technically advanced army in the world was defeated by a guerrilla force in black pajamas.

Whether it be disgruntled consumers, disgruntled workers, or networked public advocacy organizations, the basic principles are the same. Jon Husband, of Wirearchy blog, writes of the potential threat network culture and the free flow of information pose to traditional hierarchies.

Smart, interested, engaged and articulate people exchange information with each other via the Web, using hyperlinks and web services. Often this information... is about something that someone in a position of power would prefer that other people (citizens, constituents, clients, colleagues) not know.... The exchanged-via-hyperlinks-and-web-services information is retrievable, re-usable and when combined with other information (let’s play connect-the-dots here) often shows the person in a position of power to be a liar or a spinner, or irresponsible in ways that are not appropriate. This is the basic notion of transparency (which describes a key facet of the growing awareness of the power of the Web).... Hyperlinks, the digital infrastructure of the Web, the lasting retrievability of the information posted to the Web, and the pervasive use of the Web to publish, distribute and transport information combine to suggest that there are large shifts in power ahead of us. We have already seen some of that... we will see much more unless the powers that be manage to find ways to control the toings-and-froings on the Web.... [T]he hoarding and protection of sensitive information by hierarchical institutions and powerful people in those institutions is under siege.... [511]

Chris Dillow, of Stumbling and Mumbling blog, argues we’re now at the stage where the leadership of large, hierarchical organizations has achieved “negative credibility.” The public, in response to a public statement by Gordon Brown, seemingly acted on the assumption that the truth was the direct opposite.

Could it be that the ruling class now has negative credibility? Maybe people are now taking seriously the old Yes, Minister joke—that one should never believe anything until it’s officially denied. If so, doesn’t this have serious implications? It means not merely that the managerial class has lost one of the weapons it can use to control us, but that the weapon, when used, actually fires upon its user. [512]

Thanks to network culture, the cost of “manufacturing consent” is rising at an astronomical rate. The communications system is no longer the one described by Edward Herman, with the state and its corporate media allies controlling a handful of expensive centralized hubs and talking to us via one-way broadcast links. We can all talk directly to each other now, and virally circulate evidence that calls the state’s propaganda into doubt. For an outlay of well under $1000, you can do what only the White House Press Secretary or a CBS news anchor could do forty years ago. The forces of freedom will be able to contest the corporate state’s domination over public consciousness, for the first time in many decades, on even terms.

We have probably already passed a “singularity,” a point of no return, in the use of networked information warfare. It took some time for employers to reach a consensus that the old corporate liberal labor regime no longer served their interests, and to take note of and fully exploit the union-busting potential of Taft-Hartley. But once they began to do so, the implosion of Wagner-style unionism was preordained. Likewise, it will take time for the realization to dawn on workers that things are only getting worse, that there’s no hope in traditional unionism, and that in a networked world they have the power to bring the employer to his knees by their own direct action. But when they do, the outcome is also probably preordained. The twentieth century was the era of the giant organization. By the end of the twenty-first, there probably won’t be enough of them left to bury.

Appendix: Three Works on Abundance and Technological Unemployment

A review essay [513].

William M. Dugger and James T. Peach. Economic Abundance: An Introduction (Armonk, New York and London, England: M.E. Sharpe, 2009).

Adam Arvidsson. “The Makers—again: or the need for keynesian management of abundance,” P2P Foundation Blog, February 25, 2010. [514]

Martin Ford. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (Acculant Publishing, 2009).

I’ve grouped these three authors together because their focus overlaps in one particular: their approach to abundance, to the imploding requirements for labor and/or capital to produce a growing share of the things we consume, is in some way to guarantee full employment of the idle labor and capital.

They all share, in some sense, a “demand-side” focus on the problem of abundance: assuming that the prices of goods and services either will or should be propped up despite the imploding cost of production, and then looking for ways to provide the population with sufficient purchasing power to buy those goods. My approach, which will gradually be developed below, is just the opposite—a “supply-side” approach. That means, in practical terms, flushing artificial scarcity rents of all kinds out of the system so that people will no longer need as many hours of wage labor to pay for stuff....

I get the impression that Dugger and Peach are influenced by Veblen’s The Engineers and the Price System, which likewise focused on the social and institutional barriers to running industry at the technical limits of its output capacity and then distributing the entire output. The most important task from their standpoint is to solve the problem of inadequate demand, in order to eliminate idle industrial capacity and unemployment. They accept as normal, for the most part, the mass-production industrial model of the mid-twentieth century, and seek only to remove barriers to disposing of its full product.

For Dugger and Peach, scarcity is a problem of either the incomplete employment of all available production inputs, or the unequal distribution of purchasing power for production outputs. Their goal is to achieve “universal employment.”

Instead of the natural rate of unemployment or full employment, we propose driving the unemployment rate down closer and closer to absolute zero. Provide universal employment and the increased production will provide the wherewithal to put abundance within our grasp.

That’s the kind of vision I’d identify more with Michael Moore than, say, Chris Anderson: a society in which virtually everyone works a forty-hour week, the wheels of industry run at full capacity churning out endless amounts of stuff, and people earn enough money to keep buying all that stuff.

But in our existing economy, the volume of stuff produced is mainly a response to the problem of overaccumulation: the need to find new ways to keep people throwing stuff away and replacing it so that our overbuilt industry can keep running at capacity. If goods were not designed to become obsolete, and it took much smaller industrial capacity to produce what we consume, some people might view it as silly to think up all sorts of new things to consume just so they could continue working forty hours a week and keep industry running at full capacity. They might prefer to liquidate a major portion of industrial capacity and work fewer hours, rather than churning out more and more products to earn the money to buy more and more products to keep themselves employed producing more and more products so they could keep consuming more and more, ad nauseam.

In failing to distinguish between natural and artificial scarcity, Dugger and Peach conflate the solutions to two different problems.

When scarcity is natural—i.e. where it costs money or effort to produce a good—then the main form of economic injustice is the broken link between effort and consumption. Privilege enables some people to consume at others’ expense. The peasant must work harder to feed a landlord in addition to himself, and the factory worker must produce a surplus consumed by the idle rentier. The problem of privilege, and the zero-sum relationship that results from it, is genuine. And it is almost entirely the focus of Dugger’s and Peach’s analysis. What’s more, their focus on the distribution of claims to the product as a solution is entirely appropriate in the case of natural scarcity. But natural scarcity and the unjust distribution of scarce goods are nothing new; they’re problems that have existed, in essentially their present form, since the beginning of class society. Their analysis, which treats inequitable distribution of naturally scarce goods as the whole of scarcity, is completely irrelevant to the problem of artificial scarcity—i.e., artificially inflated input costs or prices that embody rents on artificial property rights. The solution to this latter problem is not to find ways to keep everyone on the treadmill forty hours a week, but to eliminate the artificial scarcity component of price so that people can work less.

The real problem, in short, is not to achieve full employment, but to reduce the amount of employment it takes to purchase our present standard of living.

In the first installment of this review essay, I dealt with Economic Abundance by William Dugger and James Peach. I found it only tangentially related, at best, to the post-scarcity tradition we’re familiar with.

Adam Arvidsson and Martin Ford both write from something much closer to that tradition.

Arvidsson, following up on his initial review of Makers by Cory Doctorow [515], set out to explain the difference between his views and mine.

In my review of Makers [516], I argued that the central cause of the economic crisis was (first) the excess capacity of mass-production industry, and (second) the superfluous investment capital which lacked any profitable outlet thanks to the imploding cost of micromanufacturing technology. Arvidsson responded:

However an oversupply of capital is only that in relation to an insufficient demand. The reason why hundreds of thousands or even millions of ventures can not prosper is that there is insufficient demand for their products. This suggests that an economy of abundance (also a relative concept—the old industrial economy was surely an economy of abundance in relation to the old artisanal economy) needs a Keynesian regime of regulation. That is, the state or some other state-like actor must install a mechanism for the redistribution of value that guarantees a sustained demand for new products. To accomplish this entails two things. First, to redistribute the new value that is generated away from the restricted flows of corporate and financial rent that circulate among Kettlewell and his investors and to larger swats of the population (thus activating the multiplier effect!). Since the Maker boom builds on highly socialized, or even ubiquitous productivity, it seems logical that such a redistribution takes the form of some kind of guaranteed minimum income. Second, the state (or state-like actor) must guarantee a direction of market expansion that is sustainable in the future. In our present situation that would probably mean to offer incentives to channel the productivity of a new maker culture into providing solutions to the problem of transitioning to sustainability within energy, transport and food production systems. This would, no doubt open up new sources of demand that would be able to sustain the new economy of abundance for a long time, and after that we can go into space! Without such a Keynesian governance, a future economy of abundance is doomed to collapse, just like the industrial economy of abundance collapsed in 1929.

This might have been true of the excess industrial capacity of the 1930s, when the primary problem was overinvestment and the maldistribution of purchasing power rather than a rapid decline in the money price of capital goods. Under those circumstances, with the technical means themselves changing in a fairly gradual manner, the size of the gap between existing demand and demand on a scale necessary to run at full capacity might well be small enough to solve with a guaranteed income, or social credit, or some similar expedient.

But the problem in Makers is entirely different. It’s not simply excess industrial capacity in an environment of gradual and stable technological advance. It takes place in an environment in which the cost of capital goods required for industrial production has fallen a hundredfold. In that environment, the only way to avoid superfluous investment capital with no profitable outlet would be if demand increased a hundredfold in material terms. If a given consumption good produced in a million dollar factory can now be produced in a $10,000 garage shop, that would mean I’d have to buy a hundred of that good where I’d bought only one before, in order to cause a hundred times as many garage shops to be built and soak up the excess capital. Either that, or I’d have to think of a hundred times as many material goods to create sufficient demand to expand industrial capacity a hundredfold. I don’t think demand is anywhere near that upwardly elastic. The oversupply of capital in Makers is mainly in relation to the cost of producer goods.
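To make that arithmetic concrete, here is a minimal sketch in Python. The hundredfold cost drop and the factory and garage-shop price tags come from the Makers scenario discussed above; the size of the capital pool is a purely hypothetical round number.

    # Illustrative arithmetic: how much demand would have to grow to absorb
    # the same pool of investment capital once capital goods get cheap.
    # The 100x cost drop is from the Makers scenario; the capital pool is
    # a hypothetical round number.
    old_factory_cost = 1_000_000    # conventional factory
    new_shop_cost = 10_000          # garage micro-factory
    cost_ratio = old_factory_cost / new_shop_cost            # 100x

    capital_pool = 100_000_000      # hypothetical investable capital
    old_factories_funded = capital_pool / old_factory_cost   # 100 factories
    new_shops_funded = capital_pool / new_shop_cost          # 10,000 shops

    # For all that capital to stay profitably employed, physical output
    # (and hence demand) would have to expand by the same factor:
    print(f"Demand must grow {cost_ratio:.0f}-fold to soak up the same capital")

In other words, unless consumers are prepared to buy a hundred times as much stuff, most of the old investment capital simply has nowhere profitable to go.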

So the solution, in my opinion, is—again—to approach the problem from the supply side. Allow the embedded scarcity rents in the prices of our goods to evaporate, and the bubble-inflated values of real estate and other assets along with them, so that it takes less money and fewer hours of work to obtain the things we need.

Of the three works considered in this series of review essays, Ford’s pays by far the most attention to the issue of technological unemployment. It’s the central theme of his book.

Members of the P2P Research and Open Manufacturing lists are probably familiar with the worst-case scenarios for technological unemployment frequently outlined in the posts of member Paul Fernhout. Coupled with draconian social controls and strong IP enforcement, it’s the scenario of Marshall Brain’s Manna. Still others are surely familiar with similar projections in Jeremy Rifkin’s The End of Work.

Ford writes very much in the same tradition.

But there are significant mitigating features to technological unemployment which Ford fails to address—features which I’ve also raised on-list in debates with Fernhout. Most important is the imploding price of means of production.

Most discussions of technological unemployment by people like Rifkin and Ford implicitly assume a capital-intensive mass production model, using expensive, product-specific machines: conventional factories, in other words, in just about every particular except the radically reduced need for people to work in them. They seem to be talking about something like a GM factory, with microcontrollers and servomotors in place of workers, like the Ithaca works in Vonnegut’s Player Piano. If such expensive, capital-intensive, mass-production methods constituted the entire world of manufacturing employment, as they did in 1960, then the Rifkin/Ford scenario would indeed be terrifying.

But the mass-production model of manufacturing in large factories has drastically shrunk in significance over the past thirty years, as described by Michael Piore and Charles Sabel in The Second Industrial Divide. Manufacturing corporations have always deferred investments in plant and equipment in economic downturns, because—as John Kenneth Galbraith pointed out in The New Industrial State—the kinds of expensive product-specific machinery used in Sloanist mass production require full utilization to amortize fixed costs, which in turn requires a high degree of confidence in the stability of demand before companies will invest in them. During recessions, therefore, manufacturing corporations tend to expand production when necessary by contracting out to the craft periphery. But the economic crisis of the 1970s was the beginning of a prolonged period of economic stagnation, with each decade’s economic growth slower than the previous decade’s and with anemic levels of employment and demand. And it was also the beginning of a long-term structural trend toward shifting production capacity from the mass-production core to the craft periphery. Around the turn of the century, the total share of industrial production carried out in job-shops using general purpose machinery surpassed the amount still carried out in conventional mass-production industry.

On pp. 76 and 92, Ford argues that some jobs, like auto mechanic or plumber, are probably safe from automation for the time being because of the nature of the work: a combination of craft skills and general-purpose machinery. But manufacturing work, to the extent that it has shifted to small shops like those in Emilia-Romagna and Shenzhen, using general-purpose machinery for short production runs, has taken on the same character in many instances. If manufacturing continues to be organized primarily on a conventional assembly-line model using automated, highly specialized machines, but with the additional step of automating all handing off of goods from one step to the next, then the threat of 100% automation will be credible. But if most manufacturing shifts to the small shop, with a craftsman setting up general purpose machines and supplying feed stock by hand, then Ford’s auto mechanic/housekeeper model is much more relevant.

Indeed, the shift toward lean production methods like the Toyota Production System has been associated with the conscious choice of general-purpose machinery and skilled labor in deliberate preference to automated mass-production machinery. The kinds of product-specific machinery that are most conducive to automation are directly at odds with the entire lean philosophy, because they require subordinating the organization of production and marketing to the need to keep the expensive machines running at full capacity. Conventional Sloanist mass-production optimized the efficiency of each separate stage in the production process by maximizing throughput to cut down the unit costs on each expensive product-specific machine; but it did so at the cost of pessimizing the production process as a whole (huge piles of in-process inventory piled up between machines, waiting for somebody downstream to actually need it, and warehouses piled full of finished goods awaiting orders). Lean production achieves sharp reduction in overall costs by using “less efficient,” more generalized machinery at each stage in the production process, in order to site production as close as possible to the market, scale the overall flow of production to orders, and scale the machinery to the flow of production.

Ford himself concedes that the high capital outlays for automating conventional mass-production industry may delay the process in the medium term (p. 215). And indeed, the pathological behaviors that result from the high cost of automated product-specific machinery (like optimizing the efficiency of each stage at the expense of pessimizing the overall production flow, as we saw immediately above) are precisely what Toyota pursued a different production model to avoid. Large-scale, automated, product-specific machinery creates fixed costs that inevitably require batch production, large inventories and push distribution.

What’s more, Ford’s scenario of the business owner’s motivation for adopting automation technology to cut costs implicitly assumes a model of production and ownership that may not be warranted. As the costs of machinery fall, the conventional distinctions between worker and owner and between machinery and tools are eroding, and the idea of the firm as a large agglomeration of absentee-owned capital hiring wage workers will become less and less representative of the real world. Accordingly, scenarios in which the “business owner” is the primary actor deciding whether to buy automated machinery or hire workers are apt to be less relevant. The more affordable and smaller in scale production tools become, the more frequently the relevant decisionmakers in the capital vs. labor tradeoff will be people working for themselves.

Besides the shift that’s already taken place under the Toyota Production System and flexible manufacturing networks like Emilia-Romagna, the shift toward small scale, low cost, general purpose machinery is continuing with the ongoing micromanufacturing revolution as it’s currently being worked out in such venues as Factor e Farm, hackerspaces, Fab Labs, tech shops, Ponoko, and 100kGarages.

Technological unemployment, as described in the various scenarios of Rifkin, Brain and Ford, is meaningful mainly because of the divorce of capital from labor which resulted from the high price of producer goods during the mass production era. Indeed, the very concept of “employment” and “jobs,” as the predominant source of livelihood, was a historical anomaly brought about by the enormous cost of industrial machinery (machinery which only the rich, or enterprises with large aggregations of rich people’s capital, could afford). Before the industrial revolution, the predominant producer goods were general-purpose tools affordable to individual laborers or small shops. The industrial revolution, with the shift from affordable tools to expensive machinery, was associated with a shift from an economy based primarily on self-employed farmers and artisans, and subsistence production for direct use in the household sector, to an economy where most people were hired as wage laborers by the owners of the expensive machinery and purchased most consumption goods with their wages.

But the threat of technological unemployment becomes less meaningful if the means of production fall in price, and there is a retrograde shift from expensive machinery to affordable tools as the predominant form of producer good. And we’re in the middle of just such a shift, as a few thousand dollars can buy general-purpose CNC machine tools with the capabilities once possessed only by a factory costing hundreds of thousands of dollars. The same forces making more and more jobs superfluous are simultaneously reducing barriers to the direct ownership of production tools by labor.

So rather than Ford’s scenario of the conventional factory owner deciding whether to invest in automated machinery or hire workers, we’re likely to see an increasing shift to a scenario in which the typical actor is a group of workers deciding to spend a few thousand dollars to set up a garage factory to supply their neighborhood with manufactured goods in exchange for credit in the barter network, and in turn purchasing the output of other micromanufacturing shops or the fruit, vegetables, bread, cheese, eggs, beer, clothing, haircare services, unlicensed cab service, etc., available within the same network. Unlike Ford, as we will see in the next section, I see our primary task as eliminating the barriers to this state of affairs.

I do agree with Ford that we’ve been experiencing a long-term trend toward longer jobless recoveries and lower levels of employment (p. 134). Total employment has declined 10% since it peaked in 2000, for example. And despite all the Republican crowing over Obama’s projection that unemployment would reach only 8.5% in 2009, that’s exactly the level of unemployment that Okun’s law would have predicted with the decline in GDP that we actually experienced. Our conventional econometric rules of thumb for predicting job losses with a given scale of economic downturn have become worthless because of the long-term structural reduction in demand for labor, and long-term unemployment is at the highest level since the Great Depression.
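For readers who want to check that kind of back-of-the-envelope figure, here is a minimal sketch using a common textbook form of Okun’s law. The 0.5 coefficient and the 3% trend-growth rate are the usual rule-of-thumb values; the baseline unemployment rate and GDP figure below are illustrative round numbers, not taken from Ford.

    # Back-of-the-envelope Okun's law: each percentage point of real GDP
    # growth below trend adds roughly half a point to the unemployment rate.
    # Coefficient and trend growth are common textbook rule-of-thumb values;
    # the inputs below are illustrative.
    OKUN_COEFFICIENT = 0.5
    TREND_GROWTH = 3.0              # percent, rough long-run trend

    def predicted_unemployment(baseline_u, gdp_growth):
        """Project the unemployment rate (percent) from a baseline rate
        and actual real GDP growth (percent)."""
        return baseline_u + OKUN_COEFFICIENT * (TREND_GROWTH - gdp_growth)

    # e.g. a roughly 2.8% GDP contraction, starting from 5.8% unemployment:
    print(predicted_unemployment(5.8, -2.8))    # about 8.7

With inputs in that neighborhood, the rule of thumb lands close to the 8.5% figure, which is the point: the forecasting error lay in the GDP projection, not in the unemployment arithmetic.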

But while some of this is probably due to technological change that reduces the labor inputs required for a given unit of output, I think the lion’s share of it is explained by the overaccumulation thesis of neo-Marxists like Paul Sweezy, Harry Magdoff, and other members of the Monthly Review group. The main reason for rising unemployment is corporate capitalism’s same chronic tendencies to overinvestment and underconsumption that caused the Great Depression. Cartelized state capitalist industry accumulates excessive surpluses and invests them in so much plant and equipment that it can’t dispose of its entire output running at capacity. This crisis was postponed by WWII, which destroyed most plant and equipment in the world outside the U.S., and created a permanent warfare state to absorb a portion of surplus production. But even so, by 1970 Japan and Europe had rebuilt their industrial economies and global capital markets were saturated. Since 1970, one expedient after another has been adopted to absorb surplus capital in an era when consumer demand is insufficient for even existing plant and equipment to operate profitably.

Ford is also correct that rising oil (and hence shipping) costs will provide a strong economic incentive to distributed manufacturing with factories located as close as possible to consumers, which—intersecting with trends to automation—will lead to “much smaller and more flexible factories located in direct proximity to markets…” (p. 126) But I think he underestimates the extent to which the shift in economies of scale he describes has already taken place. The flexible manufacturing trend has been toward small job-shops like those in Shenzhen described by Tom Igoe, with ever cheaper general purpose machinery. And the model of automation for such small-scale CNC machinery is most conducive to craft production using general-purpose tools. Coupled with the cutting-edge trend to even cheaper CNC machinery affordable by individuals, a major part of the relocalization of industry in the U.S. is likely to be associated with self-employed artisan producers or small cooperative shops churning out manufactured goods for neighborhood market areas of a few thousand people. Of those cheap tools, Tom Igoe writes:

Cheap tools. Laser cutters, lathes, and milling machines that are affordable by an individual or a group. This is increasingly coming true. The number of colleagues I know who have laser cutters and mills in their living rooms is increasing…. There are some notable holes in the open hardware world that exist partially because the tools aren’t there. Cheap injection molding doesn’t exist yet, but injection molding services do, and they’re accessible via the net. But when they’re next door (as in Shenzen), you’ve got a competitive advantage: your neighbor.

Ford also equates automation with increasing capital-intensiveness. The traditional model presupposes that “capital-intensive” methods are more costly because capital equipment is expensive, and the most capital-intensive forms of production use the most expensive, product-specific forms of machinery. Production is “capital-intensive” in the sense that expenditures are shifted from labor compensation to machinery, and “high-tech” necessarily means “high-cost.” But in fact the current trajectory of technical progress in manufacturing hardware is toward drastically reduced cost, bringing new forms of micromanufacturing machinery affordable to average workers. This means that the term “capital-intensive,” as conventionally understood, becomes meaningless.

He goes on to argue that manufacturing will become too capital-intensive to maintain existing levels of employment.

Beyond this threshold or tipping point, the industries that make up our economy will no longer be forced to hire enough new workers to make up for the job losses resulting from automation; they will instead be able to meet any increase in demand primarily by investing in more technology. (p. 133)

But again, this presupposes that capital equipment is expensive, and that access to it is controlled by employers rich enough to afford it. And as the cost of machines falls to the point where they become affordable tools for workers, the “job” becomes meaningless for a growing share of our consumption needs.

Even before the rise of micromanufacturing, there was already a wide range of consumption goods whose production was within the competence of low-cost tools in the informal and household sector. As Ralph Borsodi showed as far back as the 1920s and 1930s, small, electrically powered machinery scaled to household production could make a wide range of consumer goods at a lower total cost than the factories. Although the unit cost of production was somewhat lower for factory goods, this was more than offset by drastic reductions in distribution cost when production was at or near the point of consumption, and by the elimination of supply-push marketing costs when production was directly driven by the consumer. Vegetables grown and canned at home, clothing produced on a home sewing machine from fabric woven on an efficiently designed power loom, bread baked in a kitchen oven from flour ground in a kitchen mill, all required significantly less labor to produce than the labor required to earn the wages to buy them at a store. What’s more, directly transforming one’s own labor into consumption goods with one’s own household tools was not subject to disruption by the loss of wage employment.

If anything, Borsodi underestimated the efficiency advantage. He assumed that the household subsistence economy would be autarkic, with each household having not only its own basic food production, but weaving and sewing, wood shop, etc. He opposed the production of a surplus for external sale, because the terms of commercial sale would be so disadvantageous that it would be more efficient to devote the same time to labor in the wage economy to earn “foreign exchange” to purchase things beyond the production capacity of the household. So for Borsodi, all consumption goods were either produced by the household for itself, or factory made and purchased with wages. He completely neglected the possibility of a division of labor within the informal economy. When such a division is taken into account, efficiencies increase enormously. Instead of each house having its own set of underutilized capital equipment for all forms of small-scale production, a single piece of capital equipment can serve the neighborhood barter network and be fully utilized. Instead of the high transaction costs and learning curve from each household learning how to do everything well, like Odysseus, a skilled seamstress can concentrate on producing clothing for the neighbors and a skilled baker can concentrate on bread—but achieve these efficiencies while still keeping their respective labors in the household economy, without the need either for a separate piece of commercial real estate or for expensive capital goods beyond those scaled to the ordinary household.
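A minimal numeric sketch shows how much the utilization argument matters; all the figures here are hypothetical round numbers chosen only to illustrate the point.

    # Illustrative utilization arithmetic for per-household vs. shared tools
    # in a neighborhood barter network. All numbers are hypothetical.
    households = 40
    hours_needed_per_household = 2    # machine-hours each household needs per week
    machine_capacity = 80             # usable machine-hours per week

    # Autarkic Borsodi model: every household owns its own machine.
    per_household_utilization = hours_needed_per_household / machine_capacity
    print(f"Own machine per household: {per_household_utilization:.1%} utilization")  # 2.5%

    # Division of labor: one shared machine serves the whole network.
    shared_utilization = (households * hours_needed_per_household) / machine_capacity
    print(f"One shared machine: {shared_utilization:.1%} utilization")                # 100.0%

On these assumptions, forty idle machines collapse into one fully employed machine, without any of the labor leaving the household economy.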

Most technological unemployment scenarios assume the automation of conventional, mass-production industry, in a world where manufacturing machinery remains extremely expensive. But when the cost barriers to owning manufacturing machinery are lowered, the threat becomes a lot less terrifying.

By way of analogy: If a Star Trek-style matter replicator can replace human labor for producing most goods, but it costs so much that only a large corporation can own it, then the threat of technological unemployment is real. But if anyone can own such a replicator for a few hundred dollars, then the way we supply a major part of our needs will simply shift from selling labor for wages to producing goods for ourselves on a cheap replicator.

In a world where most production is with affordable tools, employers will no longer be able to restrict our access to the means of production. It will become feasible to produce a growing share of our total consumption needs either directly for ourselves, or for exchange with other household producers, without the intermediation of the corporate money economy.

Paul Fernhout’s emails (which you probably read regularly if you’re on the P2P Research or Open Manufacturing email list) include a quote in the sig line about today’s problems resulting from an attempt to deal with abundance in a scarcity framework. Dugger and Peach, as we saw above, failed to recognize the nature of abundance at all, and despite their use of the term worked from an ideological framework entirely adapted to scarcity. Ford, on the other hand, is halfway there. He recognizes the new situation created by abundance of consumer goods and the falling need for labor to produce them. But his solution is still adapted to a framework in which, while consumer goods are abundant, means of production remain scarce and expensive.

When means of production are cheap and readily available, the “need” for labor becomes irrelevant. The need for labor is only relevant when the amount needed is determined by someone other than the worker who controls access to the means of production. By way of analogy, when a subsistence farmer figured out a way to cut in half the labor required to perform some task on his own farm, he didn’t lament the loss of “work.” He didn’t try to do things in a way that required twice the effort in order to keep himself “employed” or achieve “job security.” He celebrated it because, being in a position to fully appropriate the benefits of his own productivity, everything came down to the ratio between his personal effort and his personal consumption. In your own home, you don’t deliberately store the dishes in a cupboard as far as possible from the sink in order to guarantee yourself “sufficient work.” Likewise, when the worker himself can obtain the means of production as cheap, scalable tools, and can produce for his subsistence needs directly in the informal economy (or for exchange with other such producers), the question of the amount of labor “needed” for a unit of output is as meaningless as it would have been for the farmer.

Ford also raises the question of how the increasingly plausible prospect of stagnating employment will destabilize long-term consumer behavior. As people come to share a consensus that jobs will be fewer and harder to get in the future, and pay less, their propensity to spend will decrease. The same consumer pessimism that leads to the typical recessionary downward wage-demand spiral, thanks to technological unemployment, will become a permanent structural trend. (p. 109)

But this neglects the possibility that these trends will spur underemployed workers to meet more of their consumption needs through free alternatives in the informal economy. Even as technological change reduces the need for wage labor, it is simultaneously causing an increasing share of consumption goods to shift into the realm of things either available for free, or by direct production in the informal-household sector using low-cost tools. As a result, an increasing portion of what we consume is available independently of wage labor.

Ford argues that “free market forces” and automation, absent some government intervention to redistribute purchasing power, will lead to greater and greater concentration of incomes and consequently a constantly worsening crisis of underconsumption. The ultimate outcome of skyrocketing productivity, coupled with massive technological unemployment, is a society in which 95% of the population are impoverished and live on a subsistence level, while most income goes to the remaining 5% (p. 181). But this state of affairs could never come about in a genuine free market. The enormous wealth and incomes of the plutocracy result from rents on artificial scarcity; they are only able to become super-rich from technological innovation when artificial property rights like patents enable them to capitalize the increased productivity as a source of rents, rather than allowing the competitive market to “socialize” it in the form of lower prices to consumers.

Indeed Ford himself goes on, in the passage immediately following, to admit “the reality” that this level of income polarization would never come about, because the economic decline from insufficient purchasing power would cause asset values to collapse. Exactly! But my proposal (in the next section) is precisely to allow such collapse of asset values, and allow the collapse of the price of goods from the imploding marginal cost of production, so that it takes less wage income to buy them.

The collapse of exchange value is a good thing, from the perspective of the underemployed worker, who experiences the situation Bruce Sterling wrote of (I suspect about three-quarters facetiously, although it’s hard to tell with him):

Waiting for the day of realization that Internet knowledge-richness actively MAKES people economically poor. “Gosh, Craigslist has such access to ultra-cheap everything now… hey, wait a second, where did my job go?”

Someday the Internet will offer free food and shelter. At that point, hordes simply walk away. They abandon capitalism the way a real-estate bustee abandons an underwater building.

Ford draws a parallel between the mechanization of agriculture in the 20th century and the ongoing automation of manufacturing and service industries (pp. 124–125). But the parallel works against him, in a sense.

The mechanization of agriculture may, to a considerable extent, have resulted in “a massive and irreversible elimination of jobs.” That is, it has eliminated agriculture for many people as a way to earn money by working and then to spend that money buying food. But it has not, by any means, eliminated the possibility of using our own labor to feed ourselves by growing food. Likewise, developments in manufacturing technology, at the same time as they eliminate jobs in manufacturing as a source of income to buy stuff, are making tools for direct production more affordable.

In the particular case of agriculture, as Ralph Borsodi showed eighty years ago, the total labor required to feed ourselves growing and canning our own food at home is considerably less than that required to earn the money to buy it at the store. And nobody can “fire” you from the “job” of feeding yourself with your own labor.

What’s more, the allegedly superior efficiencies of mechanized large-scale agriculture are to a large extent a myth perpetuated in the propaganda of corporate agribusiness and the USDA. The efficiencies of mechanization are legitimate for cereal crops, although economies of scale still top out on a family farm large enough to fully utilize one complete set of farming machinery. But cereal crops occupy a disproportionate share of the total food production spectrum precisely because of government subsidies to cereal crop production at the expense of fruits and vegetables.

In the case of most fruits and vegetables, the economies of mechanization are largely spurious, and reflect (again) an agitprop campaign to legitimize government subsidies to corporate agribusiness. Even small-scale conventional farming is more efficient in terms of output per acre, if not in terms of output per man-hour—to say nothing of soil-intensive forms of raised-bed horticulture like that developed by John Jeavons (biointensive horticulture can feed one person on a tenth of an acre). And while large-scale production may be more efficient in terms of labor inputs at the point of production, it is probably less efficient in labor terms when the wages required to pay the embedded costs of supply-push marketing and distribution are included. Although it may take more labor for me to grow a tomato than it takes a factory farm to grow it, it probably takes less labor for me to grow it myself than to pay for the costs of shipping and marketing it in addition to factory farming it. So, absent government subsidies and preferences to large-scale agribusiness, the most efficient method for producing a considerable portion of our food is probably something like Ford’s housekeeping or auto repair labor model.

Likewise, it’s quite plausible that it would cost a decent home seamstress more in total labor time to earn the money to buy clothes even from a totally automated textile mill, when the costs of high inventories and supply-push distribution are taken into account, than to make them herself.

Besides, if I’m unemployed or working a twenty-hour week, labor is something I have plenty of, and (again) I can’t be “fired” from using my own labor to feed and clothe myself. The more forms of production that can be carried out in the informal sector, using our own labor with individually affordable tools, the less of what we consume depends on a boss’s whim. And the higher the levels of unemployment, the stronger the incentives will be to adopt such methods. Just as economic downturns are associated with a shift of production from the mass-production core to the craft periphery, they’re also (as James O’Connor described in Accumulation Crisis) associated with a shift of production from wage labor to the informal sector.

This is not meant, by any means, to gloss over or minimize the dislocations that will occur in the meantime. Plummeting average housing prices don’t mean that many won’t be left homeless, or live precarious existences as squatters in their own foreclosed homes or in shantytowns. The falling price of subsistence relative to an hour’s wage doesn’t mean many won’t lack sufficient income to scrape by. Getting from here to there will involve many human tragedies, and how to minimize the pain in the transition is a very real and open question. My only purpose here is to describe the trends in play, and the end-state they’re pointing toward – not to deny the difficulty of the transition.

So while Ford argues that “consumption, rather than production, will eventually have to become the primary economic contribution made by the bulk of average people” (p. 105), I believe just the opposite: the shrinking scale and cost, and increasing productivity, of tools for production will turn the bulk of average people into genuine producers—as opposed to extensions of machines mindlessly obeying the orders of bosses—for the first time in over a century.

This whole discussion parallels a similar one I’ve had with Marxists like Christian Siefkes. Competitive markets, he argues, have winners and losers, so how do you keep the losers from being unemployed, bankrupt and homeless while the winners buy out their facilities and concentrate production in fewer and fewer hands? My answer, in that case as in the one raised by Ford, is that, with falling prices of producer goods and the rise of networked models of production, the distinction between “winners” and “losers” becomes less and less meaningful. There’s no reason to have any permanent losers at all. First of all, the overhead costs are so low that it’s possible to ride out a slow period indefinitely. Second, in low-overhead flexible production, in which the basic machinery for production is widely affordable and can be easily reallocated to new products, there’s really no such thing as a “business” to go out of. The lower the capitalization required for entering the market, and the lower the overhead to be borne in periods of slow business, the more the labor market takes on a networked, project-oriented character—like, e.g., peer production of software. In free software, and in any other industry where the average producer owns a full set of tools and production centers mainly on self-managed projects, the situation is likely to be characterized not so much by the entrance and exit of discrete “firms” as by a constantly shifting balance of projects, merging and forking, and with free agents constantly shifting from one to another.

Education has a special place in Ford’s vision of the abundant society (p. 173). As it is, he is dismayed by the prospect that technological unemployment may lead to large-scale abandonment of higher education, as knowledge work is downsized and the skilled trades offer the best hopes for stable employment.

On the other hand, education is one of the centerpieces of Ford’s post-scarcity agenda (about which more below) for dealing with the destabilizing effects of abundance. As part of his larger agenda of making an increasing portion of purchasing power independent of wage labor, he proposes paying people to learn (p. 174).

But for me, one of the up-sides of post-scarcity is that the same technological trends are decoupling the love of learning from careerism, dismantling the entire educational-HR complex as a conveyor belt for human raw material, and ending “education” as a professionalized process shaping people for meritocratic “advancement” or transforming them into more useful tools.

The overhead costs of the network model of education are falling, and education is becoming a free good like music or open-source software. MIT’s Open Courseware project, which puts complete course syllabuses online for the university’s entire catalog of courses, is only the most notable offering of its kind. Projects like Google Books, Project Gutenberg, specialized ventures like the Anarchist Archives and Marxist.Org (which has digitized most of Marx’s and Engels’ Collected Works and the major works of many other Marxist thinkers from Lenin and Trotsky to CLR James), not to mention a whole host of “unauthorized” scanning projects, make entire libraries of scholarly literature available for free. Academically oriented email discussion lists offer unprecedented opportunities for the self-educated to exchange ideas with established academicians. It’s never been easier to contact a scholar with some special question or problem, by using Google to track down their departmental email.

In short, there have never been greater opportunities for independent and amateur scholars to pursue knowledge for its own sake, or to participate in freely accessible communities of scholars outside brick-and-mortar universities. The Internet is creating, in the real world, something like the autonomous and self-governing learning networks Ivan Illich described in Deschooling Society. But instead of the local mainframe computer at the community center pairing lists of would-be learners with expert volunteers, or renting out tape-recorded lectures, today’s open education initiatives take advantage of communications technology beyond Illich’s imagining at the time he wrote.

Likewise, it’s becoming increasingly feasible to pursue a technical education by the same means, in order to develop one’s own capabilities as a producer in the informal economy. Someone might, say, use the engineering curriculum in something like MIT’s Open Courseware in combination with mentoring by peers in a hackerspace, and run questions past the membership of a list like Open Manufacturing. Open hardware projects are typically populated by people teaching themselves programming languages or tinkering with hardware on the Edison model, who are at best tangentially connected to the “official” educational establishment.

Phaedrus’ idea of the Church of Reason in Zen and the Art of Motorcycle Maintenance is relevant. He describes the typical unmotivated drifter who currently predominates in higher education finally dropping out, for lack of interest or motivation, once deprived of the grades and meritocratic incentives for getting a career or “good job.”

The student’s biggest problem was a slave mentality which had been built into him by years of carrot-and-whip grading, a mule mentality which said, “If you don’t whip me, I won’t work.” He didn’t get whipped. He didn’t work. And the cart of civilization, which he supposedly was being trained to pull, was just going to have to creak along a little slower without him…. The hypothetical student, still a mule, would drift around for a while. He would get another kind of education quite as valuable as the one he’d abandoned, in what used to be called the “school of hard knocks.” Instead of wasting money and time as a high-status mule, he would now have to get a job as a low-status mule, maybe as a mechanic. Actually his real status would go up. He would be making a contribution for a change. Maybe that’s what he would do for the rest of his life. Maybe he’d found his level. But don’t count on it. In time…six months; five years, perhaps…a change could easily begin to take place. He would become less and less satisfied with a kind of dumb, day-to-day shopwork. His creative intelligence, stifled by too much theory and too many grades in college, would now become reawakened by the boredom of the shop. Thousands of hours of frustrating mechanical problems would have made him more interested in machine design. He would like to design machinery himself. He’d think he could do a better job. He would try modifying a few engines, meet with success, look for more success, but feel blocked because he didn’t have the theoretical information. He would discover that when before he felt stupid because of his lack of interest in theoretical information, he’d now find a brand of theoretical information which he’d have a lot of respect for, namely, mechanical engineering. So he would come back to our degreeless and gradeless school, but with a difference. He’d no longer be a grade-motivated person. He’d be a knowledge-motivated person. He would need no external pushing to learn. His push would come from inside. He’d be a free man. He wouldn’t need a lot of discipline to shape him up. In fact, if the instructors assigned him were slacking on the job he would be likely to shape them up by asking rude questions. He’d be there to learn something, would be paying to learn something and they’d better come up with it.

In this last installment, I will discuss Ford’s proposed agenda for dealing with abundance, and then present my own counter-agenda.

Ford uses the term “Luddite fallacy” for those who deny the possibility of technological unemployment in principle.

This line of reasoning says that, while technological progress will cause some workers to lose their jobs as a result of outdated skills, any concern that advancing technology will lead to widespread, increasing unemployment is, in fact, a fallacy. In other words, machine automation will never lead to economy-wide, systemic unemployment. The reasoning offered by economists is that, as automation increases the productivity of workers, it leads to lower prices for products and services, and in turn, those lower prices result in increased consumer demand. As businesses strive to meet that increased demand, they ramp up production—and that means new jobs. (pp. 95–96)

The problem with their line of reasoning, as I argued here [517] and I think Ford would agree, is that it assumes demand is infinitely, upwardly elastic, and that some of the productivity increase won’t be taken in the form of leisure.

My critique of Ford’s scenario is from a perspective almost directly opposite what he calls the Luddite fallacy. I believe the whole concept of employment will become less meaningful as the falling cost of producer goods causes them to take on an increasingly tool-like character, and as the falling price of consumer goods reduces the need for wage income.

Ford refers to something like my perspective, among the hypothetical objections he lists at the end of the book: “In the future, wages/income may be very low because of job automation, but technology will also make everything plentiful and cheap—so low income won’t matter” (pp. 220–221). Or as I would put it, the reduced need for labor will be offset by labor’s reduced need for employment.

Ford’s response is that, first, manufactured goods are only a small percentage of the average person’s total expenditures, and the costs of housing and healthcare would still require a significant income. Second, he points to “intellectual property” as the source of prices that are above marginal cost, even at present, when technology has already lowered production costs, and argues that in the future “intellectual property” will cause the prices of goods to exceed their marginal costs of production.

Ford’s objections, ironically, point directly to my own agenda: to make housing and healthcare cheap as well by allowing asset prices to collapse, eliminate the artificial scarcities and cost floors that make healthcare expensive, and eliminate “intellectual property” as a source of artificially high prices.

Where Ford supports new government policies to maintain purchasing power, I propose eliminating existing government policies that put a floor under product prices, asset prices, and the cost of means of production.

Ford, like Fernhout and Arvidsson and many other post-scarcity thinkers, proposes various government measures to provide individuals with purchasing power independent of wage labor (p. 161). As a solution to the problem of externalities, he proposes a differential in government-provided income based on how socially responsible one’s actions are—essentially Pigovian taxation in reverse (p. 177). He also proposes shifting the tax base for the social safety net from current payroll taxes to taxes on gross margins that remain stable regardless of employment levels (p. 142).

Such proposals have been common for solving the problems of overproduction and underconsumption, going back at least to Major Douglas and Social Credit. (I’m surprised Ford didn’t hit on the same idea as Douglas, and dispense with the idea of taxation altogether—just create enough purchasing power out of thin air to fill the demand gap, and deposit it into people’s bank accounts.) Something like it is also popular with many Georgists and Geolibertarians: tax the site value of land and other economic rents, resource extraction, and negative externalities like pollution and carbon emissions, and then use the revenue to fund a citizen’s dividend or guaranteed minimum income.

Interestingly, some who propose such an agenda also favor leaving patent and copyright law in place and then taxing it as a rent to fund the basic income.

Ford raises the question, from a hypothetical critic, of whether this is not just “Robin Hood socialism”: stealing from the productive in order to pay people to do nothing (p. 180). I’d attack it from the other side and argue that it’s in fact the opposite of Robin Hood socialism: it leaves scarcity rents in place and then redistributes them, rather than allowing the competitive market to socialize the benefits of innovation through free goods.

I prefer just the opposite approach: where rents and inflated prices result, not from the market mechanism itself, but from government-enforced artificial scarcity, we should eliminate the artificial scarcity. And when negative externalities result from government subsidies to waste or insulation from the real market costs of pollution, we should simply eliminate the legal framework that promotes the negative externality in the first place. Rather than maintaining the purchasing power needed to consume present levels of output, we should reduce the amount of purchasing power required to consume those levels of output. We should eliminate all artificial scarcity barriers to meeting as many of our consumption needs as possible outside the wage economy.

And Ford seems to accept the conventional mass-consumption economy as a given. The problem, he says, “is really not that Americans have spent too much. The problem is that their spending has been sustained by borrowing rather than by growth in real income” (p. 161).

I disagree. The problem is that a majority of our spending goes to pay the embedded costs of subsidized waste and artificial scarcity rents. Overbuilt industry could run at full capacity, before the present downturn, only at the cost of landfills piled with mountains of discarded goods. Most of the money we spend is not on the necessary costs of producing the use-value we consume, but on the moral equivalent of superfluous steps in a Rube Goldberg machine: essentially digging holes and filling them back in. They include—among many other things—rents on copyright and patents, long-distance shipping costs, planned obsolescence, the costs of large inventories and high-pressure marketing associated with supply-push distribution, artificial scarcity rents on capital resulting from government restraints on competition in the supply of credit, and rents on artificial property in land (i.e. holding land out of use or charging tribute to the first user through government enforced titles to vacant and unimproved land).

The waste of resources involved in producing disposable goods for the landfill (after a brief detour through our living rooms), or shipping stuff across country that could be more efficiently produced in a small factory in the same town where it was consumed, was motivated by the same considerations of surplus disposal that, as Emmanuel Goldstein’s “Book” described it in 1984, caused the superpowers to sink millions of tons of industrial output to the bottom of the ocean or blast them into the stratosphere. It’s motivated by the same considerations that caused Huxley’s World-State to indoctrinate every consumer-citizen with tens of thousands of hypnopaedic injunctions that “ending is better than mending.” Human beings have become living disposal units to prevent the wheels of industry from being clogged with unwanted output.

If all these artificial scarcity rents and subsidized inefficiencies were eliminated, and workers weren’t deprived of part of the value of their labor by state-enforced unequal bargaining power, right now we could purchase all the consumption goods we currently consume with the wages of fifteen or twenty hours of labor a week.

What we need is not to guarantee sufficient purchasing power to absorb the output of overbuilt industry. It is to eliminate the excess capacity that goes to producing for planned obsolescence.

As with mass consumption, Ford seems to accept the job culture as a bulwark of social stability and purpose. What he has in mind, as I read it, is that the guaranteed income, as a source of purchasing power, be tied to some new “moral equivalent of jobs” that will maintain a sense of normalcy and fill the void left by the reduced need for wage labor (pp. 168–169). His agenda for decoupling purchasing power from wage income involves not the basic income proposals of the Social Credit movement and some Geolibertarians, but the use of government income subsidies as a targeted incentive or carrot to encourage favored kinds of behavior like continuing education, volunteering, and the like. “If we cannot pay people to work, then we must pay them to do something else that has value” (p. 194).

Again, I disagree. The loss of the job as an instrument of social control is a good thing.

I share Claire Wolfe’s view of the job culture as unnatural from the standpoint of libertarian values, and as a historical anomaly. From an American historical perspective, the whole idea of the job was a radical departure from the previous mainstream, in which most people were self-employed artisans and family farmers. It arose mainly because of the high cost of production machinery in the Industrial Revolution. From that perspective, the idea of the “job” as the main source of livelihood over the past 150–200 years—a situation in which the individual spends eight hours a day as a “poor relation” on someone else’s property, taking orders from an authority figure behind a desk in the same way that a schoolchild would from a teacher or a prisoner would from a guard—is just plain weird.

The generation after the American Revolution viewed standing armies as a threat to liberty, not primarily because of their potential for suppressing freedom by force, but because their internal culture inculcated authoritarian values that undermined the cultural atmosphere necessary for the preservation of political freedom in society at large. At the time, standing armies (along with perhaps the Post Office and ecclesiastical hierarchies like that of the Anglican Church) were just about the only large-scale hierarchical institutions around, in a society where most people were self-employed. As such, they were a breeding ground for a personality type fundamentally at odds with the needs of a republican society—people in the habit of taking orders from other people. And today, it seems self-evident that people who spend eight hours a day taking orders, and serving the values and goals of people who are utterly unaccountable to them, are unlikely to resist the demands of any other form of authority in the portion of their lives where they’re still theoretically “free.”

The shift to the pre-job pattern of self-employment in the informal sector promises to eliminate this pathological culture in which one secures his livelihood by winning the approval of an authority figure. In my opinion, therefore, we should take advantage of the opportunity to eliminate this pattern of livelihood, instead of—as Ford proposes—replacing the boss with a bureaucrat as the authority figure on whose whims our livelihood depends. The sooner we destroy the idea of the “job” as a primary source of livelihood, and replace the idea of work as something we’re given with the idea of work as something we do, the better. And then we should sow the ground with salt.

So here’s my post-scarcity agenda:

1. Eliminating all artificial scarcity rents and mandated artificial levels of overhead for small-scale production, in order to reduce the overhead cost of everyday life, and to reduce the household revenue stream necessary to service it. That means, among other things:

a) Eliminating “intellectual property” as a source of scarcity rents in informational and cultural goods, and embedded rents on patents as a component of the price of manufactured goods. See, for example, Tom Peters’ enthusiastic report in The Tom Peters Seminar that ninety percent of the cost of his new Minolta camera was “intellect” or “ephemera” rather than parts and labor.

b) An end to local business licensing, zoning laws, and spurious “safety” and “health” codes insofar as they prohibit operating microenterprises out of family residences, or impose arbitrary capital outlays and overhead on such microenterprises by mandating more expensive equipment than the nature of the case requires. It means, for example, eliminating legal barriers to running a microbakery out of one’s own home using an ordinary kitchen oven and selling the bread out of one’s home or at the farmers’ market (such as, e.g., requirements to rent a stand-alone piece of commercial real estate, buy an industrial-size oven and dishwasher, etc.).

c) Likewise, an end to local building codes whose main effect is to lock in conventional building techniques used by established contractors, and to criminalize innovative practices like the use of new low-cost building techniques and cheap vernacular materials.

d) An end to occupational licensing, or at least an end to artificial restrictions on the number of licenses granted and to licensing fees greater than necessary to fund the costs of administration. This would mean that, in place of a limited number of NYC cab medallions costing hundreds of thousands of dollars apiece, medallions would be issued to anyone who met the objective licensing requirements, and the cost would be just enough to cover a driving record check, a criminal background check, and a vehicle inspection.

2. An end to government policies aimed at propping up asset prices, allowing the real estate bubble to finish popping.

3. An increase in work-sharing and shorter work weeks to distribute evenly the amount of necessary work that remains. Ford also calls for job-sharing (pp. 185–186), and quotes Keynes’s 1930 essay on post-scarcity on the principle of spreading “the bread thinly on the butter—to make what work there is still to be done to be as widely shared as possible” (p. 190). Our disagreement seems to lie in this: I believe that, absent artificial scarcity rents to disrupt the link between effort and consumption, the average individual share of available work would provide sufficient income to purchase a comfortable standard of living. Ford explicitly denies that a part-time income would be sufficient to pay for the necessities of life (p. 191), but he seems to operate on the assumption that most of the mechanisms of artificial scarcity would continue as before.

4. The decoupling of the social safety net from both wage employment and the welfare state, through 4a) an increase in extended family or multi-family income-pooling arrangements, cohousing projects, urban communes, etc., and 4b) a rapid expansion of mutuals (of the kind described by Kropotkin, E.P. Thompson, and Colin Ward) as mechanisms for pooling cost and risk. Ford also recognizes the imperative of decoupling the safety net from employment (p. 191), although he advocates government funding as a substitute. But libertarian considerations aside, government is increasingly subject to what James O’Connor called the “fiscal crisis of the state.” And this crisis is exacerbated by the tendencies Douglas Rushkoff described in California, as the imploding capital costs required for production rendered most investment capital superfluous and destroyed the tax base. The whole gross margin from capital that Ford presupposes as a partial replacement for payroll taxes is, for that reason, becoming obsolete.

5. A shift of consumption, wherever feasible, from the purchase of store goods with wage income to subsistence production or production for barter in the household economy, using home workshops, sewing machines, ordinary kitchen food prep equipment, etc. If every unemployed or underemployed person with a sewing machine and good skills put them to full use producing clothing for barter, and if every unemployed or underemployed person turned to such a producer as their first resort in obtaining clothing (and ditto for all other forms of common home production, like baking, daycare services, hairstyling, rides and running errands, etc.), the scale of the shift from the capitalist economy to the informal economy would be revolutionary.

6. A rapid expansion in local alternative currency and barter networks taking advantage of the latest network technology, as a source of liquidity for direct exchange between informal/household producers.

Putting it all together, the agenda calls for people to transfer as much of their subsistence needs out of the money economy as it’s feasible to do right now, and to that extent to render themselves independent of the old laws of economic value; and where scarcity and exchange value and the need for purchases in the money economy persist, to restore the linkages of equity between effort and purchasing power.

Suppose that the amount of necessary labor, after technological unemployment, were only enough to give everyone a twenty-hour work week—but at the same time the average rent or mortgage payment fell to $150/month, anyone could join a neighborhood cooperative clinic (with several such cooperatives pooling their resources to fund a hospital out of membership fees) for a $50 monthly fee, the price of formerly patented drugs fell 95%, and a microfactory in the community was churning out quality manufactured goods for a fraction of their former price. Most people, myself included, would call that a greatly improved standard of living.

Chapter Four: Back to the Future

Even with the decentralizing potential of electrical power neglected and sidetracked into the paleotechnic framework, and even with the diversion of technical development into the needs of mass-production industry, small-scale production tools were still able to achieve superior productivity—even working with the crumbs and castoffs of Sloanist mass-production, and even at the height of Moloch’s glory. Two models of production have arisen within the belly of the Sloanist beast, and between them they offer the best hope for replacing the mass-production model: 1) the informal and household economy; and 2) relocalized industry using general-purpose machinery to produce in small batches for the local market, frequently switching between production runs.

A. Home Manufacture

First, even at the height of mass-productionist triumphalism, the superior productivity of home manufacture was demonstrated in many fields. In the 1920s and 1930s, the zenith of mass production’s supposed triumph, Ralph Borsodi showed that with electricity most goods could be produced in small shops and even in the home with an efficiency at least competitive with that of the great factories, once the greatly reduced distribution costs of small-scale production were taken into account. Borsodi’s law—the tendency, beyond a relatively small scale, for increased distribution costs to offset further reductions in the unit costs of production—applies not only to the relative efficiencies of large versus small factories, but also to the comparative efficiencies of factory versus home production. Borsodi argued that for most light goods like food, textiles, and furniture, the overall costs were actually lower to manufacture them in one’s own home. The reason was that the electric motor put small-scale production machinery in the home on the same footing as large machinery in the factory. Although economies of large-scale machine production exist, most economies of machine production are captured with the bare adoption of the machinery itself, even with household electrical machinery. After that, the downward production cost curve is very shallow, while the upward distribution cost curve is steep.
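
Borsodi’s cost claim lends itself to a rough quantitative sketch. The following toy model uses stylized numbers of my own, not Borsodi’s: a unit production cost that falls slowly with scale, and a unit distribution cost that rises steeply with the size of the market area served.

```python
# A stylized illustration of Borsodi's law (illustrative numbers of my own,
# not Borsodi's). Most machine economies are captured at a small scale, so
# the production cost curve is shallow, while the distribution cost curve
# rises steeply with the size of the market area served.

def unit_cost(scale):
    """Total unit cost at a given scale of production (arbitrary units)."""
    production = 1.0 + 4.0 / scale       # shallow downward curve
    distribution = 0.5 * scale ** 0.8    # steep upward curve
    return production + distribution

for scale in (1, 2, 5, 10, 50, 100):
    print(f"scale {scale:>3}: total unit cost {unit_cost(scale):.2f}")
```

On any assumptions of this general shape, total unit cost bottoms out at quite a modest scale (here around four or five) and climbs thereafter, as rising distribution costs swamp the remaining production economies.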

Borsodi’s study of the economics of home production began with the home-grown tomatoes his wife canned. Expressing some doubts as to Mrs. Borsodi’s confidence that it “paid” to do it, he systematically examined all the costs going into the tomatoes, including the market value of the labor they put into growing them and canning them, the cost of the household electricity used, etc. Even with all these things factored in, Borsodi still found the home product cost 20–30% less than the canned tomatoes at the market. The reason? The home product, produced at the point of consumption, had zero distribution cost. The modest unit cost savings from large-scale machinery were insufficient to offset the enormous cost of distribution and marketing. [518]

Borsodi went on to experiment with home clothing production with loom and sewing machine, and building furniture in the home workshop.

I discovered that more than two-thirds of the things which the average family now buys could be produced more economically at home than they could be bought factory made; —that the average man and woman could earn more by producing at home than by working for money in an office or factory and that, therefore, the less time they spent working away from home and the more time they spent working at home, the better off they would be; —finally, that the home itself was still capable of being made into a productive and creative institution and that an investment in a homestead equipped with efficient domestic machinery would yield larger returns per dollar of investment than investments in insurance, in mortgages, in stocks and bonds....

These discoveries led to our experimenting year after year with domestic appliances and machines. We began to experiment with the problem of bringing back into the house, and thus under our own direct control, the various machines which the textile-mill, the cannery and packing house, the flour-mill, the clothing and garment factory, had taken over from the home during the past two hundred years....

In the main the economies of factory production, which are so obvious and which have led economists so far astray, consist of three things (1) quantity buying of materials and supplies; (2) the division of labor with each worker in industry confined to the performance of a single operation; and (3) the use of power to eliminate labor and permit the operation of automatic machinery. Of these, the use of power is unquestionably the most important. Today, however, power is something which the home can use to reduce costs of production just as well as can the factory. The situation which prevailed in the days when water power and steam-engines furnished the only forms of power is at an end. As long as the only available form of power was centralized power, the transfer of machinery and production from the home and the individual, to the factory and the group, was inevitable. But with the development of the gas-engine and the electric motor, power became available in decentralized forms. The home, so far as power was concerned, had been put in position to compete with the factory. With this advantage of the factory nullified, its other advantages are in themselves insufficient to offset the burden of distribution costs on most products....

The average factory, no doubt, does produce food and clothing cheaper than we produce them even with our power-driven machinery on the Borsodi homestead. But factory costs, because of the problem of distribution, are only first costs. They cannot, therefore, be compared with home costs, which are final costs. [519]

Even the internal economies of the factory, it should be added, were offset by the overhead costs of administration, and by the dividends and interest on capital. Proliferating departmentalization entails

gang bosses, speed bosses, inspectors, repair bosses, planning department representatives and of course corresponding “office” supervisors: designers, planners, record keepers and cost clerks.... there are office managers, personnel managers, sales managers, advertising managers and traffic managers.... All tend to absorb the reductions in manufacturing costs which are made possible by the factory machinery and factory methods.

These are only the costs within the factory. Above the factory, in a firm of numerous factories and branch offices, comes an additional layer of administrative overhead for the corporate headquarters.

And on top of all that, there are the distribution costs of producing for a large market area: “wholesaling transportation and warehousing costs, wholesaling expenses, wholesaling profits, retailing transportation and warehousing costs, retailing expenses, retailing profits.” [520]

Since Borsodi’s time, the variety and sophistication of electrically powered small machinery has increased enormously. As we saw in Chapter One, after the invention of clockwork the design of machine processes for every conceivable function was nearly inevitable. Likewise, once electrically powered machinery was introduced, the development of small-scale electrical machinery for every purpose followed as a matter of course.

Since first reading Borsodi’s account, I have encountered arguments that his experience was misleading or atypical: that as a natural polymath he was perhaps a quicker study than most, and that he failed to include learning time in his estimate of costs. These objections cannot be entirely dismissed.

One of Borsodi’s genuine shortcomings was his treatment of household production in largely autarkic terms. He generally argued that the homestead should produce for itself when it was economical to do so, and buy from the conventional money economy with wages when it was not, with little in between. The homesteader should not produce a surplus for the market, he said, because it could only be sold on disadvantageous terms in the larger capitalist economy and would waste labor that could be more efficiently employed either producing other goods for home consumption or earning wages on the market. He did mention the use of surpluses for gifting and hospitality, but largely ignored the possibility of a thriving informal and barter economy outside the capitalist system.

A relatively modest degree of division of labor in the informal and barter economy would be sufficient to overcome a great deal of the learning curve for craft production. Most neighborhoods probably have a skilled home seamstress, a baker famous for his homemade bread, a good home brewer, someone with a well-equipped woodworking or metal shop, and so forth. Present-day home hobbyists, producing for barter, could make use of their existing skills. What’s more, in so doing they would optimize efficiency even over Borsodi’s model: they would fully utilize the spare capacity of household equipment that would have been idle much of the time with entirely autarkic production, and spread the costs of such capital equipment over a number of households (rather than, as in Borsodi’s model, duplicating it in each household).

One of the most important effects of licensing, zoning, and assorted “health” and “safety” codes, at the local level, is to prohibit production on a scale intermediate between individual production for home consumption, and production for the market in a conventional business enterprise. Such regulations criminalize the intermediate case of the household microenterprise, producing either for the market or for barter on a significant scale. This essentially mandates the level of autarky that Borsodi envisioned, and enables larger commercial enterprises to take advantage of the rents resulting from individual learning curves. Skilled home producers are prevented from taking advantage of the spare capacity of their capital equipment, and other households are forced either to acquire all the various specialty skills for themselves or to buy from a commercial enterprise.

B. Relocalized Manufacturing

Borsodi’s other shortcoming was his inadequate recognition of the possibility of scales of manufacturing below the mass-production factory. In Prosperity and Security, he identified four scales of production: “(I) family production, (II) custom production, (III) factory production, and (IV) social production.” [521] He conflated factory production with mass production. In fact, custom production fades into factory production, with some forms of small-scale factory production bearing as much (or more) resemblance to custom production as to stereotypically American mass production. In arguing that large-scale factory production was more economical only for a handful of products—“automobiles, motors, electrical appliances, wire, pipe, and similar goods”—he ignored the possibility that even many of those goods could be produced more economically in a small factory using general-purpose machinery in short production runs. [522]

In making “serial production” the defining feature of the factory, as opposed to the custom shop, he made the gulf between factory production and custom production greater and more fixed than was necessary, and ignored the extent to which the line between them is blurred in reality.

In the sense in which I use the term factory it applies only to places equipped with tools and machinery to produce “goods, wares or utensils” by a system involving serial production, division of labor, and uniformity of products.... A garage doing large quantities of repair work on automobiles is much like a factory in appearance. So is a railroad repair shop. Yet neither of these lineal descendants of the roadside smithy is truly a factory. The distinctive attribute of the factory itself is the system of serial production. It is not, as might be thought, machine production nor even the application of power to machinery.... Only the establishment in which a product of uniform design is systematically fabricated with more or less subdivision of labor during the process is a factory. [523]

....But none of the economies of mass production, mass distribution, and mass consumption is possible if the finished product is permitted to vary in this manner. Serial production in the factory is dependent at all stages upon uniformities: uniformities of design, material and workmanship. Each article exactly duplicates every other.... [524]

In arguing that some products (“of which copper wire is one example”) could “best be made, or made most economically, by the factory,” he neglected the question of whether such things as copper wire could be made more economically in much smaller factories with much less specialized machinery. [525] Elsewhere, citing the superior cost efficiency of milling grain locally or in the home using small electric mills rather than shipping bolted white flour from the mega-mills in Minneapolis, he appealed to the vision of a society of millions of household mills, along with “a few factories making these domestic mills and supplying parts and replacements for them....” [526] This raises the question of whether a large, mass-production factory is best suited to the production of small appliances.

In fact, the possibility of an intermediate model of industrial production has been well demonstrated in industrial districts like Emilia-Romagna. As we mentioned in Chapter One, Sabel’s and Piore’s “path not taken” (integrating flexible, electrically powered machinery into craft production) was in fact taken in a few isolated enclaves. In the late 1890s, for example, even after the tide had turned toward mass-production industry, “the German Franz Ziegler could still point to promising examples of the technological renovation of decentralized production in Remscheid, through the introduction of flexible machine tools, powered by small electric motors.” [527]

But with the overall economy structured around mass-production industry, the successful industrial districts were relegated mainly to serving niche markets in the larger Sloanist economy. In some cases, like the Lyon textile district (see below), the state officially promoted the liquidation of the industrial district and its absorption by the mass-production economy. In the majority of cases, with the predominance of large-scale mass-production industry encouraged by the state and an economic environment artificially favorable to such forms of organization, flexible manufacturing firms in the industrial districts were “spontaneously” absorbed into a larger corporate framework. The government having created an economy dominated by large-scale, mass-production industry, the pattern of development of small-scale producers was distorted by the character of the overall system. Two examples of the latter phenomenon were the Sheffield and Birmingham districts, in which flexible manufacturers increasingly took on the role of supplying inputs to large manufacturers (they were drawn “ever more closely into the orbit of mass producers,” in Piore’s and Sabel’s words), and as a result gradually lost their flexibility and their ability to produce anything but inputs for the dominant manufacturer. Their product became increasingly standardized, and their equipment more and more dedicated to the needs of a particular large manufacturer. [528] The small-scale machine tools of Remscheid, a decade after Ziegler wrote, were seen as doomed. [529]

But all this has changed with the decay of Mumford’s “cultural pseudomorph,” and the adoption of alternatives to mass production (as we saw in Chapter Three) as a response to economic crisis. Today, in both Toyota’s “single-minute exchange of dies” and in the flexible production in the shops of north-central Italy, factory production takes on many of the characteristics of custom production. With standardized, modular components and the ability to switch quickly between various combinations of features, production approaches a state of affairs in which every individual item coming out of the factory is unique. A small factory or workshop, frequently switching between products, can still obtain most of the advantages of Borsodi’s “uniformity” through the simple expedient of modular design. Lean production is a synthesis of the good points of mass production and custom or craft production.

Lean production, broadly speaking, has taken two forms, typified respectively by the Toyota Production System and Emilia-Romagna. Robert Begg et al. characterize them, respectively, as two ways of globally organizing flexible specialization: producer-driven commodity chains and consumer-driven commodity chains. The former, exemplified in the TPS and to some extent by most global manufacturing corporations, outsources production to small, networked supplier firms. Such firms usually bear the brunt of economic downturns, and (because they must compete for corporate patronage) have little bargaining power against the corporate purchasers of their output. The latter, exemplified by Emilia-Romagna, entail cooperative networks of small firms for which a large corporate patron most likely doesn’t even exist, and production is driven by demand. [530] (Of course, the large manufacturing corporations in the former model are far more vulnerable to bypassing by networked suppliers than the authors’ description would suggest.)

The interesting thing about the Toyota Production System is that it’s closer to custom production than to mass production. In many ways, it’s Craft Production 2.0.

Craft production, as described by James Womack et al. in The Machine That Changed the World, was characterized by

A workforce that was highly skilled in design, machine operation, and fitting....

Organizations that were extremely decentralized, although concentrated within a single city. Most parts and much of the vehicle’s design came from small machine shops. The system was coordinated by an owner/entrepreneur in direct contact with everyone involved—customers, employers, and suppliers.

The use of general-purpose machine tools to perform drilling, grinding, and other operations on metal and wood.

A very low production volume.... [531]

The last characteristic, low volume (Panhard et Levassor’s custom automobile operation produced a thousand or fewer vehicles a year), resulted from the inability to standardize parts, which, in turn, resulted from the inability of machine tools to cut hardened steel. Before this capability was achieved, it would have been a waste of time to try producing to gauge; steel parts had to be cut and then hardened, which distorted them so that they had to be custom-fitted. The overwhelming majority of production time was taken up by filing and fitting each individual part to the other parts on (say) a car.

Most of the economies of speed achieved by Ford resulted, not from the assembly line (although as a secondary matter it may be useful for maintaining production flow), but from precision and interchangeability. Ford was the first to take advantage of recent advances in machine tools that enabled them to work on pre-hardened metal. As a result, he was able to produce parts to a standardized gauging system that remained constant throughout the manufacturing process. [532] In so doing, he eliminated the old job of fitter, which was the primary source of cost and delay in custom production.

But this most important innovation of Ford’s—interchangeable parts produced to gauge—could have been introduced just as well into craft production, radically increasing the output and reducing the cost of craft industry. Ford managed to reduce task cycle time for assemblers from 514 minutes to 2.3 minutes by August 1913, before he ever introduced the moving assembly line. The assembly line itself reduced cycle time only from 2.3 to 1.19 minutes. [533] That is a better than two-hundredfold gain from interchangeability, against a less than twofold gain from the line itself.

With this innovation, a craft producer might still have used general-purpose machinery and switched frequently between products, while using precision machining techniques to produce identical parts for a set of standardized modular designs. By radically reducing setup times and removing the main cost of fitting from craft production (“all filing and adjusting of parts had... been eliminated”), craft producers would have achieved many of the efficiencies of mass production with none of the centralization costs we saw in Chapter Two.

In a brilliant illustration of history’s tendency to reappear as farce, by the way, GM’s batch-and-queue production resurrected the old job of fitter, supposedly eliminated forever by production to gauge, to deal with the enormous output of defective parts. At GM’s Framingham plant, besides the weeks’ worth of inventory piled among the work stations, Waddell and his co-authors found workers “struggling to attach poorly fitting parts to the Oldsmobile Ciera models they were building.” [534]

The other cost of craft production was setup time: the cost and time entailed in skilled machinists readjusting machine tools for different products. Ford reduced setup time through the use of product-specific machinery, foolproofed with simple jigs and gauges to ensure they worked to standard. [535] The problem was that this required batch production, the source of all the inefficiencies we saw in Chapter Two.

This second cost was overcome in the Toyota Production System by Taiichi Ohno’s “single-minute exchange of dies” (SMED), which reduced the changeover time between products by several orders of magnitude. By the time of World War II, in American-style mass production, manufacturers were dedicating a set of presses to specific parts for months or even years at a time, in order to minimize the unit costs resulting from a day or more of downtime to change dies. [536] Ohno, who began experimenting with used American machinery in the late 1940s, had by the late 1950s reduced die-change time to three minutes. In so doing, he discovered that (thanks to the elimination of in-process inventories, and thanks to the fact that defects showed up immediately at the source) “it actually cost less per part to make small batches of stampings than to run off enormous lots.” [537] In effect, he turned mass-production machinery into general-purpose machinery.
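
The economics behind Ohno’s discovery can be sketched with a simple amortization model. The figures below are illustrative assumptions of my own, not Toyota’s: changeover time is spread over the batch, while the cost of carrying in-process inventory grows with batch size.

```python
# Per-part cost with changeover time amortized over the batch
# (illustrative figures of my own, not Toyota's).

def cost_per_part(setup_min, batch_size, run_min_per_part=1.0,
                  machine_rate_per_min=2.0, holding_cost_per_part=0.01):
    """Amortized setup cost, plus run cost, plus average inventory carrying cost."""
    setup_share = setup_min * machine_rate_per_min / batch_size
    run_cost = run_min_per_part * machine_rate_per_min
    holding = holding_cost_per_part * batch_size / 2  # average inventory carried
    return setup_share + run_cost + holding

# An eight-hour die change forces huge batches; a three-minute change
# makes the smallest batch the cheapest option.
for setup_min in (480, 3):
    for batch in (50, 500, 5000):
        print(f"setup {setup_min:>3} min, batch {batch:>4}: "
              f"{cost_per_part(setup_min, batch):6.2f} per part")
```

With a day-long changeover, the middle batch size is cheapest in this toy model; cut the changeover to three minutes and the smallest batch wins. That is Ohno’s point, and it holds before even counting the defects that large in-process inventories conceal.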

In industrial districts like Emilia-Romagna, the problem of setup and changeover time was overcome by the development of flexible general-purpose machine tools, particularly the small numerically controlled machine tools which the microprocessor revolution permitted in the 1970s. Ford’s innovations in precision cutting of pre-hardened metal to gauge, and the elimination of setup time with small CNC tools in the 1970s, between them made it possible for craft production to capture all the efficiencies of mass production.

Ohno’s system was essentially a return to craft production methods, but with the speed of Ford’s mass production assembly line. With the single-minute exchange of dies, factory machinery bore more of a functional resemblance to general-purpose machinery than to the dedicated and inflexible machinery of GM. But with precision cutting capabilities and a few standardized, modular designs, it achieved nearly the same economies of speed as mass production.

We already described, in Chapter Two, how Sloanism’s “economies of speed” differ from those of the Toyota Production System. The irony, according to Waddell and Bodek, is that Toyota and other lean manufacturers reduce direct labor costs (supposedly the raison d’être of Sloanism) “at rates that leave Sloan companies in the dust.”

The critical technology to cutting direct labor hours by fifty percent or more is better than sixty years old. Electric motors small enough and powerful enough to drive a machine tool had a negligible impact on productivity in America, but a huge impact in Japan. When belt drives came off of machines, and each machine was powered by its own electric motor the door opened up to a productivity improvement equal to that realized by Henry Ford with the advent of the assembly line....

...[T]he day came in the evolution of electrical technology that each machine could be equipped with its own motor. Motors were powerful enough, small enough and cheap enough for the belts and shafts to go by the wayside.... To American thinking, this was not much of an event. Sloan’s system was firmly entrenched by the time the shafts and belts were eliminated. Economy was perceived to result exclusively from running machines as fast as possible, making big batches at a time. There was still one man to one machine, for the most part, and maximizing the output from that man’s labor cost was the objective. Whether machines were lined up in rows, or scattered at random around the factory did not make much difference to the results of that equation.

Shigeo Shingo presented a paper at a technical conference conducted by the Japan Management Association in 1946 entitled “Production Mechanism of Process and Operation.” It was based on the principle that optimizing the overall production process... is the key to manufacturing. To quote Shingo, “Improvement of process must be accomplished prior to improvement of operation.” While the Americans saw manufacturing as a set of isolated operations, all linked by sizeable inventories, the Japanese saw manufacturing as a flow. Where the machines are is a big deal to people concerned about flow while it matters little to people concerned only with isolated operations. To Shingo, the flexibility to put machines anywhere he wanted opened the door to fantastic productivity improvements. [538]

In other words, lean manufacturing—as Sabel and Piore put it—amounts to the discovery, after a century-long dead end, of how to integrate electrical power into manufacturing.

Emilia-Romagna is part of a larger phenomenon, the so-called “Third Italy” (as distinguished from the old industrial triangle of Milan-Turin-Genoa, and the cash crop plantation agriculture of the South):

a vast network of very small enterprises spread through the villages and small cities of central and Northeast Italy, in and around Bologna, Florence, Ancona, and Venice.... These little shops range across the entire spectrum of the modern industrial structure, from shoes, ceramics, textiles, and garments on one side to motorcycles, agricultural equipment, automotive parts, and machine tools on the other. [539]

Although these small shops (quite small on average, with ten workers or fewer not unusual) “perform an enormous variety of the operations associated with mass production,” they do so using “artisans’ methods rather than industrial techniques of production.” [540]

A typical factory is housed on the ground floor of a building, with two or three floors of apartments above for the several extended families that own it.

The workrooms are clean and spacious. A number of hand operations are interspersed with the mechanized ones. The machinery, however, is fully modern technology and design; sometimes it is exactly the same as that found in a modern factory, sometimes a reduced version of a smaller machine. The work is laid out rationally: the workpieces flow along miniature conveyors, whose twists and turns create the impression of a factory in a doll house. [541]

At the smaller end of the scale, “production is still centered in the garage...”

Despite high productivity, the pace of work is typically relaxed, with production stopping daily for workers to retreat to their upstairs apartments for an extended lunch or siesta. [542]

Some [factories] recall turn-of-the century sweatshops.... But many of the others are spotless; the workers extremely skilled and the distinction between them and their supervisors almost imperceptible; the tools the most advanced numerically controlled equipment of its type; the products, designed in the shop, sophisticated and distinctive enough to capture monopolies in world markets. If you had thought so long about Rousseau’s artisan clockmakers at Neuchatel or Marx’s idea of labor as joyful, self-creative association that you had begun to doubt their possibility, then you might, watching these craftsmen at work, forgive yourself the sudden conviction that something more utopian than the present factory system is practical after all. [543]

Production on the Emilia-Romagna model is regulated on a demand-pull basis: general-purpose machinery makes it possible to produce in small batches and switch frequently and quickly from one product line to another, as orders come in. Further, with the separate stages of production broken down in a networked relationship between producers, constant shifts in contractual relationships between suppliers and outlets are feasible at relatively low cost. [544]

While the small subcontractors in a sector are jealous of their autonomy and often vigorously competitive, they are also quite likely to collaborate as they become increasingly specialized, “subcontracting to each other or sharing the cost of an innovation in machine design that would be too expensive for one producer to order by himself.” There is a tendency toward cooperation especially because the network relationships between specialized firms may shift rapidly with changes in demand, with the same firms alternately subcontracting to one another. [545] Piore and Sabel describe the fluidity of supply chains in an industrial district:

The variability of demand meant that patterns of subcontracting were constantly rearranged. Firms that had underestimated a year’s demand would subcontract the overflow to less well situated competitors scrambling to adapt to the market. But the next year the situation might be reversed, with winners in the previous round forced to sell off equipment to last year’s losers. Under these circumstances, every employee could become a subcontractor, every subcontractor a manufacturer, every manufacturer an employee. [546]

The Chinese shanzhai phenomenon bears a striking resemblance to the Third Italy. The literal meaning of shanzhai is “mountain fortress,” but it carries the connotation of a redoubt or stronghold outside the state’s control, or a place of refuge for bandits or rebels (much like the Cossack communities on the fringes of the Russian Empire, or the Merry Men in Sherwood Forest). Andrew “Bunnie” Huang writes:

The contemporary shanzhai are rebellious, individualistic, underground, and self-empowered innovators. They are rebellious in the sense that the shanzhai are celebrated for their copycat products; they are the producers of the notorious knock-offs of the iPhone and so forth. They are individualistic in the sense that they have a visceral dislike for the large companies; many of the shanzhai themselves used to be employees of large companies (both US and Asian) who departed because they were frustrated at the inefficiency of their former employers. They are underground in the sense that once a shanzhai “goes legit” and starts doing business through traditional retail channels, they are no longer considered to be in the fraternity of the shanzai. They are self-empowered in the sense that they are universally tiny operations, bootstrapped on minimal capital, and they run with the attitude of “if you can do it, then I can as well.”

An estimate I heard places 300 shanzhai organizations operating in Shenzhen. These shanzai consist of shops ranging from just a couple folks to a few hundred employees; some just specialize in things like tooling, PCB design, PCB assembly, cell phone skinning, while others are a little bit broader in capability. The shanzai are efficient: one shop of under 250 employees churns out over 200,000 mobile phones per month with a high mix of products (runs as short as a few hundred units is possible); collectively an estimate I heard places shanzhai in the Shenzhen area producing around 20 million phones per month. That’s an economy approaching a billion dollars a month. Most of these phones sell into third-world and emerging markets: India, Africa, Russia, and southeast Asia; I imagine if this model were extended to the PC space the shanzhai would easily accomplish what the OLPC failed to do.

Significantly, the shanzai are almost universally bootstrapped on minimal capital with almost no additional financing — I heard that typical startup costs are under a few hundred thousand for an operation that may eventually scale to over 50 million revenue per year within a couple years. Significantly, they do not just produce copycat phones. They make original design phones as well.... These original phones integrate wacky features like 7.1 stereo sound, dual SIM cards, a functional cigarette holder, a high-zoom lens, or a built-in UV LED for counterfeit money detection. Their ability to not just copy, but to innovate and riff off of designs is very significant. They are doing to hardware what the web did for rip/mix/burn or mashup compilations....

Interestingly, the shanzhai employ a concept called the “open BOM” — they share their bill of materials and other design materials with each other, and they share any improvements made; these rules are policed by community word-of-mouth, to the extent that if someone is found cheating they are ostracized by the shanzhai ecosystem. To give a flavor of how this is viewed in China, I heard a local comment about how great it was that the shanzhai could not only make an iPhone clone, they could improve it by giving the clone a user-replaceable battery. US law would come down on the side of this activity being illegal and infringing, but given the fecundity of mashup on the web, I can’t help but wonder out loud if mashup in hardware is all that bad.... In a sense, I feel like the shanzhai are brethren of the classic western notion of hacker-entrepreneurs, but with a distinctly Chinese twist to them.
My personal favorite shanzhai story is of the chap who owns a house that I’m extraordinarily envious of. His house has three floors: on the top, is his bedroom; on the middle floor is a complete SMT manufacturing line; on the bottom floor is a retail outlet, selling the products produced a floor above and designed two floors above. How cool would it be to have your very own SMT line right in your home! It would certainly be a disruptive change to the way I innovate to own infrastructure like that — not only would I save on production costs, reduce my prototyping time, and turn inventory aggressively (thereby reducing inventory capital requirements), I would be able to cut out the 20–50% minimum retail margin typically required by US retailers, assuming my retail store is in a high-traffic urban location. ....I always had a theory that at some point, the amount of knowledge and the scale of the markets in the area would reach a critical mass where the Chinese would stop being simply workers or copiers, and would take control of their own destiny and become creators and ultimately innovation leaders. I think it has begun — these stories I’m hearing of the shanzhai and the mashup they produce are just the beginning of a hockey stick that has the potential to change the way business is done, perhaps not in the US, but certainly in that massive, untapped market often referred to as the “rest of the world”. [547]

And like the flexible manufacturing networks in the Third Italy, Huang says, the density and economic diversity of the environment in which shanzhai enterprises function promotes flow and adaptability.

...[T]he retail shop on the bottom floor in these electronic market districts of China enables goods to actually flow; your neighbor is selling parts to you, the guy across the street sells your production tools, and the entire block is focused on electronics production, consumption or distribution in some way. The turnover of goods is high so that your SMT and design shop on the floors above can turn a profit. [548]

The success of shanzhai enterprises results not only from their technical innovativeness, according to Vassar professor Yu Zhou, but from “how they form supply chains and how rapidly they react to new trends.” [549]

C. New Possibilities for Flexible Manufacturing

Considerable possibilities existed for increasing the efficiency of craft production through the use of flexible machinery, even in the age of steam and water power. The Jacquard loom, for example, used in the Lyon silk industry, was a much lower-tech precursor of Ohno’s Single Minute Exchange of Dies (SMED). With the loom controlled by perforated cards, the setup time for switching to a new pattern was reduced substantially. This made profitable small-batch production that would have been out of the question with costly, dedicated mass-production machinery. [550] Lyon persisted as a thriving industrial district, by the way, until the French government killed it off in the 1960s: official policy being to encourage conversion to a more “progressive,” mass-production model through state-sponsored mergers and acquisitions, the local networked firms became subsidiaries of French-based transnational corporations. [551]

Such industrial districts, according to Piore and Sabel, demonstrated considerable “technological vitality” in the “speed and sophistication with which they adapted power sources to their needs.”

The large Alsatian textile firms not only made early use of steam power but also became—through their sponsorship of research institutes—the nucleus of a major theoretical school of thermodynamics. Small firms in Saint-Etienne experimented with compressed air in the middle of the nineteenth century, before turning, along with Remscheid and Solingen, to the careful study of small steam and gasoline engines. After 1890, when the long-distance transmission of electric power was demonstrated at Frankfurt, these three regions were among the first industrial users of small electric motors. [552]

With the introduction of electric motors, the downscaling of power machinery to virtually any kind of small-scale production was no longer a matter of technological possibility. It was only a question of institutional will, in deciding whether to allocate research and development resources to large- or small-scale production. As we saw in Chapter One, the state tipped the balance toward large-scale mass-production industry, and production with small-scale power machinery was relegated to a few isolated industrial districts. Nevertheless, as we saw earlier in this chapter, Borsodi demonstrated that small-scale production—even starved for developmental resources and with one hand tied behind its back—was able to surpass mass-production industry in efficiency.

For the decades of Sloanist dominance, local industrial districts were islands in a hostile sea.

But with the decay of the first stage of the paleotechnic pseudomorph, flexible manufacturing has become the wave of the future—albeit still imprisoned within a centralized corporate framework. And better yet, networked, flexible manufacturing shows great promise for breaking through the walls of the old corporate system and becoming the basis of a fundamentally different kind of society.

By the 1970s, anarchist Murray Bookchin was proposing small general-purpose machinery as the foundation of a decentralized successor to the mass-production economy.

In a 1970s interview with Mother Earth News, Borsodi repeated his general theme: that when distribution costs were taken into account, home and small shop manufacture were the most efficient way to produce some two-thirds of what we consume. But he conceded that some goods, like “electric wire or light bulbs,” could not be produced “very satisfactorily on a limited scale.” [553]

But as Bookchin and Kirkpatrick Sale pointed out, developments in production technology since Borsodi’s experiments had narrowed considerably the range of goods for which genuine economies of scale existed. Bookchin proposed the adoption of multiple-purpose production machinery for frequent switching from one short production run to another.

The new technology has produced not only miniaturized electronic components and smaller production facilities but also highly versatile, multi-purpose machines. For more than a century, the trend in machine design moved increasingly toward technological specialization and single purpose devices, underpinning the intensive division of labor required by the new factory system. Industrial operations were subordinated entirely to the product. In time, this narrow pragmatic approach has “led industry far from the rational line of development in production machinery,” observe Eric W. Leaver and John J. Brown. “It has led to increasingly uneconomic specialization.... Specialization of machines in terms of end product requires that the machine be thrown away when the product is no longer needed. Yet the work the production machine does can be reduced to a set of basic functions—forming, holding, cutting, and so on—and these functions, if correctly analyzed, can be packaged and applied to operate on a part as needed.” Ideally, a drilling machine of the kind envisioned by Leaver and Brown would be able to produce a hole small enough to hold a thin wire or large enough to admit a pipe....

The importance of machines with this kind of operational range can hardly be overestimated. They make it possible to produce a large variety of products in a single plant. A small or moderate-sized community using multi-purpose machines could satisfy many of its limited industrial needs without being burdened with underused industrial facilities. There would be less loss in scrapping tools and less need for single-purpose plants. The community’s economy would be more compact and versatile, more rounded and self-contained, than anything we find in the communities of industrially advanced countries. The effort that goes into retooling machines for new products would be enormously reduced. Retooling would generally consist of changes in dimensioning rather than in design. [554]

And Sale, commenting on this passage, observed that many of Borsodi’s stipulated exceptions could in fact now be produced most efficiently in a small community factory. The same plant could (say) finish a production run of 30,000 light bulbs, and then switch to wiring or other electrical products—thus “in effect becoming a succession of electrical factories.” A machine shop making electric vehicles could switch from tractors to reapers to bicycles. [555]

Eric Husman, commenting on Bookchin’s and Sale’s treatment of multiple-purpose production technology, points out that they were 1) to a large extent reinventing the wheel, and 2) incorporating a large element of Sloanism into their model:

Human Scale (1980) was written without reference to how badly the Japanese production methods... were beating American mass production methods at the time.... What Sale failed to appreciate is that the Japanese method (...almost diametrically opposed to the Sloan method that Sale is almost certainly thinking of as “mass production”) allows the production of higher quality articles at lower prices....

....Taichi Ohno would laugh himself silly at the thought of someone toying with the idea [of replacing large-batch production on specialized machinery with shorter runs on general-purpose machinery] 20 years after he had perfected it. Ohno’s development of Toyota’s Just-In-Time method was born exactly out of such circumstances, when Toyota was a small, intimate factory in a beaten country and could not afford the variety and number of machines used in such places as Ford and GM. Ohno pushed, and Shingo later perfected, the idea of Just-In-Time by using Single Minute Exchange of Dies (SMED), making a mockery of a month-long changeover. The idea is to use general machines (e.g. presses) in specialized ways (different dies for each stamping) and to vary the product mix on the assembly line so that you make some of every product every day.

The Sale method (the slightly modified Sloan/GM method) would require extensive warehouses to store the mass-produced production runs (since you run a year’s worth of production for those two months and have to store it for the remaining 10 months). If problems were discovered months later, the only recourse would be to wait for the next production run (months later). If too many light bulbs were made, or designs were changed, all those bulbs would be waste. And of course you can forget about producing perishables this way.

The JIT method would be to run a few lightbulbs, a couple of irons, a stove, and a refrigerator every hour, switching between them as customer demand dictated. No warehouse needed, just take it straight to the customer. If problems are discovered, the next batch can be held until the problems are solved, and a new batch will be forthcoming later in the shift or during a later shift. If designs or tastes change, there is no waste because you only produce as customers demand. [556]

Since Bookchin wrote Post-Scarcity Anarchism, incidentally, Japanese technical innovations have blurred even further the line between the production model he proposed above and the Japanese model of lean manufacturing. The numerically controlled machine tools of American mass-production industry, scaled down thanks to the microprocessor revolution, became suitable as a form of general-purpose machinery for the small shop. As developed by the Japanese, it was

a new kind of machine tool: numerically controlled general-purpose equipment that is easily programmed and suited for the thousands of small and medium-sized job shops that do much of the batch production in metalworking. Until the mid-1970s, U.S. practice suggested that computer-controlled machine tools could be economically deployed only in large firms (typically in the aerospace industry); in these firms such tools were programmed, by mathematically sophisticated technicians, to manufacture complex components. But advances in the 1970s in semiconductor and computer technology made it possible to build a new generation of machine tools: numerically controlled (NC) or computer-numerical-control (CNC) equipment.
NC equipment could easily be programmed to perform the wide range of simple tasks that make up the majority of machining jobs. The equipment’s built-in microcomputers allowed a skilled metalworker to teach the machine a sequence of cuts simply by performing them once, or by translating his or her knowledge into a program through straightforward commands entered via a keyboard located on the shop floor. [557]

According to Piore and Sabel, CNC machinery offers the same advantages over traditional craft production—i.e., flexibility with reduced setup cost—that craft production offered over mass production.

Efficiency in production results from adapting the equipment to the task at hand: the specialization of the equipment to the operation. With conventional technology, this adaptation is done by physical adjustments in the equipment; whenever the product is changed, the specialized machine must be rebuilt. In craft production, this means changing tools and the fixtures that position the workpiece during machining. In mass production, it means scrapping and replacing the machinery. With computer technology, the equipment (the hardware) is adapted to the operation by the computer program (the software); therefore, the equipment can be put to new uses without physical adjustments—simply by reprogramming. [558]

The more setup time and cost are reduced, and the lower the cost of redeploying resources, the less significant both economies of scale and economies of specialization become, and hence the wider the range of products it is feasible to produce for the local or regional market. [559]
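Piore and Sabel’s point about setup cost can be restated with the textbook economic-order-quantity formula, which is introduced here for illustration rather than taken from their text: the cost-minimizing batch size varies with the square root of the changeover cost, so driving changeover cost toward zero (the point of SMED) drives batch sizes toward one-piece flow. All numbers below are invented:

```python
from math import sqrt

# Classic EOQ: optimal batch size Q* = sqrt(2 * D * S / H), where
# D = annual demand (units), S = cost per setup/changeover, and
# H = annual holding cost per unit. Figures are purely illustrative.
D, H = 10_000, 2.0
for S in (5_000, 500, 50, 5, 0.05):  # changeover cost falling, SMED-style
    Q = sqrt(2 * D * S / H)
    print(f"setup cost ${S:>8}: optimal batch ~ {Q:,.0f} units")
# As S falls toward zero, the optimal batch shrinks toward one-piece
# flow, and batch-size economies of scale simply evaporate.
```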

Interestingly, as recounted by David Noble, numeric control was first introduced for large-batch production with expensive machinery in heavy industry, and, because of its many inefficiencies, was profitable only with massive government subsidies. But the small-scale numerically controlled machine tools, made possible by the invention of the microprocessor, were ideally suited to small-batch production by small local shops.

This is a perennial phenomenon, which we will examine at length in Chapter Seven: even when the state capitalist system heavily subsidizes the development of technologies specifically suited to large-scale, centralized production, decentralized industry takes the crumbs from under the table and uses them more efficiently than state capitalist industry. Consider, also, the role of the state in creating the technical prerequisites for the desktop and Internet revolutions, which are destroying the proprietary culture industries and proprietary industrial design. State capitalism subsidizes its gravediggers.

If Husman compared the Bookchin-Sale method to the Toyota Production System, and found it wanting, H. Thomas Johnson in turn has subjected the Toyota Production System to his own critique. As amazing as Ohno’s achievements were at Toyota, introducing his lean production methods within the framework of a transnational corporation amounted to putting new wine in old bottles. Ohno’s lean production methods, Johnson argued, are ideally suited to a relocalized manufacturing economy. (This is another example of the decay of the cultural pseudomorph discussed in the previous chapter—the temporary imprisonment of lean manufacturing techniques in the old centralized corporate cocoon.)

In his Foreword to Waddell’s and Bodek’s The Rebirth of American Industry (something of a bible for American devotees of the Toyota Production System), Johnson writes:

Some people, I am afraid, see lean as a pathway to restoring the large manufacturing giants the United States economy has been famous for in the past half century. ...The cheap fossil fuel energy sources that have always supported such production operations cannot be taken for granted any longer. One proposal that has great merit is that of rebuilding our economy around smaller scale, locally-focused organizations that provide just as high a standard living [sic] as people now enjoy, but with far less energy and resource consumption. Helping to create the sustainable local living economy may be the most exciting frontier yet for architects of lean operations. Time will tell. [560]

The “warehouses on wheels” (or “container ships”) distribution model used by centralized manufacturing corporations, even “lean” ones like Toyota, is fundamentally at odds with the principles of lean production. Lean production calls for eliminating inventory by gearing production to orders on a demand-pull basis. But long distribution chains simply sweep the huge factory inventories of Sloanism under the rug, and shift them to trucks and ships. There’s still an enormous inventory of finished goods at any given time—it’s just in motion.

Husman, who, as we have already seen, is an enthusiastic advocate for lean production, has himself pointed to “warehouses on wheels” as just an outsourced version of Sloanist inventories:

For another view of self-sufficiency—and I hate to beat this dead horse, but the parallel seems so striking—we have the lean literature on local production. In Lean Thinking, Womack et al. discuss the travails of the simple aluminum soda can. From the mine to the smelter to the rolling mill to the can maker alone takes several months of storage and shipment time, yet there is only about 3 hours worth of processing time. A good deal of aluminum smelting is done in Norway and/or Sweden, where widely available hydroelectric power makes aluminum production from alumina very cheap and relatively clean. From there, the cans are shipped to bottlers where they sit for a few more days before being filled, shipped, stored, bought, stored, and drank. All told, it takes 319 days to go from the mine to your lips, where you spend a few minutes actually using the can. The process also produces about 24% scrap (most of which is recycled at the source) because the cans are made at one location and shipped empty to the bottler and they get damaged in transit. It’s an astounding tale of how wasteful the whole process is, yet still results in a product that—externalities aside—costs very little to the end user. Could this type of thing be done locally? After all, every town is awash in a sea of used aluminum cans, and the reprocessing cost is much lower than the original processing cost (which is why Reynolds and ALCOA buy scrap aluminum). Taking this problem to the obvious conclusion, Bill Waddell and other lean consultants have been trying to convince manufacturers that if they would only fire the MBAs and actually learn to manufacture, they could do so much more cheaply locally than they can by offshoring their production. Labor costs simply aren’t the deciding factor, no matter what the local Sloan school is teaching: American labor may be more expensive then [sic] foreign labor, but it is also more productive. Further, all of the (chimerical) gains to be made from going to cheaper labor are likely to be lost in shipping costs. Think of that flotilla of shipping containers on cargo ships between here and Asia as a huge warehouse on the ocean, warehouses that not only charge rent, but also for fuel. [561]
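Husman’s “warehouse on the ocean” can be given a number with Little’s Law: average inventory in a pipeline equals throughput multiplied by time in the system. The lead time and processing time below are the figures from the soda-can example just quoted; the throughput is an invented placeholder, since the text gives none:

```python
# Little's Law: average inventory L = throughput (lambda) * lead time (W).
cans_per_day = 1_000_000   # assumed throughput for one value stream
lead_time_days = 319       # mine-to-lips time quoted above
processing_hours = 3       # actual value-adding time quoted above

pipeline_inventory = cans_per_day * lead_time_days
print(f"cans in the pipeline at any moment: {pipeline_inventory:,}")
print(f"share of lead time spent adding value: "
      f"{processing_hours / (lead_time_days * 24):.4%}")
# Cutting lead time cuts in-transit and in-storage inventory
# one-for-one: the "warehouse on the ocean" shrinks accordingly.
```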

Regarding the specific example of aluminum cans, Womack et al. speculate that the slow acceptance of recycling results from evaluating its efficiencies as a discrete step, rather than in terms of its effects on the entire production stream. If the rate of recycling approached 100%,

interesting possibilities would emerge for the entire value stream. Mini-smelters with integrated mini-rolling mills might be located near the can makers in England, eliminating in a flash most of the time, storage, and distances involved today in the steps above the can maker. [562]

A similar dynamic might result from the proliferation of mini-mills scaled to local needs, with most of the steel inputs for small-scale industry supplied from recycled local scrap.

As Womack et al. point out, lean production—properly understood—requires not only the scaling of machinery to production flow within the factory. It also requires scaling the factory to local demand, and siting it as close as possible to the point of consumption, in order to eliminate as much as possible of the “inventory” in trucks and ships. It is necessary “to locate both design and physical production in the appropriate place to serve the customer.”

Just as many manufacturers have concentrated on installing larger and faster machines to eliminate the direct labor, they’ve also gone toward massive centralized facilities for product families... while outsourcing more and more of the actual component part making to other centralized factories serving many final assemblers. To make matters worse, these are often located on the wrong side of the world from both their engineering operations and their customers... to reduce the cost per hour of labor. The production process in these remotely located, high-scale facilities may even be in some form of flow, but... the flow of the product stops at the end of the plant. In the case of bikes, it’s a matter of letting the finished product sit while a whole sea container for a given final assembler’s warehouse in North America is filled, then sending the filled containers to the port, where they sit some more while waiting for a giant container ship. After a few weeks on the ocean, the containers go by truck to one of the bike firm’s regional warehouses, where the bikes wait until a specific customer order needs filling, often followed by shipment to the customer’s warehouse for more waiting. In other words, there’s no flow except along a tiny stretch of the total value stream inside one isolated plant. The result is high logistics costs and massive finished unit inventories in transit and at retailer warehouses.... When carefully analyzed, these costs and revenue losses are often found to more than offset the savings in production costs from low wages, savings which can be obtained in any case by locating smaller flow facilities incorporating more of the total production steps much closer to the customer. [563]

To achieve the scale needed to justify this degree of automation it will often be necessary to serve the entire world from a single facility, yet customers want to get exactly the product they want exactly when they want it.... It follows that oceans and lean production are not compatible. We believe that, in almost every case, locating smaller and less-automated production systems within the market of sale will yield lower total costs (counting logistics and the cost of scrapped goods no one wants by the time they arrive) and higher customer satisfaction. [564]
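Womack et al. give no worked numbers in these passages, so the following is a hypothetical sketch of the landed-cost accounting they describe: low-wage unit cost plus freight, pipeline carrying cost, and obsolescence, set against a higher-wage local cost with a short pipeline. Every figure is invented, and only the shape of the calculation is the point:

```python
# Hypothetical landed-cost comparison in the spirit of Womack et al.

def landed_cost(unit_cost, freight, lead_time_days, unit_value,
                carrying_rate=0.25, obsolescence_rate=0.0):
    """Cost per unit once logistics, pipeline carrying cost, and
    scrapped or unsellable goods are counted alongside production."""
    carrying = unit_value * carrying_rate * lead_time_days / 365
    scrap = unit_value * obsolescence_rate
    return unit_cost + freight + carrying + scrap

offshore = landed_cost(unit_cost=60.0, freight=8.0, lead_time_days=90,
                       unit_value=120.0, obsolescence_rate=0.05)
local = landed_cost(unit_cost=75.0, freight=1.0, lead_time_days=5,
                    unit_value=120.0)
print(f"offshore landed cost per unit: ${offshore:6.2f}")
print(f"local landed cost per unit:    ${local:6.2f}")
# With plausible carrying and obsolescence rates, the wage saving is
# more than offset -- which is the claim quoted above.
```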

Husman, incidentally, describes a localized “open-source production” model, with numerous small local machine shops networked to manufacture a product according to open-source design specifications and then to manufacture replacement parts and do repairs on an as-needed basis, as “almost an ideally Lean manufacturing process. Dozens of small shops located near their customers, each building one at a time.” [565]

The authors of Natural Capitalism devote a separate chapter to lean production. And perhaps not surprisingly, their description of the lean approach seems almost tailor-made for relocalized manufacturing on the Emilia-Romagna model:

The essence of the lean approach is that in almost all modern manufacturing, the combined and often synergistic benefits of the lower capital investment, greater flexibility, often higher reliability, lower inventory cost, and lower shipping cost of much smaller and more localized production equipment will far outweigh any modest decreases in its narrowly defined “efficiency” per process step. It’s more efficient overall, in resources and time and money, to scale production properly, using flexible machines that can quickly shift between products. By doing so, all the different processing steps can be carried out immediately adjacent to one another with the product kept in continuous flow. The goal is to have no stops, no delays, no backflows, no inventories, no expediting, no bottlenecks, no buffer stocks, and no muda [i.e., waste or superfluity]. [566]

Decentralizing technologies undermined the rationale for large scale not only in mass-production industries, but in continuous-processing industries. In steel, for example, the introduction of the minimill with electric-arc furnace eliminated the need for operating on a large enough scale to keep a blast furnace in continuous operation. Not only did the minimill make it possible to scale steel production to the local industrial economy, but it processed scrap metal considerably more cheaply than conventional blast furnaces processed iron ore. [567]

Sidebar on Marxist Objections to Non-Capitalist Markets: The Relevance of the Decentralized Industrial Model

In opposing a form of socialism centered on cooperatives and non-capitalist markets, a standard argument of Marxists and other non-market socialists is that it would be unsustainable and degenerate into full-blown capitalism: “What happens to the losers?” Non-capitalist markets would eventually become capitalistic, through the normal operation of the laws of the market. Here’s the argument as stated by Christian Siefkes, a German Marxist active in the P2P movement, on the Peer to Peer Research List:

Yes, they would trade, and initially their trading wouldn’t be capitalistic, since labor is not available for hire. But assuming that trade/exchange is their primary way of organizing production, capitalism would ultimately result, since some of the producers would go bankrupt, they would lose their direct access to the means of production and be forced to sell their labor power. If none of the other producers is rich enough to hire them, they would be unlucky and starve (or be forced to turn to other ways of survival such as robbery/thievery, prostitution—which is what we also saw as a large-scale phenomenon with the emergence of capitalism, and which we still see in so-called developing countries where there is not enough capital to hire all or most of the available labor power). But if there are other producers/people who can hire them, the seed of capitalism with its capitalist/worker divide is laid. Of course, the emerging class of capitalists won’t be just passive bystanders watching this process happen. Since they need a sufficiently large labor force, and since independent producers are unwanted competition for them, they’ll actively try to turn the latter into the former. Means for doing so are enclosure/privatization laws that deprive the independent producers of their means of production, technical progress that makes it harder for them to compete (esp. if expensive machines are required which they simply lack the money to buy), other laws that increase the overhead for independent producers (e.g. high bookkeeping requirements), creation of big sales points that non-capitalist producers don’t have access to (department stores etc.), simple overproduction that drives small-scale producers (who can’t stand huge losses) out of the market, etc. But even if they were passive bystanders (which is an unrealistic assumption), the conversion of independent producers into workers forced to sell their labor power would still take place through the simple laws of the market, which cause some producers to fail and go bankrupt. So whenever you start with trade as the primary way of production, you’ll sooner or later end up with capitalism. It’s not a contradiction, it’s a process. [568]

One answer, in the flexible production model, is that there’s no reason to have any permanent losers. First of all, the overhead costs are so low that it’s possible to ride out a slow period indefinitely. Second, in low-overhead flexible production, in which the basic machinery for production is widely affordable and can be easily reallocated to new products, there’s really no such thing as a “business” to go out of. The lower the capitalization required for entering the market, and the lower the overhead to be borne in periods of slow business, the more the labor market takes on a networked, project-oriented character—like, e.g., peer production of software. In free software, and in any other industry where the average producer owns a full set of tools and production centers mainly on self-managed projects, the situation is likely to be characterized not so much by the entrance and exit of discrete “firms” as by a constantly shifting balance of projects, merging and forking, and with free agents constantly shifting from one to another. The same fluidity prevails, according to Piore and Sabel, in the building trades and the garment industry. [569]
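The claim that low overhead lets a producer ride out a slow period indefinitely can be made vivid with a toy cash-flow simulation. Nothing here models a real industry; the feast-or-famine revenue process and every parameter are invented:

```python
import random

# Toy model: each month a producer earns variable revenue and pays a
# fixed overhead; bankruptcy means cash falling below zero.

def survives(overhead, months=120, start_cash=500.0, seed=None):
    rng = random.Random(seed)
    cash = start_cash
    for _ in range(months):
        cash += rng.uniform(0, 200) - overhead  # mean revenue: 100/month
        if cash < 0:
            return False
    return True

for overhead in (20, 60, 100, 140):
    rate = sum(survives(overhead, seed=i) for i in range(1000)) / 1000
    print(f"fixed overhead {overhead:>3}/month -> 10-yr survival rate {rate:.0%}")
# Producers whose fixed costs sit well below mean revenue ride out
# slumps almost indefinitely; high-overhead producers routinely fail.
```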

Another point: in a society where most people own the roofs over their heads and can meet a major part of their subsistence needs through home production, workers who own the tools of their trade can afford to ride out periods of slow business, and to be somewhat choosy in waiting to contract out to the projects most suited to their preference. To the extent that some form of wage employment survived in a free economy, it would likely take up a much smaller share of the total economy; wage labor would be harder to find, and attracting it would require considerably higher wages, so that self-employment and cooperative ownership would be far more prevalent. What wage employment remained would be the province of a class of itinerant laborers, taking jobs of work when they needed a bit of supplementary income or to build up some savings, and then retiring for long periods to a comfortable life living off their own homesteads. This pattern—living off the common and accepting wage labor only when it was convenient—was precisely what the Enclosures were intended to stamp out.

For the same reason, the standard model of “unemployment” in American-style mass-production industry is in fact quite place-bound, and largely irrelevant to flexible manufacture in European-style industrial districts. In such districts, and to a considerable extent in the American garment industry, work-sharing with reduced hours is chosen in preference to layoffs, so the dislocations from an economic downturn are far less severe. Unlike the American presumption of a fixed and permanent “shop” as the central focus of the labor movement, the industrial district assumes the solidaristic craft community as the primary long-term attachment for the individual worker, and the job site at any given time as a passing state of affairs. [570]

And finally, in a relocalized economy of small-scale production for local markets, where most money is circulated locally, there is apt to be far less of a tendency toward boom-bust cycles or wild fluctuations in commodity prices. Rather, there is likely to be a fairly stable long-term matching of supply to demand.

In short, the Marxist objection assumes the high-overhead industrial production model as “normal,” and judges cooperative and peer production by their ability to adapt to circumstances that almost certainly wouldn’t exist.

Chapter Five: The Small Workshop, Desktop Manufacturing, and Household Production

A. Neighborhood and Backyard Industry

A recurring theme among early writers on decentralized production and the informal and household economies is the community workshop, and its use in particular for repair and recycling. Even in the 1970s, when the price of the smallest machine tools was much higher in real terms, it was feasible by means of cooperative organization to spread the capital outlay cost over a large pool of users.

Kirkpatrick Sale speculated that neighborhood recycling and repair centers would put back into service the almost endless supply of defunct appliances currently sitting in closets or basements—as well as serving as “remanufacturing centers” for (say) diesel engines and refrigerators. [571]

Writing along similar lines, Colin Ward suggested “the pooling of equipment in a neighborhood group.”

Suppose that each member of the group had a powerful and robust basic tool, while the group as a whole had, for example, a bench drill, lathes and a saw bench to relieve the members from the attempt to cope with work which required these machines with inadequate tools of their own, or wasting their resources on under-used individually-owned plant. This in turn demands some kind of building to house the machinery: the Community Workshop. But is the Community Workshop idea nothing more than an aspect of the leisure industry, a compensation for the tedium of work? [572]

In other words, is it just a “hobby?” Ward argued, to the contrary, that it would bridge the growing gap between the worlds of work and leisure by making productive activity in one’s free time a source of real use-value.

Could [the unemployed] make a livelihood for themselves today in the community workshop? If the workshop is conceived merely as a social service for ‘creative leisure’ the answer is that it would probably be against the rules.... But if the workshop were conceived on more imaginative lines than any existing venture of this kind, its potentialities could become a source of livelihood in the truest sense. In several of the New Towns in Britain, for example, it has been found necessary and desirable to build groups of small workshops for individuals and small businesses engaged in such work as repairing electrical equipment or car bodies, woodworking and the manufacture of small components. The Community Workshop would be enhanced by its cluster of separate workplaces for ‘gainful’ work. Couldn’t the workshop become the community factory, providing work or a place for work for anyone in the locality who wanted to work that way, not as an optional extra to the economy of the affluent society which rejects an increasing proportion of its members, but as one of the prerequisites of the worker-controlled economy of the future? Keith Paton..., in a far-sighted pamphlet addressed to members of the Claimants’ Union, urged them not to compete for meaningless jobs in the economy which has thrown them out as redundant, but to use their skills to serve their own community. (One of the characteristics of the affluent world is that it denies its poor the opportunity to feed, clothe, or house themselves, or to meet their own and their families’ needs, except from grudgingly doled-out welfare payments). He explains that:

...[E]lectrical power and ‘affluence’ have brought a spread of intermediate machines, some of them very sophisticated, to ordinary working class communities. Even if they do not own them (as many claimants do not) the possibility exists of borrowing them from neighbours, relatives, ex-workmates. Knitting and sewing machines, power tools and other do-it-yourself equipment comes in this category. Garages can be converted into little workshops, home-brew kits are popular, parts and machinery can be taken from old cars and other gadgets. If they saw their opportunity, trained metallurgists and mechanics could get into advanced scrap technology, recycling the metal wastes of the consumer society for things which could be used again regardless of whether they would fetch anything in a shop. Many hobby enthusiasts could begin to see their interests in a new light. [573]

Karl Hess also discussed community workshops—or as he called them, “shared machine shops”—in Community Technology.

The machine shop should have enough basic tools, both hand and power, to make the building of demonstration models or test facilities a practical and everyday activity. The shared shop might just be part of some other public facility, used in its off hours. Or the shop might be separate and stocked with cast-off industrial tools, with tools bought from government surplus through the local school system... Work can, of course, be done as well in home shops or in commercial shops of people who like the community technology approach.... Thinking of such a shared workshop in an inner city, you can think of its use... for the maintenance of appliances and other household goods whose replacement might represent a real economic burden in the neighborhood.... ...The machine shop could regularly redesign cast-off items into useful ones. Discarded refrigerators, for instance, suggest an infinity of new uses, from fish tanks, after removing doors, to numerous small parts as each discarded one is stripped for its components, which include small compressors, copper tubing, heat transfer arrays, and so on. The same goes for washing machines.... [574]

Hess’s choice of words, by the way, evidenced a failure to anticipate the extent to which flexible networked manufacturing would blur the line between “demonstration models” or test facilities and serial production.

Sharing is a way of maximizing the utilization of idle productive goods owned by individuals. Just about any tool or appliance you need for a current project, but lack, is probably gathering dust on the shelf of someone within a few blocks of where you live. If the pooling of such idle resources doesn’t seem like much of a deal for the person with the unused appliances, keep in mind first that he isn’t getting anything at all out of them now, second that he may trade access to them for access to other people’s tools that he needs, and third that the arrangement may increase the variety of goods and services he has to choose from outside the wage system.
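The economics of pooling can be put in back-of-the-envelope form. The sketch below compares the per-use cost of a tool for a lone owner against a pooled community workshop; every price and usage figure is hypothetical, chosen only to show that pooled cost falls in proportion to the number of members:

```python
# Per-use cost of tool access, owned alone vs. pooled. All figures invented.
tools = {"bench drill": 400, "lathe": 2500, "saw bench": 900}  # purchase cost
members = 30           # households sharing a community workshop
uses_per_member = 6    # times per year each member needs each tool
lifetime_years = 10

for tool, price in tools.items():
    own = price / (uses_per_member * lifetime_years)
    pooled = price / (members * uses_per_member * lifetime_years)
    print(f"{tool:>11}: ${own:6.2f}/use owned alone, ${pooled:5.2f}/use pooled")
# The pooled per-use cost is 1/members of the individual cost -- the
# arithmetic behind Ward's "under-used individually-owned plant".
```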

The same idea has appeared in the San Francisco Bay area, albeit in a commercial rather than communitarian form, as TechShop: [575]

TechShop is a 15,000 square-foot membership-based workshop that provides members with access to tools and equipment, instruction, and a creative and supportive community of like-minded people so you can build the things you have always wanted to make.... TechShop provides you with access to a wide variety of machinery and tools, including milling machines and lathes, welding stations and a CNC plasma cutter, sheet metal working equipment, drill presses and band saws, industrial sewing machines, hand tools, plastic and wood working equipment including a 4’ x 8’ ShopBot CNC router, electronics design and fabrication facilities, Epilog laser cutters, tubing and metal bending machines, a Dimension SST 3-D printer, electrical supplies and tools, and pretty much everything you’d ever need to make just about anything.

Hess linked his idea for a shared machine shop to another idea, “[s]imilar in spirit,” the shared warehouse:

A community decision to share a space in which discarded materials can be stored, categorized, and made easily available is a decision to use an otherwise wasted resource.... The shared warehouse... should collect a trove of bits and pieces of building materials.... There always seems to be a bundle of wood at the end of any project that is too good to burn, too junky to sell, and too insignificant to store. Put a lot of those bundles together and the picture changes to more and more practical possibilities of building materials for the public space. Spare parts are fair game for the community warehouse. Thus it can serve as a parts cabinet for the community technology experimenter.... A problem common to many communities is the plight of more resources leaving than coming back in.... The shared work space and the shared warehouse space involve a community in taking a first look at this problem at a homely and nonideological level. [576]

This ties in closely with Jane Jacobs’ recurring themes of the development of local, diversified economies through the discovery of creative uses for locally generated waste and byproducts, and the use of such innovative technologies to replace imports. [577]

E. F. Schumacher recounted his experiences with the Scott Bader Commonwealth, encouraging (often successfully) the worker-owners to undertake such ventures as a community auto repair shop, communally owned tools and other support for household gardening, a community woodworking shop for building and repairing furniture, and so forth. The effect of such measures was to take off some of the pressure to earn wages, so that workers might scale back their work hours. [578]

The potential for such common workspaces increases by an order of magnitude, of course, with the kinds of small, cheap, computerized machine tools we will consider later in this chapter.

The building, bottom-up, of local economies based on small-scale production with multiple-purpose machinery might well take place piecemeal, beginning with such small shops, at first engaged primarily in repair and remanufacture of existing machinery and appliances. As Peak Oil and the degradation of the national transportation system cause corporate logistic chains for spare parts to dry up, small garage and backyard machine shops may begin out of sheer necessity to take up the slack, custom-machining the spare parts needed to keep aging appliances in operation. From this, the natural progression would be to farming out the production of components among a number of such small shops, and perhaps designing and producing simple appliances from scratch. (An intermediate step might be “mass customization,” the custom design of modular accessories for mass-produced platforms.) In this manner, networked production of spare parts by small shops might be the foundation for a new industrial revolution.

As Jacobs described it, the Japanese bicycle industry had its origins in just such networking between custom producers of spare parts.

To replace these imports with locally made bicycles, the Japanese could have invited a big American or European bicycle manufacturer to establish a factory in Japan... Or the Japanese could have built a factory that was a slavish imitation of a European or American bicycle factory. They would have had to import most or all of the factory’s machinery, as well as hiring foreign production managers or having Japanese production managers trained abroad.... ...[Instead], shops to repair [imported bicycles] had sprung up in the big cities.... Imported spare parts were expensive and broken bicycles were too valuable to cannibalize the parts. Many repair shops thus found it worthwhile to make replacement parts themselves—not difficult if a man specialized in one kind of part, as many repairmen did. In this way, groups of bicycle repair shops were almost doing the work of manufacturing entire bicycles. That step was taken by bicycle assemblers, who bought parts, on contract, from repairmen; the repairmen had become “light manufacturers.” [579]

Karl Hess and David Morris, in Neighborhood Power, suggested a progression from retail to repair to manufacturing as the natural model for a transition to relocalized manufacturing. They wrote of a process by which “repair shops begin to transform themselves into basic manufacturing facilities...” [580] Almost directly echoing Jacobs, they envisioned a bicycle collective’s retail shop adding maintenance facilities, and then:

After a number of people have learned the skills in repairs in a neighborhood, a factory could be initiated to produce a few vital parts, like chains or wheels or tires. Finally, if the need arises, full-scale production of bicycles could be attempted.

Interestingly enough, Don Tapscott and Anthony Williams describe just such a process taking place in micromanufacturing facilities (about which more below) that have been introduced in the Third World. Indian villagers are using fab labs (again, see below) “to make replacement gears for out-of-date copying machines....” [581]

The same process could be replicated in many areas of production. Retail collectives might support community-supported agriculture as a primary source of supply, followed by a small canning factory and then by a glass recycling center to trade broken bottles and jars for usable ones on an arrangement with the bottling companies. [582] Again, the parallels with Jane Jacobs are striking:

Cities that replace imports significantly replace not only finished goods but, concurrently, many, many items of producers’ goods and services. They do it in swiftly emerging, logical chains. For example, first comes the local processing of fruit preserves that were formerly imported, then the production of jars or wrappings formerly imported for which there was no local market of producers until the first step had been taken. Or first comes the assembly of formerly imported pumps for which, once the assembly step has been taken, parts are imported; then the making of parts for which metal is imported; then possibly even the smelting of metal for these and other import-replacements. The process pays for itself as it goes along. When Tokyo went into the bicycle business, first came repair work cannibalizing imported bicycles, then manufacture of some of the parts most in demand for repair work, then manufacture of still more parts, finally assembly of whole, Tokyo-made bicycles. And almost as soon as Tokyo began exporting bicycles to other Japanese cities, there arose in some of those customer cities much the same process of replacing bicycles imported from Tokyo, ...as had happened with many items sent from city to city in the United States. [583]

A directly analogous process of import substitution can take place in the informal economy, with production for barter at the household and neighborhood level using household capital goods (about which more below) replacing the purchase of consumption goods in the wage economy.

Paul and Percival Goodman wrote, in Communitas, of the possibility of decentralized machining of parts by domestic industry, given the universal availability of power and the ingenuity of small machinery, coupled with assembly at a centralized location. It is, they wrote, “almost always cheaper to transport material than men.” [584]

A good example of this phenomenon in practice is the Japanese “shadow factories” during World War II. Small shops attached to family homes played an important role in the Japanese industrial economy, according to Nicholas Wood. Many components and subprocesses were farmed out for household manufacture, in home shops consisting of perhaps a few lathes, drill presses or milling machines. In the war, the government had actively promoted such “shadow factories,” distributing machine tools in workers’ homes in order to disperse concentrated industry and reduce its vulnerability to American strategic bombing. [585] After the war, the government encouraged workers to purchase the machinery. [586] As late as the late fifties, such home manufacturers were still typically tied to particular companies, in what amounted to industrial serfdom. But according to Wood, by the time of his writing (1964), many home manufacturers had become free agents, contracting out to whatever firm made the best offer. [587] The overhead costs of home production, after the war, were reduced by standardization and modular design. For example, household optical companies found it impossible at first to produce and stock the many sizes of lenses and prisms for the many different models. But subsequently all Japanese companies standardized their designs to a few models. [588]

A similar shadow factory movement emerged in England during the war, as described by Goodman: “Home manufacture of machined parts was obligatory in England during the last war because of the bombings, and it succeeded.” [589]

The Chinese pursued a system of localized production along roughly similar lines in the 1970s. According to Lyman van Slyke, they went a long way toward meeting their small machinery needs in this way. This was part of a policy known as the “Five Smalls,” which involved agricultural communes supplying their own needs locally (hydroelectric energy, agro-chemicals, cement, iron and steel smelting, and machinery) in order to relieve large-scale industry of the burden. In the case of machinery, specifically, van Slyke gives the example of the hand tractor:

...[O]ne of the most commonly seen pieces of farm equipment is the hand tractor, which looks like a large rototiller. It is driven in the field by a person walking behind it.... This particular design is common in many parts of Asia, not simply in China. Now, at the small-scale level, it is impossible for these relatively small machine shops and machinery plants to manufacture all parts of the tractor. In general, they do not manufacture the engine, the headlights, or the tires, and these are imported from other parts of China. But the transmission and the sheet-metal work and many of the other components may well be manufactured at the small plants. Water pumps of a variety of types, both gasoline and electric, are often made in such plants, as are a variety of other farm implements, right down to simple hand tools. In addition, in many of these shops, a portion of plant capacity is used to build machine tools. That is, some lathes and drill presses were being used not to make the farm machinery but to make additional lathes and drill presses. These plants were thus increasing their own future capabilities at the local level. Equally important is a machinery-repair capability. It is crucial, in a country where there isn’t a Ford agency just down the road, that the local unit be able to maintain and repair its own equipment. Indeed, in the busy agricultural season many small farm machinery plants close down temporarily, and the work force forms mobile repair units that go to the fields with spare parts and tools in order to repair equipment on the spot. Finally, a very important element is the training function played in all parts of the small-scale industry spectrum, but particularly in the machinery plants. Countless times we saw two people on a machine. One was a journeyman, the regular worker, and the second was an apprentice, a younger person, often a young woman, who was learning to operate the machine. [590]

It should be stressed that this wasn’t simply a repeat of the disastrous Great Leap Forward, which was imposed from above in the late 1950s. It was, rather, an example of local ingenuity in filling a vacuum left by the centrally planned economy. If anything, in the 1970s—as opposed to the 1950s—the policy was considered a painful concession to necessity, to be abandoned as soon as possible, rather than a vision pursued for its own sake. Van Slyke was told by those responsible for small-scale industry, “over and over again,” that their goals were to move “from small to large, from primitive to modern, and from here-and-there to everywhere.” [591] Aimin Chen reported in 2002 that the government was actually cracking down on local production under the “Five Smalls” in order to reduce idle capacity in the beleaguered state sector. [592] The centrally planned economy under state socialism, like the corporate economy, can only survive by suppressing small-scale competition.

The raw materials for such relocalized production are already in place in most neighborhoods, to a large extent, in the form of unused or underused appliances, power tools gathering dust in basements and garages, and the like. It’s all just waiting to be integrated into a local economy, as soon as producers can be hooked up to needs, and people realize that every need met by such means reduces their dependence on wage labor by an equal amount—and probably involves less labor and more satisfaction than working for the money. The problem is figuring out what’s lying around, who has what skills, and how to connect supply to demand. As Hess and Morris put it,

In one block in Washington, D.C., such a survey uncovered plumbers, electricians, engineers, amateur gardeners, lawyers, and teachers. In addition, a vast number of tools were discovered; complete workshops, incomplete machine-tool shops, and extended family relationships which added to the neighborhood’s inventory—an uncle in the hardware business, an aunt in the cosmetics industry, a brother teaching biology downtown. The organizing of a directory of human resources can be an organizing tool itself. [593]

Arguably the neighborhood workshop and the household microenterprise (which we will examine later in this chapter) achieve an optimal economy of scale: production at the threshold where a household producer good is fully utilized, without the overhead of a permanent hired staff or a stand-alone dedicated building.

The various thinkers quoted above wrote on community workshops at a time when the true potential of small-scale production machinery was just starting to emerge.

B. The Desktop Revolution and Peer Production in the Immaterial Sphere

Since the desktop revolution of the 1970s, computers have promised to be a decentralizing force on the same scale as electrical power a century earlier. The computer, according to Michael Piore and Charles Sabel, is “a machine that meets Marx’s definition of an artisan’s tool: it is an instrument that responds to and extends the productive capacities of the user.”

It is therefore tempting to sum the observations of engineers and ethnographers to the conclusion that technology has ended the domination of specialized machines over un- and semiskilled workers, and redirected progress down the path of craft production. The advent of the computer restores human control over the production process; machinery again is subordinated to the operator. [594]

As Johan Soderberg argues, “[t]he universally applicable computer run on free software and connected to an open network... have [sic] in some respects leveled the playing field. Through the global communication network, hackers are matching the coordinating and logistic capabilities of state and capital.” [595]

Indeed, the computer itself is the primary item of capital equipment in a growing number of industries, like music, desktop publishing and software design. The desktop computer, supplemented by assorted packages of increasingly cheap printing or sound editing equipment, is capable of doing what previously required a minimum investment of hundreds of thousands of dollars.

The growing importance of human capital, and the implosion of capital outlay costs required to enter the market, have had revolutionary implications for production in the immaterial sphere. In the old days, the immense outlay for physical assets was the primary basis for the corporate hierarchy’s power, and in particular for its control over human capital and other intangible assets.

As Luigi Zingales observes, the declining importance of physical assets relative to human capital has changed this. Physical assets, “which used to be the major source of rents, have become less unique and are not commanding large rents anymore.” And “the demand for process innovation and quality improvement... can only be generated by talented employees,” which increases the importance of human capital. [596] This is even more true since Zingales wrote, with the rise of what has been variously called the wikified workplace, [597] the hyperlinked organization, [598] etc. What Niall Cook calls Enterprise 2.0 [599] is the application of the networked platform technologies (blogs, wikis, etc.) associated with Web 2.0 to the internal organization of the business enterprise. It refers to the spread of self-managed peer network organization inside the corporation, with the internal governance of the corporation increasingly resembling the organization of the Linux developer community.

Tom Peters remarked in quite similar language, some six years earlier in The Tom Peters Seminar, on the changing balance of physical and human capital. Of Inc. magazine’s 500 top-growth companies, which included a good number of information, computer technology and biotech firms, 34% were launched on initial capital of less than $10,000, 59% on less than $50,000, and 75% on less than $100,000. [600] The only reason those companies remain viable is that they control the value created by their human capital. And the only way to do that is through the ownership of artificial property rights like patents, copyrights and trademarks.

In many information and culture industries, the initial outlay for entering the market in the broadcast days was in the hundreds of thousands of dollars or more. The old broadcast mass media, for instance, were “typified by high-cost hubs and cheap, ubiquitous, reception-only systems at the end. This led to a limited range of organizational models for production: those that could collect sufficient funds to set up a hub.” [601] The same was true of print periodicals, with the increasing cost of printing equipment from the mid-nineteenth century on serving as the main entry barrier for organizing the hubs. Between 1835 and 1850, the typical startup cost of a newspaper increased from $500 to $100,000, or from roughly $10,000 to $2.38 million in 2005 dollars. [602]
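The dollar conversions in that last sentence can be checked by backing out the implied price-level multipliers; this is simple arithmetic on the figures given in the text, not an independent inflation estimate:

```python
# Implied inflation multipliers behind the startup-cost figures above.
figures = {
    1835: (500, 10_000),           # nominal cost, stated 2005-dollar cost
    1850: (100_000, 2_380_000),
}
for year, (nominal, in_2005_dollars) in figures.items():
    print(f"{year}: {in_2005_dollars / nominal:.1f}x nominal -> 2005 dollars")
print(f"real increase, 1835-1850: {2_380_000 / 10_000:.0f}x")
# The real entry barrier rose roughly 240-fold in fifteen years.
```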

The networked economy, in contrast, is distinguished by “network architecture and the [low] cost of becoming a speaker.”

The first element is the shift from a hub-and-spoke architecture with unidirectional links to the end points in the mass media, to distributed architecture with multidirectional connections among all nodes in the networked information environment. The second is the practical elimination of communications costs as a barrier to speaking across associational boundaries. Together, these characteristics have fundamentally altered the capacity of individuals, acting alone or with others, to be active participants in the public sphere as opposed to its passive readers, listeners, or viewers. [603]

In the old days, the owners of the hubs—CBS News, the Associated Press, etc.—decided what you could hear. Today you can set up a blog, or record a podcast, and anybody in the world who cares enough to go to your URL can look at it free of charge (and anyone who agrees with it—or wants to tear it apart—can provide a hyperlink to his readers).

The central change that makes these things possible is that “the basic physical capital necessary to express and communicate human meaning is the connected personal computer.”

The core functionalities of processing, storage, and communications are widely owned throughout the population of users.... The high capital costs that were a prerequisite to gathering, working, and communicating information, knowledge, and culture, have now been widely distributed in the society. The entry barrier they posed no longer offers a condensation point for the large organizations that once dominated the information environment. [604]

The desktop revolution and the Internet mean that the minimum capital outlay for entering most of the entertainment and information industry has fallen to a few thousand dollars at most, and the marginal cost of reproduction is zero. If anything, that overstates the cost of entry in many cases, considering how rapidly computer value depreciates and the relatively minuscule cost of buying a five-year-old computer and adding RAM.

The networked environment, combined with endless varieties of cheap software for creating and editing content, makes it possible for the amateur to produce output of a quality once associated with giant publishing houses and recording companies. [605] That is true of the software industry and desktop publishing, and to a certain extent even of film (as witnessed by affordable editing technology and the success of Sky Captain).

In the case of the music industry, thanks to cheap equipment and software for high quality recording and sound editing, the costs of independently producing and distributing a high-quality album have fallen through the floor. Bassist Steve Lawson writes:

...[T]he recording process — studio time and expertise — used to be hugely expensive. But the cost of recording equipment has plummeted, just as the quality of the same has soared. Sure, expertise is still chargeable, but it’s no longer a non-negotiable part of the deal. A smart band with a fast computer can now realistically make a release quality album-length body of songs for less than a grand.... What does this actually mean? Well, it means that for me—and the hundreds of thousands of others like me—the process of making and releasing music has never been easier. The task of finding an audience, of seeding the discovery process, has never cost less or been more fun. It’s now possible for me to update my audience and friends (the cross-over between the two is happening on a daily basis thanks to social media tools) about what I’m doing—musically or otherwise—and to hear from them, to get involved in their lives, and for my music to be inspired by them.... So, if things are so great for the indies, does that mean loads of people are making loads of money? Not at all. But the false notion there is that any musicians were before! We haven’t moved from an age of riches in music to an age of poverty in music. We’ve moved from an age of massive debt and no creative control in music to an age of solvency and creative autonomy. It really is win/win. [606]

As Tom Coates put it, “the gap between what can be accomplished at home and what can be accomplished in a work environment has narrowed dramatically over the last ten to fifteen years.” [607]

Podcasting makes it possible to distribute “radio” and “television” programming, at virtually no cost, to anyone with a broadband connection. As radio historian Jesse Walker notes, satellite radio’s lackluster economic performance doesn’t mean people prefer to stick with AM and FM radio; it means, rather, that the iPod has replaced the transistor radio as the primary portable listening medium, and that downloaded files have replaced the live broadcast as the primary form of content. [608]

A network of amateur contributors has peer-produced an encyclopedia, Wikipedia, which Britannica sees as a rival.

It’s also true of news, with ever-expanding networks of amateurs in venues like Indymedia, with alternative news operations like those of Robert Parry, Bob Giordano and Greg Palast, and with natives and American troops blogging news firsthand from Iraq—all at the very same time the traditional broadcasting networks are relegating themselves to the stenographic regurgitation of press releases and press conference statements by corporate and government spokespersons, and “reporting” on celebrity gossip. Even conceding that the vast majority of shoe-leather reporting of original news is still done by hired professionals from a traditional journalistic background, blogs and other news aggregators are increasingly becoming the “new newspapers,” making better use of reporter-generated content than the old, high-overhead news organizations. But in fact most of the traditional media’s “original content” consists of verbatim conveyance of official press releases, which could just as easily be achieved by bloggers and news aggregators linking directly to the press releases at the original institutional sites. Genuine investigative reporting consumes an ever-shrinking portion of news organizations’ budgets.

The network revolution has drastically lowered the transaction costs of organizing education outside the conventional institutional framework. In most cases, the industrial model of education, based on transporting human raw material to a centrally located “learning factory” for processing, is obsolete. Over thirty years ago Ivan Illich, in Deschooling Society, proposed decentralized community learning nets that would put people in contact with the teachers they wished to learn from, and provide an indexed repository of learning materials. The Internet has made this a reality beyond Illich’s wildest dreams. MIT’s Open Courseware project was one early step in this direction. But most universities, even if they don’t have a full database of lectures, at least have some sort of online course catalog with bare-bones syllabi and assigned readings for many individual courses.

A more recent proprietary attempt at the same thing is the online university StraighterLine. [609] Critics like to point to various human elements of the learning process that students are missing, like individualized attention to students with problems grasping the material. This criticism might be valid, if StraighterLine were competing primarily with the intellectual atmosphere of small liberal arts colleges, with their low student-to-instructor ratios. But StraighterLine’s primary competition is the community college and state university, and its catalog [610] is weighted mainly toward the kinds of mandatory first- and second-year introductory courses that are taught by overworked grad assistants to auditoriums full of freshmen and sophomores. [611] The cost, around $400 per course, [612] is free of the conventional university’s activity fees and all the assorted overhead that comes from trying to manage thousands of people and physical plant at a single location. What’s more, StraighterLine offers the option of purchasing live tutorials. [613] Washington Monthly describes the thinking behind the business model:

Even as the cost of educating students fell, tuition rose at nearly three times the rate of inflation. Web-based courses weren’t providing the promised price competition—in fact, many traditional universities were charging extra for online classes, tacking a “technology fee” onto their standard (and rising) rates. Rather than trying to overturn the status quo, big, publicly traded companies like Phoenix were profiting from it by cutting costs, charging rates similar to those at traditional universities, and pocketing the difference.

This, Smith explained, was where StraighterLine came in. The cost of storing and communicating information over the Internet had fallen to almost nothing. Electronic course content in standard introductory classes had become a low-cost commodity. The only expensive thing left in higher education was the labor, the price of hiring a smart, knowledgeable person to help students when only a person would do. And the unique Smarthinking call-center model made that much cheaper, too. By putting these things together, Smith could offer introductory college courses à la carte, at a price that seemed to be missing a digit or two, or three: $99 per month, by subscription. Economics tells us that prices fall to marginal cost in the long run. Burck Smith simply decided to get there first.

StraighterLine, he argues, threatens to do to universities what Craigslist did to newspapers. Freshman intro courses, with auditoriums stuffed like cattle cars and low-paid grad students presiding over the operation, are the cash cow that supports the expensive stuff—like upper-level and grad courses, not to mention a lot of administrative perks. If the cash cow is killed off by cheap competition, it will have the same effect on universities that Craigslist is having on newspapers. [614]

Of course StraighterLine is far costlier and less user-friendly than it might be, if it were peer-organized and open-source. Imagine a similar project with open-source textbooks (or which assigned, with a wink and a nudge, digitized proprietary texts available via a file-sharing network), free lecture materials like those of MIT’s Open Courseware, and the creative use of email lists, blogs and wikis for the student community to help each other (much like the use of social networking tools for problem-solving among user communities for various kinds of computers or software).

For that matter, unauthorized course blogs and email lists created by students may have the same effect on StraighterLine that it is having on the traditional university—just as Wikipedia did to Encarta what Encarta did to the traditional encyclopedia industry.

The same model of organization can be extended to fields of employment outside the information and entertainment industries—particularly labor-intensive service industries, where human capital likewise outweighs physical capital in importance. The basic model is applicable in any industry with low requirements for initial capitalization and low or non-existent overhead. Perhaps the most revolutionary possibilities are in the temp industry. In my own work experience, I’ve seen that hospitals using agency nursing staff typically pay the staffing agency about three times what the agency nurse receives in pay. Cutting out the middleman, perhaps by means of some sort of cross between a workers’ co-op and a longshoremen’s union hiring hall, seems like a no-brainer. An AFL-CIO organizer in the San Francisco Bay area has attempted just such a project, as recounted by Daniel Levine. [615]
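The arithmetic behind that “no-brainer” is worth spelling out. In the sketch below only the roughly three-to-one markup comes from the text; the hourly wage and the co-op’s administrative load are invented placeholders:

```python
# Splitting the agency markup between the worker and the client.
nurse_pay = 30.0             # $/hr, hypothetical
agency_bill = 3 * nurse_pay  # the ~3x markup reported above
coop_admin = 0.15            # assumed co-op administrative load on pay

coop_pay = nurse_pay * 1.5               # pay members half again more
coop_bill = coop_pay * (1 + coop_admin)  # what the hospital is billed
print(f"agency: hospital pays ${agency_bill:.2f}/hr, nurse gets ${nurse_pay:.2f}")
print(f"co-op:  hospital pays ${coop_bill:.2f}/hr, nurse gets ${coop_pay:.2f}")
# Even with a 50% raise for members, the co-op undercuts the agency's
# bill rate by more than a third.
```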

The chief obstacle to such attempts is non-competition agreements signed by temp workers at their previous places of employment. Typically, a temp worker signs an agreement not to work independently for any of the firm’s clients, or work for them through another agency, for some period (usually three to six months) after quitting. Of course, this can be evaded fairly easily, if the new cooperative firm has a large enough pool of workers to direct particular assignments to those who aren’t covered by a non-competition clause in relation to that particular client.

And as we shall see in the next section, the implosion of capital outlay requirements even for physical production has had a similar effect on the relative importance of human and physical capital, in a considerable portion of manufacturing, and on the weakening of firm boundaries.

These developments have profoundly weakened corporate hierarchies in the information and entertainment industries, and created enormous agency problems as well. As the value of human capital increases, and the cost of physical capital investments needed for independent production by human capital decreases, the power of corporate hierarchies becomes less and less relevant. As the value of human relative to physical capital increases, the entry barriers become progressively lower for workers to take their human capital outside the firm and start new firms under their own control. Zingales gives the example of the Saatchi and Saatchi advertising agency. The largest block of shareholders, U.S. fund managers who controlled 30% of stock, thought that gave them effective control of the firm. They attempted to exercise this perceived control by voting down Maurice Saatchi’s proposed increased option package for himself. In response, the Saatchi brothers took their human capital (in actuality the lion’s share of the firm’s value) elsewhere to start a new firm, and left a hollow shell owned by the shareholders. [616]

Interestingly, in 1994 a firm like Saatchi and Saatchi, with few physical assets and a lot of human capital, could have been considered an exception. Not any more. The wave of initial public offerings of purely human capital firms, such as consultant firms, and even technology firms whose main assets are the key employees, is changing the very nature of the firm. Employees are not merely automata in charge of operating valuable assets but valuable assets themselves, operating with commodity-like physical assets. [617]

In another, similar example, the former head of Salomon Brothers’ bond trading group formed a new group with former Salomon traders responsible for 87% of the firm’s profits.

...if we take the standpoint that the boundary of the firm is the point up to which top management has the ability to exercise power..., the group was not an integral part of Salomon. It merely rented space, Salomon’s name, and capital, and turned over some share of its profits as rent. [618]

Marjorie Kelly gave the breakup of the Chiat/Day ad agency as an example of the same phenomenon.

...What is a corporation worth without its employees? This question was acted out... in London, with the revolutionary birth of St. Luke’s ad agency, which was formerly the London office of Chiat/Day. In 1995, the owners of Chiat/Day decided to sell the company to Omnicom—which meant layoffs were looming and Andy Law in the London office wanted none of it. He and his fellow employees decided to rebel. They phoned clients and found them happy to join the rebellion. And so at one blow, London employees and clients were leaving. Thus arose a fascinating question: What exactly did the “owners” of the London office now own? A few desks and files? Without employees and clients, what was the London branch worth? One dollar, it turned out. That was the purchase price—plus a percentage of profits for seven years—when Omnicom sold the London branch to Law and his cohorts after the merger. They renamed it St. Luke’s.... All employees became equal owners... Every year now the company is re-valued, with new shares awarded equally to all. [619]

David Prychitko remarked on the same phenomenon in the tech industry, the so-called “break-away” firms, as far back as 1991:

Old firms act as embryos for new firms. If a worker or group of workers is not satisfied with the existing firm, each has a skill which he or she controls, and can leave the firm with those skills and establish a new one. In the information age it is becoming more evident that a boss cannot control the workers as one did in the days when the assembly line was dominant. People cannot be treated as workhorses any longer, for the value of the production process is becoming increasingly embodied in the intellectual skills of the worker. This poses a new threat to the traditional firm if it denies participatory organization. The appearance of break-away computer firms leads one to question the extent to which our existing system of property rights in ideas and information actually protects bosses in other industries against the countervailing power of workers. Perhaps our current system of patents, copyrights, and other intellectual property rights not only impedes competition and fosters monopoly, as some Austrians argue. Intellectual property rights may also reduce the likelihood of break-away firms in general, and discourage the shift to more participatory, cooperative formats. [620]

C. The Expansion of the Desktop Revolution and Peer Production into the Physical Realm

Although peer production first emerged in the immaterial realm—i.e., information industries like software and entertainment—its transferability to the realm of physical production is also a matter of great interest.

1. Open-Source Design: Removal of Proprietary Rents from the Design Stage, and Modular Design. One effect of the shift in importance from tangible to intangible assets is the growing portion of product prices that reflects embedded rents on “intellectual property” and other artificial property rights rather than the material costs of production.

The radical nature of the peer economy, especially as “intellectual property” becomes increasingly unenforceable, lies in its potential to cause the portion of existing commodity price that results from such embedded rents to implode.
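A stylized decomposition shows what is at stake. The proportions below are invented for illustration, not drawn from any cited source:

    # Invented proportions, for illustration only: how much of a commodity's
    # price is actual production cost, and how much is embedded rent.
    production_cost = 30.00   # assumed: materials and labor
    ip_rent = 50.00           # assumed: rents on patents, copyrights, brands
    marketing = 20.00         # assumed: marketing and distribution markup

    price_with_rents = production_cost + ip_rent + marketing
    price_at_cost = production_cost   # the rent component competed away

    reduction = 1 - price_at_cost / price_with_rents
    print(f"${price_with_rents:.2f} -> ${price_at_cost:.2f} "
          f"({reduction:.0%} of the price implodes)")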

Open source hardware refers, at the most basic level, to the development and improvement of designs for physical goods on an open-source basis, with no particular mode of physical production being specified. The design stage ceases to be a source of proprietary value, but the physical production stage is not necessarily affected. In Richard Stallman’s terms, open source hardware means the design is free as in “free speech,” not free beer: openness removes only the portion of the product’s price that results from a proprietary design phase. Although the manufacturer is not hindered by patents on the design, he must still bear the costs of physical production. Edy Ferreira defined open-source hardware as

any piece of hardware whose manufacturing information is distributed using a license that provides specific rights to users without the need to pay royalties to the original developers. These rights include freedom to use the hardware for any purpose, freedom to study and modify the design, and freedom to redistribute copies of either the original or modified manufacturing information.... In the case of open source software (OSS), the information that is shared is software code. In OSH, what is shared is hardware manufacturing information, such as... the diagrams and schematics that describe a piece of hardware. [621]

At the simplest level, a peer network may develop a product design and make it publicly available; it may be subsequently built by any and all individuals or firms with the necessary production machinery, without coordinating their efforts with the original designer(s). A conventional manufacturer may produce open source designs, with feedback from the user community providing the main source of innovation.

Karim Lakhani describes this general phenomenon, the separation of open-source design from an independent production stage, as “communities driving manufacturers out of the design space,” with

users innovating and developing products that can out compete traditional manufacturers. But this effect is not just limited to software. In physical products..., users have been shown to be the dominant source of functionally novel innovations. Communities can supercharge this innovation mechanism. And may ultimately force companies out of the product design space. Just think about it—for any given company—there are more people outside the company that have smarts about a particular technology or a particular use situation then [sic] all the R&D engineers combined. So a community around a product category may have more smart people working on the product then [sic] the firm it self. So in the end manufacturers may end up doing what they are supposed to—manufacture—and the design activity might move... into the community. [622]

As one example, Vinay Gupta has proposed a large-scale library of open-source hardware designs as an aid to international development:

An open library of designs for refrigerators, lighting, heating, cooling, motors, and other systems will encourage manufacturers, particularly in the developing world, to leapfrog directly to the most sustainable technologies, which are much cheaper in the long run. Manufacturers will be encouraged to use the efficient designs because they are free, while inefficient designs still have to be paid for. The library could also include green chemistry and biological solutions to industry challenges.... This library should be free of all intellectual property restrictions and open for use by any manufacturer, in any nation, without charge. [623]

One item of his own design, the Hexayurt, is “a refugee shelter system that uses an approach based on ‘autonomous building’ to provide not just a shelter, but a comprehensive family support unit which includes drinking water purification, composting toilets, fuel-efficient stoves and solar electric lighting.” [624] The basic construction materials for the floor, walls and roof cost about $200. [625]

Michel Bauwens, of the P2P Foundation, provides a short list of some of the more prominent open-design projects:

The Grid Beam Building System
The Hexayurt
Movisi Open Design Furniture
Open Cores
Open Source Green Vehicle
Open Source Scooter
The Ronja Wireless Device
Open Source Sewing patterns
Velomobiles
Open Energy [626]

One of the most ambitious attempts at such an open design project is Open Source Ecology, which is developing an open-source, virally reproducible, vernacular technology-based “Global Village Construction Set” at its experimental site at Factor E Farm. [627] (Of course OSE is also directly involved in the physical implementation of its own designs; it is a manufacturing as well as a design network.)

A more complex scenario involves the coordination of an open source design stage with the production process, with the separate stages of production distributed and coordinated by the same peer network that created the design. Dave Pollard provides one example:

Suppose I want a chair that has the attributes of an Aeron without the $1800 price tag, or one with some additional attribute (e.g. a laptop holder) the brand name doesn’t offer? I could go online to a Peer Production site and create an instant market, contributing the specifications..., and, perhaps a maximum price I would be willing to pay. People with some of the expertise needed to produce it could indicate their capabilities and self-organize into a consortium that would keep talking and refining until they could meet this price.... Other potential buyers could chime in, offering more or less than my suggested price. Based on the number of ‘orders’ at each price, the Peer Production group could then accept orders and start manufacturing.... As [Erick] Schonfeld suggests, the intellectual capital associated with this instant market becomes part of the market archive, available for everyone to see, stripping this intellectual capital cost, and the executive salaries, dividends and corporate overhead out of the cost of this and other similar product requests and fulfillments, so that all that is left is the lowest possible cost of material, labour and delivery to fill the order. And the order is exactly what the customer wants, not the closest thing in the mass-producer’s warehouse. [628]
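The mechanism Pollard describes is, in effect, a group-order book with a production trigger. A minimal sketch, with hypothetical names, prices, and threshold:

    # A minimal sketch of Pollard's "instant market": buyers post the most
    # they will pay, and a self-organized producer group starts manufacturing
    # once enough bids clear its floor price. All values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Bid:
        buyer: str
        max_price: float

    def accepted_orders(bids, floor_price, min_units):
        """Return the viable orders if enough buyers meet the floor price."""
        viable = [b for b in bids if b.max_price >= floor_price]
        return viable if len(viable) >= min_units else None

    bids = [Bid("alice", 900.0), Bid("bob", 750.0), Bid("carol", 820.0)]
    orders = accepted_orders(bids, floor_price=800.0, min_units=2)
    if orders:
        print(f"production starts: {len(orders)} orders at or above $800")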

In any case, the removal of proprietary control over the implementation of designs means that the production phase will be subject to competitive pressure to adopt the most efficient production methods—a marked departure from the present, where “intellectual property” enables privileged producers to set prices as a cost-plus markup based on whatever inefficient production methods they choose.

The most ambitious example of an open-source physical production project is the open source car, or “OScar.”

Can open-source practices and approaches be applied to make hardware, to create tangible and physical objects, including complex ones? Say, to build a car?... Markus Merz believes they can. The young German is the founder and “maintainer” (that’s the title on his business card) of the OScar project, whose goal is to develop and build a car according to open-source (OS) principles. Merz and his team aren’t going for a super-accessorized SUV—they’re aiming at designing a simple and functionally smart car. And, possibly, along the way, reinvent transportation. [629]

As of June 2009, the unveiling of a prototype—a two-seater vehicle powered by hydrogen fuel cells—was scheduled in London. [630]

Well, actually there’s a fictional example of an open-source project even more ambitious than the OScar: the open-source moon project, a volunteer effort of a peer network of thousands, in Craig DeLancey’s “Openshot.” The project’s ship (the Stallman), built largely with Russian space agency surplus, beats a corporate-funded proprietary project to the moon. [631]

A slightly less ambitious open-source manufacturing project, and probably more relevant to the needs of most people in the world, is Open Source Ecology’s open-source tractor (LifeTrac). It’s designed for inexpensive manufacture, with modularity and easy disassembly, for lifetime service and low-cost repair. It includes, among other things, a well-drilling module, and is designed to serve as a prime mover for machinery like OSE’s Compressed Earth Block Press and sawmill. [632]

When physical manufacturing is stripped of the cost of proprietary design and technology, and the consumer-driven, pull model of distribution strips away most of the immense marketing cost, we will find that the portion of price formerly made up of such intangibles will implode, and the remaining price based on actual production cost will be as much as an order of magnitude lower.

Just as importantly, open-source design reduces cost not only by removing proprietary rents from “intellectual property,” but by the substantive changes in design that it promotes. Eliminating patents removes legal barriers to the competitive pressure for interoperability and reparability. And interoperability and reparability promote the kind of modular design that is most conducive to networked production, with manufacture of components distributed among small shops producing a common design.

The advantages of modular design of physical goods are analogous to those in the immaterial realm.

Current thinking says peer production is only suited to creating information-based goods—those made of bits, inexpensive to produce, and easily subdivided into small tasks and components. Software and online encyclopedias have this property. Each has small discrete tasks that participants can fulfill with very little hierarchical direction, and both can be created with little more than a networked computer. While it’s true that peer production is naturally suited to bit products, it’s also true that many of the attributes and advantages of peer production can be replicated for products made of atoms. If physical products are designed to be modular—i.e., they consist of many interchangeable parts that can be readily swapped in or out without hampering the performance of the overall product—then, theoretically at least, large numbers of lightly coordinated suppliers can engage in designing and building components for the product, much like thousands of Wikipedians add to and modify Wikipedia’s entries. [633]

This is hardly mere theory; it is already the reality of China’s motorcycle industry: “The Chinese approach emphasizes a modular motorcycle architecture that enables suppliers to attach component subsystems (like a braking system) to standard interfaces.” [634] And in an open-source world, independent producers could make unauthorized modular components or accessories, as well.
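In software terms, such a “standard interface” is simply an abstract contract that any supplier’s module can satisfy. A sketch of the idea, with invented brake characteristics, showing how a platform decouples component makers from the platform designer:

    # Sketch of the modular-platform idea: any component satisfying the
    # standard interface can be attached, whoever produced it. The brake
    # characteristics below are invented for illustration.
    from abc import ABC, abstractmethod

    class BrakeModule(ABC):
        """The standard interface a supplier's braking subsystem must meet."""
        @abstractmethod
        def apply(self, lever_force: float) -> float:
            """Return braking torque for a given lever force."""

    class DrumBrake(BrakeModule):
        def apply(self, lever_force: float) -> float:
            return lever_force * 1.2    # assumed characteristic

    class DiscBrake(BrakeModule):
        def apply(self, lever_force: float) -> float:
            return lever_force * 2.0    # assumed characteristic

    class Motorcycle:
        def __init__(self, brake: BrakeModule):
            self.brake = brake          # any conforming module will do

    bike = Motorcycle(DiscBrake())      # supplier A's component
    bike.brake = DrumBrake()            # supplier B's, swapped without redesign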

Costs from outlays on physical capital are not a constant, and modular design is one factor that can cause those costs to fall significantly. It enables a peer network to break a physical manufacturing project down into discrete sub-projects, with many of the individual modules perhaps serving as components in more than one larger appliance. According to Christian Siefkes,

Products that are modular, that can be broken down into smaller modules or components which can be produced independently before being assembled into a whole, fit better into the peer mode of production than complex, convoluted products, since they make the tasks to be handled by a peer project more manageable. Projects can build upon modules produced by others and they can set as their own (initial) goal the production of a specific module, especially if components can be used stand-alone as well as in combination. The Unix philosophy of providing lots of small specialized tools that can be combined in versatile ways is probably the oldest expression in software of this modular style. The stronger emphasis on modularity is another phenomenon that follows from the differences between market production and peer production. Market producers have to prevent their competitors from copying or integrating their products and methods of production so as not to lose their competitive advantage. In the peer mode, re-use by others is good and should be encouraged, since it increases your reputation and the likelihood of others giving something back to you.... Modularity not only facilitates decentralized innovation, but should also help to increase the longevity of products and components. Capitalism has developed a throw-away culture where things are often discarded when they break (instead of being repaired), or when one aspect of them is no longer up-to-date or in fashion. In a peer economy, the tendency in such cases will be to replace just a single component instead of the whole product, since this will generally be the most labor-efficient option (compared to getting a new product, but also to manually repairing the old one). [635]

Siefkes is wrong only in referring to producers under the existing corporate system as “market producers,” since absent “intellectual property” as a legal bulwark to proprietary design, the market incentive would be toward designing products that were interoperable with other platforms, and toward competition in the design of accessories and replacement parts tailored to other companies’ platforms. And given the absence of legal barriers to the production of such interoperable accessories, the market incentive would be toward designing platforms as broadly interoperable as possible.

This process of modularization is already being promoted within corporate capitalism, although the present system is struggling mightily—and unsuccessfully—to keep itself from being torn apart by the resulting increase in productive forces. As Eric Hunting argues, the high costs of technical innovation, the difficulty of capturing value from it, and the mass customization or long tail market, taken together, create pressures for common platforms that can be easily customized between products, and for modularization of components that can be used for a wide variety of products. And Hunting points out, as we already saw in regard to flexible manufacturing networks in Chapters Four and Five, that the predominant “outsource everything” and “contract manufacturing” model increasingly renders corporate hubs obsolete, and makes it possible for contractees to circumvent the previous corporate principals and undertake independent production on their own account.

Industrial ecologies are precipitated by situations where traditional industrial age product development models fail in the face of very high technology development overheads or very high demassification in design driven by desire for personalization/customization producing Long Tail market phenomenon [sic]. A solution to these dilemmas is modularization around common architectural platforms in order to compartmentalize and distribute development cost risks, the result being ‘ecologies’ of many small companies independently and competitively developing intercompatible parts for common product platforms—such as the IBM PC. The more vertical the market profile for a product the more this trend penetrates toward production on an individual level due [to] high product sophistication coupled to smaller volumes.... Competitive contracting regulations in the defense industry (when they’re actually respected...) tend to, ironically, turn many kinds of military hardware into open platforms by default, offering small businesses a potential to compete with larger companies where production volumes aren’t all that large to begin with. Consequently, today we have a situation where key components of some military vehicles and aircraft are produced on a garage-shop production level by companies with fewer than a dozen employees. All this represents an intermediate level of industrial demassification that is underway today and not necessarily dependent upon open source technology or peer-to-peer activity but which creates a fertile ground for that in the immediate future and drives the complementary trend in the miniaturization of machine tools. [636]

In other words, the further production cost falls relative to the costs of design, the greater the economic incentive to modular design as a way of defraying design costs over as many products as possible.

In an email to the Open Manufacturing list, Hunting summed up the process more succinctly. Industrial relocalization

compels the modularization of product design, which results in the replacement of designs by platforms and the competitive commoditization of their components. Today, automobiles are produced as whole products made with large high-capital-cost machinery using materials—and a small portion of pre-made components—transported long distances to a central production site from which the end product is shipped with a very poor transportation efficiency to local sales/distribution points. In the future automobiles may be assembled on demand in the car dealership from modular components which ship with far greater energy efficiency than whole cars and can come from many locations. By modularizing the design of the car to allow for this, that design is changed from a product to a platform for which many competitors, using much smaller less expensive means of production, can potentially produce parts to accommodate customers desire for personalization and to extend the capabilities of the automobile beyond what was originally anticipated. End-users are more easily able to experiment in customization and improvement and pursue entrepreneurship based on this innovation at much lower start-up costs. This makes it possible to implement technologies for the automobile—like alternative energy technology—earlier auto companies may not have been willing to implement because of a lack of competition and because their capital costs for their large expensive production tools and facilities take so long (20 years, typically) to amortize. THIS is the reason why computers, based on platforms for modular commodity components, have evolved so rapidly compared to every other kind of industrial product and why the single-most advanced device the human race has ever produced is now something most anyone can afford and which a child can assemble in minutes from parts sourced around the world. [637]

The beauty of modular design, Hunting writes elsewhere (in the specific case of modular prefab housing), is that the bulk of research and development man-hours are incorporated into the components themselves, which can be duplicated across many different products. The components are smart, but the combinations are dumbed-down and user friendly. A platform is a way to spread the development costs of a single component over as many products as possible.

But underneath there are these open structural systems that are doing for house construction what the standardized architecture of the IBM PC did for personal computing, encoding a lot of engineering and pre-assembly labor into small light modular components created in an industrial ecology so that, at the high level of the end-user, it’s like Lego and things go together intuitively with a couple of hand tools. In the case of the Jeriko and iT houses based on T-slot profiles, this is just about a de-facto public domain technology, which means a zillion companies around the globe could come in at any time and start making compatible hardware. We’re tantalizingly close to factoring out the ‘experts’ in basic housing construction just like we did with the PC where the engineers are all down in the sub-components, companies don’t actually manufacture computers they just do design and assemble-on-demand, and now kids can build computers in minutes with parts made all over the world. Within 20 years you’ll be going to places like IKEA and Home Depot and designing your own home by picking parts out of catalogs or showrooms, having them delivered by truck, and then assembling most of them yourself with about the same ease you put in furniture and home appliances. [638]

More recently, Hunting wrote of the role of modularized development for common platforms in the history of the computer industry:

We commonly attribute the rapid shrinking in scale of the computer to the advance of integrated circuit technology. But that’s just a small part of the story that doesn’t explain the economy and ubiquity of computers. The real force behind that was a radically different industrial paradigm that emerged more-or-less spontaneously in response to the struggle companies faced in managing the complexity of the new technology. Put simply, the computer was too complicated for any one corporation to actually develop independently—not even for multi-national behemoths like IBM that once prided itself on being able to do everything. A radically new way of doing things was needed to make the computer practical. The large size of early computers was a result not so much of the primitive nature of the technology of the time but on the fact that most of that early technology was not actually specific to the application of computers. It was repurposed from electronic components that were originally designed for other kinds of machines. Advancing the technology to where the vast diversity of components needed could be made and optimized specifically for the computer demanded an extremely high development investment—more than any one company in the world could actually afford. There simply wasn’t a big enough computer market to justify the cost of development of very sophisticated parts exclusively for computers. While performing select R&D on key components, early computer companies began to position themselves as systems integrators for components made by sub-contracted suppliers rather than manufacturing everything themselves. While collectively the development of the full spectrum of components computers needed was astronomically expensive, individually they were quite within the means of small businesses and once the market for computers reached a certain minimum scale it became practical for such companies to develop parts for these other larger companies to use in their products. This was aided by progress in other areas of consumer, communications, and military digital electronics—a general shift to digital electronics—that helped create larger markets for parts also suited to computer applications. The more optimized for computer use subcomponents became, the smaller and cheaper the computer as a whole became and the smaller and cheaper the computer the larger the market for it, creating more impetus for more companies to get involved in computer-specific parts development. ICs were, of course, a very key breakthrough but the nature of their extremely advanced fabrication demanded extremely large product markets to justify. The idea of a microprocessor chip exclusive to any particular computer is actually a rather recent phenomenon even for the personal computer industry. Companies like Intel now host a larger family of concurrently manufactured and increasingly use-specialized microprocessors than was ever imaginable just a decade ago. For this evolution to occur the nature of the computer as a designed product had to be very different from other products common to industrial production. Most industrial products are monolithic in the sense that they are designed to be manufactured whole from raw materials and very elemental parts in one central mass production facility. But the design of a computer isn’t keyed to any one resulting product. It has an ‘architecture’ that is independent of any physical form. 
A set of component function and interface standards that define the electronics of a computer system but not necessarily any particular physical configuration. Unlike other technologies, electronics is very mutable. There are an infinite variety of potential physical configurations of the same electronic circuit. This is why electronics engineering can be based on iconographic systems akin to mathematics—something seen in few other industries to a comparable level of sophistication. (chemical engineering) So the computer is not a product but rather a platform that can assume an infinite variety of shapes and accommodate an infinite diversity of component topologies as long as their electronic functions conform to the architecture. But, of course, one has to draw the line somewhere and with computer parts this is usually derived from the topology of standardized component connections and the most common form factors for components. Working from this a computer designer develops configurations of components integrated through a common motherboard that largely defines the overall shape possible for the resulting computer product. Though companies like Apple still defy the trend, even motherboards and enclosures are now commonly standardized, which has ironically actually encouraged diversity in the variety of computer forms and enclosure designs even if their core topological features are more-or-less standardized and uniform. Thus the computer industry evolved into a new kind of industrial entity; an Industrial Ecology formed of a food-chain of interdependencies between largely independent, competitive, and globally dispersed companies defined by component interfaces making up the basis of computer platform architectures. This food chain extends from discrete electronics components makers, through various tiers of sub-system makers, to the computer manufacturers at the top—though in fact they aren’t manufacturing anything in the traditional sense. They just cultivate the platforms, perform systems integration, customer support, marketing, and—decreasingly as even this is outsourced to contract job shops—assemble the final products. For an Industrial Ecology to exist, an unprecedented degree of information must flow across this food chain as no discrete product along this chain can hope to have a market unless it conforms to interface and function standards communicated downward from higher up the chain. This has made the computer industry more open than any other industry prior to it. Despite the obsessions with secrecy, propriety, and intellectual property among executives, this whole system depends on an open flow of information about architectures, platforms, interfaces standards, software, firmware, and so on—communicated through technical reference guides and marketing material. This information flow exists to an extent seen nowhere else in the Industrial Age culture.... Progressive modularization and interoperability standardization tends to consolidate and simplify component topologies near the top of the food chain. This is why a personal computer is, today, so simple to assemble that a child can do it—or for that matter an end-user or any competitor to the manufacturers at the top. All that ultimately integrates a personal computer into a specific physical form is the motherboard and the only really exclusive aspect of that is its shape and dimensions and an arrangement of parts which, due to the nature of electronics, is topologically mutable independent of function. 
There are innumerable possible motherboard forms that will still work the same as far as software is concerned. This made the PC an incredibly easy architecture to clone for anyone who could come up with some minor variant of that motherboard to circumvent copyrights, a competitive operating system, a work-around the proprietary aspects of the BIOS, and could dip into that same food chain and buy parts in volume. Once an industrial ecology reaches a certain scale, even the folks at the top become expendable. The community across the ecology has the basic knowledge necessary to invent platforms of its own, establish its own standards bottom-up, and seek out new ways to reach the end-user customer. And this is what happened to IBM when it stupidly allowed itself to become a bottleneck to the progress of the personal computer in the eyes of everyone else in its ecology. That ecology, for sake of its own growth, simply took the architecture of the PC from IBM and established its own derivative standards independent of IBM—and there was nothing even that corporate giant could ultimately do about it.... ...Again, this is all an astounding revolution in the way things are supposed to work in the Industrial Age. A great demassification of industrial power and control. Just imagine what the car industry would be like if things worked like this—as well one should as this is, in fact, coming. Increasingly, the model of the computer industry is finding application in a steadily growing number of other industries. Bit by bit, platforms are superseding products and Industrial Ecologies are emerging around them. [639]

The size limitations of fabrication in the small shop, and the lack of facilities for plastic injection molding or sheet metal stamping of very large objects, constitute a further impetus to modular design.

By virtue of the dimensional limits resulting from the miniaturization of fabrication systems, Post-Industrial design favors modularity following a strategy of maximum diversity of function from a minimum diversity of parts and materials—Min-A-Max.... Post-industrial artifacts tend to exhibit the characteristic of perpetual demountability, leading to ready adaptive reuse, repairability, upgradeability, and recyclability. By extension, they compartmentalize failure and obsolescence to discrete demountable components. A large Post-Industrial artifact can potentially live for as long as its platform can evolve—potentially forever. A scary prospect for the conventional manufacturer banking on the practice of planned obsolescence.... [640]

One specific example Hunting cites is the automobile. It was, more than anything, “the invention of pressed steel welded unibody construction in the 1930s,” with its requirement for shaping sheet metal in enormous multi-story stamping presses, that ruled out modular production by a cooperative ecology of small manufacturers. Against that background, Hunting sets the abortive Africar project of the 1980s, with a modular design suitable for networked production in small shops. [641] The Africar had a jeeplike body design; but instead of pressed sheet metal, its surface was put together entirely from components capable of being cut from flat materials (sheet metal or plywood) using subtractive machinery like cutting tables, attached to a structural frame of cut or bent steel.

A more recent modular automobile design project is Local Motors. It’s an open design community with all of its thousands of designs shared under Creative Commons licenses. All of them are designed around a common light-weight chassis, which is meant to be produced economically in runs as small as two thousand. Engines, brakes, batteries and other components are modular, so as to be interchangeable between designs. Components are produced in networks of “microfactories.” The total capital outlay required to produce a Local Motors design is a little over a million dollars (compared to hundreds of millions for a conventional auto plant), with minimal inventories and turnaround times a fifth those of conventional Detroit plants. [642]

Michel Bauwens, in commenting on Hunting’s remarks, notes among the “underlying trends... supporting the emergence of peer production in the physical world,”

the ‘distribution’ of production capacity, i.e. lower capital requirements and modularisation making possible more decentralized and localized production, which may eventually be realized through the free self-aggregation of producers. [643]

Modular design is an example of stigmergic coordination. Stigmergy was originally a concept developed in biology, to describe the coordination of actions between a number of individual organisms through the individual response to markers, without any common decision-making process. Far from the stereotype of the “hive mind,” ants—the classic example of biological stigmergy—coordinate their behavior entirely through the individual’s reading of and reaction to chemical markers left by other individuals. [644] As defined in the Wikipedia entry, stigmergy is

a mechanism of spontaneous, indirect coordination between agents or actions, where the trace left in the environment by an action stimulates the performance of a subsequent action, by the same or a different agent. Stigmergy is a form of self-organization. It produces complex, apparently intelligent structures, without need for any planning, control, or even communication between the agents. As such it supports efficient collaboration between extremely simple agents, who lack any memory, intelligence or even awareness of each other. [645]

The development of the platform is a self-contained and entirely self-directed action by an individual or a peer design group. Subsequent modules are developed with reference to the platform, but the design of each module is likewise entirely independent and self-directed; no coordination with the platform developer or the developers of other modules takes place. The effect is to break design down into numerous manageable units.
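A toy simulation makes the stigmergic mechanism concrete: the agents below never communicate directly, yet a shared outcome emerges from each one’s response to markers left by its predecessors. All parameters are arbitrary illustrations:

    # Toy stigmergy: agents coordinate only through markers left in the
    # environment, never directly. Parameters are arbitrary illustrations.
    import random

    pheromone = {"short": 1.0, "long": 1.0}   # marker strength per path

    def choose_path(markers):
        """Pick a path with probability proportional to its marker strength."""
        total = sum(markers.values())
        r = random.uniform(0, total)
        for path, level in markers.items():
            r -= level
            if r <= 0:
                return path
        return path

    for _ in range(200):                      # 200 independent agents
        p = choose_path(pheromone)
        # the short path is traversed faster, so it is reinforced more
        pheromone[p] += 1.0 if p == "short" else 0.5
        for path in pheromone:                # markers slowly evaporate
            pheromone[path] *= 0.99

    print(pheromone)   # "short" almost always ends up dominant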

2. Reduced Transaction Costs of Aggregating Capital. We will consider the cheapening of actual physical tools in the next section. But even when the machinery required for physical production is still expensive, the reduction of transaction costs involved in aggregating funds is bringing on a rapid reduction in the cost of physical production. In addition, networked organization increases the efficiency of physical production by making it possible to pool more expensive capital equipment and make use of “spare cycles.” This possibility was hinted at by proposals for pooling capital outlays through cooperative organization even back in the 1970s, as we saw in the first section. But the rise of network culture takes it to a new level (which, again, we will consider in the next section). As a result, Stallman’s distinction between “free speech” and “free beer” is eroding even when tools themselves are costly. Michel Bauwens writes:

P2P can arise not only in the immaterial sphere of intellectual and software production, but wherever there is access to distributed technology: spare computing cycles, distributed telecommunications and any kind of viral communicator meshwork.

P2P can arise wherever other forms of distributed fixed capital is [sic] available: such is the case for carpooling, which is the second mode of transportation in the U.S....

P2P can arise wherever financial capital can be distributed. Initiatives such as the ZOPA bank point in that direction. Cooperative purchase and use of large capital goods are a possibility.... [646]

As the reference to “distributed financial capital” indicates, the availability of crowdsourced and distributed means of aggregating dispersed capital is as important as the implosion of outlay costs for actual physical capital. A good example of such a system is the Open Source Hardware Bank, a microcredit network organized by California hardware hackers to pool capital for funding new open source hardware projects. [647]

The availability (or unavailability) of capital to working-class people will have a significant effect on the rate of self-employment and small business formation. The capitalist credit system, in particular, is biased toward large-scale, conventional, absentee-owned firms. David Blanchflower and Andrew Oswald [648] found that childhood personality traits and test scores had almost no value in predicting adult entrepreneurship. On the other hand, access to startup capital was the single biggest factor in predicting self-employment. There is a strong correlation between self-employment and having received an inheritance or a gift. [649] “NSS data indicate that most small businesses were begun not with bank loans but with own or family money....” [650] The clear implication is that there are “undesirable impediments to the market supply of entrepreneurship.” [651] In short, the bias of the capitalist credit system toward conventional capitalist enterprise means that the rate of wage employment is higher, and self-employment is lower, than their likely free market values. The lower the capital outlays required for self-employment, and the easier it is to aggregate such capital outside the capitalist credit system, the more self-employment will grow as a share of the total labor market.

Jed Harris, at the Anomalous Presumptions blog, reiterates Bauwens’ point that peer production makes it possible to produce without access to large amounts of capital. “The change that enables widespread peer production is that today, an entity can become self-sustaining, and even grow explosively, with very small amounts of capital. As a result it doesn’t need to trade ownership for capital, and so it doesn’t need to provide any return on investment.” [652]

Charles Johnson adds that, because of the new possibilities the Internet provides for lowering the transaction costs entailed in networked mobilization of capital, peer production can take place even when significant capital investments are required—without relying on finance by large-scale sources of venture capital:

it’s not just a matter of projects being able to expand or sustain themselves with little capital.... It’s also a matter of the way in which both emerging distributed technologies in general, and peer production projects in particular, facilitate the aggregation of dispersed capital—without it having to pass through a single capitalist chokepoint, like a commercial bank or a venture capital fund.... Meanwhile, because of the way that peer production projects distribute their labor, peer-production entrepreneurs can also take advantage of spare cycles on existing, widely-distributed capital goods—tools like computers, facilities like offices and houses, software, etc. which contributors own, which they still would have owned personally or professionally whether or not they were contributing to the peer production project.... So it’s not just a matter of cutting total aggregate costs for capital goods...; it’s also, importantly, a matter of new models of aggregating the capital goods to meet whatever costs you may have, so that small bits of available capital can be rounded up without the intervention of money-men and other intermediaries. [653]

So network organization not only lowers the transaction costs of aggregating capital for the purchase of physical means of production, but also increases the utilization of the means of production when they are expensive.
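The “spare cycles” point yields to back-of-the-envelope arithmetic; every number below is an assumption for illustration:

    # Back-of-envelope: a tool that sits idle most of the time costs far
    # less per member when pooled. All numbers are assumptions.
    tool_cost = 50_000        # assumed: e.g., a CNC milling machine
    hours_per_member = 80     # assumed: each member's yearly use
    usable_hours = 2_000      # assumed: machine-hours available per year

    members_served = usable_hours // hours_per_member   # 25 members
    print(f"one tool serves {members_served} members")
    print(f"outlay per member: solo ${tool_cost:,} "
          f"vs pooled ${tool_cost // members_served:,}")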

3. Reduced Capital Outlays for Physical Production. As described so far, the open-source model only removes proprietary rents from the portion of the production process—the design stage—that has no material cost, and from the process of aggregating capital. As Richard Stallman put it, to repeat, it’s about “free speech” rather than “free beer.” Simply removing proprietary rents from design, and removing all transaction costs from the free transfer of digital designs for automated production, will have a revolutionary effect by itself. Marcin Jakubowski, of Factor E Farm, writes:

The unique contribution of the information age arises in the proposition that data at one point in space allows for fabrication at another, using computer numerical control (CNC) of fabrication. This sounds like an expensive proposition, but that is not so if open source fabrication equipment is made available. With low cost equipment and software, one is able to produce or acquire such equipment at approximately $5k for a fully-equipped lab with metal working, cutting, casting, and electronics fabrication, assisted by open source CNC. [654]

Or as Janne Kyttänen describes it:

I’m trying to do for products what has already happened to music and digital photography, money, literature—to store them as information and be able to send the data files around the world to be produced. By doing this, you can reduce the waste of the planet, the labor cost, transportation... it’s going to have a huge impact in the next couple of decades for the manufacturing of goods; we believe it’s a new industrial revolution. We will be able to produce products without using the old mass production infrastructure that’s been around for two hundred years and is fully out of date. [655]
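What “data at one point in space allows for fabrication at another” means in practice is that a design reduces to a machine-readable file. A minimal sketch: the function below emits standard G-code toolpath commands for a square outline (the part dimensions and feed rate are hypothetical), and the resulting text file, not the object, is what travels:

    # Minimal sketch: a design reduced to a file of standard G-code toolpath
    # commands that any compatible CNC machine elsewhere can execute. The
    # part and feed rate are hypothetical.
    def square_outline_gcode(side_mm: float, feed: int = 300) -> str:
        """Emit G-code tracing a square of the given side length."""
        return "\n".join([
            "G21",                         # units: millimeters
            "G90",                         # absolute positioning
            "G0 X0 Y0",                    # rapid move to the origin
            f"G1 X{side_mm} Y0 F{feed}",   # feed moves along the four edges
            f"G1 X{side_mm} Y{side_mm}",
            f"G1 X0 Y{side_mm}",
            "G1 X0 Y0",
        ])

    # The file is the product: send it anywhere with a compatible machine.
    print(square_outline_gcode(100.0))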

Jakubowski’s reference to the declining cost of fabrication equipment suggests that the revolution in open-source manufacturing goes beyond the design stage, and promises to change the way physical production itself is organized. Chris Anderson is not the first, and probably won’t be the last, to point to the parallels between what the desktop computer revolution did to the information and culture industries, and what the desktop manufacturing revolution will do in the physical realm:

The tools of factory production, from electronics assembly to 3-D printing, are now available to individuals, in batches as small as a single unit. Anybody with an idea and a little expertise can set assembly lines in China into motion with nothing more than some keystrokes on their laptop. A few days later, a prototype will be at their door, and once it all checks out, they can push a few more buttons and be in full production, making hundreds, thousands, or more. They can become a virtual micro-factory, able to design and sell goods without any infrastructure or even inventory; products can be assembled and drop-shipped by contractors who serve hundreds of such customers simultaneously. Today, micro-factories make everything from cars to bike components to bespoke furniture in any design you can imagine. The collective potential of a million garage tinkerers is about to be unleashed on the global markets, as ideas go straight into production, no financing or tooling required. “Three guys with laptops” used to describe a Web startup. Now it describes a hardware company, too. “Hardware is becoming much more like software,” as MIT professor Eric von Hippel puts it. That’s not just because there’s so much software in hardware these days, with products becoming little more than intellectual property wrapped in commodity materials, whether it’s the code that drives the off-the-shelf chips in gadgets or the 3-D design files that drive manufacturing. It’s also because of the availability of common platforms, easy-to-use tools, Web-based collaboration, and Internet distribution. We’ve seen this picture before: It’s what happens just before monolithic industries fragment in the face of countless small entrants, from the music industry to newspapers. Lower the barriers to entry and the crowd pours in.... A garage renaissance is spilling over into such phenomena as the booming Maker Faires and local “hackerspaces.” Peer production, open source, crowdsourcing, user-generated content—all these digital trends have begun to play out in the world of atoms, too. The Web was just the proof of concept. Now the revolution hits the real world. In short, atoms are the new bits. [656]

The distinction, not only between being “in business” and “out of business,” but between worker and owner, is being eroded. The whole concept of technological employment assumes that the factory paradigm—in which means of production are extremely expensive, and the only access to work for most people is employment by those rich enough to own the machinery—will continue unaltered. But the imploding price of production machinery is making that paradigm obsolete. Neil Gershenfeld, like Anderson, draws a parallel between hardware today and software thirty years ago:

The historical parallel between personal computation and personal fabrication provides a guide to what those business models might look like. Commercial software was first written by and for big companies, because only they could afford the mainframe computers needed to run it. When PCs came along anyone could become a software developer, but a big company was still required to develop and distribute big programs, notably the operating systems used to run other programs. Finally, the technical engineering of computer networks combined with the social engineering of human networks allowed distributed teams of individual developers to collaborate on the creation of the most complex software.... Similarly, possession of the means for industrial production has long been the dividing line between workers and owners. But if those means are easily acquired, and designs freely shared, then hardware is likely to follow the evolution of software. Like its software counterpart, open-source hardware is starting with simple fabrication functions, while nipping at the heels of complacent companies that don’t believe personal fabrication “toys” can do the work of their “real” machines. That boundary will recede until today’s marketplace evolves into a continuum from creators to consumers, servicing markets ranging from one to one billion. [657]

Diane Pfeiffer draws a comparison to the rise of desktop publishing in the 1980s. [658]

We already saw, in Chapter Three, what all this meant from the standpoint of investors: they’re suffering from the superfluity of most investment capital, resulting from the emerging possibility of small producers and entrepreneurs owning their own factories. From the perspective of the small producer and entrepreneur, the same trend is a good thing because it enables them to own their own factories without any dependency on finance capital. Innovations not only in small-scale manufacturing technology, but in networked communications technology for distribution and marketing, are increasingly freeing producers from the need for large amounts of capital. Charles Hugh Smith writes:

What I find radically appealing is not so much the technical aspects of desktop/workbench production of parts which were once out of financial reach of small entrepreneurs—though that revolution is the enabling technology—it is the possibility that entrepreneurs can own the means of production without resorting to vulture/bank investors/loans. Anyone who has been involved in a tech startup knows the drill—in years past, a tech startup required millions of dollars to develop a new product or the IP (intellectual property). To raise the capital required, the entrepreneurs had to sell their souls (and company) to venture capital (vulture capital) “investors” who simply took ORPM (other rich people’s money) and put it to work, taking much of the value of new promising companies in trade for their scarce and costly capital. The only alternative were banks, who generally shunned “speculative investments” (unless they were in the billions and related to derivatives, heh). So entrepreneurs came up with the ideas and did all the hard work, and then vulture capital swooped in to rake off the profits, all the while crying bitter tears about the great risks they were taking with other rich people’s spare cash. Now that these production tools are within reach of small entrepreneurs, the vulture capital machine will find less entrepreneurial fodder to exploit. The entrepreneurs themselves can own/rent the means of production. That is a fine old Marxist phrase for the tools and plant which create value and wealth. Own that and you create your own wealth. In the post-industrial economies of the West and Asia, distribution channels acted as means of wealth creation as well: you want to make money selling books or music, for instance, well, you had to sell your product to the owners of the distribution channels: the record labels, film distributors, book publishers and retail cartels, all of whom sold product through reviews and adverts in the mainstream media (another cartel). The barriers to entry were incredibly high. It took individuals of immense wealth (Spielberg et al.) to create a new film studio from scratch (DreamWorks) a few years ago. Now any artist can sell their music/books via the Web, completely bypassing the gatekeepers and distribution channels. In a great irony, publishers and labels are now turning to the Web to sell their product. If all they have is the Web, then what value can they add? I fully expect filmmakers to go directly to the audience via the Web in coming years and bypass the entire film distribution cartel entirely. Why go to Wal-Mart to buy a DVD when you can download hundreds of new films off the Web? Both the supply chain and distribution cartels are being blown apart by the Web. Not only can entrepreneurs own/rent the means of production and arrange their own supply/assembly chains, they can also own their own distribution channels. The large-scale factory/distribution model is simply no longer needed for many products. As the barriers to owning the means of production and distribution fall, a Renaissance in small-scale production and wealth creation becomes not just possible but inevitable. [659]

Even without the latest generation of low-cost digital fabrication machinery, the kind of flexible manufacturing network that exists in Emilia-Romagna or Shenzhen is ideally suited to the open manufacturing philosophy. Tom Igoe writes:

There are some obvious parallels here [in the shanzhai manufacturers of China—see Chapter Four] to the open hardware community. Businesses like Spark Fun, Adafruit, Evil Mad Scientist, Arduino, Seeed Studio, and others thrive by taking existing tools and products, re-combining them and repackaging them in more usable ways. We borrow from each other and from others, we publish our files for public use, we improve upon each others’ work, and we police through licenses such as the General Public License, and continual discussion between competitors and partners. We also revise products constantly and make our businesses based on relatively small runs of products tailored to specific audiences. [660]

The intersection of the open hardware and open manufacturing philosophies with the current model of flexible manufacturing networks will be enabled, Igoe argues, by the availability of

Cheap tools. Laser cutters, lathes, and milling machines that are affordable by an individual or a group. This is increasingly coming true. The number of colleagues I know who have laser cutters and mills in their living rooms is increasing (and their asthma is worsening, no doubt). There are some notable holes in the open hardware world that exist partially because the tools aren’t there. Cheap injection molding doesn’t exist yet, but injection molding services do, and they’re accessible via the net. But when they’re next door (as in Shenzen), you’ve got a competitive advantage: your neighbor. [661]

(Actually, hand-powered, small-scale injection molding machines are now available for around $1500, and Kenner marketed a fully functional “toy” injection molding machine for making toy soldiers, tanks, and the like back in the 1960s.) [662]

And the flexible manufacturing network, unlike the transnational corporate environment, is actively conducive to the sharing of knowledge and designs.

Open manufacturing information. Manufacturers in this scenario thrive on adapting existing products and services. Call them knockoffs or call them new hybrids, they both involve reverse engineering something and making it fit your market. Reverse engineering takes time and money. When you’re a mom & pop shop, that matters a lot more to you. If you’ve got a friend or a vendor who’s willing to do it for you as a service, that helps. But if the plans for the product you’re adapting are freely available, that’s even better. In a multinational world, open source manufacturing is anathema. Why would Nokia publish the plans for a phone when they could dominate the market by doing the localization themselves? But in a world of networked small businesses, it spurs business. You may not have the time or interest in adapting your product for another market, but someone else will, and if they’ve got access to your plans, they’ll be grateful, and will return the favor, formally or informally. [663]

The availability of modestly priced desktop manufacturing technology (about which we will see more immediately below), coupled with the promise of crowdsourced means of aggregating capital, has led to a considerable shift in opinion in the peer-to-peer community, as evidenced by Michel Bauwens:

I used to think that the model of peer production would essentially emerge in the immaterial sphere, and in those cases where the design phase could be split from the capital-intensive physical production sphere.... However, as I become more familiar with the advances in Rapid Manucturing [sic]... and Desktop Manufacturing..., I’m becoming increasingly convinced of the strong trend towards the distribution of physical capital. If we couple this with the trend towards the direct social production of money (i.e. the distribution of financial capital...) and the distribution of energy...; and how the two latter trends are interrelated..., then I believe we have very strong grounds to see a strong expansion of p2p-based modalities in the physical sphere. [664]

The conditions of physical production have, in fact, experienced a transformation almost as great as the one digital technology has brought about in immaterial production. The “physical production sphere” itself has become far less capital-intensive. If the digital revolution has caused an implosion in the physical capital outlays required for the information industries, the revolution in garage and desktop production tools promises an analogous effect on many kinds of manufacturing. This radical reduction in the cost of production machinery has eroded Stallman’s distinction between “free speech” and “free beer.” Or as Chris Anderson put it, “Atoms would like to be free, too, but they’re not so pushy about it.” [665]

The same production model sweeping the information industries, the networked organization of people who own their own production tools, is expanding into physical manufacturing. According to Johan Söderberg, a revolution in cheap, general-purpose machinery, combined with the new possibilities for networked design opened up by personal computers and network culture, is leading to

an extension of the dream that was pioneered by the members of the Homebrew Computer Club [i.e., a cheap computer able to run on the kitchen table]. It is the vision of a universal factory able to run on the kitchen table.... [T]he desire for a ‘desktop factory’ amounts to the same thing as the reappropriation of the means of production. [666]

Clearly, the emergence of cheap desktop technology for custom machining parts in small batches will greatly lower the overall capital outlays needed for networked physical production of light and medium consumer goods.

We’ve already seen the importance of the falling costs of small-scale production machinery made possible by the Japanese development of small CNC machines in the 1970s. That is the technological basis of the flexible manufacturing networks we examined in the last chapter.

When it comes to the “Homebrew” dream of an actual desktop factory, the most promising current development is the Fab Lab. The concept started with MIT’s Center for Bits and Atoms. The original version of the Fab Lab included CNC laser cutters and milling machines, and a 3-D printer, for a total cost of around $50,000. [667]

Open-source versions of the machines in the Fab Lab have brought the cost down to around $2,000–$5,000.

One important innovation is the multimachine, an open-source, multiple-purpose machine tool that includes drill press, lathe and milling machine; it can be modified for computerized numeric control. The multimachine was originally developed by Pat Delaney, whose YahooGroup has grown into a design community and support network of currently over five thousand people. [668]

As suggested by the size of Delaney’s YahooGroup membership, the multimachine has been taken up independently by open-source developers all around the world. The Open Source Ecology design community, in particular, envisions a Fab Lab which includes a CNC multimachine as “the central tool piece of a flexible workshop... eliminating thousands of dollars of expenditure requirement for similar abilities” and serving as “the centerpieces enabling the fabrication of electric motor, CEB, sawmill, OSCar, microcombine and all other items that require processes from milling to drilling to lathing.” [669]

It is a high precision mill-drill-lathe, with other possible functions, where the precision is obtained by virtue of building the machine with discarded engine blocks.... The central feature of the Multimachine is the concept that either the tool or the workpiece rotates when any machining operation is performed. As such, a heavy-duty, precision spindle (rotor) is the heart of the Multimachine—for milling, drilling and lathing applications. The precision arises from the fact that the spindle is secured within the absolutely precise bore holes of an engine block, so precision is guaranteed simply by beginning with an engine block.

If one combines the Multimachine with a CNC XY or XYZ movable working platform—similar to ones being developed by the Iceland Fab Lab team [670], RepRap [671], CandyFab 4000 [672] team, and others—then a CNC mill-drill-lathe is the result. At least Factor 10 reduction in price is then available compared to the competition. The mill-drill-lathe capacity allows for the subtractive fabrication of any allowable shape, rotor, or cylindrically-symmetric object. Thus, the CNC Multimachine can be an effective cornerstone of high precision digital fabrication—down to 2 thousandths of an inch.

Interesting features of the Multimachine are that the machines can be scaled from small ones weighing a total of ~1500 lb to large ones weighing several tons, to entire factories based on the Multimachine system. The CNC XY(Z) tables can also be scaled according to the need, if attention to this point is considered in development. The whole machine is designed for disassembly. Moreover, other rotating tool attachments can be added, such as circular saw blades and grinding wheels. The overarm included in the basic design is used for metal forming operations. Thus, the Multimachine is an example of appropriate technology, where the user is in full control of machine building, operation, and maintenance. Such appropriate technology is conducive to successful small enterprise for local community development, via its low capitalization requirement, ease of maintenance, scalability and adaptability, and wide range of products that can be produced. This is relevant both in the developing world and in industrialized countries. [673]

The multimachine, according to Delaney, “can be built by a semi-skilled mechanic using just common hand tools,” from discarded engine blocks, and can be scaled from “a closet size version” to “one that would weigh 4 or 5 tons.” [674]

In developing countries, in particular, the kinds of products that can be built with a multimachine include:

AGRICULTURE: Building and repairing irrigation pumps and farm implements.

WATER SUPPLIES: Making and repairing water pumps and water-well drilling rigs.

FOOD SUPPLIES: Building steel-rolling-and-bending machines for making fuel efficient cook stoves and other cooking equipment.

TRANSPORTATION: Anything from making cart axles to rebuilding vehicle clutch, brake, and other parts....

JOB CREATION: A group of specialized but easily built MultiMachines can be combined to form a small, very low cost, metal working factory which could also serve as a trade school. Students could be taught a single skill on a specialized machine and be paid as a worker while learning other skills that they could take elsewhere. [675]

More generally, a Fab Lab (i.e. a digital flexible fabrication facility centered on the CNC multimachine along with a CNC cutting table and open-source 3-D printer like RepRap) can produce virtually anything—especially when coupled with the ability of such machinery to run open-source design files.

Flexible fabrication refers to a production facility where a small set of non-specialized, general-function machines (the 5 items mentioned [see below]) is capable of producing a wide range of products if those machines are operated by skilled labor. It is the opposite of mass production, where unskilled labor and specialized machinery produce large quantities of the same item (see section II, Economic Base). When one adds digital fabrication to the flexible fabrication mix, then the skill level on the part of the operator is reduced, and the rate of production is increased.

Digital fabrication is the use of computer-controlled fabrication, as instructed by data files that generate tool motions for fabrication operations. Digital fabrication is an emerging byproduct of the computer age. It is becoming more accessible for small scale production, especially as the influence of open source philosophy is releasing much of the know-how into non-proprietary hands. For example, the Multimachine is an open source mill-drill-lathe by itself, but combined with computer numerical control (CNC) of the workpiece table, it becomes a digital fabrication device.

It should be noted that open access to digital design—perhaps in the form of a global repository of shared open source designs—introduces a unique contribution to human prosperity. This contribution is the possibility that data at one location in the world can be translated immediately to a product in any other location. This means anyone equipped with flexible fabrication capacity can be a producer of just about any manufactured object. The ramifications for localization of economies are profound, and leave access to raw material feedstocks as the only natural constraint to human prosperity. [676]
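To make concrete the idea that data at one location can be translated immediately to a product at any other, consider a minimal Python sketch of the workflow: download a shared toolpath file from the network and stream it to a small CNC controller. It assumes a GRBL-style controller that acknowledges each G-code line over a serial link, plus the pyserial library; the design URL and serial port are hypothetical placeholders, not an actual repository.

```python
# Minimal sketch of "design file anywhere -> product here": download a shared
# G-code toolpath and stream it line by line to a hobby CNC controller.
# Assumes a GRBL-style controller on a serial port and the pyserial library;
# the URL and port below are hypothetical placeholders.
import urllib.request

import serial  # pyserial

DESIGN_URL = "https://example.org/designs/ceb-press-plate.gcode"  # hypothetical
PORT = "/dev/ttyUSB0"

def stream_gcode(url: str, port: str) -> None:
    gcode = urllib.request.urlopen(url).read().decode("utf-8")
    with serial.Serial(port, 115200, timeout=10) as cnc:
        for line in gcode.splitlines():
            line = line.split(";")[0].strip()  # strip comments and blank lines
            if not line:
                continue
            cnc.write((line + "\n").encode("ascii"))
            cnc.readline()  # wait for the controller's "ok" before sending more

if __name__ == "__main__":
    stream_gcode(DESIGN_URL, PORT)
```

The entire "factory interface" is a few dozen lines; everything product-specific lives in the shared design file.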

Open Source Ecology, based on existing technology, estimates the cost of producing a CNC multimachine with their own labor at $1500. [677] The CNC multimachine is only one part of a projected “Fab Lab,” whose total cost of construction will be a few thousand dollars.

CNC Multimachine—Mill, drill, lathe, metal forming, other grinding/cutting. This constitutes a robust machining environment that may be upgraded for open source computer numerical control by OS software, which is in development. [678]

XYZ-controlled torch and router table—can accommodate an acetylene torch, plasma cutter, router, and possibly CO2 laser cutter diodes

Metal casting equipment—all kinds of cast parts from various metals

Plastic extruder—extruded sheet for advanced glazing, and extruded plastic parts or tubing

Electronics fabrication—oscilloscope, circuit etching, others—for all types of electronics from power control to wireless communications.

This equipment base is capable of producing just about anything—electronics, electromechanical devices, structures, and so forth. The OS Fab Lab is crucial in that it enables the self-replication of all the 16 technologies. [679]

(The “16 technologies” refers to Open Source Ecology’s entire line of sixteen products, including not only construction and energy-generating equipment, a tractor, and a greenhouse, but also the use of the Fab Lab to replicate the five machines that make up the Fab Lab itself. See the material on OSE in the Appendix.)

Another major component of the Fab Lab is the 3-D printer; commercial versions start at over $20,000. The RepRap, an open-source 3-D printer project, has reduced the cost to around $500. [680] MakerBot [681] is a closely related commercial 3-D printer project, an offshoot of RepRap that shares much of its staff. [682] MakerBot has a more streamlined, finished (i.e., commercial-looking) appearance. Unlike RepRap, it doesn’t aim at total self-replicability; rather, most of its parts are designed to be built with a laser cutter. [683]

3-D printers are especially useful for making casting molds. Antique car enthusiast Jay Leno, in a recent issue of Popular Mechanics, described the use of a combination 3-D scanner/3-D printer to create molds for out-of-production parts for old cars like his 1907 White Steamer.

The 3D printer makes an exact copy of a part in plastic, which we then send out to create a mold.... The NextEngine scanner costs $2995. The Dimension uPrint Personal 3D printer is now under $15,000. That’s not cheap. But this technology used to cost 10 times that amount. And I think the price will come down even more. [684]

Well, yeah—especially considering RepRap can already be built for around $500 in parts. Even the Desktop Factory, a commercial 3-D printer, sells for about $5,000. [685]

Automated production with CNC machinery, Jakubowski argues, holds out some very exciting possibilities for producing at rates competitive with conventional industry.

It should be pointed out that a particularly exciting enterprise opportunity arises from automation of fabrication, such as arises from computer numerical control. For example, the sawmill and CEB discussed above are made largely of DfD, bolt-together steel. This lends itself to a fabrication procedure where a CNC XYZ table could cut out all the metal, including bolt holes, for the entire device, in a fraction of the time that it would take by hand. As such, complete sawmill or CEB kits may be fabricated and collected, ready for assembly, on the turn-around time scale of days.... The digital fabrication production model may be equivalent in production rates to that of any large-scale, high-tech firms. [686]

The concept of a CNC XYZ table is powerful. It allows one to prepare all the metal, such as that for a CEB press or the boundary layer turbine, with the touch of a button if a design file for the toolpath is available. This indicates on-demand fabrication capacity, at production rates similar to that of the most highly-capitalized industries. With modern technology, this is doable at low cost. With access to low-cost computer power, electronics, and open source blueprints, the capital needed for producing a personal XYZ table is reduced merely to structural steel and a few other components: it’s a project that requires perhaps $1000 to complete. [687]

(Someone’s actually developed a CNC XYZ cutting table for $100 in materials, although the bugs are not yet completely worked out.) [688]
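As a toy illustration of what such a "design file for the toolpath" amounts to, the following Python sketch turns a list of bolt-hole positions into the G-code drilling program a CNC XYZ table consumes. The coordinates, depth, and feed rate are invented for the example, not OSE's actual specifications.

```python
# Toy illustration of Jakubowski's point: once bolt-hole positions exist as
# data, the toolpath for a bolt-together steel kit is just generated text.
# All coordinates and feed rates below are made-up placeholders.
HOLES_MM = [(25.0, 25.0), (25.0, 175.0), (175.0, 25.0), (175.0, 175.0)]

def drill_program(holes, safe_z=5.0, depth=-8.0, feed=100):
    lines = ["G21", "G90", f"G0 Z{safe_z}"]  # millimeters, absolute coordinates, retract
    for x, y in holes:
        lines.append(f"G0 X{x:.3f} Y{y:.3f}")     # rapid move to hole center
        lines.append(f"G1 Z{depth:.3f} F{feed}")  # plunge through the plate
        lines.append(f"G0 Z{safe_z}")             # retract before the next hole
    lines.append("M2")  # end of program
    return "\n".join(lines)

print(drill_program(HOLES_MM))
```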

Small-scale fabrication facilities of the kind envisioned at Factor e Farm, based on CNC multimachines, cutting tables and 3-D printers, can even produce motorized vehicles like passenger cars and tractors, when the heavy engine block is replaced with a light electric motor. Such electric vehicles, in fact, are part of the total product package at Factor e Farm.

The central part of a car is its propulsion system. Fig. 6 shows a fuel source feeding a heat generator, which heats a flash steam generator heat exchanger, which drives a boundary layer turbine, which drives a wheel motor operating as an electrical generator. The electricity that is generated may either be fed into battery storage, or controlled by power electronics to drive 4 separate wheel motors. This constitutes a hybrid electric vehicle, with 4 wheel drive in this particular implementation. This hybrid electric vehicle is one of intermediate technology design that may be fabricated in a small-scale, flexible workshop. The point is that a complicated power delivery system (clutch-transmission-drive shaft-differential) has been replaced by four electrical wires going to the wheel electrical motors. This simplification results in high localization potential of car manufacturing. The first step in the development of open source, Hypercar-like vehicles is the propulsion system, for which the boundary layer turbine hybrid system is a candidate. Our second step will be structural optimization for lightweight car design. [689]
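The simplification the quote describes, a single generator bus in place of a mechanical drivetrain, can be captured in a few lines. A minimal sketch, with every power figure invented for illustration rather than taken from Factor e Farm's measurements:

```python
# Back-of-envelope sketch of the series-hybrid layout described above: one
# turbine-driven generator feeds a battery bus, which feeds four wheel motors.
# All numbers are illustrative assumptions, not measured values.
def battery_flow_kw(generator_kw: float, wheel_demand_kw: float,
                    n_wheels: int = 4) -> float:
    """Positive result charges the battery; negative result draws from it."""
    total_demand = wheel_demand_kw * n_wheels
    return generator_kw - total_demand

# Cruising: a 15 kW generator vs. 3 kW per wheel motor -> 3 kW surplus to battery.
print(battery_flow_kw(15.0, 3.0))   # 3.0
# Hill climb: demand spikes to 6 kW per wheel -> 9 kW drawn from the battery.
print(battery_flow_kw(15.0, 6.0))   # -9.0
```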

The CubeSpawn project is also involved in developing a series of modular desktop machine tools. The first stage is a cubical 3-axis milling machine (or “milling cell”). The next step will be to build a toolchanger and head changer so the same cubical framework and movement controls can be used for a 3-D printer. [690]

It starts by offering a simple design for a 3 axis, computer controlled milling machine. With this resource, you have the ability to make a significant subset of all the parts in existence! So, parts for additional machines can be made on the mill, allowing the system to add to itself, all based on standards to promote interoperability.... The practical consequence is a self expanding factory that will fit in a workshop or garage.... Cross-pollination with other open source projects is inevitable and beneficial although at first, commercial products will be used if no open source product exists. This has already begun, and CubeSpawn uses 5 other open source projects as building blocks in its designs. These are electronics from the Sanguino / RepRap specific branch of the Arduino project, MakerBeam for cubes of small dimensions, and the EMC control software for an interface to individual cells. There is an anticipated use of SKDB for part version and cutting geometry file retrieval, with Debian Linux as a central host for the system DB.... By offering a standardized solution to the problems of structure, power connections, data connections, inter-cell transport, and control language, we can bring about an easier to use framework to collaborate on. The rapid adoption of open source hardware should let us build the “better world” industry has told us about for over 100 years. [691]

With still other heads, the same framework can be used as a cutting table.
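The design idea, one standardized motion frame with swappable tool heads, is easy to sketch in code. The class and method names below are hypothetical illustrations of the pattern, not CubeSpawn's actual control language:

```python
# Hedged sketch of the CubeSpawn pattern: one cubical motion frame, many tool
# heads, one control interface. Names here are invented for illustration.
from dataclasses import dataclass

class ToolHead:
    name = "base"
    def engage(self, x: float, y: float, z: float) -> str:
        raise NotImplementedError

class MillHead(ToolHead):
    name = "mill"
    def engage(self, x, y, z):
        return f"cut at ({x}, {y}, {z})"

class ExtruderHead(ToolHead):
    name = "extruder"
    def engage(self, x, y, z):
        return f"deposit at ({x}, {y}, {z})"

@dataclass
class Cell:
    """One cubical frame; the head is the only part that changes."""
    head: ToolHead
    def run(self, toolpath):
        for x, y, z in toolpath:
            print(self.head.engage(x, y, z))

path = [(0, 0, 0), (10, 0, 0), (10, 10, 0)]
Cell(MillHead()).run(path)      # milling cell
Cell(ExtruderHead()).run(path)  # same frame re-tooled as a 3-D printer
```

The point of the standardized interface is that adding a new fabrication process means writing one new head, not building a new machine.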

If these examples are not enough, the P2P Foundation’s “Product Hacking” page provides, under the heading of “Production/Machinery,” a long list of open-source CNC router, cutting table, 3-D printer, modular electronics, and other projects. [692] DIYLILCNC is a cheap homebrew 3-axis milling machine that can be built with “basic shop skills and tool access.” [693]

One promising early attempt at distributed garage manufacturing is 100kGarages, which we will examine in some detail in the Appendix. 100kGarages is a joint effort of the ShopBot 3-axis router company and the Ponoko open design network (which itself linked a library of designs to local Makers with CNC laser cutters).

Besides Ponoko, a number of other commercial firms have appeared recently which offer production of custom parts to the customer’s digital design specifications, at a modest price, using small-scale, multipurpose desktop machinery. Two of the most prominent are Big Blue Saw [694] and eMachineShop. [695] The way the latter works, in particular, is described in a Wired article:

The concept is simple: Boot up your computer and design whatever object you can imagine, press a button to send the CAD file to Lewis’ headquarters in New Jersey, and two or three weeks later he’ll FedEx you the physical object. Lewis launched eMachineShop a year and a half ago, and customers are using his service to create engine-block parts for hot rods, gears for home-brew robots, telescope mounts—even special soles for tap dance shoes. [696]

Another project of the same general kind was just recently announced: CloudFab, which offers access to a network of job-shops with 3-D printers. [697] Also promising is mobile manufacturing (Factory in a Box). [698]

Building on our earlier speculation about networked small machine shops and hobbyist workshops, new desktop manufacturing technology offers an order of magnitude increase in the quality of work that can be done for the most modest expense.

Kevin Kelly argues that the actual costs of physical production are only a minor part of the cost of manufactured goods.

....material industries are finding that the costs of duplication are near zero, so they too will behave like digital copies. Maps just crossed that threshold. Genetics is about to. Gadgets and small appliances (like cell phones) are sliding that way. Pharmaceuticals are already there, but they don’t want anyone to know. It costs nothing to make a pill. [699]

If, as Kelly suggests, the cheapness of digital goods reflects the imploding cost of copying them, it follows that the falling cost of “copying” physical goods will follow the same pattern.

There is a common thread running through all the different theories of the interface between peer production and the material world: as technology for physical production becomes feasible on increasingly smaller scales and at less cost, and the transaction costs of aggregating small units of capital into large ones fall, there will be less and less disconnect between peer production and physical production.

It’s worth repeating one last time: the distinction between Stallman’s “free speech” and “free beer” is eroding. To the extent that embedded rents on “intellectual property” are a significant portion of commodity prices, “free speech” (in the sense of the free use of ideas) will make our “beer” (i.e., the price of manufactured commodities) at least a lot cheaper. And the smaller the capital outlays required for physical production, the lower the transaction costs for aggregating capital, and the lower the overhead, the cheaper the beer becomes as well.

If, as we saw Sabel and Piore say above, the computer is a textbook example of an artisan’s tool—i.e., an extension of the user’s creativity and intellect—then small-scale, computer-controlled production machinery is a textbook illustration of E. F. Schumacher’s principles of appropriate technology:

cheap enough that they are accessible to virtually everyone;

suitable for small-scale application; and

compatible with man’s need for creativity.

D. The Microenterprise

We have already seen, in Chapter Four, the advantages of low overhead and small batch production that lean, flexible manufacturing offers over traditional mass-production industry. The household microenterprise offers these advantages, but increased by another order of magnitude. As we saw Charles Johnson suggest above, the use of “spare cycles” of capital goods people own anyway results in enormous cost efficiencies.

Consider, for example, the process of running a small, informal brew pub or restaurant out of your home, under a genuine free market regime. Buying a brewing vat and a few small fermenters for your basement, using a few tables in a remodeled spare room as a public restaurant area, etc., would require a small bank loan for at most a few thousand dollars. And with that capital outlay, you could probably make payments on the debt with the margin from one customer a day. A few customers on evenings and weekends, found mainly among your existing circle of acquaintances, would enable you initially to shift some of your working hours from wage labor to work in the restaurant, with the possibility of gradually phasing out wage labor altogether or scaling back to part time as you built up a customer base. In this and many other lines of business (for example, a part-time gypsy cab service using a car and cell phone you own anyway), the minimal entry costs and capital outlay mean that the minimum turnover required to pay the overhead and stay in business would be quite modest. In that case, a lot more people would be able to start small businesses for supplementary income and gradually shift some of their wage work to self-employment, with minimal risk or sunk costs.
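The back-of-envelope arithmetic can be made explicit. In the sketch below, every figure (loan size, interest rate, per-customer margin) is an assumption for illustration:

```python
# Rough arithmetic behind the claim that one customer a day could service the
# debt on a home brewpub. All figures are illustrative assumptions.
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard loan amortization formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

loan = monthly_payment(3000, 0.08, 36)   # $3,000 over 3 years at 8%
print(f"loan payment: ${loan:.2f}/month")            # roughly $94/month
margin_per_customer = 5.00                            # assumed gross margin
print(f"one customer a day: ${margin_per_customer * 30:.2f}/month")  # $150/month
```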

But that’s illegal. You have to buy an extremely expensive liquor license, as well as having an industrial-sized stove, dishwasher, etc. You have to pay rent on a separate, dedicated commercial building. And that level of capital outlay can only be paid off with a large dining room and a large kitchen/waiting staff, which means you have to keep the place filled or the overhead costs will eat you alive—in other words, Chapter Eleven. These high entry costs and the enormous overhead are the reason you can’t afford to start out really small and cheap, and the reason restaurants have such a high failure rate. More generally, it’s illegal to use the surplus capacity of the ordinary household items we have to own anyway but which remain idle most of the time. The same goes for small-scale truck farming: RFID chip requirements, bans on unpasteurized milk, high fees for organic certification, and the like make it prohibitively expensive to sell a few hundred dollars’ worth of surplus a month from the household economy. As Roderick Long put it,

In the absence of licensure, zoning, and other regulations, how many people would start a restaurant today if all they needed was their living room and their kitchen? How many people would start a beauty salon today if all they needed was a chair and some scissors, combs, gels, and so on? How many people would start a taxi service today if all they needed was a car and a cell phone? How many people would start a day care service today if a bunch of working parents could simply get together and pool their resources to pay a few of their number to take care of the children of the rest? These are not the sorts of small businesses that receive SBIR awards; they are the sorts of small businesses that get hammered down by the full strength of the state whenever they dare to make an appearance without threading the lengthy and costly maze of the state’s permission process. [700]

Shawn Wilbur, an anarchist writer with half a lifetime in the bookselling business, describes the resilience of a low-overhead business model: “My little store was enormously efficient, in the sense that it could weather long periods of low sales, and still generally provide new special order books in the same amount of time as a Big Book Bookstore.” The problem was that, with the state-imposed paperwork burden associated with hiring help, it was preferable—i.e., less complicated—to work sixty-hour weeks. [701] The state-imposed administrative costs involved in the cooperative organization of labor amount to an entry barrier that can only be hurdled by the big guy. After some time out of the business of independent bookselling and a number of wage-labor gigs in chain bookstores, Wilbur recently announced the formation of Corvus, a micropublishing operation working on a print-on-demand basis. [702] In response to my request for information on his business model, Wilbur wrote:

In general..., Corvus Editions is a hand-me-down laptop and a computer that should probably have been retired five years ago, and which has more than paid for itself in my previous business, some software, all of which I previously owned and none of which is particularly new or spiffy, a $20 stapler, a $150 laser printer, a handful of external storage devices, an old flatbed scanner, the usual computer-related odds and ends, and the fruits of thousands of hours of archival research and sifting through digital sources (all of which fits on a single portable harddrive.) The online presence did not involve any additional expense, beyond the costs of the free archive, except for a new domain name. My hosting costs, including holding some domain registrations for friendly projects, total around $250/year, but the Corvus site and shop could be hosted for $130. Because Portland has excellent resources for computer recycling and the like, I suspect a similar operation, minus the archive, using free Linux software tools, could almost certainly be put together for less than $500, including a small starting stock of paper and toner—and perhaps more like $300.

The cost of materials is some 20% of Wilbur’s retail price on average, with the rest of the price being compensation free and clear for his labor: “the service of printing, folding, stapling and shipping....” There are no proprietary rents because the pdf files are themselves free for download; Wilbur makes money entirely from the convenience-value of his doing those printing, etc., services for the reader. [703]

As an example of a more purely service-oriented microenterprise, Steve Herrick describes the translators’ cooperative he’s a part of:

...We effectively operate as a job shop. Work comes in from clients, and our coordinator posts the offer on email. People offer to take it as they’re available. So far, the supply and demand have been roughly equal. When multiple people are available, members take priority over associates, and members who have taken less work recently take priority over those who have taken more. We have seven members, plus eight or ten associates, who have not paid a buy-in and who are not expected to attend meetings. They do, however, make the same pay for the same work.

Interpreting and translating are commonly done alone. So, why have a co-op? First, we all hate doing the paperwork and accounting. We’d rather be doing our work. A co-op lets us do that. The other reason is branding/marketing/reputation. Clients can’t keep track of the contact info for a dozen people, but they can remember the email and phone number for our coordinator, who can quickly contact us all. Also, with us, they get a known entity, even if it’s a new person. (Unlike most other services an organization might contract for, clients don’t usually know how well their interpreters are doing for their pay. With us, they worry about that a lot less.)

We keep our options open by taking many kinds of work. We don’t compete with the local medical and court interpreter systems (and some of us also work in them), but that leaves a lot of work to do: we work for schools and universities, non-profits, small businesses, individuals, unions, and so on. We’ve pondered whether there are clients we would refuse to work for, but so far, that hasn’t been an issue.

We have almost no overhead. We are working on getting an accountant, but we don’t anticipate having to pay more than a few hours a month for that. Our books aren’t that complicated. We also pay rent to the non-profit we spun off from, but that’s set up as a percentage of our income, not a fixed amount, so it can’t put us under water. It also serves as an incentive for them to send us work! Other than that, we really have no costs. As a co-op, taxes are “pass-through,” meaning the co-op itself pays no taxes; we pay taxes on our income from the co-op. We will be doing some marketing soon, but we’re investigating very low-cost ways to reach our target market, like in-kind work. And we have no capital costs, apart from our interpreting mic and earpieces, which we inherited from the non-profit. Occasionally, we have to buy batteries, but I’m going to propose we buy rechargeables, so even that won’t be a recurring cost. And finally, we’re looking into joining our local Time Bank.

What this means is that we can operate at a very low volume. As a ballpark figure, I’d say we average an hour of work per member per week. That’s not much more than a glorified hobby. Even so, 2009 brought in considerably more work than 2008, which saw twice as much work as 2007 (again, with essentially no marketing). We’re not looking for it to increase too rapidly, because each of us has at least one other job, and six of the seven of us have kids (ranging from mine at three weeks to one member with school-age grandkids). A slow, steady increase would be great. [704]
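Herrick's dispatch rule, members before associates and the least recently worked first, is simple enough to state as code. A minimal sketch with hypothetical data:

```python
# A minimal sketch of the co-op's dispatch rule as described: when several
# people are available, members outrank associates, and within each group
# whoever has taken the least recent work goes first. Data are hypothetical.
workers = [
    {"name": "Ana",  "member": True,  "recent_jobs": 4},
    {"name": "Bo",   "member": False, "recent_jobs": 1},
    {"name": "Cruz", "member": True,  "recent_jobs": 2},
]

def dispatch_order(available):
    # Sort key: members first (False sorts before True, so we negate the flag),
    # then ascending count of recently taken jobs.
    return sorted(available, key=lambda w: (not w["member"], w["recent_jobs"]))

for w in dispatch_order(workers):
    print(w["name"])   # Cruz, Ana, Bo
```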

More generally, this business model applies to a wide range of service industries where overhead requirements are minimal. An out-of-work plumber or electrician can work out of his van with parts from the hardware store, and cut his prices by the amount that formerly went to commercial rent, management salaries, office staff, and so forth—not to mention working for a “cash discount.” Like Herrick’s translator cooperative, one of the main functions of a nursing or other temporary staffing agency is branding—providing a common reference point for accountability to clients. But the actual physical capital requirements don’t go much beyond a phone line and mail drop, and maybe a scanner/fax. The business consists, in essence, of a personnel list and a way of contacting them. The main entry barrier to cooperative self-employment in this field is non-competition agreements (when you work for a client of a commercial staffing agency, you agree not to work for that client either directly or through another agency for some period—usually three months—after your last assignment there). But with a large enough pool of workers in the cooperative agency, it should be possible to direct assignments to those who haven’t worked for a particular client until the non-competition period expires, as in the sketch below.
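A minimal sketch of that rotation logic, assuming a 90-day non-competition window; the names and dates are invented:

```python
# Sketch of the rotation idea: route each client's assignment to pool members
# whose non-competition clock (assumed 90 days) has expired or never started.
from datetime import date, timedelta

NON_COMPETE = timedelta(days=90)

last_worked = {  # (worker, client) -> date of last assignment; illustrative data
    ("Dee", "Mercy Hospital"): date(2010, 1, 5),
    ("Eli", "Mercy Hospital"): date(2009, 8, 1),
}

def eligible(pool, client, today):
    return [w for w in pool
            if today - last_worked.get((w, client), date.min) >= NON_COMPETE]

print(eligible(["Dee", "Eli", "Flo"], "Mercy Hospital", date(2010, 2, 1)))
# ['Eli', 'Flo'] -- Dee worked there 27 days ago, so she is routed elsewhere.
```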

As capital outlays and fixed costs fall, the distinction between being “in business” and “out of business” becomes increasingly meaningless.

Another potential way to increase the utilization of capacity of capital goods in the informal and household economy is through sharing networks of various kinds. The sharing of tools through neighborhood workshops, discussed earlier, is one application of the general principle. Other examples include ride-sharing, time-sharing one another’s homes during vacations, gift economies like FreeCycle, etc. Regarding ride-sharing in particular, Dilbert cartoonist Scott Adams speculates quite plausibly on the potential for network technologies like the iPhone to facilitate sharing in ways that previous technology could not, by reducing the transaction costs of connecting participants. The switch to network connections by mobile phone increases flexibility and capability for short-term changes and adjustments to plans by an order of magnitude over desktop computers. Adams describes how such a system might work:

...[T]he application should use GPS to draw a map of your location, with blips for the cars available for ridesharing. You select the nearest blip and a bio comes up telling you something about the driver, including his primary profession, age, a photo, and a picture of the car. If you don’t like something about that potential ride, move on to the next nearest blip. Again, you have a sense of control. Likewise, the driver could reject you as a passenger after seeing your bio. After you select your driver, and he accepts, you can monitor his progress toward your location by the moving blip on your iPhone.... I also imagine that all drivers would have to pass some sort of “friend of a friend” test, in the Facebook sense. In other words, you can only be a registered rideshare driver if other registered drivers have recommended you. Drivers would be rated by passengers after each ride, again by iPhone, so every network of friends would carry a combined rating. That would keep the good drivers from recommending bad drivers because the bad rating would be included in their own network of friends average.... And the same system could be applied to potential passengers. As the system grew, you could often find a ride with a friend of a friend. [705]
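The matching logic Adams describes reduces to two steps: filter drivers by their network-average rating, then sort the survivors by distance from the rider. A minimal Python sketch, with made-up coordinates and ratings:

```python
# Hedged sketch of the matching logic Adams describes: show the rider nearby
# drivers, nearest first, each carrying a network-average rating so that bad
# networks are visible up front. All data below are invented for illustration.
import math

drivers = [
    {"name": "Gil", "lat": 45.52, "lon": -122.68, "network_rating": 4.7},
    {"name": "Hua", "lat": 45.51, "lon": -122.65, "network_rating": 3.1},
]

def km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearby(rider_lat, rider_lon, min_rating=4.0):
    ok = [d for d in drivers if d["network_rating"] >= min_rating]
    return sorted(ok, key=lambda d: km(rider_lat, rider_lon, d["lat"], d["lon"]))

print([d["name"] for d in nearby(45.515, -122.66)])  # ['Gil']
```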

Historically, the prevalence of such enterprises has been associated with economic downturns and unemployment.

The shift to value production outside the cash nexus in the tech economy has become a common subject of discussion in recent years. We already discussed at length, in Chapter Three, how technological innovation has caused the floor to drop out from beneath capital outlay costs, and thereby rendered a great deal of venture capital superfluous. Although this was presented as a negative from the standpoint of capitalism’s crisis of overaccumulation, we can also see it as a positive from the standpoint of opportunities for the growth of a new economy outside the cash nexus.

Michel Bauwens describes the way most innovation, since the collapse of the dotcom bubble, has shifted to the social realm and become independent of capital.

To understand the logic of this promise, we can look to a less severe, but nevertheless serious crisis: that of the internet bubble collapse in 2000–1. As an internet entrepreneur, I personally experienced both the manic phase, and the downturn, and the experience was life changing because of the important discovery I and others made at that time. All the pundits where [sic] predicting, then as now, that without capital, innovation would stop, and that the era of high internet growth was over for a foreseeable time. In actual fact, the reality was the very opposite, and something apparently very strange happened. In fact, almost everything we know, the Web 2.0, the emergence of social and participatory media, was born in the crucible of that downturn. In other words, innovation did not slow down, but actually increased during the downturn in investment.

This showed the following new tendency at work: capitalism is increasingly being divorced from entrepreneurship, and entrepreneurship becomes a networked activity taking place through open platforms of collaboration. The reason is that internet technology fundamentally changes the relationship between innovation and capital. Before the internet, in the Schumpeterian world, innovators need capital for their research, that research is then protected through copyright and patents, and further funds create the necessary factories. In the post-schumpeterian world, creative souls congregate through the internet, create new software, or any kind of knowledge, create collaboration platforms on the cheap, and paradoxically, only need capital when they are successful, and the servers risk crashing from overload. As an example, think about Bittorrent, the most important software for exchanging multimedia content over the internet, which was created by a single programmer, surviving through a creative use of some credit cards, with zero funding.

But the internet is not just for creative individual souls, but enables large communities to cooperate over platforms. Very importantly, it is not limited to knowledge and software, but to everything that knowledge and software enables, which includes manufacturing. Anything that needs to be physically produced, needs to be ‘virtually designed’ in the first place. This phenomena [sic] is called social innovation or social production, and is increasingly responsible for most innovation....

But what does this all mean for the Asian economic crisis and the plight of the young people that we touched upon at the beginning? The good news is this: first, the strong distinction between working productively for a wage, and idly waiting for one, is melting. All the technical and intellectual tools are available to allow young people, and older people for that matter, to continue being engage [sic] in value production, and hence also to continue to build their experience (knowledge capital), their social life (relationship capital) and reputation. All three of which will be crucial in keeping them not just employable, but will actually substantially increase their potential and capabilities.

The role of business must be clear: it can, on top of the knowledge, software or design commons created by social production, create added value services that are needed and demanded by the market of users of such products (which includes other businesses), and can in turn sustain the commons from which it benefits, making the ecology sustainable. While the full community of developers create value for businesses to build upon, the businesses in turn help sustain the infrastructure of cooperation which makes continued development possible. [706]

The shift of value-creation outside the cash nexus provoked an interesting blogospheric discussion between Tyler Cowen and John Quiggin. Cowen raised the possibility that much of the productivity growth in recent years has taken place “outside of the usual cash and revenue-generating nexus.” [707] Quiggin, in an article appropriately titled “The end of the cash nexus,” took the idea and ran with it:

There has been a huge shift in the location of innovation, with much of it either deriving from, or dependent on, public goods produced outside the market and government sectors, which may be referred to as social production.... If improvements in welfare are increasingly independent of the market, it would make sense to shift resources out of market production, for example by reducing working hours. The financial crisis seems certain to produce at least a temporary drop in average hours, but the experience of the Depression and the Japanese slowdown of the 1990s suggest that the effect may be permanent.... [708]

Michel Bauwens, as we saw in Chapter Three, draws a parallel between the current crisis of realization in capitalism and previous crises like that of the Roman slave economy. When the system hits limits to extensive development, it instead turns to intensive development in ways that lead to a phase transition. But there is another parallel, Bauwens argues: each systemic decline and phase transition is associated with an “exodus” of labor:

The first transition: Rome to feudalism

At some point in its evolution (3rd century onwards?), the Roman empire ceases to expand (the cost of maintaining empire and expansion exceeds its benefits). No conquests means a drying up of the most important raw material of a slave economy, i.e. the slaves, which therefore become more ‘expensive’. At the same time, the tax base dries up, making it more and more difficult to maintain both internal coercion and external defenses. It is in this context that Perry Anderson mentions for example that when Germanic tribes were about to lay siege to a Roman city, they would offer to free the slaves, leading to an exodus of the city population. This exodus and the set of difficulties just described set off a reorientation of some slave owners, who shift to the system of coloni, i.e. serfs. I.e. slaves are partially freed, can have families, can produce for themselves and have villages, giving the surplus to the new domain holders. Hence, the phase transition goes something like this: 1) systemic crisis; 2) exodus; 3) mutual reconfiguration of the classes....

Hypothesis of a third transition: capitalism to peer to peer

Again, we have a system faced with a crisis of extensive globalization, where nature itself has become the ultimate limit. Its way out, cognitive capitalism, shows itself to be a mirage. What we have then is an exodus, which takes multiple forms: precarity and flight from the salaried condition; disenchantment with the salaried condition and a turn towards passionate production. The formation of communities and commons of shared knowledge, code and design show themselves to be a superior mode of social and economic organization. The exodus into peer production creates a mutual reconfiguration of the classes. A section of capital becomes netarchical and ‘empowers and enables peer production’, while attempting to extract value from it, but thereby also building the new infrastructures of cooperation. [709]

As we saw in earlier chapters, economic downturns tend to accelerate the expansion of the custom industrial periphery at the expense of the mass-production core; they also accelerate the shift from wage labor to self-employment or informal production outside the cash nexus. James O’Connor described the process in the economic stagnation of the 1970s and 1980s: “the accumulation of stocks of means and objects of reproduction within the household and community took the edge off the need for alienated labor.”

Labor-power was hoarded through absenteeism, sick leaves, early retirement, the struggle to reduce days worked per year, among other ways. Conserved labor-power was then expended in subsistence production.... The living economy based on non- and anti-capitalist concepts of time and space went underground in the reconstituted household; the commune; cooperatives; the single-issue organization; the self-help clinic; the solidarity group. Hurrying along the development of the alternative and underground economies was the growth of underemployment... and mass unemployment associated with the crisis of the 1980s. “Regular” employment and union-scale work contracted, which became an incentive to develop alternative, localized modes of production.... ...New social relationships of production and alternative employment, including the informal and underground economies, threatened not only labor discipline, but also capitalist markets.... Alternative technologies threatened capital’s monopoly on technological development... Hoarding of labor-power threatened capital’s domination of production. Withdrawal of labor-power undermined basic social disciplinary mechanisms.... [710]

And back in the recession of the early eighties, Samuel Bowles and Herbert Gintis speculated that the “reserve army of the unemployed” was losing some of its power to depress wages. They attributed this to the “partial deproletarianization of wage labor” (i.e., the reduced profile of wage labor alone as the basis of household subsistence). Bowles and Gintis located this reduced dependency largely in the welfare state, which seems rather quaint to anyone who has since lived through the Reagan and Clinton years. [711] But the partial shift in value creation from paid employment to the household and social economies, which we have seen in the past decade, fully accords with the same principle.

Dante-Gabryell Monson speculated on the possibility that the open manufacturing movement was benefiting from the skills of corporate tech people underemployed in the current downturn, or even from their deliberate choice to hoard labor:

Is there a potential scenario for a brain drain from corporations to intentional peer producing networks? ....Can part-time, non-paid (in mainstream money) “hobby” work in open, diy, collaborative convergence spaces become an argument for long term material security of the participating peer towards his/her family? Hacker spaces seem to be convergence spaces for open source programmers, and possibly more and more other artists, open manufacturing, diy permaculture...? Can we expect a “Massive Corporate Dropout”... to drain into such diy convergence and interaction spaces? Can “Corporate Dropouts” help finance new open p2p infrastructures? Is there an increase of part-time “Corporates,” working part time in open p2p? Would such a transition, potentially a part time “co-working / co-living” space, be a convergence “model” and scenario some of us would consider working on?... I personally observe some of my friends working for money as little as possible, sometimes one or two months a year, and spending the rest of their time working on their own projects. [712]

The main cause for the apparently stabilizing level of unemployment in the present recession, despite a decrease in the number of employed, is that so many “discouraged workers” have disappeared from the unemployment rolls altogether. At the same time, numbers for self-employment are continuing to rise.

We [Canadians] lost another 45,000 jobs in July, but the picture is much worse on closer examination. There were 79,000 fewer workers in paid jobs compared to June, while self-employment rose by 35,000. This was on top of another big jump in self-employment of 37,000 last month. Put it all together and the picture is of large losses in paid jobs, with the impact on the headline unemployment rate cushioned by workers giving up the search for jobs or turning to self-employment. [713]

A recent article in the Christian Science Monitor discussed the rapid growth of the informal economy, even as the formal economy and employment within it shrink (Friedrich Schneider, a scholar who specializes in the shadow economy, expects it to grow at least five percent this year). Informal enterprise is mushrooming among the unemployed and underemployed of the American underclass: street vendors of all kinds (including clothing retail), unlicensed moving services consisting of a pickup truck and cell phone, people selling food out of their homes, etc.

And traditional small businesses in permanent buildings resent the hell out of it (if you ever saw that episode of The Andy Griffith Show where established retailer Ben Weaver tries to shut down Emmett’s pushcart, you get the idea).

“Competition is competition,” says Gene Fairbrother, the lead small-business adviser in Dallas for the National Association for the Self-Employed. But competition from producers who don’t pay taxes and licensing fees isn’t fair to the many struggling small businesses who play by the rules. Mr. Fairbrother says he’s seen an increase in the number of callers to his Shop Talk show who ask about starting a home-based business, and many say they’re working in a salon and would rather work out of their homes or that they want to start selling food from their kitchens. Businesses facing this price pressure should promote the benefits of regulation, he advises, instead of trying to get out from under it.

Uh huh. Great “benefits” if you’re one of the established businesses that uses the enormous capital outlays for rent on dedicated commercial real estate, industrial-sized ovens and dishwashers, licensing fees, etc., to crowd out competitors. Not so great if you’re one of the would-be microentrepreneurs forced to pay artificially inflated overhead on such unnecessary costs, or one of the consumers who must pay a price with such overhead factored in. Parasitism generally has much better benefits for the tapeworm than for the owner of the colon.

Fortunately, in keeping with our themes of agility and resilience throughout this book, microentrepreneurs tend to operate on a small scale beneath the radar of the government’s taxing, regulatory and licensing authorities. In most cases, the cost of catching a small operator with a small informal client network is simply more than it’s worth.

The Internal Revenue Service or local tax authorities would have to track down thousands of elusive small vendors and follow up for payment to equal, by one estimate, the $100 million a year that the US could gain by taxing several hundred holders of Swiss and other foreign bank accounts. [714]

So we can expect the long-term structural reduction in employment and the shortage of liquidity, in the current Great Recession or Great Malaise, to lead to rapid growth of an informal economy based on the kinds of household microenterprises we described above. Charles Hugh Smith, after considering the enormous fixed costs of conventional businesses and the inevitability of bankruptcy for businesses with such high overhead in a period of low sales, draws the conclusion that businesses with low fixed costs are the wave of the future. Here is his vision of the growing informal sector of the future:

The recession/Depression will cut down every business paying high rent and other fixed costs like a razor-sharp scythe hitting dry corn stalks....

...[H]igh fixed costs will take down every business which can’t remake itself into a low-fixed-cost firm....

For the former employees, the landscape is bleak: there are no jobs anywhere, at any wage....

So how can anyone earn a living in The End of Work? Look to Asia for the answer. The MSM snapshot of Asia is always of glitzy office towers in Shanghai or a Japanese factory or the docks loaded with containers: the export machine. But if you actually wander around Shanghai (or any city in Japan, Korea, southeast Asia, etc.) then you find the number of people working in the glitzy office tower is dwarfed by the number of people making a living operating informal businesses.

Even in high-tech, wealthy Japan, tiny businesses abound. Wander around a residential neighborhood and you’ll find a small stall fronting a house staffed by a retired person selling cigarettes, candy and soft drinks. Maybe they only sell a few dollars’ worth of goods a day, but it’s something, and in the meantime the proprietor is reading a magazine or watching TV.

In old Shanghai, entire streets are lined with informal vendors. Some are the essence of enterprise: a guy buys a melon for 40 cents, cuts it into 8 slices and then sells the slices for 10 cents each. Gross profit, 40 cents.

In Bangkok, such areas actually have two shifts of street vendors: one for the morning traffic, the other for the afternoon/evening trade. The morning vendors are up early, selling coffee, breakfasts, rice soup, etc. to workers and school kids. By 10 o’clock or so, they’ve folded up and gone home. That clears the way for the lunch vendors, who have prepared their food at home and brought it to sell. In some avenues, a third shift comes in later to sell cold drinks, fruit and meat sticks as kids get out of school and workers head home.

Fixed costs of these thriving enterprises: a small fee to some authority, an old cart and umbrella—and maybe a battered wok or ice chest.

So this is what I envision happening as the Depression drives standard-issue high-fixed cost “formal” enterprises out of business in the U.S.:

The mechanic who used to tune your (used) vehicle for $300 at the dealership (now gone) tunes it up in his home garage for $120—parts included.

The gal who cut your hair for $40 at the salon now cuts it at your house for $10.

The chef who used to cook at the restaurant that charged $60 per meal now delivers a gourmet plate to your door for $10 each.

The neighbor kids’ lemonade stand is now a permanent feature; you pay 50 cents for a lemonade or soft drink instead of $3 at Starbucks.

Used book sellers spread their wares on the sidewalk, or in fold-up booths; for reasons unknown, one street becomes the “place to go buy used books.”

The neighborhood jazz guy/gal sets up and plays with his/her pals in the backyard; donations welcome.

The neighborhood chips in a few bucks each to make it worth a local Iraqi War vet’s time to keep an eye on things.

When your piece-of-crap Ikea desk busts, you call a guy who can fix it for $10 (glue, clamps, a few ledger strips and screws) rather than go blow $50 on another particle board P.O.C. which will bust anyway. (oh, and you don’t have the $50 anyway.)

The guy with a Dish runs cables to the other apartments in his building for a few bucks each.

One person has an “unlimited” Netflix account, and everyone pays him/her a buck a week to get as many movies as they want (he/she burns a copy of course).

The couple with the carefully tended peach or apple tree bakes 30 pies and trades them for vegetables, babysitting, etc. [715]

The crushing costs of formal business (State and local government taxes and junk fees rising to pay for unaffordable pensions, etc.) and the implosion of the debt-bubble economy will drive millions into the informal economy of barter, trade and “underground” (cash) work.

As small businesses close their doors and corporations lay off thousands, the unemployed will of necessity shift their focus from finding a new formal job (essentially impossible for most) to fashioning a livelihood in the informal economy.

One example of the informal economy is online businesses—people who make a living selling used items on eBay and other venues. Such businesses can be operated at home and do not require storefronts, rent to commercial landlords, employees, etc., and because they don’t require a formal presence then they also fly beneath all the government junk fees imposed on formal businesses. I have mentioned such informal businesses recently, and the easiest way to grasp the range of possibilities is this: whatever someone did formally, they can do informally.

Chef had a high fixed-cost restaurant which bankrupted him/her? Now he/she prepares meals at home and delivers them to neighbors/old customers for cash. No restaurant, no sky-high rent, no employees, no payroll taxes, no business licenses, inspection fees, no sales tax, etc. Every dime beyond the cost of food and utilities to prepare the meals stays in Chef’s pocket rather than going to the commercial landlords and local government via taxes and fees. All the customers who couldn’t afford $30 meals at the restaurant can afford $10. Everybody wins except commercial landlords (soon to be bankrupt) and local government (soon to be insolvent). How can you bankrupt all the businesses and not go bankrupt yourself? As long as Chef reports net income on Schedule C, he/she is good to go with Federal and State tax authorities. [And if Chef doesn’t, fuck ‘em.]

Now run the same scenario for mechanics, accountants, therapists, even auto sales—just rent a house with a big yard or an apartment with a big parking lot and away you go; the savvy entrepreneur who moves his/her inventory can stock a few vehicles at a time. No need for a huge lot, high overhead, employees or junk fees. It’s cash and carry. Lumber yard? Come to my backyard lot. Whatever I don’t have I can order from a jobber and have delivered to your site.

This is the result of raising the fixed costs of starting and running a small business to such a backbreaking level that few formal businesses can survive. [716]

Appendix: Case Studies in the Coordination of Networked Fabrication and Open Design

1. Open Source Ecology/Factor e Farm. Open Source Ecology, with its experimental demo site at Factor e Farm, is focused on developing the technological building blocks for a resilient local economy.

We are actively involved in demonstrating the world’s first replicable, post-industrial village. We take the word replicable very seriously—we do not mean a top-down funded showcase—but one that is based on ICT, open design, and digital fabrication—in harmony with its natural life support systems. As such, this community is designed to be self-reliant, highly productive, and sufficiently transparent so that it can truly be replicated in many contexts—whether it’s parts of the package or the whole. Our next frontier will be education to train Village Builders—just as we’re learning how to do it from the ground up. [717]

Open Source Ecology’s latest core message is “Building the world’s first replicable, open source, modern off-grid global village—to transcend survival and evolve to freedom.”... Replicable means that the entire operation can be copied and ‘replicated’ at another location at low cost. Open source means that the knowledge of how it works and how to make it is documented to the point that others can “make it from scratch.” It can also be changed and added to as needed....

Permafacture: A car is a temporarily useful consumer product—eventually it breaks down and is no longer useful as a car. The same is true for almost any consumer product—they are temporary, and when they break down they are no longer useful for their intended purpose. They come from factories that use resources from trashing ecosystems and using lots of oil. Even the “green” ones. Most consumer food is grown on factory farms using similar processes, and resulting in similar effects. When the resources or financing for those factories and factory farms dries up they stop producing, and all the products and food they made stop flowing into the consumer world. Consumers are dependent on these products and food for their very survival, and every product and food they buy from these factories contributes to the systems that are destroying the ecosystems that they will need to survive when finances or resources are interrupted. The more the consumers buy, the more dependent they are on the factories consuming and destroying the last of the resources left in order to maintain their current easy and dependent survival. These factories are distributed all over the world, and need large amounts of cheap fuel to move the products to market through the global supply and production chain, trashing ecosystems all along the way. The consumption of the products and food is completely disconnected from their production and so consumers do not actually see any of these connections or their interruptions as the factories and supply chains try hard to keep things flowing smoothly, until things reach their breaking point and the supply of products to consumers is suddenly interrupted. Open Source Ecology aims to create the means of production and reuse on a small local scale, so that we can produce the machines and resources that make survival trivial without being dependent on global supply and production chains, trashing ecosystems, and cheap oil. [718]

The focus of OSE is to secure “right livelihood,” according to founder Marcin Jakubowski, who cites Vinay Gupta’s “The Unplugged” as a model for achieving it:

The focus of our Global Village Construction program is to deploy communities that live according to the intention of right livelihood. We are considering the ab initio creation of nominally 12-person communities, by networking and marketing this Buy Out at the Bottom (BOAB) package, at a fee of approximately $5k to participants. Buying Out at the Bottom is a term that I borrowed from Vinay Gupta in his article about The Unplugged—where unplugging means the creation of an independent life-support infrastructure and financial architecture—a society within society—which allowed anybody who wanted to “buy out” to “buy out at the bottom” rather than “buying out at the top.” Our Global Village Construction program is an implementation of The Unplugged lifestyle. With 12 people buying out at $5k each, that is $60k seed infrastructure capital. We have an option to stop feeding invading colonials, from our own empire-building governments to slave goods from China. Structurally, the more self-sufficient we are, the less we have to pay for our own enslavement—through education that dumbs us down to producers in a global workforce—through taxation that funds rich people’s wars of commercial expansion—through societal engineering and PR that makes the quest for an honest life dishonorable if we can’t keep up with the Joneses. [719]

Several of the most important projects interlock to form an “OSE Product Ecology.” [720] For example, the LifeTrac Open Source Tractor acts as prime mover for Fabrication (i.e., the machine shop, in which the Multi-Machine features prominently), as well as for the Compressed Earth Block Press and the Sawmill, which in turn are the basic tools for housing construction. The LifeTrac also functions, of course, as a tractor for hauling and powering farm machinery.

Like LifeTrac, the PowerCube—a modular power-transmission unit—is a multi-purpose mechanism designed to work with several of the other projects.

Power Cube is our open source, self-contained, modular, interchangeable, hydraulic power unit for all kinds of power equipment. It has an 18 hp gasoline engine coupled to a hydraulic pump, and it will later be powered by a flexible-fuel steam engine. Power Cube will be used to power MicroTrac (under construction) and it is the power source for the forthcoming CEB Press Prototype 2 adventures. It is designed as a general power unit for all devices at Factor e Farm, from the CEB press, power take-off (PTO) generator, heavy-duty workshop tools, even to the LifeTrac tractor itself. Power Cube will have a quick attachment, so it can be mounted readily on the quick attach plate of LifeTrac. As such, it can serve as a backup power source if the LifeTrac engine goes out.... The noteworthy features are modularity, hydraulic quick-couplers, lifetime design, and design-for-disassembly. Any device can be plugged in readily through the quick couplers. It can be maintained easily because of its transparency of design, ready access to parts, and design for disassembly. It is a major step towards realizing the true, life-size Erector Set or Lego Set of heavy-duty, industrial machinery in the style of Industrial Swadeshi. [721]

A universal mechanical power source is one of the key components of the Global Village Construction Set—the set of building blocks for creating resilient communities. The basic concept is that instead of using a dedicated engine on a particular powered device—which means hundreds of engines required for a complete resilient community—you need one (or a few) power unit. If this single power unit can be coupled readily to the powered device of interest, then we have the possibility of this single power unit being interchangeable between an unlimited number of devices. Our implementation of this is the hydraulic PowerCube—whose power can be tapped simply by attaching 2 hydraulic hoses to a device of interest. A 3/4″ hydraulic hose... can transfer up to 100 horsepower in the form of usable hydraulic fluid flow. [722]
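The arithmetic behind that last claim is worth making explicit: hydraulic power is simply pressure times flow. A minimal sketch using the standard shop formula (hp = psi × gpm / 1714); the pressure and flow figures are illustrative assumptions, not OSE specifications:

```python
# Hydraulic power delivered through a hose is pressure times flow.
# Standard shop formula: hp = psi * gpm / 1714.

def hydraulic_hp(pressure_psi: float, flow_gpm: float) -> float:
    """Hydraulic power in horsepower from pressure (psi) and flow (gpm)."""
    return pressure_psi * flow_gpm / 1714.0

# At an assumed 3,000 psi working pressure, a flow of roughly 57 gpm
# (plausible for a 3/4-inch hose) carries on the order of 100 hp.
print(f"{hydraulic_hp(3000, 57):.0f} hp")  # ~100 hp
```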

Among projects that have reached the prototype stage, the foremost is the Compressed Earth Block Press, which can be built for $5000—some 20% of the price of the cheapest commercial competitor. [723] In field testing, the CEB Press demonstrated the capability of producing a thousand blocks in eight hours, on a day with bad weather (the expected norm in good weather is 1500 a day). [724] On August 20, 2009, Factor e Farm announced completion of a second prototype, its most important new feature being an extendable hopper that can be fed directly by a tractor loader. Field testing is expected to begin shortly. [725]

The effective speed of the CEB Press was recently increased by the prototyping of a complementary product, the Soil Pulverizer. Initial testing achieved 5 tons per hour soil throughput, while The Liberator CEB press requires about 1.5 tons of soil per hour.... Stationary soil pulverizers comparable in throughput to ours cost over $20k. Ours cost $200 in materials—which is not bad in terms of 100-fold price reduction. The trick to this feat is modular design. We are using components that are already part of our LifeTrac infrastructure. The hydraulic motor is our power take-off (PTO) motor, the rotor is the same tiller that we made last year—with the tiller tines replaced by pulverizer tines. The bucket is the same standard loader bucket that we use for many other applications.... It is interesting to compare this development to our CEB work from last year—given our lesson that soil moving is the main bottleneck in earth building. It takes 16 people, 2 walk-behind rototillers, many shovels and buckets, plus backbreaking labor—to load our machine as fast as it can produce bricks. We can now replace this number of people with 1 person—by mechanizing the earth moving work with the tractor-mounted pulverizer. In a sample run, it took us about 2 minutes to load the pulverizer bucket—with soil sufficient for about 30 bricks. Our machine produces 5 bricks per minute—so we have succeeded in removing the soil-loading bottleneck from the equation. This is a major milestone for our ability to do CEB construction. Our results indicate that we can press 2500 bricks in an 8 hour day—with 3 people. [726]
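Taking the post’s own figures at face value, the claim is internally consistent: at five bricks a minute the press consumes soil at less than a third of the pulverizer’s output, so soil loading stops being the binding constraint. A quick sanity check, sketched using only the numbers quoted above:

```python
# Back-of-the-envelope check on the throughput figures quoted above.
PULVERIZER_TONS_PER_HR = 5.0   # soil pulverizer output (quoted)
PRESS_SOIL_TONS_PER_HR = 1.5   # soil consumed by the CEB press (quoted)
BRICKS_PER_MINUTE = 5.0        # press production rate (quoted)

margin = PULVERIZER_TONS_PER_HR / PRESS_SOIL_TONS_PER_HR
bricks_per_8hr_day = BRICKS_PER_MINUTE * 60 * 8

print(f"pulverizer outpaces the press {margin:.1f}x over")     # ~3.3x
print(f"theoretical 8-hour output: {bricks_per_8hr_day:.0f}")  # 2400
# 2,400 bricks at the rated 5/minute squares with the quoted claim of
# roughly 2,500 bricks per 8-hour day with a 3-person crew.
```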

In October Jakubowski announced plans to release the CEB Beta Version 1.0 on November 1, 2009. The product as released will have a five block per minute capacity and include automatic controls (the software for which is being released on an open-source basis). [727] The product was released, on schedule, on November 1. [728] Shortly thereafter, OSE was considering options for commercial production of the CEB Press as a source of revenue to fund new development projects. [729]

The MicroTrac, a walk-behind tractor, has also been prototyped. Its parts, including the Power Cube, wheel, quick-attach motor, and cylinder, are interchangeable with LifeTrac and other machines. “We can take off the wheel motor from MicroTrac, and use it to power shop tools.” [730]

OSE’s planned facilities for replication and machining are especially exciting, including a 3-D printer and a Multi-Machine with added CNC controls.

There is a significant set of open source technologies available for rapid prototyping in small workshops. By combining 3D printing with low-cost metal casting, and following with machining using a computer controlled Multimachine, the capacity arises to make rapid prototypes and products from plastic and metal. This still does not address the feedstocks used, but it is a practical step towards the post-centralist, participatory, distributive economy with industrial swadeshi on a regional scale.... The interesting part is that the budget is $500 for RepRap, $200 for the casting equipment, and $1500 for a Multimachine with CNC control added. Using available knowhow, this can be put together in a small workshop for a total of about $2200—for full, LinuxCNC computer controlled rapid fabrication in plastic and metal. Designs may be downloaded from the internet, and local production can take place based on global design. This rapid fabrication package is one of our near-term (one year) goals. The research project in this area involves the fabrication and integration of the individual components as described.... Such a project is interesting from the standpoint of localized production in the context of the global economy—for creating significant wealth in local economies. This is what we call industrial swadeshi. For example, I see this as the key to casting and fabricating low-cost steam engines ($300 for 5 hp) for the Solar Turbine—as one example of Gandhi’s mass production philosophy. [731]

The entire Fab Lab project aims to produce “the following equipment infrastructure, in order of priorities...”:

300 lb/hour steel melting Foundry—$1000

Multimachine-based Lathe, mill, and drill, with addition of CNC control—$1500

CNC Torch Table (plasma and oxyacetylene), adaptable to a router table

RepRap or similar 3D printer for printing casting molds—$400

Circuit fabrication—precise xyz router table

Open Source Wire Feed Welder [732]

Lawrence Kincheloe moved to Factor e Farm in August 2009, under contract to build the torch table over August and September. [733] He ended his visit in October with work on the table incomplete, owing to “a host of fine tuning and technical difficulties which all have solutions but were not addressable in the time left.” [734] Nevertheless, the table was featured in the January issue of MAKE Magazine as RepTab (the name reflects the fact that—aside from motors and microcontrollers—it can replicate itself):

One of the interesting features of RepTab is that the cutting head is interchangeable (router, plasma, oxyacetylene, laser, water jet, etc.), making it versatile and extremely useful. “Other machines make that difficult without major modifications,” says Marcin Jakubowski, the group’s founder and director. “We can make up to 10-foot-long windmill blades if we modify the table as a router table. That’s pretty useful.” [735]

Since then, Factor e Farm has undertaken to develop an open-source lathe, as well as a 100-ton ironworker punching/shearing/bending machine; Jakubowski estimates an open-source version can be built for a few hundred dollars in materials, compared to $10,000 for a commercial version. [736]

In December 2009 Jakubowski announced that a donor had committed $5,000 to a project for developing an open-source induction furnace for smelting, and solicited bids for the design contract.

You may have heard us talk about recasting civilization from scrap metal. Metal is the basis of advanced civilization. Scrap metal in refined form can be mined in abundance from heaps of industrial detritus in junkyards and fence rows. This can help us produce new metal in case of any unanticipated global supply chain disruptions. This will have to do until we can take mineral resources directly and smelt them to pure metal. I look forward to the day when our induction furnace chews up our broken tractors and cars – and spits them out in fluid form. This leads to casting useful parts, using molds printed by open source ceramic printers – these exist. This also leads to hot metal processing, the simplest of which is bashing upon an anvil – and the more refined of which is rolling. Can we do this to generate metal bar and sheet in a 4000 square foot workshop planned for Factor e Farm? We better. Technology makes that practical, though this is unheard-of outside of centralized steel mills. We see the induction furnace, hot rolling, forging, casting, and other processes as critical to the fabrication component of the Global Village Construction Set. We just got a $5k commitment to open-source this technology. [737]

In January, Jakubowski reported initial efforts to build a lathe-drill-mill multimachine (not CNC, apparently) powered by the LifeTrac motor. [738]

In addition to the steel casting functions of the Foundry, Jakubowski ultimately envisions the production of aluminum from clay as a key source of feedstock for relocalized production. As an alternative to “high-temperature, energy-intensive smelting processes” involving aluminum oxide (bauxite), he proposes “extracting aluminum from clays using baking followed by an acid process.” [739]

OSE’s flexible and digital fabrication facility is intended to produce a basic set of sixteen products, five of which are the basic set of means of fabrication themselves:

Boundary layer turbine—simpler and more efficient alternative to most external and internal combustion engines and turbines, such as gasoline and diesel engines, Stirling engines, and air engines. The only more efficient energy conversion devices are bladed turbines and fuel cells.

Solar concentrators – alternative heat collector to various types of heat generators, such as petrochemical fuel combustion, nuclear power, and geothermal sources

Babington [740] and other fluid burners – alternative heat source to solar energy, internal combustion engines, or nuclear power

Flash steam generators – basis of steam power

Wheel motors — low-speed, high-torque electric motors

Electric generators – for generating the highest grade of usable energy: electricity

Fuel alcohol production systems – proven biofuel of choice for temperate climates

Compressed wood gas – proven technology; cooking fuel; usable in cars if compressed

Compressed Earth Block (CEB) press – high performance building material

Sawmill – production of dimensional lumber

Aluminum from clay – production of aluminum from subsoil clays

Means of fabrication:

CNC Multimachine [741] – mill, drill, lathe, metal forming, other grinding/cutting

XYZ-controlled torch and router table – can accommodate an acetylene torch, plasma cutter, router, and possibly CO2 laser cutter diodes

Metal casting equipment – various metal parts

Plastic extruder [742] – plastic glazing and other applications

Electronics fabrication – oscilloscope, multimeter, circuit fabrication; specific power electronics products include battery chargers, inverters, converters, transformers, solar charge controllers, PWM DC motor controllers, multipole motor controllers. [743]

The Solar Turbine, as it was initially called, uses the sun’s heat to power a steam-driven generator, as an alternative to photovoltaic electricity. [744] It has since been renamed the Solar Power Generator, because of the choice to use a simple steam engine as the heat engine instead of a Tesla turbine. [745]

The Steam Engine, still in the design stage, is based on a simple and efficient design for a 3 kW engine, with an estimated bill of parts of $250. [746]

The Sawmill, which can be built with under $2000 in parts (a “Factor 10 cost reduction”), has “the highest production rate of any small, portable sawmills.” [747]

OSE’s strategy is to use the commercial potential of the first products developed to finance further development. As we saw earlier, Jakubowski speculates that a fully equipped digital fabrication facility could turn out CEB presses or sawmills with production rates comparable to those of commercial manufacturing firms, cutting out all the metal parts for the entire product with a turn-around time of days. The CEBs and sawmills could be sold commercially, in that case, to finance development of other products. [748]

And in fact, Jakubowski has made a strategic decision to give priority to developing the CEB Press as rapidly as possible, in order to leverage the publicity and commercial potential as a source of future funding for the entire project. [749]

OSE’s goal of replicability, once the first site is completed with a full range of production machinery and full product line, involves hosting interns who wish to replicate the original experiment at other sites, and using fabrication facilities to produce duplicate machinery for the new sites. [750] Jakubowski recently outlined a more detailed timeline:

Based on our track record, the schedule may be off by up to twenty years. Thus, the proposed timeline can be taken as either entertainment or a statement of intent—depending on how much one believes in the project.

2008 — modularity and low cost features of open source products have been demonstrated with LifeTrac and CEB Press projects
2009 — First product release
2010 — TED Fellows or equivalent public-relations fellowship to propel OSE to high visibility
2011 — $10k/month funding levels achieved for scaling product development effort
2012 — Global Village Construction Set finished
2013 — First true post-scarcity community built
2014 — OSE University (immersion training) established, to be competitive with higher education but with an applied focus
2015 — OSE Fellows program started (the equivalent of TED Fellows, but with explicit focus of solving pressing world issues)
2016 — First productive recursion completed (components can be produced locally anywhere)
2017 — Full meterial [sic] recursion demonstrated (all materials become producible locally anywhere)
2018 — Ready self-replicability of resilient, post-scarcity communities demonstrated
2019 — First autonomous republic created, along the governance principles of Leashless
2020 — Ready replicability of autonomous republics demonstrated [751]

In August 2009, serious longstanding tensions at OSE came to a head, as the result of personality conflicts beyond the scope of this work, leading to the departure of members Ben De Vries and Jeremy Mason.

Since then, the project has given continuing signs of being functional and on track. As of early October 2009, Lawrence Kincheloe had completed torch table Prototype 1 (pursuant to his contract described above), and was preparing to produce a debugged Prototype 2 (with the major portion of its components produced with Prototype 1). [752] As recounted above, OSE also went into serial production of the CEB Press and has undertaken new projects to build the open-source lathe and ironworker.

2. 100kGarages. Another very promising open manufacturing project, besides OSE’s, is 100kGarages—a joint effort of ShopBot and Ponoko. ShopBot is a maker of CNC routers. [753] Ponoko is both a network of designers and a custom machining service: it produces items to customer designs uploaded via the Internet and ships them by mail, and it also maintains a large preexisting library of member product designs available for production. [754] 100kGarages is a nationwide American network of fabbers aimed at “distributed production in garages and small workshops” [755]: linking separate shops with partial tool sets together for the division of labor needed for networked manufacturing, enabling shops to contract for the production of specific components, or putting customers in contact with fabbers who can produce their designs. Ponoko and ShopBot, in a joint announcement, described it as helping 20,000 creators meet 6,000 fabricators, and specifically putting them in touch with fabricators in their own communities. [756] As described at the 100kGarages site:

100kGarages.com is a place for anyone who wants to get something made (“Makers”) to link up with those having tools for digital fabrication (“Fabbers”) used to make parts or projects.... At the moment, the structure is in place to for [sic] Makers to find Fabbers and to post jobs to the Fabber community.... We’re working hard to provide software and training resources to help those who want to design for Fabbers, whether doing their own one-off projects or to use the network of Fabbers for distributed manufacturing of products (as done by the current gallery of designers on the Ponoko site). In the first few weeks there have been about 40 Fabbers who’ve joined up. In the beginning, we are sticking to Fabbers who are ShopBotters. This makes it possible to have some confidence in the credibility and capability of the Fabber, without wasting enormous efforts on certification.... But before long, we expect to open up 100kGarages.com to all digital fabrication tools, whether additive or subtractive. We’re hoping to grow to a couple of hundred Fabbers over the next few months, and this should provide a geographical distribution that brings fabrication capabilities pretty close to everyone and helps get the system energized. [757]

As we all are becoming environmentally aware, we realize that our environment just can’t handle transporting all our raw materials across the country or around the world, just to ship them back as finished products. These new technologies make practical and possible doing more of our production and manufacturing in small distributed facilities, as small as our garages, and close to where the product is needed. Most importantly our new methods for collaboration and sharing means that we don’t have to do it all by ourselves ... that designers with creative ideas but without the capability to see their designs become real can work with fabricators that might not have the design skills that they need but do have the equipment and the skills and orientation that’s needed to turn ideas into reality … that those who just want to get stuff made or get their ideas realized can work with the Makers/designers who can help them create the plans and the local fabricators who fulfill them. To get this started ShopBot Tools, makers of popular tools for digital fabrication, and Ponoko, who are reinventing how goods are designed, made and distributed, are teaming up to create a network of workshops and designers, with resources and infrastructure to help facilitate “rolling up our sleeves and getting to work.” Using grass roots enterprise and ingenuity this community can help get us back in action, whether it’s to modernize school buildings and infrastructure, develop energy-saving alternatives, or simply produce great new products for our homes and businesses. There are thousands of ShopBot digital-fabrication (CNC) tools in garages and small shops across the country, ready to locally fabricate the components needed to address our energy and environmental challenges and to locally produce items needed to enhance daily living, work, and business. Ponoko’s web methodologies offer people who want to get things made an environment that integrates designers and inventors with ShopBot fabricators. Multiple paths for getting from idea to object, part, component, or product are possible in a dynamic network like this, where ideas can be realized in immediate distributed production and where production activities can provide feedback to improve designs. [758]

Although all ShopBot CNC router models are quite expensive compared to the reverse-engineered machines produced by hardware hackers (most models are in the $10,000–$20,000 range, and the two cheapest are around $8,000), ShopBot’s recent open-sourcing of its CNC control code received much fanfare in the open manufacturing community. [759]

And as the 100kGarages site says, they plan to open up the network to machines other than routers, and to “home-brew routers” other than ShopBot, as the project develops. Ponoko already had a similar networking project among owners of CNC laser cutters. [760] As a first step toward its intention to “expand to all kinds of digital fabrication tools,” in October ShopBot ordered a MakerBot kit with a view to investigating the potential for incorporating additive fabrication into the mix. [761] 100kGarages announced in January 2010 it had signed up 150 Fabbers, and was still developing plans to add other digital tools like cutting tables and 3-D printers to its network. [762] In February they elaborated on their plans, specifying that 100kGarages would add the owners of other digitally controlled tools, with the same certification mechanism for reliability they already used for the ShopBot:

The plan we’ve come up with is to work with other Digital Fabrication Equipment manufacturers and let them do the same sort of ownership verification steps that ShopBot has done with the original Fabbers. If a person with a Thermwood (or an EZRouter, Universal Laser, etc) wants to join 100kGarages they can have the manufacturer of their tool verify that they are an owner. We’ll work out a simple process for this verification and will work to develop relationships with other manufacturers over time to make the process as painless as possible and to let them get involved if they would like.

Plans to incorporate homebrew tools are also in the works, although much less far along than plans for commercially manufactured tools.

It also leaves a question of the home-made and home-brew Fabbers. We appreciate that some of these tools can be pretty good. There may be other kinds of user organizations for some types of tools that could help with certification, but we’ve got to admit that we don’t know exactly how we’ll deal with it yet. It may be as simple as “send us a picture of yourself, your machine, and a portfolio of work”, or we may have to develop some sort of certification method involving cutting a sample. We’ll let you know when we come up with something, but we’ll try to make it as painless for you (and for us) as possible. [763]

Interestingly, this was almost identical to the relocalized manufacturing model described by John Robb:

It is likely that by 2025 the majority of the “consumer” goods you purchase/acquire will be manufactured locally. However, this doesn’t likely mean what you think it means. The process will look like this:

You will purchase/trade for/build a design for the product you desire through online trading/sharing systems. That design will be in a standard file format and the volume of available designs for sale, trade, or shared openly will be counted in the billions.

You or someone you trust/hire will modify the design of the product to ensure it meets your specific needs (or customize it so it is uniquely yours). Many products will be smart (in that they include hardware/software that makes them responsive), and programmed to your profile.

The refined product design will be downloaded to a small local manufacturing company, co-operative, or equipped home for production. Basic feedstock materials will be used in its construction (from metal to plastic powders derived from generic sources, recycling, etc.). Delivery is local and nearly costless.

The relocalization of manufacturing will be promoted, among other things, Robb says, by the fact that

[l]ocal fabrication will get cheap and easy. The cost of machines that can print, lathe, etch, cut materials to produce three dimensional products will drop to affordable levels (including consumer level versions). This sector is about to pass out of its “home brew computer club phase” and rocket to global acceptance. [764]
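Read as a system design, Robb’s three steps form a simple pipeline: obtain a design file, customize it, route it to a nearby fabricator. A purely hypothetical sketch to make the workflow concrete; none of the names or types below come from any actual system:

```python
# Hypothetical model of the design-file -> customization -> local-fab
# pipeline described above. Everything here is illustrative only.
from dataclasses import dataclass, field

@dataclass
class Design:
    name: str
    cad_file: str                       # a standard interchange format
    parameters: dict = field(default_factory=dict)

def customize(design: Design, **overrides) -> Design:
    """Step 2: adapt a purchased or shared design to the user's needs."""
    return Design(design.name, design.cad_file,
                  {**design.parameters, **overrides})

def route_to_fabber(design: Design, nearby_fabbers: list[str]) -> str:
    """Step 3: hand the refined design to a local shop (pick the first)."""
    return f"{design.name} -> {nearby_fabbers[0]}"

# Step 1 would be downloading Design records from an online library.
chair = Design("garden chair", "chair.step", {"seat_height_mm": 430})
custom = customize(chair, seat_height_mm=460)
print(route_to_fabber(custom, ["Smith Garage CNC", "Elm St Makerspace"]))
```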

It’s impossible to overstate the revolutionary significance of this development. As Lloyd Alter put it, “This really does change everything.” [765]

Back in January, Eric Hunting considered the slow takeoff of the open manufacturing/Maker movement on the Open Manufacturing email list.

There seem to be a number of recurring questions that come up—openly or in the back of people’s minds—seeming to represent key obstacles or stumbling blocks in the progress of open manufacturing or Maker culture....

Why are Makers still fooling around with toys and mash-ups and not making serious things? (short answer: like early computer hackers lacking off-the-shelf media to study, they’re still stuck reverse-engineering the off-the-shelf products of existing industry to learn how the technology works, and hacking is easier than making something from scratch)

Why are Makers rarely employing many of the modular building systems that have been around since the start of the 20th century? Why do so few tech-savvy people seem to know what T-slot is when it’s ubiquitous in industrial automation? Why little use of Box Beam/Grid Beam when it’s cheap, easy, and has been around since the 1960s? Why does no one in the world seem to know the origin and name of the rod and clamp framing system used in the RepRap? (short answer: no definitive sources of information)

Why are ‘recipes’ in places like Make and Instructibles most [sic] about artifacts and rarely about tools and techniques? (short answer: knowledge of these is being disseminated ad hoc)

Why is it so hard to collectivize support and interest for open source artifact projects, and why are forums like Open Manufacture spending more time in discussion of theory rather than nuts & bolts making? (short answer: no equivalent of Source Forge for a formal definition of hardware projects—though this is tentatively being developed—and no generally acknowledged definitive channel of communication about open manufacturing activity)

Why are Fab Labs not self-replicating their own tools? (short answer: no comprehensive body of open source designs for those tools and no organized effort to reverse-engineer off-the-shelf tools to create those open source versions)

Why is there no definitive ‘users manual’ for the Fab Lab, its tools, and common techniques? (short answer: no one has bothered to write it yet)

Why is there no Fab Lab in my neighborhood? Why so few university Fab Labs so far? Why is it so hard to find support for Fab Lab in certain places even in the western world? (short answer: 99% of even the educated population still doesn’t know what the hell a Fab Lab is or what the tools it’s based on are)

Why do key Post-Industrial cultural concepts remain nascent in the contemporary culture, failing to coalesce into a cultural critical mass? Why are entrepreneurship, cooperative entrepreneurship, and community support networks still left largely out of the popular discussion on recovery from the current economic crash? Why do advocates of Post-Industrial culture and economics still often hang their hopes on nanotechnology when so much could be done with the technology at hand? (short answer: no complete or documented working models to demonstrate potential with)

Are you, as I am, starting to see a pattern here? It seems like there’s a Missing Link in the form of a kind of communications or media gap. There is Maker media—thanks largely to the cultural phenomenon triggered by Make magazine. But it’s dominated by ad hoc individual media produced and published on-line to communicate the designs for individual artifacts while largely ignoring the tools.

People are learning by making, but they never seem to get the whole picture of what they potentially could make because they aren’t getting the complete picture of what the tools are and what they’re capable of. We seem to basically be in the MITS Altair, Computer Shack, Computer Faire, Creative Computing, 2600 era of independent industry. A Hacker era. Remember the early days of the personal computer? You had these fairs, users groups, and computer stores like Computer Shack basically acting like ad hoc ashrams of the new technology because there were no other definitive sources of knowledge. This is exactly what Maker fairs, Fab Labs, and forums like this one are doing.... There are a lot of parallels here to the early personal computer era, except for a couple of things: there’s no equivalent of Apple (yet...), no equivalent of the O’Reilly Nutshell book series, no “##### For Dummies” books. [766]

100kGarages is a major step toward the critical mass Hunting wrote about. Although there’s as yet no Apple of CNC tools (in the sense of the CAD-file equivalent of a user-friendly graphical user interface), there is now an organized network of entrepreneurs with a large repository of open designs. As Michel Bauwens puts it, “Suddenly, anyone can pick one of 20,000 Ponoko Designs (or build one themselves) and get it cut out and built just about anywhere.” [767] This is essentially what Marcin Jakubowski referred to above, when he speculated on distributed open source manufacturing shops linked to a “global repository of shared open source designs.” To get back to Lloyd Alter’s theme (“This changes everything”):

Ponoko is the grand idea of digital design and manufacture; they make it possible for designers to meet customers, “where creators, digital fabricators, materials suppliers and buyers meet to make (almost) anything.” It is a green idea, producing only when something is wanted, transporting ideas instead of physical objects. Except there wasn’t a computerized router or CNC machine on every block, no 3D Kinko’s where you could go and print out your object like a couple of photocopies. Until now, with the introduction of 100K Garages, a joint venture between Ponoko and ShopBot, a community of over six thousand fabricators. Suddenly, anyone can pick one of 20,000 Ponoko Designs (or build one themselves) and get it cut out and built just about anywhere. [768]

The answer to Hunting’s question about cooperative entrepreneurship seems to have come to a large extent from outside the open manufacturing movement, as such. And ShopBot and Ponoko, if not strictly speaking part of the committed open manufacturing movement, have grafted it onto their business model. This is an extension to the physical realm of a phenomenon Bauwens remarked on in the realm of open-source software:

...[M]ost peer production allies itself with an ecology of businesses. It is not difficult to understand why this is the case. Even at very low cost, communities need a basic infrastructure that needs to be funded. Second, though such communities are sustainable as long as they gain new members to compensate the loss of existing contributors; freely contributing to a common project is not sustainable in the long term. In practice, most peer projects follow a 1-10-99 rule, with a one percent consisting of very committed core individuals. If such a core cannot get funded for its work, the project may not survive. At the very least, such individuals must be able to move back and forth from the commons to the market and back again, if their engagement is to be sustainable. Peer participating individuals can be paid for their work on developing the first iteration of knowledge or software, to respond to a private corporate need, even though their resulting work will be added to the common pool. Finally, even on the basis of a freely available commons, many added value services can be added, that can be sold in the market. On this basis, cooperative ecologies are created. Typical in the open source field for example, is that such companies use a dual licensing strategy. Apart from providing derivative services such as training, consulting, integration etc., they usually offer an improved professional version with certain extra features, that are not available to non-paying customers. The rule here is that one percent of the customers pay for the availability of 99% of the common pool. Such model also consists of what is called benefit sharing practices, in which open source companies contribute to the general infrastructure of cooperation of the respective peer communities. Now we know that the world of free software has created a viable economy of open source software companies, and the next important question becomes: Can this model be exported, wholesale or with adaptations, to the production of physical goods? [769]

I think it’s in process of being done right now.

Jeff Vail expressed some misgivings about Ponoko, wondering whether it could go beyond the production of trinkets and produce primary goods essential to daily living. 100kGarages’ partnership with PhysicalDesignCo [770] (a group of MIT architects who design digitally prefabricated houses), announced in early October, may go a considerable way toward addressing that concern. PhysicalDesignCo will henceforth contract the manufacture of all its designs to 100kGarages. [771]

3. Assessment. Franz Nahrada, of the Global Village movement, has criticized Factor e Farm in terms of its relationship to a larger, surrounding networked economy. However, he downplayed the importance of autarky compared to that of cross-linking between OSE and the rest of the resilient community movement.

I really think we enter a period of densification and intensive cross-linking between various projects. I would like to consider Factor_E_Farm the flagship project for the Global Village community even though I am not blind to some shortcomings. I talked to many people and they find and constantly bring up some points that are easy to critisize [sic]. But I want to make clear: I also see these points and they all can be dealt with and are IMHO of minor importance.

the site itself seems not really being locally embedded in regional development initiatives, but rather a “spaceship from Mars” for the surrounding population. The same occurred to me in Tamera 10 years ago when I stayed at a neighboring farmhouse with a very benevolent Portuguese lady who spoke perfect German (because she was the widow of a German diplomat). She was helpful im [sic] mediating, but still I saw the community through the “lenses of outsiders” and I saw how much damage too much cultural isolation can do to a village building effort and how many opportunities are missed that way. We must consider the local and the regional as equally important as the global, in fact the global activates the local and regional potential. It makes us refocus on our neighbors because we bring in a lot of interesting stuff for them — and they might do the same for us....

the overall OSE project is radically geared towards local autonomy—something which sometimes seemingly cuts deeply into efficiency and especially life quality. I think that in many respects the Factor e Farm zeal, the backbreaking heroism of labor, the choice of the hard bottom-up approach, is more a symbolic statement—and the end result will differ a lot. In the end, we might have regional cooperatives, sophisticated regional division of labor and a size of operations that might still be comparable to small factories; especially when it comes to metal parts, standard parts of all kinds, modules of the toolkit etc. But the statement “we can do it ourselves” is an important antidote to today’s absolutely distorted system of technology and competences.

We cannot really figure out what is the threshold where this demonstration effort becomes unmanageable; I think that it is important to start with certain aspects of autarky, with the idea of partial autarky and self-reliance, but not with the idea of total self-sufficiency. This demonstration of aspectual autarky is important in itself and gives a strong message: we can build our own tractor. we can produce our own building materials. we can even build most of our own houses. [772]

So OSE is performing a valuable service in showing the outer boundaries of what can be done within a resilient, self-sufficient community. In a total systemic collapse, without (for example) any microchip foundries, the CNC tools in the Fab Lab will—obviously—be unsustainable on a long-term basis. But assuming that such resilient communities are part of a larger network with some of Nahrada’s “regional division of labor” and “small factories” (including, perhaps, a decentralized, recycling-based rubber industry), OSE’s toolkit will result in drastic increases in the degree of local independence and the length of periods a resilient local economy can weather on its own resources.

100kGarages and OSE may be converging toward a common goal from radically different starting points. That is, 100kGarages may be complementary to OSE in terms of Nahrada’s criticism. If 100kGarages’ networked distributed manufacturing infrastructure is combined with OSE’s open-source design ecology, with designs aimed specifically at bootstrapping technologies for maximum local resilience and economic autonomy, the synergies are potentially enormous. Imagine if OSE products like the LifeTrac tractor/prime mover, sawmill, CEB, etc., were part of the library of readily available designs that could be produced through 100kGarages.

Chapter Six: Resilient Communities and Local Economies

We already saw, in Chapter Five, the economy of networked micromanufacturing that’s likely to emerge from the decline of the state capitalist system. We further saw in Chapter Three that there is a cyclical tendency of industrial production to shift from the mass-production core to the craft periphery in economic downturns. And we’ve witnessed just such a long-term structural shift during the stagnation of the past thirty years.

There is a similar historic connection between severe economic downturns, with significant periods of unemployment, and the formation of barter networks and resilient communities. If the comparison to manufacturing holds, given the cumulative effect of all of state capitalism’s crises of sustainability which we examined in Paper No. 4, we can expect to see a long-term structural shift toward resilient communities and relocalized exchange. John Robb suggests that, given the severity of the present “Great Recession,” it may usher in a phase transition in which the new society crystallizes around resilient communities as a basic building block; resilient communities will play the same role in resolving the current “Time of Troubles” that the Keynesian state did in resolving the last one.

Historically, economic recessions that last longer than a year have durations/severities that can be plotted as power law distributions.... Given that we are already over a year into this recession, it implies that we are really into black swan territory (unknown and extreme outcomes) in regards to our global economy’s current downturn and that no estimates of recovery times or ultimate severity based on historical data of past recessions apply anymore. This also means that the system has exceeded its ability to adapt using standard methods (that shouldn’t be news to anyone). It may be even more interesting than that. The apparent non-linearity and turbulence of the current situation suggests we may be at a phase transition (akin to the shift in the natural world from ice to water).... As a result, a new control regime may emerge. To get a glimpse of what is in store for us, we need to look at the sources of emerging order (newly configured dissipative and self-organizing systems/networks/orgs that are better adapted to the new non-linear dynamics of the global system). In [the Great Depression] the sources of emerging organizational order were reconfigured nation-states that took a more active role in economics (total war economies during peacetime). In this situation, we are seeing emerging order at the local level: small resilient networks/communities reconfigured to handle this level of systemic environmental non-linearity and survive/thrive.... Further, it appears that these emerging communities and networks are well suited to drawing on a great behavioral shift occurring at the individual level, already evident in all economic statistics, that emphasizes thrift/investment rather than consumption/gambling (the middle class consumer is becoming extinct). So what does this mean? These new communities will eventually start to link up, either physically or virtually..., into network clusters. IF the number of links in the largest cluster reaches some critical proportion of the entire system’s nodes..., there will be a phase transition as the entire system shifts to the new mode of operation. In other words, resilient communities might become the new configuration of the global economic system. [773]
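Robb’s power-law premise can be made concrete. A minimal sketch contrasting a heavy (Pareto) tail with a thin (exponential) one; the tail exponent and the one-year scale are illustrative assumptions, not fitted values:

```python
# With a Pareto (power-law) tail, extreme durations remain live
# possibilities even conditional on what has already happened; a
# thin-tailed world discounts them far more steeply.
import math

def pareto_survival(t: float, t_min: float = 1.0, alpha: float = 1.5) -> float:
    """P(duration > t years) for a Pareto tail starting at t_min."""
    return (t_min / t) ** alpha if t > t_min else 1.0

# Given a recession has already run 1 year, odds it runs past 5:
print(f"Pareto:      {pareto_survival(5.0):.3f}")   # ~0.089
print(f"exponential: {math.exp(-(5 - 1)):.3f}")     # ~0.018, 1-year scale
# Under the heavy tail a 5-year outcome is roughly 5x likelier, and
# historical averages say little about how bad things can get.
```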

Robb’s phase transition resembles Jeff Vail’s description of the gradually shifting correlation of forces between the old legacy system and his “Diagonal Economy”:

The diagonal economy might rise amidst the decline of our current system—the “Legacy System.” Using America as an example (but certainly translatable to other regions and cultures), more and more people will gradually realize that the “plausible promise” once offered by the American nation-state is no longer plausible. A decent education and the willingness to work 40 hours a week will no longer provide the “Leave it to Beaver” quid pro quo of a comfortable suburban existence and a secure future for one’s children. As a result, our collective willingness to agree to the conditions set by this Legacy System (willing participation in the system in exchange for this once “plausible promise”) will wane. Pioneers—and this is certainly already happening—will reject these conditions in favor of a form of networked civilizational entrepreneurship. While this is initially composed of professionals, independent sales people, internet-businesses, and a few market gardeners, it will gradually transition to take on a decidedly “third world” flavor of local self-sufficiency and import-replacement (leveraging developments in distributed, open-source, and peer-to-peer manufacturing) in the face of growing ecological and resource pressures. People will, to varying degrees, recognize that they cannot rely on the cradle-to-cradle promise of lifetime employment by their nation state. Instead, they will realize that they are all entrepreneurs in at least three—and possibly many more—separate enterprises: one’s personal brand in interaction with the Legacy System (e.g. your conventional job), one’s localized self-sufficiency business (ranging from a back yard tomato plant to suburban homesteads and garage workshops), and one’s community entrepreneurship and network development. As the constitutional basis of our already illusory Nation-State system... erodes further, the focus on #2 (localized self-sufficiency) and #3 (community/networking) will gradually spread and increase in importance, though it may take much more than my lifetime to see them rise to general prominence in replacement of the Nation-State system. [774]

In this chapter we will examine the general benefits of resilient local economies, consider some notable past examples of the phenomenon, and then survey some current experiments in resilient community which are especially promising as building blocks for a post-corporate society.

A. Local Economies as Bases of Independence and Buffers Against Economic Turbulence

One virtue of the local economy is its insulation from the boom-bust cycle of the larger money economy.

Paul Goodman wrote that a “tight local economy” was essential for maintaining “a close relation between production and consumption,”

for it means that prices and the value of labor will not be so subject to the fluctuations of the vast general market. A man’s work, meaningful during production, will somewhat carry through the distribution and what he gets in return. That is, within limits, the nearer a system gets to simple household economy, the more it is an economy of specific things and services that are bartered, rather than an economy of generalized money. [775]

The greater the share of consumption needs met through informal (barter, household and gift) economies, the less vulnerable individuals are to the vagaries of the business cycle, and the less dependent on wage labor as well.

The ability to meet one’s own consumption needs with one’s own labor, using one’s own land and tools, is something that can’t be taken away by a recession or a corporate decision to offshore production to China (or just to downsize the work force and speed up work for the survivors). The ability to trade one’s surplus for other goods, with a neighbor also using his own land and tools, is also much more secure than a job in the capitalist economy.

Ralph Borsodi described the cumulative effect of the concatenation of uncertainties in an economy of large-scale factory production for anonymous markets:

Surely it is plain that no man can afford to be dependent upon some other man for the bare necessities of life without running the risk of losing all that is most precious to him. Yet that is precisely and exactly what most of us are doing today. Everybody seems to be dependent upon some one else for the opportunity to acquire the essentials of life. The factory-worker is dependent upon the man who employs him; both of them are dependent upon the salesmen and retailers who sell the goods they make, and all of them are dependent upon the consuming public, which may not want, or may not be able, to buy what they may have made. [776]

Imagine, on the other hand, an organic truck farmer who barters produce for clothing from a home seamstress living nearby. Neither the farmer nor the seamstress can dispose of her full output in this manner, or meet all of her subsistence needs. But both together have a secure and reliable source for all their sewing and vegetable needs, and a reliable outlet for the portion of the output of each that is consumed by the other. The more trades and occupations brought into the exchange system, the greater the portion of total consumption needs of each that can be reliably met within a stable sub-economy. At the same time, the less dependent each person is on outside wage income, and the more prepared to weather a prolonged period of unemployment in the outside wage economy.

Subsistence, barter, and other informal economies, by reducing the intermediate steps between production and consumption, also reduce the contingency involved in consumption. If the realization of capital follows a circuit, as described by Marx in Capital, the same is also true of labor. And the more steps in the circuit, the more likely the circuit is to be broken, and the realization of labor (the transformation of labor into use-value, through the indirect means of exchanging one’s own labor for wages, and exchanging those wages for use-value produced by someone else’s labor) to fail. Marx, in The Poverty of Philosophy, pointed out long ago that the disjunction of supply from demand, which resulted in the boom-bust cycle, was inevitable given large-scale production under industrial capitalism:

...[This true proportion between supply and demand] was possible only at a time when the means of production were limited, when the movement of exchange took place within very restricted bounds. With the birth of large-scale industry this true proportion had to come to an end, and production is inevitably compelled to pass in continuous succession through vicissitudes of prosperity, depression, crisis, stagnation, renewed prosperity, and so on. Those who... wish to return to the true proportion of production, while preserving the present basis of society, are reactionary, since, to be consistent, they must also wish to bring back all the other conditions of industry of former times. What kept production in true, or more or less true, proportions? It was demand that dominated supply, that preceded it. Production followed close on the heels of consumption. Large-scale industry, forced by the very instruments at its disposal to produce on an ever-increasing scale, can no longer wait for demand. Production precedes consumption, supply compels demand. [777]

In drawing the connection between supply-push distribution and economic crisis, Marx was quite perceptive. Where he went wrong was his assumption that large-scale industry, and production that preceded demand on the push model, were necessary for a high standard of living (“the present basis of society”).
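The fragility of long circuits can also be put arithmetically: if each link in the chain from production to final consumption completes independently with probability p, a circuit of n links completes with probability p^n. A minimal sketch; the 0.95 per-step reliability is an illustrative assumption:

```python
# Each added intermediary multiplies in another chance of failure:
# a circuit of n links, each completing with probability p, completes
# end-to-end with probability p**n.

def circuit_reliability(p_per_step: float, n_steps: int) -> float:
    """Probability that every link in an n-step circuit completes."""
    return p_per_step ** n_steps

for n in (1, 2, 5, 10):
    print(f"{n:>2} steps: {circuit_reliability(0.95, n):.2f}")
# 1 step:   0.95  (direct barter between neighboring producers)
# 10 steps: 0.60  (wages -> employer -> wholesaler -> retailer -> ...)
```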

Leopold Kohr, in the same vein, compared local economies to harbors in a storm in their insulation from the business cycle and its extreme fluctuations of demand. [778]

Ebenezer Howard, in his vision of Garden Cities, argued that the overhead costs of risk and distribution (as well as rent, given the cheap rural land on which the new towns would be built) would be far lower for both industry and retailers serving the less volatile local markets.

They might even sell considerably below the ordinary rate prevailing elsewhere, but yet, having an assured trade and being able very accurately to gauge demand, they might turn their money over with remarkable frequency. Their working expenses, too, would be absurdly small. They would not have to advertise for customers, though they would doubtless make announcements to them of any novelties; but all that waste of effort and of money which is so frequently expended by tradesmen in order to secure customers or to prevent their going elsewhere, would be quite unnecessary. [779]

His picture of the short cycle time and minimal overhead resulting from the gearing of supply to demand, by the way, is almost a word-for-word anticipation of lean principles.

We saw, in previous chapters, the way that lean production overcomes bottlenecks in supply by scaling production to demand and siting production as close as possible to the market. The small neighborhood shop and the household producer apply the same principle, on an even higher level. So the more decentralized and relocalized the scale of production, the easier it is to overcome the divorce of production from demand—the central contradiction of mass production. These remarks by Gandhi are relevant:

Question: “Do you feel, Gandhiji, that mass production will raise the standard of living of the people?”

“I do not believe in it at all, there is a tremendous fallacy behind Mr. Ford’s reasoning. Without simultaneous distribution on an equally mass scale, the production can result only in a great world tragedy.” “Mass production takes no note of the real requirement of the consumer. If mass production were in itself a virtue, it should be capable of indefinite multiplication. But it can be definitely shown that mass production carries within it its own limitations. If all countries adopted the system of mass production, there would not be a big enough market for their products. Mass production must then come to a stop.” “I would categorically state my conviction that the mania for mass production is responsible for the world crises. If there is production and distribution both in the respective areas where things are required, it is automatically regulated, and there is less chance for fraud, none for speculation.”...

Question: Have you any idea as to what Europe and America should do to solve the problem presented by too much machinery?

“You see,” answered Gandhiji, “that these nations are able to exploit the so-called weaker or unorganized races of the world. Once those races gain this elementary knowledge and decide that they are no more going to be exploited, they will simply be satisfied with what they can provide themselves. Mass production, then, at least where the vital necessities are concerned, will disappear.”...

Question: “But even these races will require more and more goods as their needs multiply.”

“They will them [sic] produce for themselves. And when that happens, mass production, in the technical sense in which it is understood in the West, ceases.”

Question: “You mean to say it becomes local?”

“When production and consumption both become localized, the temptation to speed up production, indefinitely and at any price, disappears.”

Question: If distribution could be equalized, would not mass production be sterilized of its evils?

“No. The evil is inherent in the system. Distribution can be equalized when production is localized; in other words, when the distribution is simultaneous with production. Distribution will never be equal so long as you want to tap other markets of the world to dispose of your goods.”

Question: Then, you do not envisage mass production as an ideal future of India?

“Oh yes, mass production, certainly. But not based on force. After all, the message of the spinning wheel is that. It is mass production, but mass production in people’s own homes. If you multiply individual production to millions of times, would it not give you mass production on a tremendous scale? But I quite understand that your ‘mass production’ is a technical term for production by the fewest possible number through the aid of highly complicated machinery. I have said to myself that that is wrong. My machinery must be of the most elementary type which I can put in the homes of the millions. Under my system, again, it is labour which is the current coin, not metal. Any person who can use his labour has that coin, has wealth. He converts his labour into cloth, he converts his labour into grain. If he wants paraffin oil, which he cannot himself produce, he uses his surplus grain for getting the oil. It is exchange of labour on free, fair and equal terms—hence it is no robbery. You may object that this is a reversion to the primitive system of barter. But is not all international trade based on the barter system? Concentration of production ad infinitum can only lead to unemployment.” [780]

Gandhi’s error was assuming that localized and household production equated to low-tech methods, and that technological advancement was inevitably associated with large scale and capital intensiveness. As we saw in Chapter Five, nothing could be further from the truth.

Communities of locally owned small enterprises are much healthier economically than communities that are colonized by large, absentee-owned corporations. For example, a 1947 study compared two communities in California: one a community of small farms, and the other dominated by a few large agribusiness operations. The small farming community had higher living standards, more parks, more stores, and more civic, social and recreational organizations. [781]

Bill McKibben made the same point in Deep Economy. Most money spent at a national corporation is quickly sucked out of the local economy, while money spent at local businesses circulates repeatedly within it and leaks to the outside much more slowly. According to a study in Vermont, substituting local production for only ten percent of imported food would create $376 million in new economic output, including $69 million in wages from over 3,600 new jobs. A similar study in Britain found that ten pounds spent at a local business benefited the local economy to the tune of 25 pounds, compared to only 14 pounds for the same amount spent at a chain store.

The farmer buys a drink at the local pub; the pub owner gets a car tune-up at the local mechanic; the mechanic brings a shirt to the local tailor; the tailor buys some bread at the local bakery; the baker buys wheat for bread and fruit for muffins from the local farmer. When these businesses are not owned locally, money leaves the community at every transaction. [782]
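The arithmetic behind these multipliers is a simple geometric series: if some fraction of each pound is re-spent locally in every round, the total local activity generated per pound of initial spending is the sum of those rounds. Here is a minimal sketch, assuming a constant retention rate; the rate values below are back-solved from the cited figures for illustration, not taken from the studies themselves:

```python
# A minimal illustration of the local multiplier effect, assuming a
# constant "retention rate": the fraction of each unit of spending
# that is re-spent locally in the next round.

def local_multiplier(retention_rate: float, rounds: int = 100) -> float:
    """Total local activity per unit of initial spending: the sum of
    the geometric series 1 + r + r^2 + ... over re-spending rounds."""
    total, remaining = 0.0, 1.0
    for _ in range(rounds):
        total += remaining
        remaining *= retention_rate
    return total

# A retention rate of 0.6 reproduces the 2.5x figure from the British
# study (10 pounds -> 25); roughly 0.29 reproduces the 1.4x chain-store
# figure (10 pounds -> 14).
print(round(local_multiplier(0.6), 2))   # 2.5
print(round(local_multiplier(0.29), 2))  # 1.41
```

In closed form the multiplier is 1/(1 − r), so even a modest difference in how much of each transaction stays local compounds into a large difference in total community income.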

B. Historical Models of Resilient Community

The prototypical resilient community, in the mother of all “Times of Troubles,” was the Roman villa as it emerged in the late Empire and early Dark Ages. In Republican times, villas had been estates on which the country homes of the Senatorial class were located, often self-sufficient in many particulars and resembling villages in their own right. During the stresses of the “long collapse” in the fifth century, and in the Dark Ages following the fall of the Western Empire, the villas became stockaded fortresses, often with villages of peasants attached.

Since the rise of industrial capitalism, economic depression and unemployment have been the central motive forces behind the creation of local exchange systems and the direct production for barter by producers.

A good example is the Owenites’ use of the social economy as a base of independence from wage labor. According to E. P. Thompson, “[n]ot only did the benefit societies on occasion extend their activities to the building of social clubs or alms-houses; there are also a number of instances of pre-Owenite trade unions when on strike, employing their own members and marketing the product.” [783] G. D. H. Cole describes the same phenomenon:

As the Trade Unions grew after 1825, Owenism began to appeal to them, and especially to the skilled handicraftsmen.... Groups of workers belonging to a particular craft began to set up Co-operative Societies of a different type—societies of producers which offered their products for sale through the Co-operative Stores. Individual Craftsmen, who were Socialists, or who saw a way of escape from the exactions of the middlemen, also brought their products to the stores to sell. [784] ...[This pattern of organization was characterized by] societies of producers, aiming at co-operative production of goods and looking to the Stores to provide them with a market. These naturally arose first in trades requiring comparatively little capital or plant. They appealed especially to craftsmen whose independence was being threatened by the rise of factory production or sub-contracting through capitalist middlemen. The most significant feature of the years we are discussing was the rapid rise of this... type of Co-operative Society and the direct entry of the Trades Unions into Co-operative production. Most of these Societies were based directly upon or at least very closely connected with the Unions of their trades, ...which took up production as a part of their Union activity—especially for giving employment to their members who were out of work or involved in trade disputes.... [785]

The aims and overall vision of such organization were well expressed in the rules of the Ripponden Co-operative Society, formed in 1832 in a weaving village in the Pennines:

The plan of co-operation which we are recommending to the public is not a visionary one but is acted upon in various parts of the Kingdom; we all live by the produce of the land, and exchange labour for labour, which is the object aimed at by all Co-operative societies. We labourers do all the work and produce all the comforts of life;—why then should we not labour for ourselves and strive to improve our conditions. [786]

Cooperative producers’ need for an outlet led to Labour Exchanges, where workmen and cooperatives could directly exchange their product so as “to dispense altogether with either capitalist employers or capitalist merchants.” Exchange was based on labor time. “Owen’s Labour Notes for a time not only passed current among members of the movement, but were widely accepted by private shopkeepers in payment for goods.” [787]

The principle of labor-based exchange was employed on a large scale. In 1830 the London Society opened an Exchange Bazaar for exchange of products between cooperative societies and individuals. [788] The Co-operative Congress, held at Liverpool in 1832, included a long list of trades among its participants (the B’s alone had eleven). The National Equitable Labour Exchange, organized in 1832–33 in Birmingham and London, was a venue for the direct exchange of products between craftsmen, using Labour Notes as a medium of exchange. [789]

The Knights of Labor, in the 1880s, undertook a large-scale effort at organizing worker cooperatives. Their fate is an illustration of the central role of capital outlay requirements in determining the feasibility of self-employment and cooperative employment.

The first major wave of worker cooperatives, according to John Curl, was under the auspices of the National Trades’ Union in the 1830s. [790] Like the Owenite trade union cooperatives in Britain, they were mostly undertaken in craft employments for which the basic tools of the trade were relatively inexpensive. From the beginning, worker cooperatives were a frequent resort of striking workers. In 1768 twenty striking journeyman tailors in New York, the first striking wage-workers in American history, set up their own cooperative shop. Journeyman carpenters striking for a ten-hour day in Philadelphia, in 1791, formed a cooperative (with the ten-hour day they sought) and undercut their masters’ price by 25%; they disbanded the cooperative when they went back to work. The same was done by shoemakers in Baltimore, 1794, and Philadelphia, 1806. [791] This was a common pattern in early American labor history, and the organization of cooperatives moved from being purely a strike tactic to providing an alternative to wage labor. [792] It was feasible because most forms of production were done by groups of artisan laborers using hand tools.

By the 1840s, the rise of factory production with expensive machinery had largely put an end to this possibility. As the prerequisites of production became increasingly unaffordable, the majority of the population was relegated to wage labor with machinery owned by someone else. [793]

Most attempts at worker-organized manufacturing, after the rise of the factory system, failed on account of the capital outlays required. For example, when manufacturers refused to sell farm machinery to the Grangers at wholesale prices, the Nebraska Grange undertook its own design and manufacturing of machinery. (How’s that for a parallel to modern P2P ideas?) Its first attempt, a wheat head reaper, sold at half the price of comparable models and drove down prices on farm machinery in Nebraska. The National Grange planned a complete line of farm machinery, but most Grange manufacturing enterprises failed to raise the large sums of capital needed. [794]

The Knights of Labor cooperatives were on shaky ground in the best of times. Many of them were founded during strikes, started with “little capital and obsolescent machinery,” and lacked the capital to invest in modern machinery. Subjected to economic warfare by organized capital, the network of cooperatives disintegrated during the post-Haymarket repression. [795]

Ebenezer Howard’s Garden Cities were a way of “buying out at the bottom” (a phrase coined by Vinay Gupta—about whom more later): building the cities on cheap rural land and using it with maximum efficiency. The idea was that workers would take advantage of the rent differential between city and country, make more efficient use of underused land than the great landlords and capitalists could, and use the surplus income from production in the new cities (collected as a single tax on the site value of land) for quickly paying off the original capital outlays. [796] Howard also anticipated something like counter-economics: working people living within his garden cities, working through building societies, friendly societies, mutuals, consumer and worker cooperatives, etc., would find ways to employ themselves and each other outside the wage system.

It is idle for working-men to complain of this self-imposed exploitation, and to talk of nationalizing the entire land and capital of this country under an executive of their own class, until they have first been through an apprenticeship at the humbler task of organising men and women with their own capital in constructive work of a less ambitious character.... The true remedy for capitalist oppression where it exists, is not the strike of no work, but the strike of true work, and against this last blow the oppressor has no weapon. If labour leaders spent half the energy in co-operative organisation that they now waste in co-operative disorganisation, the end of our present unjust system would be at hand. [797]

Howard, heavily influenced by Kropotkin’s vision of the decentralized production made possible by small-scale electrically powered machinery, [798] wrote that “[t]own and country must be married, and out of this joyous union will spring a new hope, a new life, a new civilization.” [799] Large markets, warehouses, and industry would be located along a ring road on the outer edge of each town, with markets and industry serving the particular ward in which its customers and workers lived. [800] A cluster of several individual towns (the “social city” of around a quarter million population in an area of roughly ten miles square) would ultimately be linked together by “[r]apid railway transit,” much like the old mixed-use railroad suburbs which today’s New Urbanists propose to resurrect and link together with light rail. Larger industries in each town would specialize in producing for the entire cluster those commodities for which greater economies of scale were necessary.

In the Great Depression, the same principles used by the Owenites and Knights of Labor were applied in the Homestead Unit project in the Dayton area, an experiment with household and community production in which Borsodi played a prominent organizing role. Despite some early success, it was eventually killed off by Harold Ickes, a technocratic liberal who wanted to run the homestead project along the same centralist lines as the Tennessee Valley Authority. The Homestead Units were built on cheap land in the countryside surrounding Dayton, with a combination of three-acre family homesteads and some division of labor on other community projects. The family homestead included garden, poultry and other livestock, and a small orchard and berry patch. The community provided woodlot and pasture, in addition. [801] A Unit Committee vice president in the project described the economic security resulting from subsistence production:

There are few cities where the independence of a certain sort of citizen has not been brought into relief by the general difficulties of the depression. In the environs of all cities there is the soil-loving suburbanite. In some cases these are small farmers, market gardeners and poultry raisers who try to make their entire living from their little acres. More often and more successful there is a combination of rural and city industry. Some member of the family, while the others grow their crops, will have a job in town. A little money, where wages are joined to the produce of the soil, will go a long way.... When the depression came most of these members of these suburban families who held jobs in town were cut in wages and hours. In many cases they entirely lost their jobs. What, then, did they do?.... The soil and the industries of their home provided them... work and a living, however scant. Except for the comparatively few dollars required for taxes and a few other items they were able, under their own sail, to ride out the storm. The sailing was rough, perhaps; but not to be compared with that in the wreck-strewn town.... Farming as an exclusive business, a full means of livelihood, has collapsed.... Laboring as an exclusive means of livelihood has also collapsed. The city laborer, wholly dependent on a job, is of all men most precariously placed. Who, then, is for the moment safe and secure? The nearest to it is this home and acres-owning family in between, which combines the two. [802]

An interesting experiment in restoring the “circuit of labor” through barter exchange was Depression-era organizations like the Unemployed Cooperative Relief Organization and Unemployed Exchange Association:

...The real economy was still there—paralyzed but still there. Farmers were still producing, more than they could sell. Fruit rotted on trees, vegetables in the fields. In January 1933, dairymen poured more than 12,000 gallons of milk into the Los Angeles City sewers every day. The factories were there too. Machinery was idle. Old trucks were in side lots, needing only a little repair. All that capacity on the one hand, legions of idle men and women on the other. It was the financial casino that had failed, not the workers and machines. On street corners and around bare kitchen tables, people started to put two and two together. More precisely, they thought about new ways of putting two and two together....

In the spring of 1932, in Compton, California, an unemployed World War I veteran walked out to the farms that still ringed Los Angeles. He offered his labor in return for a sack of vegetables, and that evening he returned with more than his family needed. The next day a neighbor went out with him to the fields. Within two months 500 families were members of the Unemployed Cooperative Relief Organization (UCRO). That group became one of 45 units in an organization that served the needs of some 150,000 people. It operated a large warehouse, a distribution center, a gas and service station, a refrigeration facility, a sewing shop, a shoe shop, even medical services, all on cooperative principles. Members were expected to work two days a week, and benefits were allocated according to need....

The UCRO was just one organization in one city. Groups like it ultimately involved more than 1.3 million people, in more than 30 states. It happened spontaneously, without experts or blueprints. Most of the participants were blue collar workers whose formal schooling had stopped at high school. Some groups evolved a kind of money to create more flexibility in exchange. An example was the Unemployed Exchange Association, or UXA, based in Oakland, California....

UXA began in a Hooverville... called “Pipe City,” near the East Bay waterfront. Hundreds of homeless people were living there in sections of large sewer pipe that were never laid because the city ran out of money. Among them was Carl Rhodehamel, a musician and engineer. Rhodehamel and others started going door to door in Oakland, offering to do home repairs in exchange for unwanted items. They repaired these and circulated them among themselves. Soon they established a commissary and sent scouts around the city and into the surrounding farms to see what they could scavenge or exchange labor for. Within six months they had 1,500 members, and a thriving sub-economy that included a foundry and machine shop, woodshop, garage, soap factory, print shop, wood lot, ranches, and lumber mills. They rebuilt 18 trucks from scrap. At UXA’s peak it distributed 40 tons of food a week....

It all worked on a time-credit system.... Members could use credits to buy food and other items at the commissary, medical and dental services, haircuts, and more. A council of some 45 coordinators met regularly to solve problems and discuss opportunities. One coordinator might report that a saw needed a new motor. Another knew of a motor but the owner wanted a piano in return. A third member knew of a piano that was available. And on and on. It was an amalgam of enterprise and cooperation—the flexibility and hustle of the market, but without the encoded greed of the corporation or the stifling bureaucracy of the state.... The members called it a “reciprocal economy.”.... [803]

Stewart Burgess, in a 1933 article, described a day’s produce intake by the warehouse of Unit No. 1 in Compton. It included some fifteen different kinds of fruits and vegetables, from two tons of cabbage and seventy boxes of pears all the way down to a single crate of beets—not to mention a sack of salt. The production facilities, and the waste materials they used as inputs, foreshadow the ideas of Colin Ward, Kirkpatrick Sale and Karl Hess on community warehouses and workshops, discussed in the last chapter:

In this warehouse is an auto repair shop, a shoe-repair shop, a small printing shop for the necessary slips and forms, and the inevitable woodpile where cast-off railroad ties are sawed into firewood. Down the street, in another building, women are making over clothing that has been bartered in. In another they are canning vegetables and fruit—Boy Scouts of the Burbank Unit brought in empty jars by the wagon-load. [804]

Such ventures, like the Knights of Labor cooperatives, were limited by the capital intensiveness of so many forms of production. The bulk of the labor performed within the barter networks was either in return for salvage goods in need of repair, for repairing such goods, or in return for unsold inventories of conventional businesses. When the supply of damaged machinery was exhausted by house-to-house canvassing, and local businesses disposed of their accumulated inventory, barter associations reached their limit. They could continue to function at a fairly low volume, directly undertaking for barter such low-capital forms of production as sewing, gardening on available land, etc., and trading labor for whatever percentage of output from otherwise idle capacity that conventional businesses were willing to barter for labor. But that level was quite low compared to the initial gains from absorbing excess inventory and salvageable machinery in the early days of the system. At most, once barter reached its sustainable limits, it was good as a partial mitigation of the need for wage labor.

But as production machinery becomes affordable to individuals independently of large employers, such direct production for barter will become increasingly feasible for larger and larger segments of the workforce.

The Great Depression was a renaissance of local barter currencies or “emergency currencies,” adopted around the world, which enabled thousands of communities to weather the economic calamity with “the medium of exchange necessary for their activities, to give each other work.” [805]

The revival of barter on the Internet coincides with a new economic downturn, as well. A Craigslist spokesman reported in March 2009 that bartering had doubled on the site over the previous year.

Proposed swaps listed on the Washington area Craigslist site this week included accounting services in return for food, and a woman offering a week in her Hilton Head, S.C., vacation home for dental work for her husband.

Barter websites for exchanging goods and services without cash are proliferating around the world.

With unemployment in the United States and Britain climbing, some people said bartering is the only way to make ends meet. “I’m using barter Web sites just to see what we can do to survive,” said Zedd Epstein, 25, who owned a business restoring historic houses in Iowa until May, when he was forced to close it as the economy soured. Epstein, in a telephone interview, said he has not been able to find work since, and he and his wife moved to California in search of jobs. Epstein said he has had several bartering jobs he found on Craigslist. He drywalled a room in exchange for some tools, he poured a concrete shed floor in return for having a new starter motor installed in his car, and he helped someone set up their TV and stereo system in return for a hot meal. “Right now, this is what people are doing to get along,” said Epstein, who is studying for an electrical engineering degree. “If you need your faucet fixed and you know auto mechanics, there’s definitely a plumber out there who’s out of work and has something on his car that needs to be fixed,” he said. [806]

C. Resilience, Primary Social Units, and Libertarian Values

As the crisis progresses, and with it the gradually increasing underemployment and unemployment and the partial shift of value production from wage labor to the informal sector, we can probably expect to see several converging trends: a long-term decoupling of health care and the social safety net from both state-based and employer-based provision of benefits; shifts toward shorter working hours and job-sharing; and the growth of all sorts of income-pooling and cost-spreading mechanisms in the informal economy.

These latter possibilities include a restored emphasis on mutual aid organizations of the kind described by left-libertarian writers like Pyotr Kropotkin and E. P. Thompson. As Charles Johnson wrote:

It’s likely also that networks of voluntary aid organizations would be strategically important to individual flourishing in a free society, in which there would be no expropriative welfare bureaucracy for people living with poverty or precarity to fall back on. Projects reviving the bottom-up, solidaritarian spirit of the independent unions and mutual aid societies that flourished in the late 19th and early 20th centuries, before the rise of the welfare bureaucracy, may be essential for a flourishing free society, and one of the primary means by which workers could take control of their own lives, without depending on either bosses or bureaucrats. [807]

More fundamentally, they are likely to entail people coalescing into primary social units at the residential level (extended family compounds or multi-family household income-pooling units, multi-household units at the neighborhood level, urban communes and other cohousing projects, squats, and stand-alone intentional communities), as a way of pooling income and reducing costs. As the state’s social safety nets come apart, such primary social units and extended federations between them are likely to become important mechanisms for pooling cost and risk and organizing care for the aged and sick. One early sign of a trend in that direction: multi-generational or extended family households are at a fifty-year high, growing five percent in the first year of the Great Recession alone. [808] Here’s how John Robb describes it:

My solution is to form a tribal layer. Resilient communities that are connected by a network platform (a darknet). A decentralized and democratic system that can provide you a better interface with the dominant global economic system than anything else I can think of. Not only would this tribe protect you from shocks and predation by this impersonal global system, it would provide you with the tools and community support necessary to radically improve how you and your family does [sic] across all measures of consequence. [809]

Poul Anderson, in the fictional universe of his Maurai series, envisioned a post-apocalypse society in the Pacific Northwest coalescing around the old fraternal lodges, with the Northwestern Federation (a polity extending from Alaska through British Columbia down to northern California) centered on lodges rather than geographical subdivisions as the component units represented in its legislature. The lodge emerged as the central social institution during the social disintegration following the nuclear war, much as the villa became the basic social unit of the new feudal society in the vacuum left by the fall of Rome. It was the principal and normal means for organizing benefits to the sick and unemployed, as well as the primary base for providing public services like police and fire protection. [810]

It’s to be hoped that, absent a thermonuclear war, the transition will be a bit less abrupt. Upward-creeping unemployment, the exhaustion of the state’s social safety net, and the explosion of affordable technologies for small-scale production and network organization, taken together, will likely create an environment in which the incentives for widespread experimentation are intense. John Robb speculates on one way these trends may come together:

In order to build out resilient communities there needs to be a business mechanism that can financially power the initial roll-out. Here are some markets that may be serviced by resilient community formation:

An already large and growing group of people that are looking for a resilient community within which to live if the global or US system breaks down (ala the collapse of the USSR/Argentina or worse). Frankly, a viable place to live is a lot better than investing in gold that may not be valuable (gold assumes people are willing to part with what they have).

A larger and growing number of prospective students that want to learn how to build and operate resilient communities (rather than campus experiments and standard classroom blather).

A large and growing group of young people that want to work and live within a resilient community. A real job after school ends.

Triangulating these markets yields the following business opportunity:

The ability of prospective residents of resilient communities to invest a portion of their IRA/401K and/or ongoing contributions in the construction and operation of a resilient community in exchange for home and connections to resilient systems (food, energy, local manufacturing, etc.) within that community.

An educational program, like Gaia University’s collaboration with Factor e Farm, that allows students to get a degree while building out a resilient community (active permaculture/acquaculture plots, micro manufactories, local energy production, etc.). This allows access to government sponsored student debt.

A work study program that allows students of the University to pay off their student debt and make a living doing [so] over a 5 year (flexible) period. IF they want to do that.

I suspect there is a good way to construct a legal business framework that allows this to happen. What would make this even more interesting would be to combine this with a “Freedom” network/darknet that allows ideas to flow freely via an open source approach between active resilient communities on the network. The network would also allow goods and services to flow between sites (via an internal trading mechanism) and also allow these goods and intellectual property (protected by phalanxes of lawyers) to be sold to the outside world (via an Ali Baba approach). At some point, if it is designed correctly, this network could become self-sustaining and able to generate the income necessary to continue a global roll-out by itself. [811]

(All except the “intellectual property” part.)

An article by Reihan Salam in Time Magazine, of all places, put a comparatively upbeat spin on the possibilities:

Imagine a future in which millions of families live off the grid, powering their homes and vehicles with dirt-cheap portable fuel cells. As industrial agriculture sputters under the strain of the spiraling costs of water, gasoline and fertilizer, networks of farmers using sophisticated techniques that combine cutting-edge green technologies with ancient Mayan know-how build an alternative food-distribution system. Faced with the burden of financing the decades-long retirement of aging boomers, many of the young embrace a new underground economy, a largely untaxed archipelago of communes, co-ops, and kibbutzim that passively resist the power of the granny state while building their own little utopias. Rather than warehouse their children in factory schools invented to instill obedience in the future mill workers of America, bourgeois rebels will educate their kids in virtual schools tailored to different learning styles. Whereas only 1.5 million children were homeschooled in 2007, we can expect the number to explode in future years as distance education blows past the traditional variety in cost and quality.

The cultural battle lines of our time, with red America pitted against blue, will be scrambled as Buddhist vegan militia members and evangelical anarchist squatters trade tips on how to build self-sufficient vertical farms from scrap-heap materials. To avoid the tax man, dozens if not hundreds of strongly encrypted digital currencies and barter schemes will crop up, leaving an underresourced IRS to play whack-a-mole with savvy libertarian “hacktivists.” Work and life will be remixed, as old-style jobs, with long commutes and long hours spent staring at blinking computer screens, vanish thanks to ever increasing productivity levels. New jobs that we can scarcely imagine will take their place, only they’ll tend to be home-based, thus restoring life to bedroom suburbs that today are ghost towns from 9 to 5. Private homes will increasingly give way to cohousing communities, in which singles and nuclear families will build makeshift kinship networks in shared kitchens and common areas and on neighborhood-watch duty. Gated communities will grow larger and more elaborate, effectively seceding from their municipalities and pursuing their own visions of the good life.

Whether this future sounds like a nightmare or a dream come true, it’s coming. This transformation will be not so much political as antipolitical. The decision to turn away from broken and brittle institutions, like conventional schools and conventional jobs, will represent a turn toward what military theorist John Robb calls “resilient communities,” which aspire to self-sufficiency and independence. The left will return to its roots as the champion of mutual aid, cooperative living and what you might call “broadband socialism,” in which local governments take on the task of building high-tech infrastructure owned by the entire community. Assuming today’s libertarian revival endures, it’s easy to imagine the right defending the prerogatives of state and local governments and also of private citizens — including the weird ones. This new individualism on the left and the right will begin in the spirit of cynicism and distrust that we see now, the sense that we as a society are incapable of solving pressing problems. It will evolve into a new confidence that citizens working in common can change their lives and in doing so can change the world around them. [812]

I strongly suspect that, in whatever form of civil society stabilizes at the end of our long collapse, the typical person will be born into a world where he inherits a possessory right to some defined share in the communal land of an extended family or cohousing unit, and to some minimal level of support from the primary social unit in times of old age and sickness or unemployment in return for a customarily defined contribution to the common fund in his productive years. It will be a world in which the Amish barn-raiser and the sick benefit societies of Kropotkin and E.P. Thompson play a much more prominent role than Prudential or the anarcho-capitalist “protection agency.”

Getting from here to there will involve a fundamental paradigm shift in how most people think, and the overcoming of centuries’ worth of ingrained habits of thought. It requires a shift from what James Scott, in Seeing Like a State, calls social organizations that are primarily “legible” to the state, to social organizations that are primarily legible or transparent to the people of local communities organized horizontally, and opaque to the state. [813]

The latter kind of architecture, as described by Kropotkin, was what prevailed in the networked free towns and villages of late medieval Europe. The primary pattern of social organization was horizontal (guilds, etc.), with quality certification and reputational functions aimed mainly at making individuals’ reliability transparent to one another. To the state, such local formations were opaque.

With the rise of the absolute state, the primary focus became making society transparent (in Scott’s terminology, “legible”) from above, while horizontal transparency was at best tolerated. Things like the systematic adoption of family surnames that were stable across generations (and the 20th-century follow-up of citizen ID numbers) and the systematic mapping of urban addresses for postal service were all for the purpose of making society transparent to the state. To put it crudely, the state wants to keep track of where its stuff is, same as we do—and we’re its stuff.

Before this transformation, for example, surnames existed mainly for the convenience of people in local communities, so they could tell each other apart. Surnames were adopted on an ad hoc basis for clarification, when there was some danger of confusion, and rarely continued from one generation to the next. If there were multiple Johns in a village, they might be distinguished by trade (“John the Miller”), location (“John of the Hill”), patronymic (“John Richard’s Son”), etc. By contrast, everywhere there have been family surnames with cross-generational continuity, they have been imposed by centralized states as a way of cataloguing and tracking the population—making it legible to the state, in Scott’s terminology. [814]

To accomplish a shift back to horizontal transparency, it will be necessary to overcome a powerful residual cultural habit, among the general public, of thinking of such things through the mind’s eye of the state. E.g., if “we” didn’t have some way of verifying compliance with this regulation or that, some business somewhere might be able to get away with something or other. We must overcome six hundred years or so of almost inbred habits of thought, by which the state is the all-seeing guardian of society protecting us from the possibility that someone, somewhere might do something wrong if “the authorities” don’t prevent it.

In place of this habit of thought, we must think instead of ourselves creating mechanisms on a networked basis, to make us as transparent as possible to each other as providers of goods and services, to prevent businesses from getting away with poor behavior or selling defective merchandise by informing each other, to protect ourselves from fraud, etc. In fact, the creation of such mechanisms—far from making us transparent to the regulatory state—may well require active measures to render us opaque to the state (e.g., encryption, darknets) for protection against attempts to suppress local economic self-organization that runs against the interests of corporate actors.

In other words, we need to lose the centuries-long habit of thinking of “society” as a hub-and-spoke mechanism and viewing the world from the perspective of the hub, and instead think of it as a horizontal network in which we visualize things from the perspective of individual nodes. We need to lose the habit of thought by which transparency from above ever even became perceived as an issue in the first place.
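To make the idea concrete, here is a minimal sketch of one such horizontal mechanism: a shared ledger of peer ratings that members consult before dealing with a provider. The names, the five-point scale, and the simple averaging are illustrative assumptions, not a description of any existing system:

```python
# A minimal sketch of a horizontal reputational mechanism: members
# record their experience with providers, and anyone can query the
# pooled record before trading. All names and scores are illustrative.

from collections import defaultdict

ratings: dict = defaultdict(list)  # provider -> list of (rater, score 1-5)

def rate(rater: str, provider: str, score: int) -> None:
    """Record one member's judgment of a provider."""
    ratings[provider].append((rater, score))

def reliability(provider: str) -> float:
    """The community's pooled judgment: the mean of all peer ratings."""
    scores = [score for _, score in ratings[provider]]
    return sum(scores) / len(scores) if scores else 0.0

rate("alice", "village_mill", 5)
rate("bob", "village_mill", 4)
print(reliability("village_mill"))  # 4.5
```

The point of the design is that such a ledger makes providers transparent to the network’s members while remaining entirely outside the state’s field of vision; nothing in it requires a regulator’s participation.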

This will require, more specifically, overcoming the hostility of conventional liberals who are in the habit of reacting viscerally and negatively, and on principle, to anything not being done by “qualified professionals” or “the proper authorities.”

Arguably conventional liberals, whose thought system originated as the ideology of the managers and engineers who ran the corporations, government agencies, and other giant organizations of the late 19th and early 20th century, have played the same role for the corporate-state nexus that the politiques did for the absolute states of the early modern period.

This is reflected in a common thread running through writers like Andrew Keen, Jaron Lanier, and Chris Hedges, as well as documentary producers like Michael Moore. They share a nostalgia for the “consensus capitalism” of the early postwar period, in which the gatekeepers of the Big Three networks controlled what we were allowed to see and it was just fine for GM to own the whole damned economy—just so long as everyone had a lifetime employment guarantee and a UAW contract.

Paul Fussell, in Bad, ridicules the whole Do-It-Yourself ethos as an endless Sahara of the Squalid, with blue-collar schmoes busily uglifying their homes by taking on projects that should be left to—all together now—the Properly Qualified Professionals.

Keith Olbermann routinely mocks exhortations to charity and self-help, reaching for shitkicking imagery of the nineteenth-century barn-raiser for want of any other comparison to sufficiently get across just how backward and ridiculous that kind of thing really is. Helping your neighbor out directly, or participating in a local self-organized friendly society or mutual, is all right in its own way, if nothing else is available. But it carries the inescapable taint, not only of the quaint, but of the provincial and the picayune—very much like the perception of homemade bread and home-grown veggies promoted in corporate advertising in the early twentieth century, come to think of it. People who help each other out, or organize voluntarily to pool risks and costs, are to be praised—grudgingly and with a hint of condescension—for doing the best they can in an era of relentlessly downscaled social services. But that people are forced to resort to such expedients, rather than meeting all their social safety net needs through one-stop shopping at the Ministry of Central Services office in a giant monumental building with a statue of winged victory in the lobby, a la Brazil, is a damning indictment of any civilized society. The progressive society is a society of comfortable and well-fed citizens, competently managed by properly credentialed authorities, happily milling about like ants in the shadows of miles-high buildings that look like they were designed by Albert Speer. And that kind of H.G. Wells utopia simply has no room for the barn-raiser or the sick benefit society.

Aesthetic sensibilities aside, such critics are no doubt motivated to some extent by genuine concern that networked reputational and certifying mechanisms just won’t take up the slack left by the disappearance of the regulatory state. Things like Consumer Reports, Angie’s List and the Better Business Bureau are all well and good, for educated people like themselves who have the sense and know-how to check around. But Joe Sixpack, God love him, will surely just go out and buy magic beans from the first disreputable salesman he encounters—and then likely put them right up his nose.

Seriously, snark aside, such reputational systems really are underused, and most people really do take inadequate precautions in the marketplace on the assumption that the regulatory state guarantees some minimum acceptable level of quality. But liberal criticism based on this state of affairs reflects a remarkably static view of society. It ignores the whole idea of crowding out, as well as the possibility that even the Great Unwashed may be capable of changing their habits quite rapidly in the face of necessity. Because people are not presently in the habit of automatically consulting such reputational networks to check up on people they’re considering doing business with, and are in the habit of unconsciously assuming the government will protect them, conventional liberals assume that people will not shift from one to the other in the face of changing incentives, and scoff at the idea of a society that relies primarily on networked rating systems.

But in a society where people are aware that most licensing and safety/quality codes are no longer enforceable, and “caveat emptor” is no longer just a cliché, it would be remarkable if things like Angie’s List, reputational certification by local guilds, customer word of mouth, etc., did not rapidly grow in importance for most people. They were, after all, at one time the main reputational mechanisms people relied on before the rise of the absolute state, and as ingrained a part of ordinary economic behavior as reliance on the regulatory state is today.

People’s habits change rapidly. Fifteen years ago, when even the most basic survey of a research topic began with an obligatory painful crawl through the card catalog, Reader’s Guide and Social Science Index—and when the average person’s investigations were limited to the contents of his $1000 set of Britannica—who could have foreseen how quickly Google and SSRN searches would become second nature?

In fact, if anything, the assumption that “they couldn’t sell it if it wasn’t OK, because it’s illegal” leaves people especially vulnerable, because it creates an unjustified confidence and complacency regarding what they buy. The standards of safety and quality, based on “current science,” are set primarily by the regulated industries themselves, and those industries are frequently able to criminalize voluntary safety inspections to more stringent standards—or advertising that one adheres to such a higher standard—on the grounds that it constitutes disparagement of the competitor’s product. For example, Monsanto frequently goes after grocers who label their milk rBGH-free, and some federal district courts have held that it’s an “unfair competitive practice” to test one’s beef cattle for Mad Cow Disease more frequently than the mandated industry standard. We have people slathering themselves with lotion saturated with estrogen-mimicking parabens, on the assumption that “they couldn’t sell it if it was dangerous.” So in many cases, this all-seeing central authority we count on to protect us is like a shepherd that puts the wolves in charge of the flock.

As an individualist anarchist, I’m often confronted with issues of how societies organized around such primary social units would affect the libertarian values of self-ownership and nonaggression.

First, it’s extremely unlikely in my opinion that the collapse of centralized state and corporate power will be driven by, or that the post-corporate state society that replaces it will be organized according to, any single libertarian ideology (although I am hopeful, for reasons discussed later in this section, that there will be a significant number of communities organized primarily around such values, and that those values will have a significant leavening effect on society as a whole).

Second, although the kinds of communal institutions, mutual aid networks and primary social units into which people coalesce may strike the typical right-wing flavor of free market libertarian as “authoritarian” or “collectivist,” a society in which such institutions are the dominant form of organization is by no means necessarily a violation of the substantive values of self-ownership and nonaggression.

I keep noticing, without ever really being able to put it in just the right words, that most conventional libertarian portrayals of an ideal free market society, and particularly the standard anarcho-capitalist presentation of a conceptual framework of individual self-ownership and non-aggression, seem implicitly to assume an atomized society of individuals living (at most) in nuclear families, with allodial ownership of a house and quarter-acre lot, and with most essentials of daily living purchased via the cash nexus from for-profit business firms.

But it seems to me that the libertarian concepts of self-ownership and nonaggression are entirely consistent with a wide variety of voluntary social frameworks, while at the same time the practical application of those concepts would vary widely. Imagine a society like most of the world before the rise of the centralized territorial state, where most ultimate (or residual, or reversionary) land ownership was vested in village communes, even though there might be a great deal of individual possession. Or imagine a society like the free towns that Kropotkin described in the late Middle Ages, where people organized social safety net functions through the guild or other convivial associations. Now, it might be entirely permissible for an individual family to sever its aliquot share of land from the peasant commune, and choose not to participate in the cooperative organization of seasonal labor like spring plowing, haying or the harvest. It might be permissible, in an anarchist society, for somebody to stay outside the guild and take his chances on unemployment or sickness. But in a society where membership in the primary social unit was universally regarded as the best form of insurance, such a person would likely be regarded as eccentric, like the individualist peasants in anarchist Spain who withdrew from the commune, or the propertarian hermits in Ursula Le Guin’s The Dispossessed. And for the majority of people who voluntarily stayed in such primary social units, most of the social regulations that governed people’s daily lives would be irrelevant to the Rothbardian conceptual framework of self-ownership vs. coercion.

By way of comparison, for the kinds of mainstream free market libertarians conventionally assigned to the Right, the currently predominating model of employment in a business firm is treated as the norm. Such libertarians regard the whole self-ownership vs. aggression paradigm as irrelevant to life within that organizational framework, so long as participation in the framework is itself voluntary. But by the same token, when people are born into a framework in which they are guaranteed a share in possession of communal land and are offered social safety net protections in the event of illness or old age, in return for observance of communal social norms, the same principle applies.

And for most of human history, before the state started actively suppressing voluntary association and discouraging the self-organized social safety net based on voluntary cooperation and mutual aid, membership in such primary social units was the norm. Going all the way back to the first homo sapiens hunter-gatherer groups, altruism was very much consistent with rational utility maximization, as a form of insurance policy. When there’s no such thing as unemployment compensation, food stamps, or Social Security, it makes a whole lot of sense for the most skillful or lucky hunter, or the farmer with the best harvest, to share with the old, sick and orphaned—and not to be a dick about it or rub it in their faces. Such behavior is almost literally an insurance premium to guarantee your neighbors will take care of you when you’re in a similar position. Consider Sam Bowles’ treatment of the altruistic ethos in the “weightless” forager economy:

Network wealth is the contribution made by your social connections to your well-being. This could be measured by your number of connections, or by your centrality in different networks. A simple way to think about this is the number of people who will share food with you.... The culture of the foraging band emphasizes generosity and modesty. There are norms of sharing. You deprecate what you catch, describing it as “not as big as a mouse”, or “not even worth cooking”, even when you’ve killed a large animal. In the Ache people of Eastern Paraguay, hunters are prohibited from eating their own catch. There’s complex sanctioning of individually assertive behavior, particularly those behaviors that disturb or disrupt cooperation and group stability. This makes sense – if hunters can’t expect that they’ll be fed by other hunters – particularly by a hunter who suddenly develops a taste for eating his own catch – the society collapses rapidly. [815]
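Bowles’ notion of network wealth lends itself to a simple operational measure. Here is a minimal sketch, treating network wealth as the number of people who will share food with you; the members and sharing ties are invented for illustration:

```python
# A minimal sketch of "network wealth" measured as the number of
# people who will share food with you. The members and sharing ties
# below are purely illustrative.

food_sharing = {
    "A": {"B", "C", "D"},  # A will share with B, C, and D
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}

def network_wealth(person: str) -> int:
    """Count how many others will share food with this person."""
    return sum(person in recipients for recipients in food_sharing.values())

for member in food_sharing:
    print(member, network_wealth(member))  # A 3, B 2, C 3, D 2
```

On this measure, the norms of sharing and the sanctions against hoarding are precisely what keep each member’s network wealth from collapsing to zero.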

Before states began creating social safety nets, functions comparable to unemployment compensation, food stamps, and Social Security were almost universally organized through primary social units like the clan, the village commune, or the guild.

The irony is that the mainstream of market anarchism, particularly the right-leaning followers of Murray Rothbard, is pushing for a society where there’s no state to organize unemployment compensation, food stamps or Social Security. I suppose they just assume this function will be taken over by Prudential, but I suspect that what fills the void after the disintegration of the state will be a lot closer to Poul Anderson’s above-mentioned society of lodges in the Northwestern Federation.

It seems likely the Rothbardians are neglecting the extent to which the kinds of commercialized business relations they use as a preferred social model are, themselves, a product of the statism that they react against. The central state that they want to do away with played a large role in dismantling organic social institutions like clans, village communes, extended families, guilds, friendly societies, and so forth, and replacing them with an atomized society in which everybody sells his labor, buys consumables from the store, and is protected either by the department of human services or Prudential.

Gary Chartier (a professor of ethics and philosophy at La Sierra University), in discussing some of these issues with me, raised some serious questions about my comparison between the right-libertarian view of civil rights in the employment relation, and the rights of the individual in the kinds of communal institutions I brought up. One of the central themes of “thick” libertarianism is that a social environment can have an unlibertarian character, and that nominally private and primary forms of exploitation and unfairness can exist, even when no formal injustice has taken place in terms of violation of the nonaggression principle. [816]

Cultural authoritarianism in the workplace, especially, is a central focus for many thick libertarians. Claire Wolfe, a writer with impeccable libertarian credentials and a Gadsden Flag-waver nonpareil, has pointed out just how inconsistent the authoritarian atmosphere of the workplace is with libertarian cultural values. [817] At the other end of the spectrum are people like Hans-Hermann Hoppe, who actively celebrate the potential for cultural authoritarianism when every square foot of the Earth has been appropriated and there is no such thing as a right of way or any other form of public space. Their ideal world is one in which the letter of self-ownership and nonaggression is adhered to, but in which one cannot move from Point A to Point B anywhere in the world without encountering a request for “Ihre Papiere, bitte!” from the private gendarmerie, or stopping for the biometric scanners, of whoever owns the bit of space they’re standing on at any given moment.

So could not an organic local community and its communal institutions, likewise, create an environment that would be considered authoritarian by thick libertarian norms, even when self-ownership and nonaggression were formally respected? Chartier continues:

I think the interesting question, for a left libertarian who’s interested in minimizing negative social pressure on minority groups of various sorts and who doesn’t want to see people pushed around, is: What kinds of social arrangements would help to ensure that “the social regulation that governed people’s daily lives” didn’t replicate statism in a kinder, gentler fashion? (“Want access to the communal water supply? I’d better not see you working in your field on the Sabbath....”) Ostracism is certainly a hell of a lot better than jail, but petty tyrannies are still petty tyrannies. What’s the best way, do you think, to keep things like zoning regulations from creeping in the back door via systems of persistent social pressure? I’d rather not live in a Hoppe/Tullock condominium community. One way of getting at this might be to note that, as [Michael] Taylor plausibly suggests, small scale communities are probably good at preventing things like workplace injustices and the kinds of abuses that are possible when there are vast disparities in wealth and so in social influence. But I’m less clear that they’re good at avoiding abuses, not in the economic realm, but in the social or cultural realm. I’m more of a localist than a number of the participants in the recent discussions of these matters, but I think people like Aster [Aster Francesca, pen name of Jeanine Ring, a prolific and incisive writer on issues of social and cultural freedom] are surely right that the very solidarity that can prevent people in a close-knit community from going hungry or being arbitrarily fired can also keep them from being open about various kinds of social non-conformity. (My own social world includes a lot of people who need to avoid letting others with whom they work or worship know that they drink wine at dinner or learn about their sexual behavior; a generation ago, they’d have also avoided letting anyone know they went to movies.) Self-ownership vs. aggression needn’t be immediately relevant to community life any more than it might be to the firm. But the same sorts of objections to intra-firm hierarchy would presumably still apply to some kinds of social pressure at the community level, yes? [818]

One thing that’s relevant is suggested by Michael Taylor’s [819] treatment of hippie communalism as a way of reinventing community. To the extent that a reaction against the centralized state and corporate power is motivated by anti-authoritarian values, and rooted in communities like file-sharers, pot-smokers, hippie back-to-the-landers, etc. (and even to the extent that it takes place in a milieu “corrupted” by the American MYOB, or “mind your own business,” ethos), there will be at least a sizeable minority of communities in a post-state panarchy where community is seen as a safety net and a place for voluntary interaction rather than a straitjacket. And in America, at least, the majority of communities will also probably be leavened to some extent by the MYOB ethos, and by private access to the larger world via a network culture that it’s difficult for the community to snoop on. (I’ve seen accounts of the monumental significance of net-connected cell phones to Third World teens who live in traditional patriarchal cultures without even their own private rooms—immensely liberating.)

The best thing left-libertarians can do is probably try to strengthen ties between local resilience movements of various sorts and culturally left movements like open-source/filesharing, the greens, and all the other hippie-dippy stuff. The biggest danger from that direction is that, as in the rather unimaginatively PC environments of a lot of left-wing urban communes and shared housing projects today, people might have to hide the fact that they ate a non-vegan dinner.

As for communities that react against state and corporate power from the direction of cultural conservatism, the Jim Bob Duggar types (a revolt of “Jihad” against “McWorld”), probably the best we can hope for is 1) the leavening cultural effects of the American MYOB legacy and even surreptitious connection to the larger world, 2) the power of exit as an indirect source of voice, and 3) the willingness of sympathetic people in other communities to intervene on behalf of victims of the most egregious forms of bluestockingism and Mrs. Grundyism.

D. LETS Systems, Barter Networks, and Community Currencies

Local currencies, barter networks and mutual credit-clearing systems are a solution to a basic problem: “a world in which there is a lot of work to be done, but there is simply no money around to bring the people and the work together.” [820]

Unconventional currencies are buffers against unemployment and economic downturn. Tsutomu Hotta, the founder of the Hureai Kippu (“Caring Relationship Tickets,” a barter system in which participants accumulate credits in a “healthcare time savings account” by volunteering their own time), estimated that such unconventional currencies would replace a third to a half of conventional monetary functions. “As a result, the severity of any recession and unemployment will be significantly reduced.” [821]

One barrier to local barter currencies and crowdsourced mutual credit is a misunderstanding of the nature of money. For the alternative economy, money is not primarily a store of value, but an accounting system to facilitate exchange. Its function is not to store accumulated value from past production, but to provide liquidity to facilitate the exchange of present and future services between producers.

The distinction is a very old one, aptly summarized by Schumpeter’s contrast between the “money theory of credit” and the “credit theory of money.” The former, which Schumpeter dismisses as entirely fallacious, assumes that banks “lend” money (in the sense of giving up use of it) which has been “withdrawn from previous uses by an entirely imaginary act of saving and then lent out by their owners. It is much more realistic to say that the banks ‘create credit...’ than to say that they lend the deposits that have been entrusted to them.” [822] The credit theory of money, on the other hand, treats finances “as a clearing system that cancels claims and debts and carries forward the difference....” [823]

Thomas Hodgskin, criticizing the Ricardian “wage fund” theory from a perspective much like Schumpeter’s credit theory of money, demolished any moral basis for crediting the capitalist with creating a wage fund through “abstention,” and instead made the advancement of subsistence funds from existing production a function that workers could just as easily perform for one another through mutual credit, were the avenues of doing so not preempted.

The only advantage of circulating capital is that by it the labourer is enabled, he being assured of his present subsistence, to direct his power to the greatest advantage. He has time to learn an art, and his labour is rendered more productive when directed by skill. Being assured of immediate subsistence, he can ascertain which, with his peculiar knowledge and acquirements, and with reference to the wants of society, is the best method of labouring, and he can labour in this manner. Unless there were this assurance there could be no continuous thought, no invention, and no knowledge but that which would be necessary for the supply of our immediate animal wants....

The labourer, the real maker of any commodity, derives this assurance from a knowledge he has that the person who set him to work will pay him, and that with the money he will be able to buy what he requires. He is not in possession of any stock of commodities. Has the person who employs and pays him such a stock? Clearly not.... A great cotton manufacturer... employs a thousand persons, whom he pays weekly: does he possess the food and clothing ready prepared which these persons purchase and consume daily? Does he even know whether the food and clothing they receive are prepared and created? In fact, are the food and clothing which his labourers will consume prepared beforehand, or are other labourers busily employed in preparing food and clothing while his labourers are making cotton yarn? Do all the capitalists of Europe possess at this moment one week’s food and clothing for all the labourers they employ?...

...As far as food, drink and clothing are concerned, it is quite plain, then, that no species of labourer depends on any previously prepared stock, for in fact no such stock exists; but every species of labourer does constantly, and at all times, depend for his supplies on the co-existing labour of some other labourers. [824]

...When a capitalist therefore, who owns a brew-house and all the instruments and materials requisite for making porter, pays the actual brewers with the coin he has received for his beer, and they buy bread, while the journeymen bakers buy porter with their money wages, which is afterwards paid to the owner of the brew-house, is it not plain that the real wages of both these parties consist of the produce of the other; or that the bread made by the journeyman baker pays for the porter made by the journeyman brewer? But the same is the case with all other commodities, and labour, not capital, pays all wages....

In fact it is a miserable delusion to call capital something saved. Much of it is not calculated for consumption, and never is made to be enjoyed. When a savage wants food, he picks up what nature spontaneously offers. After a time he discovers that a bow or a sling will enable him to kill wild animals at a distance, and he resolves to make it, subsisting himself, as he must do, while the work is in progress. He saves nothing, for the instrument never was made to be consumed, though in its own nature it is more durable than deer’s flesh. This example represents what occurs at every stage of society, except that the different labours are performed by different persons—one making the bow, or the plough, and another killing the animal or tilling the ground, to provide subsistence for the makers of instruments and machines.

To store up or save commodities, except for short periods, and in some particular cases, can only be done by more labour, and in general their utility is lessened by being kept. The savings, as they are called, of the capitalist, are consumed by the labourer, and there is no such thing as an actual hoarding up of commodities. [825]

What political economy conventionally referred to as the “labor fund,” and attributed to past abstention and accumulation, resulted rather from the present division of labor and the cooperative distribution of its product. “Capital” is a term for a right of property in organizing and disposing of this present labor. The same basic cooperative functions could be carried out just as easily by the workers themselves, through mutual credit. Under the present system, the capitalist monopolizes these cooperative functions, and thus appropriates the productivity gains from the social division of labor.

Betwixt him who produces food and him who produces clothing, betwixt him who makes instruments and him who uses them, in steps the capitalist, who neither makes nor uses them, and appropriates to himself the produce of both. With as niggard a hand as possible he transfers to each a part of the produce of the other, keeping to himself the large share. Gradually and successively has he insinuated himself betwixt them, expanding in bulk as he has been nourished by their increasingly productive labours, and separating them so widely from each other that neither can see whence that supply is drawn which each receives through the capitalist. While he despoils both, so completely does he exclude one from the view of the other that both believe they are indebted to him for subsistence. [826]

Franz Oppenheimer made a similar argument in “A Post Mortem on Cambridge Economics”:

THE JUSTIFICATION OF PROFIT, to repeat, rests on the claim that the entire stock of instruments of production must be “saved” during one period by private individuals in order to serve during a later period. This proof, it has been asserted, is achieved by a chain of equivocations. In short, the material instruments, for the most part, are not saved in a former period, but are manufactured in the same period in which they are employed. What is saved is capital in the other sense, which may be called for present purposes “money capital.” But this capital is not necessary for developed production.

Rodbertus, about a century ago, proved beyond doubt that almost all the “capital goods” required in production are created in the same period. Even Robinson Crusoe needed but one single set of simple tools to begin works which, like the fabrication of his canoe, would occupy him for several months. A modern producer provides himself with capital goods which other producers manufacture simultaneously, just as Crusoe was able to discard an outworn tool, occasionally, by making a new one while he was building the boat.

On the other hand, money capital must be saved, but it is not absolutely necessary for developed technique. It can be supplanted by co-operation and credit, as Marshall correctly states. He even conceives of a development in which savers would be glad to lend their savings to reliable persons without demanding interest, even paying something themselves for the accommodation for security’s sake. Usually, it is true, under capitalist conditions, that a certain personally-owned money capital is needed for undertakings in industry, but certainly it is never needed to the full amount the work will cost. The initial money capital of a private entrepreneur plays, as has been aptly pointed out, merely the rôle of the air chamber in the fire engine; it turns the irregular inflow of capital goods into a regular outflow. [827]

Oscar Ameringer illustrated the real-world situation in a humorous socialist pamphlet, “Socialism for the Farmer Who Farms the Farm,” written in 1912. A river divided the nation of Slamerica into two parts, one inhabited by farmers and the other by makers of clothing. The bridge between them was occupied by a fat man named Ploot, who charged the farmers four pigs for a suit of clothes and the tailors four suits for a pig. The difference was compensation for the “service” he provided in letting them across the bridge and providing them with work. When a radical crank proposed the farmers and tailors build their own bridge, Ploot warned that by depriving him of his share of their production they would drive capital out of the land and put themselves out of work three-quarters of the time (while getting the same number of suits and pigs, of course). [828]

Schumpeter’s distinction between money theories of credit and credit theories of money, introduced above, bears repeating here: it is misleading to treat bank credit as the lending of funds that savers have withdrawn from previous uses by “an entirely imaginary act of saving,” since banks create credit rather than lending out the deposits entrusted to them; [829] and finance, on the credit theory, is “a clearing system that cancels claims and carries forward the difference.” [830]

E. C. Riegel argues that issuing money is a function of the individual within the market, a side-effect of his normal economic activities. Currency is issued by the buyer by the very act of buying, and it’s backed by the goods and services of the seller.

Money can be issued only in the act of buying, and can be backed only in the act of selling. Any buyer who is also a seller is qualified to be a money issuer. Government, because it is not and should not be a seller, is not qualified to be a money issuer. [831]

Money is simply an accounting system for tracking the balance between buyers and sellers over time. [832]

And because money is issued by the buyer, it comes into existence as a debit. The whole point of money is to create purchasing power where it did not exist before: “...[N]eed of money is a condition precedent to the issue thereof. To issue money, one must be without it, since money springs only from a debit balance on the books of the authorizing bank or central bookkeeper.” [833]

IF MONEY is but an accounting instrument between buyers and sellers, and has no intrinsic value, why has there ever been a scarcity of it? The answer is that the producer of wealth has not been also the producer of money. He has made the mistake of leaving that to government monopoly. [834]

Money is “simply number accountancy among private traders.” [835] Or as Riegel’s disciple Thomas Greco argues, currencies are not “value units” (in the sense of being stores of value). They are means of payment denominated in value units. [836]

In fact, as Greco says, “barter” systems are more accurately conceived as “credit clearing” systems. In a mutual credit clearing system, rather than cashing in official state currency for alternative currency notes (as is the case in too many local currency systems), participating businesses spend the money into existence by incurring debits for the purchase of goods within the system, and then earning credits to offset the debits by selling their own services within the system. The currency functions as a sort of IOU by which a participant monetizes the value of his future production. [837] It’s simply an accounting system for keeping track of each member’s balance:

Your purchases have been indirectly paid for with your sales, the services or labor you provided to your employer. In actuality, everyone is both a buyer and a seller. When you sell, your account balance increases; when you buy, it decreases. It’s essentially what a checking account does, except a conventional bank does not automatically provide overdraft protection for those running negative balances, unless they pay a high price for it. [838]

There’s no reason businesses cannot maintain a mutual credit-clearing system between themselves, without the intermediary of a bank or any other third party currency or accounting institution. The businesses agree to accept each other’s IOUs in return for their own goods and services, and periodically use the clearing process to settle their accounts. [839]
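To make this bookkeeping concrete, here is a minimal sketch, in Python, of the kind of ledger Greco and Riegel describe. The class and member names are my own illustrative inventions, not drawn from any actual LETS or credit-clearing software:

# A minimal sketch of mutual credit clearing as described above.
# All names here are illustrative, not any real LETS software.

class CreditClearingLedger:
    def __init__(self):
        self.balances = {}  # member name -> running balance in value units

    def enroll(self, member):
        self.balances.setdefault(member, 0)

    def record_sale(self, seller, buyer, amount):
        # A purchase "issues" money: the buyer's account is debited and
        # the seller's credited. No prior deposit of official currency is
        # needed -- money springs from a debit balance, as Riegel puts it.
        self.enroll(seller)
        self.enroll(buyer)
        self.balances[buyer] -= amount
        self.balances[seller] += amount

    def system_total(self):
        # Claims and debts cancel: the system as a whole always nets to zero.
        return sum(self.balances.values())

ledger = CreditClearingLedger()
ledger.record_sale(seller="baker", buyer="plumber", amount=40)  # plumber buys bread
ledger.record_sale(seller="plumber", buyer="baker", amount=25)  # baker gets a repair
print(ledger.balances)        # {'baker': 15, 'plumber': -15}
print(ledger.system_total())  # 0

Because every purchase debits one account and credits another by the same amount, the system-wide total is always zero; the “money supply” expands and contracts automatically with the volume of trade, which is Schumpeter’s clearing system in miniature.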

And again, since some of the participants run negative balances for a time, the system offers what amounts to interest-free overdraft protection. When such a system starts out, members are likely to settle accounts fairly frequently, and to put fairly low limits on the negative balances that can be run, as a confidence-building measure. Negative balances might be paid up, and positive balances cashed out, every month or so. But as confidence increases, Greco argues, the system should ideally move toward a state of affairs where accounts are never settled, so long as negative balances are limited to some reasonable amount.

An account balance increases when a sale is made and decreases when a purchase is made. It is possible that some account balances may always be negative. That is not a problem so long as the account is actively trading and the negative balance does not exceed some appropriate limit. What is a reasonable basis for deciding that limit?... Just as banks use your income as a measure of your ability to repay a loan, it is reasonable to set maximum debit balances based on the amount of revenue flowing through an account.... [One possible rule of thumb is] that a negative account balance should not exceed an amount equivalent to three months’ average sales. [840]
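Greco’s rule of thumb translates directly into a credit policy. The sketch below (a hypothetical illustration with invented names and figures, not a quotation of any actual system’s rules) caps a member’s negative balance at three months’ average sales:

# Sketch of the debit-limit rule of thumb quoted above: a negative
# balance should not exceed roughly three months' average sales.
# The function names and sales figures are hypothetical.

def debit_limit(monthly_sales, months=3):
    # Cap the negative balance at `months` times average monthly sales.
    if not monthly_sales:
        return 0  # a member with no sales record starts with no credit line
    return months * (sum(monthly_sales) / len(monthly_sales))

def purchase_allowed(balance, amount, monthly_sales):
    # Allow a purchase only if it keeps the balance above the cap.
    return balance - amount >= -debit_limit(monthly_sales)

# A member averaging 500 units a month in sales may run as low as -1500:
print(debit_limit([400, 500, 600]))                   # 1500.0
print(purchase_allowed(-1200, 200, [400, 500, 600]))  # True
print(purchase_allowed(-1400, 200, [400, 500, 600]))  # False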

It’s interesting how Greco’s proposed limit on negative balances dovetails with the credit aspect of the local currency system. His proposed balance limit, a de facto interest-free loan, is sufficient to fund the minimum capital outlays for many kinds of low-overhead micro-enterprise. Even at the average wages of unskilled labor, three months’ income is sufficient to acquire the basic equipment for a Fab Lab (at least the open-source versions described in Chapter Six). And it’s far more than sufficient to meet the capital outlays needed for a microbakery or microcab.

Greco recounts an experiment with one such local credit clearing system, the Tucson Traders. It’s fairly typical of his experience: initial enthusiasm, followed by gradual decline and dwindling volume, as the shrinking range of goods and services on offer and the inconvenience of traveling between the scattered participating businesses take their toll. [841]

The reason for such failure, in normal economic times, is that local currency systems are crowded out by the official currency and the state-supported banking system.

For a credit clearing system to thrive, it must offer a valued alternative to those who lack sources of money in the conventional economy. That means it must have a large variety of participating goods and services, participating businesses must find it a valuable source of business that would not otherwise exist in the conventional economy, and unemployed and underemployed members must find it a valuable alternative for turning their skills into purchasing power they would not otherwise have. So we can expect LETS or credit clearing systems to increase in significance in periods of economic downturn, and even more so in the structural decline of the money and wage economy that is coming.

Karl Hess and David Morris cite Alan Watts’ illustration of the absurdity of saying it’s impossible for willing producers, faced with willing consumers, to produce for exchange because “there’s not enough money going around”:

Remember the Great Depression of the Thirties? One day there was a flourishing consumer economy, with everyone on the up-and-up; and the next: poverty, unemployment and breadlines. What happened? The physical resources of the country—the brain, brawn, and raw materials—were in no way depleted, but there was a sudden absence of money, a so-called financial slump. Complex reasons for this kind of disaster can be elaborated at length by experts in banking and high finance who cannot see the forest for the trees. But it was just as if someone had come to work on building a house and, on the morning of the Depression, the boss had said, “Sorry, baby, but we can’t build today. No inches.” “Whaddya mean, no inches? We got wood. We got metal. We even got tape measures.” “Yeah, but you don’t understand business. We been using too many inches, and there’s just no more to go around.” [842]

The point of the mutual credit clearing system, as Greco describes it, is that two people who have goods and services to offer—but no money—are able to use their goods and services to buy other goods and services, even when there’s “no money.” [843] So we can expect alternative currency systems to come into play precisely at those times when people feel the lack of “inches.” Based on case studies of the WIR system and the Argentine social money movement, Greco says, “complementary currencies will take hold most easily when they are introduced into markets that are starved for exchange media.” [844] The widespread proliferation of local currencies in the Depression suggests that when this condition holds, the scale of adoption will follow as a matter of course. And as we enter a new, long-term period of stagnation in the conventional economy, it seems likely that local currency systems will play a growing role in the average person’s strategy for economic survival.

Local currency systems have seen a revival since the 1990s, beginning with the Ithaca Hours system and spreading to a growing network of LETS currencies.

But Ted Trainer, a specialist on relocalized economies who writes at “The Simpler Way” site, points out that LETS systems by themselves accomplish very little. The problem with stand-alone LETS systems is that

most people do not have much they can sell, i.e., they do not have many productive skills or the capital to set up a firm. It is therefore not surprising that LETSystems typically do not grow to account for more than a very small proportion of a town’s economic activity.... What is needed and what LETSystems do not create is productive capacity, enterprises. It will not set up a cooperative bakery in which many people with little or no skill can be organised to produce their own bread. So the crucial element becomes clear. Nothing significant can be achieved unless people acquire the capacity to produce and sell things that others want. Obviously, unless one produces and sells to others one can’t earn the money with which to purchase things one needs from others. So the question we have to focus on is how can the introduction of a new currency facilitate this setting up of firms that will enable those who had no economic role to start producing, selling, earning and buying. The crucial task is to create productive roles, not to create a currency. The new currency should be seen as little more than an accounting device, necessary but not the crucial factor. It is obvious here that what matters in local economic renewal is not redistribution of income or purchasing power. What matters is redistribution of production power. [845]

It is ridiculous that millions of people have been unable to trade with each other simply because they do not have money, i.e., tokens which enable them to keep track of who owes what amount of goods and work to whom. LETS is a great solution to this elementary problem. However it is very important to understand that a LETSystem is far from sufficient. In fact a LETS on its own will not make a significant difference to a local economy. The evidence is that on average LETS transactions make up less than 5% of the economic activity of the average member of a scheme, let alone of the region. (See R. Douthwaite, Short Circuit, 1996, p. 76.) LETS members soon find that they can only meet a small proportion of their needs through LETS, i.e., that there is not that much they can buy with their LETS credits, and not that much they can produce and sell. Every day they need many basic goods and services but very few of these are offered by members of the system. This is the central problem in local economic renewal; the need for ways of increasing the capacity of local people to produce things local people need. The core problem in other words is how to set up viable firms.... The core task in town economic renewal is to enable, indeed create a whole new sector of economic activity involving the people who were previously excluded from producing and earning and purchasing. This requires much more than just providing the necessary money; it requires the establishment of firms in which people can produce and earn. [846]

As he writes elsewhere, the main purpose of local currency systems is “to contribute to getting the unused productive capacity of the town into action, i.e., stimulating/enabling increase in output to meet needs.” Therefore the creation of a local currency system is secondary to creating firms by which the unemployed and underemployed can earn the means of exchange. [847]

For that reason, Trainer proposes Community Development Cooperatives as a way to promote the kinds of new enterprises that enable people to earn local currency outside the wage system.

The economic renewal of the town will not get far unless its CDC actively works on this problem of establishing productive ventures within the new money sector which will enable that sector to sell things to the old firms in the town. In the case of restaurants the CDC’s best option would probably be to set up or help others set up gardens to supply the restaurants with vegetables. Those who run the gardens would pay the workers in new money, sell the vegetables to the restaurants for new money, and use their new money incomes to buy meals from the restaurants. The Community Development Cooperative must work hard to find and set up whatever other ventures it can because the capacity of the previously poor and unemployed group of people in the town to purchase from normal/old firms is strictly limited by the volume that that group is able to sell to those firms. Getting these productive ventures going is by far the most important task of the Community Development Cooperative, much more important than just organising a new currency in which the exchanges can take place.

The other very important thing the Community Development Cooperative must do is enable low skilled and low income people to cooperative [sic] produce many things for themselves. A considerable proportion of people in any region do not have the skills to get a job in the normal economy. This economy will condemn them to poverty and boredom. Yet they could be doing much useful work, especially work to produce many of the things they need. But again this will not happen unless it is organised. Thus the Community Development Cooperative must organise gardens and workshops and enterprises (such as furniture repair, house renovation and fuel wood cutting) whereby this group of people can work together to produce many of the things they need. They might be paid in new money according to time contributions, or they might just share goods and income from sales of surpluses. [848]

Trainer’s critique of stand-alone LETS systems makes a lot of sense. When people earn official dollars in the wage economy, and then trade them in for local currency notes at the local bank that can only be spent in local businesses, they’re trading dollars they already have for something that’s less useful; local currency, in those circumstances, becomes just another greenwashed yuppie lifestyle choice financed by participation in the larger capitalist economy. As Greco puts it,

a community currency that is issued on the basis of payment of a national currency (e.g., a local currency that is sold for dollars), amounts to a “gift certificate” or localized “traveler’s check.” It amounts to prepayment for the goods or services offered by the merchants that agree to accept the currency. That approach provides some limited utility in encouraging the holder of the currency to buy locally... [But] that sort of issuance requires that someone have dollars in order for the community currency to come into existence. [849]

Local currency should be a tool that’s more useful than the alternative, giving people who are outside the wage system and who lack official dollars a way to transform their skills into purchasing power they would otherwise not have. A unit of local currency shouldn’t be something one obtains by earning official money through wage employment and then trading it in for feel-good money at the bank to spend on establishment Main Street businesses. It should be an accounting unit for barter by the unemployed or underemployed person, establishing new microenterprises out of their own homes and exchanging goods and services directly with one another.

Trainer’s main limitation is his focus on large-scale capital investment in conventional enterprises as the main source of employment. In examining the need for capital for setting up viable firms, he ignores the enormous amounts of capital that already exist.

The capital exists in the form of the ordinary household capital goods that most people already own, sitting idle in their own homes: the ordinary kitchen ovens that might form the basis of household microbakeries producing directly for credit in the barter network; the sewing machines that might be used to make clothes for credit in the network; the family car and cell phone that might be used to provide cab service for the network in exchange for credit toward other members’ goods and services; etc. The unemployed or underemployed carpenter, plumber, electrician, auto mechanic, etc., might barter his services for credit to purchase tomatoes from a market gardener within the network, for the microbaker’s bread or the seamstress’s shirts, and so forth. The “hobbyist” with a well-equipped workshop in his basement or back yard might custom machine replacement parts to keep the home appliances of the baker, market gardener, and seamstress working, in return for their goods and services. Eventually “hobbyist” workshops and small local machine shops might begin networked manufacturing for the barter network, perhaps even designing their own open-source products with CAD software and producing them with CNC machine tools.

Hernando de Soto, in The Mystery of Capital, pointed to the homes and plots of land, to which so many ordinary people in the Third World hold informal title, as an enormous source of unrealized investment capital. Likewise, the spare capacity of people’s ordinary household capital goods is a potentially enormous source of “plant and equipment” for local alternative economies centered on the informal and household sector.

There is probably enough idle oven capacity in the households of the average neighborhood or small town to create the equivalent of a hundred cooperative bakeries. Why waste the additional outlay cost, and consequent overhead, for relocating this capital to a stand-alone building?

Another thing to remember is that, even when a particular kind of production requires capital investment beyond the capabilities of the individual of average means, new infrastructures for crowdsourced, distributed credit—microcredit—make it feasible to aggregate sizable sums of investment capital from many dispersed small capitals, without paying tribute to a capitalist bank for performing the service. That’s why it’s important for a LETS system to facilitate not only the exchange of present goods and services, but the advance of credit against future goods and services.

Such crowdsourced credit might be used by members of a barter network to form their own community or neighborhood workshops in cheap rental space, perhaps (again) contributing the unused tools sitting in their garages and basements.

Of course the idle capacity of conventional local businesses shouldn’t be overlooked either. Conventional enterprises with excess capacity can often use it to produce, at marginal costs that are a fraction of normal price, for barter against the similar surpluses of other businesses. For instance, vacant hotel rooms in the off-season might be exchanged for discounted meals at restaurants during the slow part of the day, matinee tickets at the theater, etc. And local nonprofit organizations might pay volunteers in community currency units good for such surplus production at local businesses. In Minneapolis, for example, volunteers are paid in Community Service Dollars, which can be used for up to half the price of a restaurant meal before 7 p.m., or 90% of a matinee movie ticket. This enables local businesses to utilize idle capacity to produce goods sold at cost, and enables the unemployed to turn their time into purchasing power. [850]
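The arithmetic of such redemption rules is simple enough to sketch. The function below is a hypothetical illustration using the fractions just quoted; it is not drawn from the Minneapolis program’s actual rules:

# Hypothetical sketch of splitting a purchase between community currency
# and cash, using the redemption fractions quoted above.

def split_payment(price, community_balance, max_fraction):
    # Pay up to `max_fraction` of the price in community currency,
    # limited by what the buyer holds; the remainder is due in cash.
    currency_used = min(community_balance, max_fraction * price)
    return currency_used, price - currency_used

# A $30 restaurant meal before 7 p.m.: up to half payable in
# Community Service Dollars.
print(split_payment(30.00, community_balance=20.00, max_fraction=0.5))  # (15.0, 15.0)

# A $10 matinee ticket: up to 90% payable in Community Service Dollars.
print(split_payment(10.00, community_balance=20.00, max_fraction=0.9))  # (9.0, 1.0)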

As we already saw above, barter associations like UXA frequently exchanged their members’ skills for the surplus inventory of conventional businesses.

E. Community Bootstrapping

The question of economic development in apparently dead-end areas has been of widespread interest for a long time. One such area, the so-called Arkansas Delta region (the largely rural, black, cash-crop southeastern portion of the state), was recently the subject of a column by John Brummett:

Back when then-Gov. Mike Huckabee was trying to consolidate high schools for better educational opportunities, I was among dozens openly agreeing with him. People in small towns cried out that losing their high schools would mean losing their towns. Only once did I work up the nerve to write that a town had no inalienable right to exist and that it wasn’t much of a town if all it had was a school.

This comment was not well-received in some quarters. I was called an elitist enemy of the wholesome rural life. But that wasn’t so. I wasn’t an enemy of the blissful advantages of a bucolic eden; I was only against inefficiently small schools getting propped up illogically in little incorporated spots on the road, anachronistic remnants of an olden time.

So imagine my reaction last week when I read Rex Nelson’s idea. It is to abandon, more or less, whole towns in the Delta and consolidate people from those towns in other towns that Nelson termed “worth saving” on account of having “critical mass.”

Presumably you’d go into Gould and Marianna and Marvell and Elaine and Clarendon and Holly Grove and say something like this: “Y’all need to get out; come on, get packed; get to Pine Bluff or Helena or Forrest City, because that’s where the government money for schools and hospitals and infrastructure and such is going to go from now on. We can’t afford to keep messing with this dead little town that doesn’t have any remote hope of getting better. We don’t have enough money to send a doctor around to your little health clinic once a week. We’ve got to get you over to the town where he lives and where they have a hospital that can provide him equipment and a living. This is for your own good.”

Nelson, former press aide to Tommy Robinson and Huckabee but a decent sort anyway, has just left a Republican-rewarded patronage job with the Delta Regional Authority. That’s an eight-state compact spending federal grants in the fast-dying Delta region along both sides of the Mississippi River. Newly relocated to an advertising agency in Little Rock, Nelson gave an interview to a friendly newspaper columnist and, after some discussion of his liking Southern food and culture, shared his valedictory thoughts on what in the wide world we might do for the Delta.

So here’s the idea: You pick out communities with hospitals and schools and decent masses of population and give them more federal grants than you give all these proliferating and tiny dead communities. You try to correct all this chronic dissipation of effort and resources. It’s school consolidation writ large. It’s an attempt at redistribution of the population. It’s eminent domain on steroids. It’s cold. It’s difficult. And it’s absolutely right.

What we call the Delta region of eastern Arkansas is a mechanized farm region, vast acreage of soybeans and rice, with pointless towns dotted at every crossroad. These one-time commerce centers thrived before farming was mechanized. Jobs for humans were to be had through the first half of the last century. Now they’re home to boarded windows and people trapped in tragic cycles of poverty without hope of jobs because none is left and none is coming. [851]

Despite Brummett’s assumptions, there is no shortage of examples of building an alternative economy almost from scratch, a bit at a time, in an impoverished area. The Antigonish movement in Nova Scotia and the Mondragon cooperatives in Spain are two such examples. Both movements were sparked by radical Catholic priests serving impoverished areas, and heavily influenced by the Distributist ideas of G.K. Chesterton and Hilaire Belloc. The Antigonish movement, founded by Fr. Moses Coady, envisioned starting with credit unions and consumer retail cooperatives, which would obtain goods from cooperative wholesale societies, and which would in turn be supplied by factories owned by the whole movement. The result would be an integrated cooperative economy as a base of independence from capitalism. [852] In the specific example of Larry’s River, the community began by building a cooperative sawmill; they went on to build a cooperative lobster cannery, a credit union, a cooperative store, a blueberry cannery, and a fish processing plant. [853] Mondragon—founded in the Basque country by Fr. Don Jose Maria Arizmendiarrietta—started similarly with a small factory, gradually adding a trade school, a credit union, and one factory after another, until it became an enormous federated system with its own finance arm and tens of thousands of member-owners employed in its enterprises. [854]

More recently, the people of the Salinas region of the Ecuadorian Andes created a similar regional economy by essentially the same process, as recounted by Massimo de Angelis on his blog. [855] The Salinas area, a region centering on the village of the same name, includes some thirty communities comprising a total of around six thousand people. The area economy is a network of cooperative enterprises, commonly called “the organization,” that includes some 95% of the population.

The “organization” is in reality a quick name for several associations, foundations, consortia and cooperatives, ranging from cheese producers to textile, ceramic and chocolate making, herbal medicine and trash collection, a radio station, an hotel, a hostel, and an “office of community tourism.”

The origin of “the organization” is reminiscent of both Antigonish and Mondragon. The Salinas area was originally the typical domain of a patron, under the Latin American hacienda system. Most land belonged to the Cordovez family, who collected rents pursuant to a Spanish crown grant, and the Cordovez family’s salt mine was the main non-agricultural employer. Like Antigonish and Mondragon, the organization started out with a single cooperative enterprise and from there grew by mitosis into an entire federated network of cooperatives. The first cooperative, formed in the 1970s, was a credit union created as a source of independence from the loan sharks who preyed on the poor. (This initial nucleus, like—again—Antigonish and Mondragon, was the project of an activist Catholic priest, the Italian immigrant Fr. Antonio Polo.) The credit cooperative offered to buy the Cordovez family lands. With the encouragement of Fr. Polo, the village subsequently organized one cooperative enterprise after another to provide employment after the salt mine closed.

A significant social safety net operates in the village, funded by the surpluses of various cooperative enterprises, on a gift economy basis. And it’s possible to earn exchange value outside of wage labor by contributing to something like a time bank.

However, at the end of the year, the monetary surplus [of the cheese factory] is not distributed among coop members on the basis of their milk contribution, but is shared among them for common projects: either buying new equipment, or transferred to community funds. This way, as our guide told us, “the farmer who has 10 cows is helping the farmer that has only one cow”, allowing for some re-distribution.

Another example is the use of Mingas. Minga is a quechua word used by various ethnic groups throughout the Andes and refers to unwaged community work, in which men, women and children all participate in pretty much convivial ways and which generally ends up in big banquets. Infrastructure work such as road maintenance, water irrigation, planting, digging, but also garbage collection and cleaning up the square are all types of work that call for a Minga of different sizes and are used in Salinas.

Yet another example is the important use of foundations, which channel funds earned in social enterprises to projects for the community.

De Angelis, despite his admiration, has serious doubts as to whether the project is relevant or replicable. For one thing, this mixed commons/market system may be less sustainable when more capital-intensive forms of production are undertaken, and may accordingly be more vulnerable to destabilization and decay into exploitative capitalism. He raises the example of the new factory for turning wool into thread, to be vertically integrated with the household production of sweaters and other woolens. The large capital outlay, he says, means a break-even point can only be achieved with fairly large batch production.

For another, de Angelis says, the success of the Salinas model arguably depends on its uniqueness, so that it can serve wide-open global niche markets without a lot of global competition from other local economies pursuing the same development model.

And finally, debt financing of capital investment leads to a certain degree of self-exploitation to service that debt.

De Angelis analyzes the cumulative implications of these problems:

I have mixed feelings about this Salinas’ experience. There is no doubt that the 69 agro-industrial and 38 service communities enterprises are quite a means for the local population to meet reproduction needs in ways that shield them from the most exploitative practices of other areas in the region and make them active participants in commoning processes centred on dignity. But the increasing reliance on, and strong preoccupation with, global export circuits and on the markets seems excessive, with the risk that experiments like these really become the vehicles for commons co-optation.

The newest venture along these lines is the Evergreen Cooperative Initiative in the decaying rust belt city of Cleveland—aka “the Mistake by the Lake,” where the poverty rate is 30%. [856]

The Evergreen Cooperative Initiative is heavily influenced by the example of Mondragon. [857] The project had its origins in a study trip to Mondragon sponsored by the Cleveland Foundation, and is described as “the first example of a major city trying to reproduce Mondragon.” [858] Besides the cooperative development fund, its umbrella of support organizations includes Evergreen Business Services, which provides “back-office services, management expertise and turn-around skills should a co-op get into trouble down the road.” Member enterprises are expected to plow ten percent of pre-tax profits back into the development fund to finance investment in new cooperatives. [859]

The Evergreen Cooperative Laundry [860] was the first of some twenty cooperative enterprises on the drawing board, followed by Ohio Cooperative Solar [861] (which carries out large-scale installation of solar power generating equipment on the roofs of local government and non-profit buildings). Two more enterprises, a cooperative greenhouse [862] and the Neighborhood Voice newspaper, are slated to open in the near future.

The Initiative is backed by stakeholders in the local economy, local government and universities. The primary focus of the new enterprises, besides marketing to individuals in the local community, is on serving local “anchor institutions”—the large hospitals and universities—that will provide a guaranteed market for a portion of their services. The Cleveland Foundation and other local foundations, banks, and the municipal government are all providing financing. The Evergreen Cooperative Development Fund is currently capitalized at $5 million, and expects to raise at least $10–12 million more. [863]

Besides the Cleveland Foundation, other important stakeholders are the Cleveland Roundtable and the Democracy Collaborative. The Roundtable is a project of Community-Wealth.org [864]; Community-Wealth [865], in turn, is a project of the Democracy Collaborative at the University of Maryland, College Park. [866] All three organizations are cooperating intensively to promote the Evergreen Cooperative Initiative.

On December 7 – 8, 2006, The Democracy Collaborative, the Ohio Employee Ownership Center, and the Aspen Institute Nonprofit Sector Research Fund convened a Roundtable in Cleveland, Ohio. The event, titled “Building Community Wealth: New Asset-Based Approaches to Solving Social and Economic Problems in Cleveland and Northeast Ohio,” brought together national experts, local government representatives, and more than three dozen community leaders in Cleveland to discuss community wealth issues and identify action steps toward developing a comprehensive strategy.

The fifty participants included representatives of the Federal Reserve Bank of Cleveland, the Ohio Public Employees Retirement System, universities, and employee-owned firms; directors of nonprofit community and economic development organizations such as community development corporations, housing land trusts, and community development financial institutions; the economic development director of the City of Cleveland and members of his staff; a director of the new veterans administration hospital to be established in the city; the treasurer of Cuyahoga County; and others from the public, private, philanthropic, faith-based and non-profit communities.

Funding and other support for the meeting was provided by the Gund Foundation, the Cleveland Foundation, and the Sisters of Charity Foundation. [867]

This is one of the largest and most promising experiments in cooperative economics ever attempted in the United States, with an unprecedented number of local stakeholders at the table.

What do Antigonish, Mondragon, Salinas and Cleveland have in common? They all take the conventional commercial enterprise using existing production technology as a given, and simply tinker around with applying the cooperative principle and economic localism to such enterprises.

Most of Brummett’s hits on the economic viability of small towns in the Delta are based on the technocratic liberal assumption that enormous capital outlays are required to accomplish particular economic functions. That’s an assumption shared by technocratic liberals of the same stripe who promoted a Third World economic development model based on maximizing economies of scale by concentrating available capital in a few giant, capital-intensive enterprises rather than integrating intermediate production technologies into village economies. [868] That’s true of most Progressive(TM) versions of community economic development—Obama’s “green jobs” programs, alternative energy projects, and the like. Typically they entail “private-public partnerships,” based on attracting colonization by “progressive” or “green” corporations with capital-intensive business models, and the capture of profits from new technology on the pattern of “cognitive capitalism”: a sort of mashup of the Gates Foundation, Warren Buffett and Bono.

And the government’s criteria for aiding such development efforts usually manage to exclude low-capital, bottom-up efforts by self-organized locals. [869]

And de Angelis’s critique of the Salinas experiment comes from a similar set of assumptions: namely, that capital-intensive forms of production, with the requirement for high capital outlays and debt finance, and an export-oriented economic model for servicing that debt and fully utilizing the expensive plant and equipment, are simply a given.

But as we saw in the previous chapter, decent standards of living no longer depend on building communities around enormous concentrations of capital assets housed in large buildings. Thanks to technical change, the capital outlays required to support a comfortable standard of living are scalable to smaller and smaller population units. So Muhammad need no longer go to the mountain.

This has enormous liberatory significance for experiments in cooperative local economies like Salinas. The cheaper production tools become, for an ever increasing range of products, the more feasible it becomes to produce more and more of the things the local population consumes in small shops scaled to the local market, without high capital outlays and overhead creating pressure to maximize batch size and amortize costs. This will also mean less indebtedness from capital investment, less pressure toward self-exploitation, and less pressure to compete in a global marketplace instead of serving the local economy.

That means that manufacturing can move toward the kind of local subsistence model that de Angelis desires for the Salinas economy, and envisions as its idealized “better self”: “a means for the local population to meet reproduction needs in ways that shield them from the most exploitative practices of other areas in the region…”

In general, the promise of low-cost production tools dovetails perfectly with the goals of the cooperative and relocalization movements. As we will see in more detail in the next chapter, the lower the cost of production tools, the less of a bottleneck investment capital becomes for local economic development, and the less dependent the local economy becomes on outside investors. The imploding cost of production machinery is a revolutionary reinforcement for the kind of process that Jane Jacobs regarded as the best approach to community economic development: import replacement by using local resources and putting formerly waste resources to use. Every technological change that reduces the capital outlays required for producing local consumption needs is a force multiplier, not only making import substitution more feasible but increasing its cost-effectiveness, and enabling local economies to do more with less. When the masters of the corporate state realize the full revolutionary significance of micromanufacturing technology in liberating local economies from corporate power, we’ll be lucky if the people in the Fab Labs don’t wind up being waterboarded at Gitmo.

Low capital outlays and other fixed costs, and the resulting low overhead burden to be serviced, are the key to the counter-economy’s advantages as a path to community economic development.

The Indian villages Neil Gershenfeld described in Fab (quoted extensively in the next chapter) are one illustration of the possibilities for economically depressed, resource-poor areas using the latest generation of technology to bootstrap development and leapfrog previous generations of high-cost, capital-intensive technology.

Sam Kronick recently challenged members of the Open Manufacturing email list on the relevance of their pet micromanufacturing technology as a lifeline for dying rust belt communities like Braddock, Pennsylvania.

The state has classified it a “distressed municipality” — bankrupt, more or less — since the Reagan administration. The tax base is gone. So are most of the residents. The population, about 18,000 after World War II, has declined to less than 3,000. Many of those who remain are unemployed. Real estate prices fell 50 percent in the last year. “Everyone in the country is asking, ‘Where’s the bottom?’ ” said the mayor, John Fetterman. “I think we’ve found it.” Mr. Fetterman is trying to make an asset out of his town’s lack of assets, calling it “a laboratory for solutions to all these maladies starting to knock on the door of every community.” One of his first acts after being elected mayor in 2005 was to set up, at his own expense, a Web site to publicize Braddock — if you can call pictures of buildings destroyed by neglect and vandals a form of promotion. He has encouraged the development of urban farms on empty lots, which employ area youths and feed the community. He started a nonprofit organization to save a handful of properties. [870]

This, Kronick says, “is as close as you’ll get to an open invitation by a government to experiment with some of these ideas in the real world.”

What could be done in the next week/month/year/decade?... ...[H]ow could a community fablab/hackerspace affect a place like this in the short term? [871]

Several other list members replied by pointing out the drawbacks of Braddock as a site for a Fab Lab or hackerspace: the high rates of crime and vandalism, the deteriorating buildings, etc. One member argued that micromanufacturing was about “building from abundance,” not “trying to rebuild from scratch” in the worst-off areas. Kronick, undeterred, rejoined that they had “made the case for Braddock as the prototypical challenge to many of your ideas.”

If your post-scarcity dreams don’t have a chance there, I don’t know how much hope I have for them in the rest of the world.... Vandalism is, I would argue, a key indicator of abundance or, put more simply, “free time.” Vandalism can be an outlet for creativity and intelligence (and I don’t just mean artistic graffiti. Some tend to venerate the bourgeois urban explorers with their ropes and headlamps and cameras but not the kids who risk arrest or injury climbing buildings or billboards to throw up a quick tag). I won’t argue that you /should/ move there because of this, but try to understand how useless or upsetting your own pastimes might seem to others. Buying cheap distressed property can lead to what many might call “gentrification,” a prospect some find more terrifying to their way of life than broken windows and scribbles on the walls. It’s a matter of perspective.

But I will not digress further; I will attempt to sustain my disbelief that this mailing list isn’t really just a thin guise for endless theoretical musings on Utopia and return to the subject I originally asked about: what implications could “open manufacturing” have in a small town that is actively seeking out new ideas?...

What might the priorities be in a Braddock communal workshop? An army of Repraps? A few old Bridgeports? A safe, sound building that can be used year-round? Community show-and-tell nights to get the whole town interested in what’s being built? Connections to the schools? Connections to local manufacturers? Initiatives that would bring in government “green jobs” money? Production of profitable items to bring cash into the community? Production of necessary items for people in the community? A focus on urban gardening, bicycle transportation, alternative energy, building rehabilitation, permaculture, electronics, EV’s, biodiesel, art, music, etc etc etc?

I guess I see plenty of options and directions that the tools of “open manufacturing” could bring (though I appreciate those working on creating more/better tools, more options); now I want to know how their application would fare in a place that provides both clear challenges and opportunities. I think this is what people like the openfarmtech people are doing already, but why not experiment in another situation? [872]

As I argued on-list, my position is midway between those of Kronick and the skeptics. It seems to me that depressed areas like Braddock, the Arkansas Delta, and a good many Rust Belt communities in the Ohio Valley have a lot in common with the economic problems facing Indian villages, as described by Neil Gershenfeld in Fab. Gershenfeld’s examples (which, again, we will examine in the next chapter) of rural hardware hackers reverse-engineering homebrew versions of proprietary tractors for a small fraction of the cost, or of village cable systems using cheap reverse-engineered satellite receivers, seem like something that would be relevant to American communities with high unemployment, collapsing asset values and eroding tax bases. Those villages in India that Gershenfeld describes couldn’t exactly be described as building from abundance, except in the sense that imploding fixed costs are creating potential abundance ex nihilo everywhere.

And as I also argued, it seems to me that stigmergic organization (see especially the discussion in the next chapter) is relevant to the problem. In my opinion micromanufacturing will benefit communities like Braddock and the Arkansas Delta a lot sooner than most people think. But the fastest way to get from here to there, from the perspective of those currently involved in the movement, is for them to develop and expand the technology as fast as they can from where they are right now. Those currently engaged in micromanufacturing should feel under no moral pressure to abandon the capital assets they’ve built up where they are to start over somewhere else, as some sort of missionary effort. The faster Fab Labs, hacker spaces and garage factories proliferate and drop in price, the more of a demonstration effect they’ll create. The cheaper and more demonstrably feasible the technology becomes, the more it builds up models of complete industrial ecologies in communities where it already exists, and the more it shows itself to benefit those local economies by filling the void left by the deindustrialization of old-style mass production employers, the more attractive it will be in places where it hasn’t yet been tried. The more this happens, in turn, the more people there will be like Kronick’s friend in Braddock (his suggestion to Kronick that it might be a useful site for a micromanufacturing effort after Kronick’s graduation was what sparked the whole discussion), who are eager to experiment with it locally. And at the same time, the more people there will be in the existing fab/hackerspace movement who are willing to take a gamble on acting as micromanufacturing missionaries in the Rust Belt. Likewise, the more prominent a part of economic life it becomes in areas where it already exists, and the more public awareness it creates as a credible path to economic development in depressed areas, the more open people like the unconventional mayor of Braddock will be toward trying it out.

In keeping with Eric Raymond’s stigmergic model, the people who are best suited to tackle particular problems do so, and put all their effort into doing what they’re best at where they are. These contributions create a demonstration effect and go into the network culture’s pool of common knowledge, for free adoption by anyone who finds them to be what they need. So the more everybody does their own thing, the more they’re facilitating the eventual adoption of the benefits of their work in areas like Braddock.

Everything Kronick said of Braddock is true of Cleveland in spades; it’s an unprecedented opportunity for micromanufacturing enthusiasts to put their ideas into operation. The micromanufacturing and open hardware movements are actively engaged in building the technological basis for the libertarian, decentralized manufacturing economy of the future. And right now Cleveland is engaged in the biggest experimental project around for building a relocalized cooperative economy. An alliance between the micromanufacturing movement and the Cleveland model would seem to be the opportunity of a century. As I asked in an article at P2P Foundation Blog on the Evergreen Cooperative Initiative:

There is enormous potential for fruitful collaboration between the Cleveland experiment and the micromanufacturing, Fab Lab and hackerspace movements. What local resources exist in Cleveland right now for a networked micromanufacturing economy? Perhaps someone in our readership knows of someone in Cleveland with CNC tools who would be interested in joining the 100kGarages micromanufacturing network. Or someone in the Cleveland area with the appropriate skills might be interested in organizing a hackerspace. The university is one of the leading stakeholders in the effort. Universities like Stanford, MIT and UT Austin have played a central role in creating the leading tech economies in other parts of the country, and the flagship project of the Fab Lab movement is the Austin Fab Lab created under the auspices of UT. Perhaps the engineering department at one of the universities involved in building the Cleveland Model would be interested in supporting local micromanufacturing projects. Or maybe some high school shop classes, or community college machining classes, would be interested in collaborating to build a local Fab Lab. From the other direction, is anyone involved in networked manufacturing projects like 100kGarages, or in the Fab Lab and hackerspace movement, interested in feeling out some of the stakeholders in the Cleveland initiative? [873]

Counter-economic development initiatives in decaying American cities like Cleveland can achieve synergies not only with the micromanufacturing movement, but also with the microenterprise movement.

Micromanufacturing is a force multiplier because new, cheaper production technologies free local economies from dependence on external capital finance for organizing the local production of local needs. The microenterprise, on the other hand, is a force multiplier because it puts existing underutilized capital equipment to full use. The household microenterprise operates on extremely low overhead because it uses idle capacity (“spare cycles”) of the ordinary capital goods that most households already own.
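To make the “spare cycles” point concrete, here is a back-of-the-envelope sketch; all the figures in it are hypothetical, chosen only to show the shape of the arithmetic. A household machine’s capital cost is already sunk in domestic use, so a microenterprise run on its idle hours bears almost no capital overhead, while a dedicated commercial machine must recover its whole price from the business:

    # Hypothetical figures only: the point is the shape of the
    # arithmetic, not the particular numbers.

    def capital_overhead_per_batch(price, batches, marginal_share=1.0):
        """Capital cost each batch of output must recover. A household
        machine's spare cycles bear only marginal wear and tear (guessed
        here at 10% of the pro-rated cost); a dedicated commercial
        machine must recover its full price from the business alone."""
        return (price / batches) * marginal_share

    # A $12,000 commercial oven vs. the idle hours of a $600 home oven,
    # each used for 500 batches of bread:
    print(capital_overhead_per_batch(12_000, 500))                   # 24.0
    print(capital_overhead_per_batch(600, 500, marginal_share=0.1))  # ~0.12

The second figure is two orders of magnitude below the first, which is the whole of the “force multiplier” claim expressed in capital terms.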

The Cleveland initiative could achieve very high bang for the buck, in building a resilient and self-sufficient local economy, by eliminating all the local regulatory barriers to microenterprises operating out of people’s homes.

Such relocalization movements can also achieve synergies and get more bang for the buck in another way: by eliminating barriers to cheap subsistence by the homeless and unemployed. No matter how large a share of the goods and services we consume can be produced and exchanged in the counter-economy, most people still bear one significant fixed cost that can’t be met outside the wage system: their rent or mortgage payment. And most of the possibilities for informal production go right out the window when a household lacks sufficient employment income to pay the rent or mortgage, and people consequently lose the roofs over their heads.

So the problem of “informal housing” needs to be addressed in some way as part of the larger agenda. This means efforts like those discussed later in this chapter: for law enforcement to de-prioritize foreclosure evictions and the eviction of squatters, for local governments to open unused public buildings as barebones shelters (with group toilets, water taps and hot plates), and similarly to open vacant public land as camping grounds with communal water taps and portable toilets.

F. Contemporary Ideas and Projects

To some extent Factor e Farm and 100kGarages, which we examined in the previous chapter, are local economy projects of sorts. Rather than duplicating the material in the last chapter, we refer you back to it.

Jeff Vail’s “Hamlet Economy.” This is a system of networked villages based on an idealized version of the historical “lattice network of Tuscan hill towns” numbering in the hundreds (which became the basis of a modern regional economy based largely on networked production). The individual communities in Vail’s network must be large enough to achieve self-sufficiency by leveraging division of labor, as well as providing sufficient redundancy to absorb systemic shock. When larger-scale division of labor is required to support some industry, Vail writes, this is not to be achieved through hierarchy, with larger regional towns becoming centers of large industry. Rather, it is to be achieved by towns of roughly similar size specializing in producing specialized surplus goods for exchange, via fairs and other horizontal exchange relationships. [874]

The Hamlet relies on a “design imperative,” in an age of Peak Oil, for extracting the maximum quality of life from reduced energy inputs. The Tuscan hill towns Vail points to as a model are decentralized, open source and vernacular.

How is the Tuscan village decentralized? Production is localized. Admittedly, everything isn’t local. Not by a long shot. But compared to American suburbia, a great percentage of food and building materials are produced and consumed in a highly local network. A high percentage of people garden and shop at local farmer’s markets.

How is the Tuscan village open source? Tuscan culture historically taps into a shared community pool of technics in recognition that a sustainable society is a non-zero-sum game. Most farming communities are this way—advice, knowledge, and innovation is shared, not guarded. Beyond a certain threshold of size and centralization, the motivation to protect and exploit intellectual property seems to take over (another argument for decentralization). There is no reason why we cannot share innovation in technics globally, while acting locally—in fact, the internet now truly makes this possible, leveraging our opportunity to use technics to improve quality of life.

How is the Tuscan village vernacular? You don’t see many “Colonial-Style” houses in Tuscany. Yet strangely, in Denver I’m surrounded by them. Why? They make no more sense in Denver than in Tuscany. The difference is that the Tuscans recognize (mostly) that locally-appropriate, locally-sourced architecture improves quality of life. The architecture is suited to their climate and culture, and the materials are available locally. Same thing with their food—they celebrate what is available locally, and what is in season. Nearly every Tuscan with the space has a vegetable garden. And finally (though the pressures of globalization are challenging this), their culture is vernacular. They celebrate local festivals, local harvests, and don’t rely on manufactured, mass-marketed, and global trends for their culture nearly as much as disassociated suburbanites—their strong sense of community gives prominence to whatever “their” celebration is over what the global economy tells them it should be. [875]

Global Ecovillage Network. GEN was based on, and in some cases went on to incorporate, a number of “apparently simultaneous ideas arising in different locations at about the same time.” [876] It seems to have been a direct outgrowth of the “planetary village” movement, centered on the Findhorn community in Scotland, founded in 1962. [877]

In 1975 the magazine Mother Earth News began constructing experimental energy systems, novel buildings, and organic gardens near its business office in Hendersonville, North Carolina, and in 1979, began calling this educational center an “eco-village.” At about the same time in Germany, during the political resistance against disposal of nuclear waste in the town of Gorleben, anti-nuclear activists attempted to build a small, ecologically based village at the site, which they called an Ökodorf (literally ecovillage). In the largest police action seen in Germany since the Second World War, their camp was ultimately removed, but the concept lived on, and small Ökodorf experiments continued in both eastern and western Germany. The magazine Ökodorf Informationen began publishing in 1985 and later evolved into Eurotopia. After reunification of Germany, the movement coalesced and became part of the international ecovillage movement. About the same time in Denmark, a number of intentional communities began looking beyond the social benefits of cohousing and other cooperative forms of housing towards the ecological potentials of a more thorough redesign of human habitats. In 1993 a small group of communities inaugurated the Danish ecovillage network, Landsforeningen for Økosamfund, the first network of its kind and a model for the larger ecovillage movement that was to follow.... Throughout the 1980s and early 1990s, on Bainbridge Island near Seattle, Robert and Diane Gilman used their journal, In Context, to publish stories and interviews describing ecovillages as a strategy for creating a more sustainable culture. When Hildur Jackson, a Danish attorney and social activist, discovered In Context, the ecovillage movement suddenly got traction. Ross Jackson, Hildur’s husband, was a Canadian computer whiz who had been working in the financial market, writing programs to predict shifts in international currencies. When he took his algorithms public as Gaia Corporation, his models made a fortune for his investors, but Ross, being a deeply spiritual man, wanted little of it for himself. Searching for the best way to use their prosperity, Ross and Hildur contacted the Gilmans and organized some gatherings of visionaries at Fjordvang, the Jacksons’ retreat in rural Denmark, to mull over the needs of the world.... Ross Jackson was also interested in utilizing the new information technology that was just then emerging: email and electronic file exchanges between universities and research centers (although it would still be a few years before the appearance of shareware browsers and the open-to-all World Wide Web). Ross and Hildur Jackson created a charitable foundation, the Gaia Trust, and endowed it with 90 percent of their share of company profits. In 1990, Gaia Trust asked In Context to produce a report, Ecovillages and Sustainable Communities, in order to catalog the various efforts at sustainable community living underway around the world, and to describe the emerging philosophy and principles in greater detail. The report was released in 1991 as a spiral-bound book (now out of print). In September 1991, Gaia Trust convened a meeting in Fjordvang to bring together people from eco-communities to discuss strategies for further developing the ecovillage concept.
This led to a series of additional meetings to form national and international networks of ecovillages, and a decision, in 1994, to formalize networking and project development under the auspices of a new organization, the Global Ecovillage Network (GEN). By 1994 the Internet had reached the point where access was becoming available outside the realm of university and government agencies and contractors. Mosaic was the universal browser of the day, and the first Internet cafes had begun to appear in major cities. Ross Jackson brought in a young Swedish web technician, Stephan Wik, who’d had a computer services business at Findhorn, and the Ecovillage Information Service was launched from Fjordvang at www.gaia.org. With Stephan and his co-workers gathering both the latest in hardware advances and outstanding ecovillage content from around the world, gaia.org began a steady growth of “hits,” increasing 5 to 15 percent per month, that would go on for the next several years, making the GEN database a major portal for sustainability studies. In October 1995, Gaia Trust and the Findhorn Foundation co-sponsored the first international conference, “Ecovillages and Sustainable Communities--Models for the 21st Century,” held at Findhorn in Scotland. After the conference, GEN held a formative meeting and organized three worldwide administrative regions: Europe and Africa; Asia and Oceania; and the Americas. Each region was to be overseen by a secretariat office responsible for organizing local ecovillage networks and developing outreach programs to encourage growth of the movement. A fourth secretariat was established in Copenhagen to coordinate all the offices, seek additional funding, and oversee the website. The first regional secretaries, chosen at the Findhorn meeting, were Declan Kennedy, Max Lindegger, and myself. Hamish Stewart was the first international secretary. [878]

According to Ross Jackson, the GEN was founded “to link the hundreds of small projects that had sprung up around the world....” [879] The Gaia Trust website adds:

The projects identified varied from well-established settlements like Solheimer in Iceland, Findhorn in Scotland, Crystal Waters in Australia, Lebensgarten in Germany to places like The Farm in Tennessee and the loosely knit inner-city Los Angeles Ecovillage project to places like the Folkecenter for Renewable Energy in Thy and many smaller groups that were barely started, not to mention the traditional villages of the South. [880]

Following the foundation of GEN, Albert Bates continues, “[w]ith generous funding from Gaia Trust for this new model, the ecovillage movement experienced rapid growth.”

Kibbutzim that re-vegetated the deserts of Palestine in the 20th century developed a new outlook with the formation of the Green Kibbutz Network. The Russian Ecovillage Network was inaugurated. Permaculture-based communities in Australia such as Crystal Waters and Jarlanbah pioneered easy paths to more environmentally sensitive lifestyles for the mainstream middle class. GEN-Europe hosted conferences attended by ecovillagers from dozens of countries, and national networks sprang up in many of them. In South and North America, nine representatives were designated to organize ecovillage regions by geography and language. By the turn of the 21st century GEN had catalogued thousands of ecovillages, built “living and learning centers” in several of them, launched ecovillage experiments in universities, and sponsored university-based travel semesters to ecovillages on six continents.... Ecovillages today are typically small communities with a tightly-knit social structure united by common ecological, social, or spiritual views. These communities may be urban or rural, high or low technologically, depending on circumstance and conviction. Ökodorf Sieben Linden is a zero-energy cohousing settlement for 200 people in a rural area of eastern Germany. Los Angeles EcoVillage is a neighborhood around an intersection in inner Los Angeles. Sasardi Village is in the deep rainforest of Northern Colombia. What they share is a deep respect for nature, with humans as an integral part of natural cycles. Ecovillages address social, environmental, and economic dimensions of sustainability in an integrated way, with human communities as part of, not apart from, balanced ecologies.... [881]

The best concise description of an ecovillage that I’ve seen comes from what is apparently an older version of the Gaia Trust website, preserved in an article at Permaculture Magazine:

Ecovillages are urban or rural communities that strive to combine a supportive social environment with a low-impact way of life. To achieve this, they integrate various aspects of ecological design, permaculture, ecological building, green production, alternative energy, community building practices, and much more. These are communities in which people feel supported by and responsible to those around them. They provide a deep sense of belonging to a group and are small enough for everyone to be seen and heard and to feel empowered. People are then able to participate in making decisions that affect their own lives and that of the community on a transparent basis.

Ecovillages allow people to experience their spiritual connection to the living earth. People enjoy daily interaction with the soil, water, wind, plants and animals. They provide for their daily needs – food, clothing, shelter – while respecting the cycles of nature. They embody a sense of unity with the natural world, with cultural heritage around the world and foster recognition of human life and the Earth itself as part of a larger universe.

Most ecovillages do not place an emphasis on spiritual practices as such, but there is often a recognition that caring for one’s environment does make people a part of something greater than their own selves. Observing natural cycles through gardening and cultivating the soil, and respecting the Earth and all living beings on it, ecovillages tend to maintain, recreate or find cultural expressions of human connectedness with nature and the universe. Respecting this spirituality and culture manifests in many ways in different traditions and places. [882]

The typical ecovillage has 50–400 people. Many ecovillages, particularly in Denmark, are linked to a cohousing project of some sort. [883] Such projects lower the material cost of housing (construction materials, heating, etc.) per person, and reduce energy costs by integrating the home with workplace and recreation. [884] Neighborhood-based ecovillages in some places have influenced the liberalization of local zoning laws and housing codes, and promoted the adoption of new building techniques by the construction industry. Ecovillage practices include peripheral parking, common open spaces and community facilities, passive solar design, vernacular materials, and composting toilets. [885]

The ecovillage movement is a loose and liberally defined network. According to Robert and Diane Gilman, in Ecovillages and Sustainable Communities (1991), an ecovillage is “A human-scale, full-featured settlement in which human activities are harmlessly integrated into the natural world in a way that is supportive of healthy human development and can be successfully continued into the indefinite future.” The GEN refuses to police member communities or to enforce any centralized standard of compliance. At a 1998 GEN board meeting in Denmark, the Network affirmed “that a community is an ecovillage if it specifies an ecovillage mission, such as in its organizational documents, community agreements, or membership guidelines, and makes progress in that direction.” The Network promotes the Community Sustainability Assessment Tool, a self-administered auditing survey, as a way to measure progress toward the same general set of goals. [886] The Ecological portion of the checklist, for example, includes detailed survey questions on

Sense of Place — community location & scale; restoration & preservation of nature

Food Availability, Production & Distribution

Physical Infrastructure, Buildings & Transportation — materials, methods, designs

Consumption Patterns & Solid Waste Management

Water — sources, quality & use patterns

Waste Water & Water Pollution Management

Energy Sources & Uses [887]

Question 2, “Food Availability,” includes questions on the percentage of food produced within the community, what is done with food scraps, and whether greenhouses and rooftop gardens are used for production year-round. [888]

Such liberality of standards is arguably necessary, given the diversity of starting points of affiliate communities. An ecovillage based in an inner city neighborhood, it stands to reason, will probably have much further to go in achieving sustainability than a rural-based intentional community. Urban neighborhoods, of necessity, must be “vertically oriented,” and integrate the production of food and other inputs on an incremental basis, often starting from zero. [889]

The Transition Town Movement. This movement, which began with the town of Totnes in the UK, is described by John Robb as an “open-source insurgency”: a virally replicable, open-source model for resilient communities capable of surviving the Peak Oil transition. As of April 2008, some six hundred towns around the world had implemented Transition Town projects. [890]

The Transition Towns Wiki [891] includes, among many other things, a Transition Initiatives Primer (a 51-page PDF), a guide to starting a Transition Town initiative in a local community. [892] It has also published a print book, The Transition Handbook. [893]

Totnes is the site of Rob Hopkins’ original Transition Town initiative, and a model for the subsequent global movement.

The thinking behind [Transition Town Totnes] is simply that a town using much less energy and resources than currently consumed could, if properly planned for and designed, be more resilient, more abundant and more pleasurable than the present. Given the likely disruptions ahead resulting from Peak Oil and Climate Change, a resilient community—a community that is self-reliant for the greatest possible number of its needs—will be infinitely better prepared than existing communities with their total dependence on heavily globalised systems for food, energy, transportation, health and housing. Through 2007, the project will continue to develop an Energy Descent Action Plan for Totnes, designing a positive timetabled way down from the oil peak. [894]

The most complete Energy Descent Action Plan is that of Kinsale. It assumes a scenario in which Kinsale in 2021 has half the energy inputs it had in 2005. It includes detailed targets and step-by-step programs, for a wide range of areas of local economic life, by which energy consumption per unit of output may be reduced and local inputs substituted for outside imports on a sustainable basis. In the area of food, for example, it envisions a shift to local market gardening as the primary source of vegetables and a large expansion in the amount of land dedicated to community-supported agriculture. By 2021, the plan says, most ornamental landscaping will likely be replaced with fruit trees and other edible plants, and the lawnmower will be as obsolete as the buggy whip. In housing, the plan calls for a shift to local materials, vernacular building techniques, and passive solar design. The plan also recommends the use of local currency systems, skill exchange networks, volunteer time banks, and barter and freecycling networks as a way to put local producers and consumers in contact with one another. [895]
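For a sense of scale, halving energy inputs over the plan’s sixteen-year horizon implies a compound descent of a little over four percent a year. The following sketch (my arithmetic, not the plan’s) makes the implied path explicit:

    # Implied compound annual reduction for halving energy use, 2005-2021:
    rate = 1 - 0.5 ** (1 / 16)
    print(f"{rate:.2%} per year")   # 4.24% per year

    # The descent path, indexed to 2005 = 100:
    for year in (2005, 2010, 2015, 2021):
        print(year, round(100 * (1 - rate) ** (year - 2005), 1))
    # 2005 100.0 / 2010 80.5 / 2015 64.8 / 2021 50.0

A steady four percent a year is demanding but not catastrophic, which is presumably why the plan can frame the descent as a “positive timetabled way down” rather than as collapse.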

Global Villages. These are designed to generate 80% of their income internally and 20% externally, with internally generated wealth circulating five times before it leaves the community (the arithmetic connecting those two figures is sketched below). As described by Claude Lewenz, in How to Build a Village:

The local economy is layered, built on a foundation that provides the basic needs independent of the global economy—if it melts down the Villagers will survive. The local economy is diversified.... The local economy must provide conditions that encourage a wide diversity of businesses and officers to operate. Then when some collapse or move away, the local economy only suffers a bit—it remains healthy. [896]
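Lewenz’s two design figures are mutually consistent on the standard local-multiplier reading: if 80% of each round of income is re-spent inside the Village, each unit of outside income generates five units of local turnover before it fully leaks away. A minimal sketch of that reading (mine, not Lewenz’s):

    def local_multiplier(local_share):
        """Total local turnover per unit of outside income: 1 / (1 - s)."""
        return 1.0 / (1.0 - local_share)

    print(local_multiplier(0.80))   # 5.0  -- the Global Village target
    print(local_multiplier(0.20))   # 1.25 -- a leaky, import-dependent economy

    # The same result, summing the successive rounds of local re-spending:
    total, remaining = 0.0, 1.0
    while remaining > 1e-9:
        total += remaining
        remaining *= 0.80
    print(round(total, 2))          # 5.0

The contrast with the 0.20 case is the substance of the design target: raising the locally re-spent share from one-fifth to four-fifths quadruples the local work each dollar does before it leaves.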

Lewenz’s Village is also essentially the kind of “resilient community” John Robb and Jeff Vail have in mind:

...[E]conomies can collapse and first-world people can starve if systems fail. We have now built a food system almost entirely dependent on diesel fuelled tractors, diesel delivery trucks and a long-distance supermarket delivery system. More recently, we shifted to an economic and communication system entirely dependent on computers—a system that only runs if the electrical grid supplies power. In the Great Depression in the USA, poor people say they hardly noticed—in those days they kept gardens because the USA was predominantly rural and village. The potential for economic collapse always looms, especially as the global economic system becomes more complex and vulnerable. Prudence would dictate that in planning for a local economy, it include provisions to assure the Village sustained its people, and those of the surrounding region, in such adverse conditions. The challenge is to maintain a direct rural and farm connection for local, good food, and establish an underlying local economy that can operate independent of the larger economy and which can put unemployed people to work in hard times. [897]

The Global Villages network [898] has had fairly close ties with Marcin Jakubowski and Factor e Farm, which we considered in the previous chapter.

Venture Communism. Venture communism is a project developed by Dmytri Kleiner. The basic principle—purchasing undeveloped land and resources cheaply from the capitalist economy, and then financing itself internally from the rents on that land as development by venture communist enterprises causes it to appreciate in value—is reminiscent of Ebenezer Howard’s original vision for the Garden City movement.

Starting from the belief that political change can only follow a change in the mode of production, venture communism is an attempt to create a mode of production that will expand socialism by reducing the labour available to be exploited by property.... Socialism is defined as a mode of production where the workers own the means of production, and especially the final product. By withholding our labour from Capitalists and instead forming our own worker-owned enterprises we expand Socialism. The more labour withheld from Capitalists, the less they are able to exploit. [899]

In an extended passage from the P2P Foundation Wiki, Kleiner describes the actual functioning of a venture commune:

A Venture Commune is a joint stock corporation, much like the Venture Capital Funds of the Capitalist class, however it has four distinct properties which transform it into an effective vehicle for revolutionary workers’ struggle.

1—A Share In The Venture Commune Can Only Be Acquired By Contributions Of Labour, and Not Property. In other words only by working is ownership earned, not by contributing Land, Capital or even Money. Only Labour. It is this contributed labour which represents the initial investment capacity of the Commune. The Commune issues its own currency, based on the value of the labour pledges it has. It then invests this currency into the private enterprises which it intends to purchase or fund; these Enterprises thus become owned by the Commune, in the same way that Enterprises which receive Venture Capital become owned by a Venture Capital Fund.

2—The Venture Commune’s Return On Investment From Its Enterprises Is Derived From Rent and Not Income. As condition of investment, the Enterprise agrees to not own its own property, neither Land nor Capital, but rather to rent Land and Capital from the Commune. The Commune, unlike a Venture Capital Fund, never takes a share of the income of the Enterprise nor of any of its workers. The Commune finances the acquisition of Land and Capital by issuing Bonds, and then Rents the Land and Capital to its Enterprises, or an Enterprise can sell whatever Land and Capital it acquires through other means to the Commune, and in turn Rent it. In this way Property is always owned Mutually by all the members of the Commune, however all workers and the Enterprises that employ them retain the entire product of their labour.

3—The Venture Commune Is Owned Equally By All Its Members. Each member can have one share, and only one share. Thus although each worker is able to earn different prices for their labour from the Enterprises, based on the demand for their labour, each worker may never earn any more than one share in the ownership of the Commune itself, and therefore can never accumulate a disproportionate share of the proceeds of Property. Ownership of Property can therefore never be concentrated in fewer and fewer hands and used to exploit the worker as in Capitalist corporations.

4—All Those Who Apply Their Labour To the Property of the Commune Must Be Eligible For Membership In The Commune. The Commune may not refuse membership to any Labour employed by any of its enterprises that work with the Land and Capital controlled by the commune. In this way Commune members can not exploit outside wage earners, and the labour needs of the Enterprise will ensure that each Commune continues to grow and accept new members.”

Discussion. Dmytri Kleiner: “I see venture communism in two initial phases; in the first phase proto-venture-communist enterprises must break the Iron Law and then join together to found a venture commune. In a mature venture commune, cost-recovery is simply achieved by using rent-sharing to efficiently allocate property to its most productive use, thereby ensuring mutual accumulation. Rent sharing works by renting the property for its full market value to member enterprises and then distributing the proceeds of this rent equally among all commune members. Investment, when required by exogenous exchange, is funded by selling bonds at auction. Endogenous liquidity is achieved through the use of mutual credit.
However in the initial phase there is no property to rent-share and the demand for the bonds is likely to be insufficient, thus the only way the enterprise can succeed is to break the Iron Law and somehow capitalize and earn more than subsistence costs, making mutual accumulation possible. IMO, there are two requirements for breaking the Iron Law:

a) The enterprise must have highly skilled creative labour, so that the labour itself can capture scarcity rents, i.e. artists, software developers.

b) Production must be based on what I call “commodity capital,” that is Capital that is a common input to most, if not all, industries, and therefore is often subsidized by public and private foundations and available on the market for below its actual cost. Examples of this are telecommunications and transportation infrastructure, both of which have been heavily subsidized.

Also, a third requirement for me, although not implied by the simple economic logic, is that the initial products are of general use to market segments I believe are most directly agents for social change, i.e. other peer producers, activists, diasporic/translocal communities and the informal economy broadly. Also, I would like to note that while the initial enterprises depend on complex labour and should focus on products of strategic benefit, a mature venture commune can incorporate all types of labour and provide all types of goods and services once the implementation of rent-sharing, bond-auction and mutual-credit is achieved.” (Oekonux mailing list, January 2008) [900]
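The rent-sharing rule at the heart of the scheme reduces to simple arithmetic: property is rented to member enterprises at its market rate, and the pooled rent is divided equally among all members. The sketch below illustrates only that rule (the enterprise names and figures are invented); it is not an implementation of Kleiner’s full bond-auction and mutual-credit machinery:

    # Hypothetical member enterprises renting commune-owned land and
    # capital at market rates (names and figures invented for illustration):
    rents = {"print shop": 1200.0, "machine shop": 2500.0, "cafe": 800.0}
    n_members = 30

    # The commune pools the rent and pays every member an equal share,
    # regardless of which enterprise employs them:
    dividend = sum(rents.values()) / n_members
    print(dividend)   # 150.0 per member per period

Note the allocative pressure this creates: an enterprise tying up more property than its output justifies pays out more in rent than its workers receive back as dividends, which is the mechanism behind Kleiner’s claim that rent-sharing allocates property to its most productive use.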

The Telekommunisten collective is one such initial, capital-forming enterprise. “Venture Communism,” Kleiner writes, “is a form of worker’s self organization which provides a model of sharing property and forming mutual capital that is compatible with anti-capitalist ideals.”

However, venture communism does not provide a means of acquiring such property in the first place. Telekommunisten is intended to realize possibilities in forming the privative mutual property required to initiate venture communism. The lack of any initial financing, most forms of which would be incompatible with the venture communist principle of ownership as a reward for labour not wealth, presents twin challenges for a proto-venture-communist enterprise to overcome: forming capital and finding customers.

The first challenge in essence requires breaking the Iron Law of Wages, the implications of which are that workers can never form capital because they can never earn any more than their subsistence cost from wages alone. The primitive accumulation theory of Telekommunisten proposes to break the Iron Law by exploiting its boundary conditions, namely that some labour is scarce, and therefore captures a form of scarcity rent in addition to wages, and that some forms of capital are themselves commodities, and therefore can not even capture interest; more to the point, often these forms of capital are common inputs to production and are subsidized by private and public funds and are available on the market for below their own reproduction costs. Therefore, the Iron Law can be broken if you are able to invest scarce labour and employ commodity capital in production. An obvious example of such commodity capital is basic telephone and internet infrastructure, which connects the farthest reaches of the globe together, built almost entirely with public money and available to be exploited for far less than its real cost. And likewise, an obvious example of the needed scarce labour investment is the IT and media skills required to derive new products from basic internet and telephone service. Thus, Telekommunisten propose to form the primitive mutual property required to initiate venture communism by collective investment in the form of IT and media labour using only commonly available internet resources to derive marketable products. The first of these products is Dialstation, which allows any land line or mobile telephone to make very inexpensive international phone calls.

The second challenge, finding customers without any initial financing for marketing, is addressed by linking the artistic and political nature of the project very closely with our products; therefore we promote products such as Dialstation as a matter of course in our artistic production and our participation in the activist and hacker communities. Our basic premise is that people will use and promote our products if they identify with our artistic and political practices, and in turn the economy generated can support and expand these practices. [901]

It is most notable for its Dialstation project, an international long-distance service. [902]

Decentralized Economic and Social Organization (DESO). This is a project in development by Reed Kinney. It’s a continuation of the work of his late father, Mark Kinney, among other things a writer on alternative currency systems and an associate of Thomas Greco. [903] Kinney’s book on DESO is forthcoming. Here’s a brief summary of the project:

This is a miniscule explanation of Decentralized Economic Social Organization, DESO. The text has required five years to research and write. As of July 2009, I’m now editing it. The text categorically unfolds every DESO structure, component, department, and its accompanying philosophies. It is a substantial work and will require a conventional publisher by October 2009. As a favor to Kevin Carson, I can offer this very brief overview. The content of this text is an object of dialogue. The assertion made here is that the base of human intercourse is structurally embedded. And that each type of socioeconomic structure generates a corresponding form of social intercourse. The stated objective here is the development of the socioeconomic pattern that best meets the real needs of its members and that generates the maximum and the fullest mental health among them. This content is derived from many contributors, like Paulo Freire, who each create equally important components that are here molded into coherent functioning form. True dialogue is the soil, water and sunlight needed to germinate DESO. DESO is the creation of viable, independent communities within which the humanity of each person is supported through humanistic education and participatory decision making processes. The autonomous DESO economy is designed to both support and further cultivate those objectives. DESO’s economic organization, its educational organization, and its civic organization are designed to interpenetrate and to be interdependent. From their incipience each DESO community develops those three fundamental DESO spheres concurrently. DESO culture is the consequence of inter-community networks. However, it is structured to maintain and perpetuate decentralization. DESO creates stable, regional economies that resemble the self-sustaining ecosystems of nature. DESO independence is proportional to its population. Structurally, DESO is designed to expand exponentially through mass centrist society, MCS, which it depopulates with astonishing rapidity. Ultimately, DESO curbs the destructive momentum of MCS. [904]
Stateless Society. To refer to a society as stateless does not imply an absence of socioeconomic organization. To build an equitable society two basic interrelated tactics are used. First, the dissection, the “deconstruction,” of the structures of mass centrist society, MCS, reveals what their opposite structures would be; then, second, all of the known requirements, the conditions needed, for fermenting full, human psychic health are evaluated. These two known factors are then used to mold the functions of the structures of decentralized economic social organization, DESO. The interpersonal relations born of genuine dialogical based organization (mutualism), both in civic and familial spheres, develop the self-realization of all members (The Knowledge of Man: A Philosophy of the Interhuman, by Martin Buber, edited with an introduction by Maurice Friedman, Harper & Row). When that is combined with education through art (Herbert Read), then genuine individuation develops. These combined conditions must be met to ferment full, human psychic health. DESO is member managed and is structured to be perpetually decentralized and networked. Each sovereign community is semi-self-sufficient; organization is dialogical. DESO uses technology to reduce the cost, time and space required for production. A production based economy is neither consumer nor profit-based. Since DESO is a production based economy its production slows as the basic needs of its members are met; slows, levels off, and is then maintained. Its economy does not pose a threat to the life support systems of the planet. Member objectives are not materialistic per se, although prosperity is generalized. Rather, the objectives of its members orbit their dialogical interpersonal relationships and their mutual self-development through all art, aesthetics, and all knowledge. Art and knowledge are not viewed as commodities, but rather as integral aspects of culture. Unavoidably, incipient DESO grows alongside and through MCS. It purchases productive facilities from MCS and adopts from it what is useful for DESO. Nonetheless, the DESO objective is independence from MCS. Its independence is ever-augmented through the expansion of its own infrastructures. (Its internal monetized organization is an interest-free civic service.) [The “political” implications are somewhat self-evident. MCS is not disrupted by internal modifications within its own context. However, people that live in a “humanistic,” independent socioeconomic organization, one that is expansive and competitive, represent an external force that can curb the self-destructive momentum of MCS; not through direct confrontation per se, but, rather, by infiltrating and “depopulating” MCS.] In this other DESO context …within its own circumstances… the indispensable and dynamic drama of equilibrium between individuation and mutualism can be maintained indefinitely. The DESO scenario does not resemble anything that MCS produces; neither an economy of scarcity nor the alienated mind. No, rather, what you have in DESO is an economy of abundance and a post-alienated population of whole human beings; whole in all their dichotomies. [905]

The Triple Alliance. This is an interesting proposal for building a resilient community through social production by the urban underemployed and unemployed. The idea was originally sparked by a blog post by Dougald Hine: “Social Media vs the Recession.”

Looked at very simply: hundreds of thousands of people are finding or are about to find themselves with a lot more time and a lot less money than they are used to. The result is at least three sets of needs:

practical/financial (e.g. how do I pay the rent/avoid my house being repossessed?)

emotional/psychological (e.g. how do I face my friends? where do I get my identity from now I don’t have a job?)

directional (e.g. what do I do with my time? how do I find work?)...

Arguably the biggest thing that has changed in countries like the UK since there was last a major recession is that most people are networked by the internet and have some experience of its potential for self-organisation... There has never been a major surge in unemployment in a context where these ways of “organising without organisations” were available. As my School of Everything co-founder Paul Miller has written, London’s tech scene is distinctive for the increasing focus on applying these technologies to huge social issues... Agility and the ability to mobilise and gather momentum quickly are characteristics of social media and online self-organisation, in ways that government, NGOs and large corporations regard with a healthy envy. So, with that, the conversations I’ve been having keep coming back to this central question: is there a way we can constructively mobilise to respond to this situation in the days and weeks ahead?...

Information sharing for dealing with practical consequences of redundancy or job insecurity. You can see this happening already on a site like the Sheffield Forum.

Indexes of local resources of use to the newly-unemployed—including educational and training opportunities—built up in a user-generated style.

Tools for reducing the cost of living. These already exist—LiftShare, Freecycle, etc.—so it’s a question of more effective access and whether there are quick ways to signpost people towards these, or link together existing services better.

An identification of skills, not just for potential employers but so people can find each other and organise, both around each other and emergent initiatives that grow in a fertile, socially-networked context.

If the aim is to avoid this recession creating a new tranche of long-term unemployed (as happened in the 1980s), then softening the distinction between the employed and unemployed is vital. In social media, we’ve already seen considerable softening of the line between producer and consumer in all kinds of areas, and there must be lessons to draw from this in how we view any large-scale initiative. As I see it, such a softening would involve not only the kind of online tools and spaces suggested above, but the spread of real world spaces which reflect the collaborative values of social media. Examples of such spaces already exist:

Media labs on the model of Access Space or the Brazilian Pontos de Cultura programme, which has applied this approach on a national scale

Fab Labs for manufacturing, as already exist from Iceland to Afghanistan

studio spaces like TenantSpin, the micro-TV station in Liverpool based in a flat in a towerblock—and like many other examples in the world of Community Media

Again, if these spaces are to work, access to them should be open, not restricted to the unemployed. (If, as some are predicting, we see the return of the three day week, the value of spaces like this open to all becomes even more obvious!) [906]

This was the direct inspiration for Nathan Cravens, of Appropedia and sometime Open Source Ecology collaborator, in outlining his Triple Alliance:

The Triple Alliance describes a network of three community supported organizations necessary to meet basic needs and comforts.

The Open Cafe, a place to have a meal in good company without a price tag

The CSA or community supported farm

The Fab Lab, a digitally assisted manufacturing facility to make almost anything [907]

As we saw in Chapter Six, the Fab Lab already exists in the form of commercial workshop space (for example TechShop); it also exists, in forms ranging from non-profit to commercial, in the “hacker space” movement. Regarding the latter, according to Wired magazine there are 96 hacker spaces worldwide—29 of them in the United States—including the Noisebridge hacker space profiled in the article.

Located in rented studios, lofts or semi-commercial spaces, hacker spaces tend to be loosely organized, governed by consensus, and infused with an almost utopian spirit of cooperation and sharing. “It’s almost a Fight Club for nerds,” says Nick Bilton of his hacker space, NYC Resistor in Brooklyn, New York. Bilton is an editor in The New York Times R&D lab and a board member of NYC Resistor. Bilton says NYC Resistor has attracted “a pretty wide variety of people, but definitely all geeks. Not Dungeons & Dragons–type geeks, but more professional, working-type geeks.”...

Since it was formed last November, Noisebridge has attracted 56 members, who each pay $80 per month (or $40 per month on the “starving hacker rate”) to cover the space’s rent and insurance. In return, they have a place to work on whatever they’re interested in, from vests with embedded sonar proximity sensors to web-optimized database software....

Noisebridge is located behind a nondescript black door on a filthy alley in San Francisco’s Mission District. It is a small space, only about 1,000 square feet, consisting primarily of one big room and a loft. But members have crammed it with an impressive variety of tools, furniture and sub-spaces, including kitchen, darkroom, bike rack, bathroom (with shower), circuit-building and testing area, a small “chill space” with couches and whiteboard, and machine shop. The main part of the room is dominated by a battered work table. A pair of ethernet cables snakes down into the middle of the table, suspended overhead by a plastic track. Cheap metal shelves stand against the walls, crowded with spare parts and projects in progress. The drawers of a parts cabinet carry labels reflecting the eclecticism of the space: Altoids Tins, Crapulence, Actuators, DVDs, Straps/Buckles, Anchors/Hoisting, and Fasteners. Almost everything in the room has been donated or built by members — including a drill press, oscilloscopes, logic testers and a sack of stick-on googly eyes.

While many movements begin in obscurity, hackers are unanimous about the birth of U.S. hacker spaces: August, 2007, when U.S. hackers Bre Pettis, Nicholas Farr, Mitch Altman and others visited Germany on a geeky field trip called Hackers on a Plane. German and Austrian hackers have been organizing into hacker collectives for years, including Metalab in Vienna, c-base in Berlin and the Chaos Computer Club in Hannover, Germany. Hackers on a Plane was a delegation of American hackers who visited the Chaos Communications Camp — “Burning Man for hackers,” says Metalab founder Paul “Enki” Boehm — and their trip included a tour of these hacker spaces. They were immediately inspired, Altman says. On returning to the United States, Pettis quickly recruited others to the idea and set up NYC Resistor in New York, while Farr instigated a hacker space called HacDC in Washington, D.C. Both were open by late 2007. Noisebridge followed some months later, opening its doors in fall 2008.

It couldn’t have happened at a better time. Make magazine, which started in January, 2005, had found an eager audience of do-it-yourself enthusiasts. (The magazine’s circulation now numbers 125,000.) Projects involving complex circuitry and microcontrollers were easier than ever for nonexperts to undertake, thanks to open source platforms like Arduino and the easy availability of how-to guides on the internet. The idea spread quickly to other cities as visitors came to existing hacker spaces and saw how cool they were.
“People just have this wide-eyed look of, ‘I want this in my city.’ It’s almost primal,” says Rose White, a sociology graduate student and NYC Resistor member.... Hacker spaces aren’t just growing up in isolation: They’re forming networks and linking up with one another in a decentralized, worldwide network. The hackerspaces.org website collects information about current and emerging hacker spaces, and provides information about creating and managing new spaces. [908]

Cravens specified that his model of Fab Labs was based on Open Source Ecology (for rural areas) and hacker spaces like NYC Resistor [909] (for urban areas). [910]

In discussion on the Open Manufacturing email list, I suggested that Cravens’ three-legged stool needed a fourth leg: housing. Open-source housing would fill a big gap in the overall resiliency strategy. It might be some kind of cheap, bare-bones cohousing project associated with the Cafe (water taps, cots, hotplates, etc.) that would house people at minimal cost on the YMCA model. It might be an intentional community or urban commune, with cheap rental housing adapted to a large number of lodgers (probably in violation of laws restricting the number of unrelated persons living under one roof). Another model might be the commercial campground, with space for tents, water taps, etc., on cheap land outside the city, in connection with a ride-sharing arrangement of some sort to get to Alliance facilities in town. The government-run migrant worker camps, as depicted in The Grapes of Wrath, are an example of the kind of cheap and efficient, yet comfortable, bare-bones projects that are possible based on a combination of prefab housing with common bathrooms. And finally, Vinay Gupta’s work in the Hexayurt project on emergency life-support technology for refugees is also relevant to the housing problem: offering cheap LED lighting, solar cookers, water purifiers, etc., to those living in tent cities and Hoovervilles. Cravens replied:

In an urban area, one large multi-level building could provide all basic needs. A floor for hydroponicly [sic] grown food, the fab, and cafe. The remaining space can be used for housing. The more sophisticated the fabs and availibility [sic] of materials, the better conditions may rival or exceed present middle class standards. [911]

Such large multi-level buildings resemble what actually exists in the networked manufacturing economies of Emilia-Romagna (as described by Sabel and Piore) and Shenzhen (as described by Bunnie Huang), which we examined in Chapter Six: publicly accessible retail space on the ground floor, a small factory upstairs, and worker housing above that.

This would probably fall afoul of local zoning laws and housing codes in the United States, in most cases. But as Dmitry Orlov points out, massive decreases in formal home ownership and increases in unemployment in coming years, coupled with increasingly hollowed-out local governments with limits on resources available for enforcement, will quite plausibly lead to a situation in which squatting on (de facto) abandoned residential and commercial real estate is the norm, and local authorities turn a blind eye to it. Squats in abandoned/public buildings, and building with scavenged materials on vacant lots, etc. (a la Colin Ward), might be a black market version of what Cravens proposes. According to Gifford Hartman, although tent cities and squatter communities often receive hostile receptions, they’re increasingly getting de facto acceptance from the local authorities in many parts of the country:

In many places people creating tent encampments are met with hostility, and are blamed for their own condition. New York City, with a reputation for intolerance towards the homeless, recently shut down a tent city in East Harlem. Homeowners near a tent city of 200 in Tampa, Florida organised to close it down, saying it would ‘devalue’ their homes. In Seattle, police have removed several tent cities, each named ‘Nickelsville’ after the Mayor who ordered the evictions. Yet in some places, like Nashville, Tennessee, tent cities are tolerated by local police and politicians. Church groups are even allowed to build showers and provide services. Other cities that have allowed these encampments are: Champaign, Illinois; St. Petersburg, Florida; Lacey, Washington; Chattanooga, Tennessee; Reno, Nevada; Columbus, Ohio; Portland, Oregon. Ventura, California recently changed its laws to allow the homeless to sleep in cars and nearby Santa Barbara has made similar allowances. In San Diego, California a tent city appears every night in front of the main public library downtown.

California seems to be where most new tent cities are appearing, although many are covert and try to avoid detection. One that attracted overflowing crowds is in the Los Angeles exurb of Ontario. The region is called the ‘Inland Empire’ and had been booming until recently; it’s been hit extremely hard by the wave of foreclosures and mass layoffs. Ontario is a city of 175,000 residents, so when the homeless population in the tent city exploded past 400, a residency requirement was created. Only those born or recently residing in Ontario could stay. The city provides guards and basic services for those who can legally live there. [912]

Even squatting one’s own residence after foreclosure has worked out fairly well in a surprising number of cases. As a member of the Open Manufacturing email list writes:

Foreclosure is a double-edged sword. Dear friends of mine, a couple with two daughters, were really struggling two years ago, as the economy tanked, to pay their rent and feed their family from the same meager, erratic paychecks. When they heard that the owners of their rental unit had foreclosed, they saw it as the final blow. But unlike the other six residents they chose not to move out. It’s been eighteen months since this happened: they have an ongoing relationship with corporations that provide heat, power and internet service but no rent is paid, while the former masters sue each other. It is probable that the so-called ‘owners’ of the property are themselves bankrupt and bought out, at this point; who knows when the situation might resolve itself. In the US this is called “Adverse possession”, the legal term for squatting, and should they keep it up for seven years, they would own their apartment free and clear. This is in a dense neighborhood of Chicago, for perspective. It’s happening all over the place, and with more foreclosures on the horizon, it’s only going to get more common. Single families aren’t the only ones going bankrupt; it’s happening to a lot of landlords and mortgage interests also. [913]

In addition, the proliferation of mortgage-based securities means the holder of a mortgage is several changes of hands removed from the original lender, and may well lack any documentation of the original mortgage agreement. Some courts have declined to enforce eviction orders in such cases.

Another promising expedient for victims of foreclosure is to turn to firms like Boston Community Capital that specialize in buying up foreclosed mortgages and then selling the property (with the principal reduced to current market value) back to the original occupants. In cutting deals with foreclosing lenders, BCC’s bargaining power is aided by embarrassing demonstrations by neighbors demanding that they sell to BCC at market value rather than evict. [914]

In general, foreclosed residences resell for so much less, are so difficult to resell, and are so inconvenient and costly to manage in the meantime (especially as the growing volume of defaults increases the difficulty of handling them) that the bargaining power of defaulting home-owners is growing: lenders have an incentive to cut a deal rather than become real estate holding companies.

Although Cravens expressed some interest in the technical possibilities for social housing, he objected to my proposal to include housing as a fourth leg of an expanded Quadruple Alliance.

I disagree with the name, Quadruple Alliance, as these three organizations I consider community ventures outside the home environment. Because the home I prefer to keep in the personal realm, I do not consider that an official community space. [915]

To the extent that my proposed housing “fourth leg” is a departure from Cravens’ schema, it may be a closer approximation to Hine’s original vision. Hine’s original post addressed the basic question facing the individual in need of subsistence: “What do I do now that I’m unemployed?” Housing is an integral part of such considerations. From the perspective of the sizable fraction of the general population that may soon be underemployed or unemployed, and consequently homeless, access to shelter falls in the same general class of pressing self-support needs as work in the Fab Lab and feeding oneself via the CSA farm. Although Cravens chose to focus on social production to the exclusion of private subsistence, if we revert to Hine’s original concern, P2P housing projects are very much part of an overall resilient community package—analogous to the Roman villas of the Fifth Century—for weathering the Great Recession or Great Depression 2.0.

Chapter Seven: The Alternative Economy as a Singularity

We have seen the burdens of high overhead that the conventional, hierarchical enterprise and mass-production industry carry with them, their tendency to confuse the expenditure of inputs with productive output, and their culture of cost-plus markup. Running throughout this book, as a central theme, has been the superior efficiency of the alternative economy: its lower burdens of overhead, its more intensive use of inputs, and its avoidance of idle capacity.

Two economies are fighting to the death: one of them a highly capitalized, high-overhead, and bureaucratically ossified conventional economy, the subsidized and protected product of a century and a half’s collusion between big government and big business; the other a low-capital, low-overhead, agile and resilient alternative economy, outperforming the state capitalist economy despite being hobbled and driven underground.

The alternative economy is developing within the interstices of the old one, preparing to supplant it. The Wobbly phrase “building the structure of the new society within the shell of the old” is one of the most fitting phrases ever conceived for summing up the concept.

A. Networked Production and the Bypassing of Corporate Nodes

One of the beauties of networked production, for subcontractors ranging from the garage shop to the small factory, is that it transforms the old corporate headquarters into a node to be bypassed.

Johan Soderberg suggests that the current model of outsourcing and networked production makes capital vulnerable to being cut out of the production process by labor. He begins with an anecdote about Toyota subcontractor Aisin Seiki, “the only manufacturer of a component critical to the whole Toyota network,” whose factory was destroyed in a fire:

The whole conglomerate was in jeopardy of grinding to a halt. In two months Toyota would run out of supplies of the parts produced by Aisin Seiki. Faced with looming disaster, the network of subcontractors fervently cooperated and created provisory means for substituting the factory. In a stunningly short time, Toyota subsidiaries had restructured themselves and could carry on unaffected by the incident. Duncan Watts attributes the swift response by the Toyota conglomerate to its networked mode of organisation. The relevance of this story for labour theory becomes apparent if we stipulate that the factory was not destroyed in an accident but was held up in a labour conflict. Networked capital turns every point of production, from the firm down to the individual work assignment, into a node subject to circumvention....[I]t is capital’s ambition to route around labour strongholds that has brought capitalism into network production.... Nations, factories, natural resources, and positions within the social and technical division of labour, are all made subject to redundancy. Thus has capital annulled the threat of blockages against necks in the capitalist production chain, upon which the negotiating power of unions is based.

But this redundancy created by capital as a way of routing around blockages, Soderberg continues, threatens to make capital itself redundant:

The fading strength of unions will continue for as long as organised labour is entrenched in past victories and outdated forms of resistance. But the networked mode of production opens up a “window of opportunity” for a renewed cycle of struggle, this time, however, of a different kind. Since all points of production have been transformed into potentially redundant nodes of a network, capital as a factor of production in the network has itself become a node subject to redundancy. [916]

(This was, in fact, what happened in the Third Italy: traditional mass-production firms attempted to evade the wave of strikes by outsourcing production to small shops, and were then blindsided when the shops began to federate among themselves.) [917]

Soderberg sees the growing importance of human relative to physical capital, and the rise of peer production in the informational realm, as reason for hope that independent and self-managed networks of laborers can route around capital. Hence the importance he attaches to the increasingly draconian “intellectual property” regime as a way of suppressing the open-source movement and maintaining control over the conditions of production. [918]

Dave Pollard, writing from the imaginary perspective of 2015, made a similar observation about the vulnerability of corporations that follow the Nike model of hollowing themselves out and outsourcing everything:

In the early 2000s, large corporations that were once hierarchical end-to-end business enterprises began shedding everything that was not deemed ‘core competency’, in some cases to the point where the only things left were business acumen, market knowledge, experience, decision-making ability, brand name, and aggregation skills. This ‘hollowing out’ allowed multinationals to achieve enormous leverage and margin. It also made them enormously vulnerable and potentially dispensable. As outsourcing accelerated, some small companies discovered how to exploit this very vulnerability. When, for example, they identified North American manufacturers outsourcing domestic production to third world plants in the interest of ‘increasing productivity’, they went directly to the third world manufacturers, offered them a bit more, and then went directly to the North American retailers, and offered to charge them less. The expensive outsourcers quickly found themselves unnecessary middlemen.... The large corporations, having shed everything they thought was non ‘core competency’, learned to their chagrin that in the connected, information economy, the value of their core competency was much less than the inflated value of their stock, and they have lost much of their market share to new federations of small entrepreneurial businesses. [919]

The worst nightmare of the corporate dinosaurs is that, in an economy where “imagination” or human capital is the main source of value, the imagination might take a walk: that is, the people who actually possess the imagination might figure out they no longer need the company’s permission, and realize its “intellectual property” is unenforceable in an age of encryption and bittorrent (the same is becoming true in manufacturing, as the discovery and enforcement of patent rights against reverse-engineering efforts by hundreds of small shops serving small local markets becomes simply more costly than it’s worth).

Tom Peters gives the example of Oticon, which got rid of “the entire formal organization” and abolished departments, secretaries, and formal management titles. Employees put their personal belongings in “caddies, or personal carts, moving them to appropriate spots in the completely open space as their work with various colleagues requires.” [920] The danger for the corporate gatekeepers, in sectors where outlays for physical capital cease to present significant entry barriers, is that one of these days knowledge workers may push their “personal carts” out of the organization altogether, and decide they can do everything just as well without the company.

B. The Advantages of Value Creation Outside the Cash Nexus

We already examined, in Chapters Three and Five, the tendencies toward a sharp reduction in the number of wage hours worked and increased production of value in the informal sector. From the standpoint of efficiency and bargaining power, this has many advantages.

On the individual level, a key advantage of the informal and household economy lies in its offer of an alternative to wage employment for meeting a major share of one’s subsistence needs, and the increased bargaining power of labor in what wage employment remains.

How much does the laborer increase his freedom if he happens to own a home, so that there is no landlord to evict him, and how much still greater is his freedom if he lives on a homestead where he can produce his own food? That the possession of capital makes a man independent in his dealings with his fellows is a self-evident fact. It makes him independent merely because it furnishes him actually or potentially means which he can use to produce support for himself without first securing the permission of other men. [921]

Ralph Borsodi demonstrated some eight decades ago—using statistics!—that the hourly “wage” from gardening and canning, and otherwise replacing external purchases with home production, is greater than the wages of most outside employment. [922]
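Borsodi’s arithmetic is easy to restate in rough form. The following sketch—every figure in it a hypothetical placeholder, not Borsodi’s data—compares the implicit hourly “wage” of home production against the take-home hourly wage of outside employment once the costs of holding the job are netted out:

    # A back-of-the-envelope restatement of Borsodi's comparison. Every
    # figure below is a hypothetical placeholder, not Borsodi's data;
    # the point is only the shape of the calculation.

    grocery_price = 4.00       # store price of one home-replaceable unit (a canned quart, say)
    home_input_cost = 1.00     # seeds, jars, fuel for the same unit
    minutes_per_unit = 15      # household labor to produce it

    # Implicit hourly "wage" of home production: purchases replaced per hour worked.
    implicit_wage = (grocery_price - home_input_cost) / (minutes_per_unit / 60)

    gross_wage = 15.00
    job_overhead = 0.35        # fraction lost to taxes, commuting, work clothes, etc.
    takehome_wage = gross_wage * (1 - job_overhead)

    print(round(implicit_wage, 2))   # 12.0 -- and it is untaxed
    print(round(takehome_wage, 2))   # 9.75

On numbers anything like these, an hour in the garden or at the canning bench “pays” better than an hour at the job, before even counting the commute itself.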

Contra conventional finance gurus like Suze Orman, who recommend investments like lifetime dollar-cost averaging of stock purchases, contributing to a 401(k) up to the employer’s maximum matching contribution, and so on, the most sensible genuine investment for the average person is capital investment in reducing his need for outside income. This includes building or purchasing the roof over his head as cheaply, and paying it off as quickly, as possible, and substituting home production for purchases with wage money whenever the first alternative is reasonably competitive. Compared to the fluctuation in value of financial investments, Borsodi writes,

the acquisition of things which you can use to produce the essentials of comfort—houses and lands, machines and equipment—are not subject to these vicissitudes.... For their economic utility is dependent upon yourself and is not subject to change by markets, by laws or by corporations which you do not control. [923]

The home producer is free from “the insecurity which haunts the myriads who can buy the necessaries of life only so long as they hold their jobs.” [924] A household with no mortgage payment, a large garden and a well-stocked pantry might survive indefinitely (if inconveniently) with only one part-time wage earner.

As we saw in Chapter Three, the evaporation of rents on artificial property rights like “intellectual property,” and the rapid decline of capital outlays for physical production, mean a crisis in the ability to capture value from production. But, turning this on its head, it also means a collapse in the costs of living. As Bruce Sterling argued half facetiously (does he ever argue otherwise?), increased knowledge creates “poverty” in the sense that when everything is free, nothing is worth anything. But conversely, when nothing is worth anything, everything is free. And a world of free goods, while quite inconvenient for those who used to make their living selling those goods, is far less unambiguously bad for those who no longer need to make as much of a living to pay for stuff. When everything is free, the pressure to make a living in the first place is a lot less.

Waiting for the day of realization that Internet knowledge-richness actively MAKES people economically poor. “Gosh, Craigslist has such access to ultra-cheap everything now… hey wait a second, where did my job go?” Someday the Internet will offer free food and shelter. At that point, hordes simply walk away. They abandon capitalism the way a real-estate bustee abandons an underwater building. [925]

C. More Efficient Extraction of Value from Inputs

John Robb uses STEMI compression, an engineering analysis template, as a tool for evaluating the comparative efficiency of his proposed Resilient Communities:

In the evolution of technology, the next generation of a particular device/program often follows a well known pattern in the marketplace: its design makes it MUCH cheaper, faster, and more capable. This allows it to crowd out the former technology and eventually dominate the market (i.e. transistors replacing vacuum tubes in computation). A formalization of this developmental process is known as STEMI compression:

Space. Less volume/area used.

Time. Faster.

Energy. Less energy. Higher efficiency.

Mass. Less waste.

Information. Higher efficiency. Less management overhead.

So, the viability of a proposed new generation of a particular technology can often be evaluated based on whether it offers a substantial improvement in the compression of all aspects of STEMI without a major loss in system complexity or capability. This process of analysis also gives us an “arrow” of development that can be traced over the life of a given technology.

The relevance of the concept, he suggests, may go beyond new generations of technology. “Do Resilient Communities offer the promise of a generational improvement over the existing global system or not?”

In other words: is the Resilient Community concept (as envisioned here) a viable self-organizing system that can rapidly and virally crowd out existing structures due to its systemic improvements? Using STEMI compression as a measure, there is reason to believe it is:

Space. Localization (or hyperlocalization) radically reduces the space needed to support any given unit of human activity. Turns useless space (residential, etc.) into productive space.

Time. Wasted time in global transport is washed away. JIT (just in time) production and place.

Energy. Wasted energy for global transport is eliminated. Energy production is tied to locality of use. More efficient use of solar energy (the only true exogenous energy input to our global system).

Mass. Less systemic wastage. Made to order vs. made for market.

Information. Radical simplification. Replaces hideously complex global management overhead with simple local management systems. [926]
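To make the template concrete, here is a minimal sketch of STEMI compression as a scoring exercise. The five dimensions follow Robb’s list, but the resource-use figures and the compression_ratio() helper are invented for illustration; this is a sketch of the method, not Robb’s own metric:

    # A minimal sketch of STEMI compression as a scoring template. The five
    # dimensions follow Robb's list; the resource-use figures and this
    # helper function are hypothetical.

    STEMI = ["space", "time", "energy", "mass", "information"]

    def compression_ratio(incumbent, challenger):
        """Geometric mean, across the five dimensions, of how much less
        resource the challenger uses; > 1.0 means net compression."""
        product = 1.0
        for dim in STEMI:
            product *= incumbent[dim] / challenger[dim]
        return product ** (1.0 / len(STEMI))

    # Hypothetical resource use per unit of human activity (arbitrary units).
    global_system = {"space": 10, "time": 8, "energy": 12, "mass": 9, "information": 15}
    resilient_community = {"space": 2, "time": 3, "energy": 3, "mass": 4, "information": 2}

    print(round(compression_ratio(global_system, resilient_community), 1))  # ~3.9

A challenger that compresses on every dimension at once, as the Resilient Community is claimed to do, is precisely what the template flags as a generational replacement.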

The contrast between Robb’s Resilient Communities and the current global system dovetails, more or less, with that between our two economies. And his STEMI compression template, as a model for analyzing the alternative economy’s superiorities over corporate capitalism, overlaps with a wide range of conceptual models developed by other thinkers. Whether it be Buckminster Fuller’s ephemeralization, or lean production’s eliminating muda and “doing more and more with less and less,” the same general idea has a very wide currency.

A good example is what Mamading Ceesay calls the “economies of agility.” The emerging postindustrial age is a “network age where emerging Peer Production will be driven by the economies of agility.”

Economies of scale are about driving down costs of manufactured goods by producing them on a large scale. Economies of agility in contrast are about quickly being able to switch between producing different goods and services in response to demand. [927]

If the Toyota Production System is a quantum improvement on Sloanist mass-production in terms of STEMI compression and the economics of agility, and networked production on the Emilia-Romagna model is a similar advancement on the TPS, then the informal and household economy is an order of magnitude improvement on both of them.

Jeff Vail uses the term “Rhizome” for the forms of organization associated with Robb’s Resilient Communities, and with the alternative economy in general: “an alternative mode of human organization consisting of a network of minimally self-sufficient nodes that leverage non-hierarchal coordination of economic activity.”

The two key concepts in my formulation of rhizome are 1) minimal self-sufficiency, which eliminates the dependencies that accrete [sic] hierarchy, and 2) loose and dynamic networking that uses the “small worlds” theory of network information processing to allow rhizome to overcome information processing burdens that normally overburden hierarchies. [928]

By these standards, the alternative economy that we saw emerging from the crises of state capitalism in previous chapters is capable of eating the corporate-state economy for lunch. Its great virtue is its superior efficiency in using limited resources intensively, as opposed to mass-production capitalist industry’s practice of adding subsidized inputs extensively. The alternative economy reduces waste and inefficiency through the greater efficiency with which it extracts use-value from a given amount of land or capital.

An important concept for understanding the alternative economy’s more efficient use of inputs is “productive recursion,” which Nathan Cravens uses to refer to the order of magnitude reduction in labor required to obtain a good when it is produced in the social economy, without the artificial levels of overhead and waste associated with the corporate-state nexus. [929] Savings in productive recursion include (say) laboring to produce a design in a fraction of the time it would take to earn the money to pay for a proprietary design, or simply using an open source design; or reforging scrap metal at a tenth the cost of using virgin metal. [930]

Production methods lower the cost of products when simplified for rapid replication. That is called productive recursion. Understanding productive recursion is the first step to understanding how we need to restructure Industrial economic systems in response to this form of technological change. If Industrial systems are not reconfigured for productive recursion, they will collapse before reaching anywhere near full automation. I hope this writing helps divert a kink in the proliferation of personal desktop fabrication and full productive automation generally. [931]

He cites, from Neil Gershenfeld’s Fab, a series of “cases that prove the theory of productive recursion in practice.” One example is the greatly reduced cost for cable service in rural Indian villages, “due to reverse engineered satellite receivers by means of distributed production.” Quoting from Fab:

A typical village cable system might have a hundred subscribers, who pay one hundred rupees (about two dollars) per month. Payment is prompt, because the “cable-wallahs” stop by each of their subscribers personally and rather persuasively make sure that they pay. Visiting one of these cable operators, I was intrigued by the technology that makes these systems possible and financially viable. A handmade satellite antenna on his roof fed the village’s cable network. Instead of a roomful of electronics, the head end of his cable network was just a shelf at the foot of his bed. A sensitive receiver there detects and interprets the weak signal from the satellite, then the signal is amplified and fed into the cable for distribution around the village. The heart of all this is the satellite receiver, which sells for a few hundred dollars in the United States. He reported that the cost of his was one thousand rupees, about twenty dollars. [932]

The cheap satellite receiver was built by Sharp, which after some legwork Gershenfeld found to be “an entirely independent domestic brand” run out of a room full of workbenches in a district of furniture workshops in Delhi.

They produced all of their own products, although not in that room—done there, it would cost too much. The assembly work was farmed out to homes in the community, where the parts were put together. Sharp operated like a farm market or grain elevator, paying a market-based per-piece price on what was brought in. The job of the Sharp employees was to test the final products.

The heart of the business was in a back room, where an engineer was busy taking apart last-generation video products from developed markets. Just as the students in my fab class would learn from their predecessors’ designs and use them as the starting point for their own, this engineer was getting a hands-on education in satellite reception from the handiwork of unknown engineers elsewhere. He would reverse engineer their designs to understand them, then redo the designs so that they could be made more simply and cheaply with locally available components and processes. And just as my students weren’t guilty of plagiarism because of the value they added to the earlier projects, this engineer’s inspiration by product designs that had long since become obsolete was not likely to be a concern to the original satellite-receiver manufacturers.

The engineer at the apex of the Sharp pyramid was good at his job, but also frustrated. Their business model started with existing product designs. The company saw a business opportunity to branch out from cable television to cable Internet access, but there weren’t yet available obsolete cable modems using commodity parts that they could reverse-engineer. Because cable modems are so recent, they use highly integrated state-of-the-art components that can’t be understood by external inspection, and that aren’t amenable to assembly in a home. But there is no technological reason that data networks couldn’t be produced in just this way, providing rural India with Internet access along with Bollywood soap operas....

...There isn’t even a single entity with which to partner on a joint venture; the whole operation is fundamentally distributed. [933]

Another example of productive recursion, also from Gershenfeld’s experiences in India, is the reverse engineering of ground resistance meters.

For example, the ground resistance meters that were used for locating water in the area cost 25,000 rupees (about $500). At Vigyan Ashram they bought one, stripped it apart, and from studying it figured out how to make them for just 5,000 rupees.... Another example arose because they needed a tractor on the farm at Vigyan Ashram, but could not afford to buy a new one. Instead, they developed their own “MechBull” made out of spare jeep parts for 60,000 rupees ($1,200). This proved to be so popular that a Vigyan Ashram alum built a business making and selling these tractors. [934]

Yet another is a walk-behind tractor, developed from a modified motorcycle within Anil Gupta’s “Honeybee Network” (an Indian alternative technology group).

Modeled on how honeybees work—collecting pollen without harming the flowers and connecting flowers by sharing the pollen—the Honeybee Network collects and helps develop ideas from grassroots inventors, sharing rather than taking their ideas. At last count they had a database of ten thousand inventions. One Indian inventor couldn’t afford or justify buying a large tractor for his small farm; it cost the equivalent of $2,500. But he could afford a motorcycle for about $800. So he came up with a $400 kit to convert a motorcycle into a three-wheeled tractor (removable of course, so that it’s still useful as transportation). Another agricultural inventor was faced with a similar problem in applying fertilizer; his solution was to modify a bicycle. [935]

According to Marcin Jakubowski of Open Source Ecology, the effects of productive recursion are cumulative. “Cascading Factor 10 cost reduction occurs when the availability of one product decreases the cost of the next product.” [936] We already saw, in Chapter Five, the specific case of the CEB Press, which can be produced for around 20% of the cost of purchasing a competing commercial model.
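A toy calculation shows the shape of the cascade. The self-build factor of roughly 20% echoes the CEB Press figure in the text; the machine list and price figures are hypothetical placeholders, not Open Source Ecology’s actual costs:

    # A toy version of Jakubowski's cascade: once the upstream tools exist,
    # assume each machine can be self-built for roughly 20% of its commercial
    # price (the ratio the CEB Press example suggests). The machine list and
    # prices are hypothetical.

    commercial_prices = {"CEB press": 25_000, "tractor": 20_000, "sawmill": 15_000}
    self_build_factor = 0.2

    total_commercial = sum(commercial_prices.values())
    total_self_built = sum(p * self_build_factor for p in commercial_prices.values())

    print(total_commercial)        # 60000
    print(int(total_self_built))   # 12000 -- and each machine built makes the next cheaper still

The cascade is the point: the press helps build the workshop, the workshop helps build the tractor, so each self-built machine lowers the effective factor for the next one.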

Amory Lovins and his coauthors, in Natural Capitalism, described the cascading cost savings (“Tunneling Through the Cost Barrier”) that result when the efficiencies of one stage of design reduce costs in later stages. Incremental increases in efficiency may increase costs, but large-scale efficiency improvements in entire designs may actually result in major cost reductions. Improving the efficiency of individual components in isolation can be expensive, but improving the efficiency of whole systems can reduce costs by orders of magnitude. [937]

Much of the art of engineering for advanced resource efficiency involves harnessing helpful interactions between specific measures so that, like loaves and fishes, the savings keep on multiplying. The most basic way to do this is to “think backward,” from downstream to upstream in a system. A typical industrial pumping system, for example..., contains so many compounding losses that about a hundred units of fossil fuel at a typical power station will deliver enough electricity to the controls and motor to deliver enough torque to the pump to deliver only ten units of flow out of the pipe—a loss factor of about tenfold. But turn those ten-to-one compounding losses around backward..., and they generate a one-to-ten compounding saving. That is, saving one unit of energy furthest downstream (such as by reducing flow or friction in pipes) avoids enough compounding losses from power plant to end use to save about ten units of fuel, cost, and pollution back at the power plant. [938]
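The compounding arithmetic is worth working through once. In the sketch below the stage names and efficiency figures are illustrative, not Lovins’s own numbers; they are simply chosen to multiply out to roughly the tenfold loss factor he describes:

    # Working through the compounding-loss arithmetic in the passage above.
    # Stage names and efficiencies are illustrative placeholders chosen to
    # yield roughly the tenfold end-to-end loss Lovins describes.

    stage_efficiency = {
        "power plant": 0.33,
        "transmission": 0.95,
        "motor": 0.90,
        "drivetrain": 0.75,
        "pump": 0.60,
        "throttle": 0.85,
        "pipe": 0.90,
    }

    end_to_end = 1.0
    for eff in stage_efficiency.values():
        end_to_end *= eff

    print(round(end_to_end, 2))      # ~0.10: 100 units of fuel yield ~10 units of flow
    print(round(1 / end_to_end, 1))  # ~10.3: one unit saved at the pipe saves ~10 at the plant

Because the losses multiply, a saving at the last stage is leveraged back through every stage before it—which is exactly why “thinking backward” from the pipe to the power plant pays.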

To take another example, both power steering and V-8 engines resulted from Detroit’s massive increases in automobile weight in the 1930s, along with marketing-oriented decisions to add horsepower that would be idle except during rapid acceleration. The introduction of lightweight frames, conversely, makes possible the use of much lighter internal combustion engines or even electric motors, which in turn eliminate the need for power steering.

Most of the order-of-magnitude efficiencies of whole-system design that Lovins et al. describe result, not from new technology, but from more conscious use of existing technology: what Edwin Land called “the sudden cessation of stupidity” or “stopping having an old idea.” [939] Simply combining existing technological elements in the most effective way can result in efficiency increases of Factor Four, Factor Eight, or more. The overall designs are generally the kinds of mashups of off-the-shelf technology that Cory Doctorow and Murray Bookchin comment on below.

The increased efficiencies result from a design process like Eric Raymond’s Bazaar: designers operate intelligently, with constant feedback. [940] The number of steps and the transaction costs involved in aggregating user feedback with the design process are reduced. The inefficiencies that result from an inability to “think backward” are far more likely to occur in a stovepiped organizational framework, where each step or part is designed in isolation by a designer whose relation to the overall process is mediated by a bureaucratic hierarchy. For example, in building design:

Conventional buildings are typically designed by having each design specialist “toss the drawings over the transom” to the next specialist. Eventually, all the contributing specialists’ recommendations are integrated, sometimes simply by using a stapler. [941]

This approach inevitably results in higher costs, because increased efficiencies of a single step taken in isolation generally are governed by a law of increased costs and diminishing returns. Thicker insulation, better windows, etc., cost more than their conventional counterparts. Lighter materials and more efficient engines for a car, similarly, cost more than conventional components. So optimizing the efficiency of each step in isolation follows a rising cost curve, with each marginal improvement in efficiency of the step costing more than the last. But by approaching design from the perspective of a whole system, it becomes possible to “tunnel through the cost barrier”:

When intelligent engineering and design are brought into play, big savings often cost less up front than small or zero savings. Thick enough insulation and good enough windows can eliminate the need for a furnace, which represents an investment of more capital than those efficiency measures cost. Better appliances help eliminate the cooling system, too, saving even more capital cost. Similarly, a lighter, more aerodynamic car and a more efficient drive system work together to launch a spiral of decreasing weight, complexity and cost. The only-moderately-more-efficient house and car do cost more to build, but when designed as whole systems, the super efficient house and car often cost less than the original, unimproved versions. [942]

While added insulation and tighter windows increase the cost of insulation or windows, taken in isolation, if integrated into overall building design they may reduce total costs up front by reducing the required capacity—and hence outlays on capital equipment—of heating and cooling systems. A more energy-efficient air conditioner, given unchanged cooling requirements, will cost more; but energy-efficient windows, office equipment, etc., can reduce the cooling load by 85%, and thus make it possible to replace the cooling system with one three-fourths smaller than the original—thereby not only reducing the energy bill by 75%, but enormously reducing capital expenditures on the air conditioner. [943] The trick is to “do the right things in the right order”:

...if you’re going to retrofit your lights and air conditioner, do the lights first so you can make the air conditioner smaller. If you did the opposite, you’d pay for more cooling capacity than you’d need after the lighting retrofit, and you’d also make the air conditioner less efficient because it would either run at part-load or cycle on and off too much. [944]
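A rough sketch of the right-order logic, using the 85% load-reduction figure from the text; the base load and cost-per-ton figures below are hypothetical placeholders:

    # "Do the right things in the right order": size the chiller after the
    # efficiency retrofit, not before. The 85% load reduction echoes the
    # text's example; base load and cost per ton are hypothetical.

    base_cooling_load_tons = 100
    load_reduction = 0.85           # from efficient windows, lights, and equipment
    chiller_cost_per_ton = 1_500    # hypothetical installed capital cost

    # Retrofit the loads first, then size the chiller to the reduced load:
    right_order_cost = base_cooling_load_tons * (1 - load_reduction) * chiller_cost_per_ton
    # Size the chiller first and retrofit afterward: capacity is bought, then stranded.
    wrong_order_cost = base_cooling_load_tons * chiller_cost_per_ton

    print(round(right_order_cost))  # 22500 -- a 15-ton chiller
    print(round(wrong_order_cost))  # 150000 -- plus it now cycles inefficiently at part-load

Done in the wrong order, the building owner pays for capacity the lighting retrofit then makes useless; done in the right order, the capital saving arrives before the first energy bill.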

This is also a basic principle of lean production: most costs come from the last five percent of point consumption needs, and from scaling the capacity of the load-bearing infrastructure to cover that extra five percent instead of just handling the first ninety-five percent. It ties in, as well, with another lean principle: getting production out of sync with demand (including the downstream demand for the output of one step in a process), either spatially or temporally, creates inefficiencies. Optimizing one stage without regard to production flow and downstream demand usually requires expensive infrastructure to move an in-process input from one stage to another, often with intermediate storage while it awaits a need; the total resulting infrastructure cost greatly exceeds the savings at the individual steps. Poor synchronization of sequential steps in any process bloats overhead with additional storage and handling infrastructure.

A good example of the cost-tunneling phenomenon was engineer Jan Schilham’s work at the Interface carpet factory in Shanghai, which reduced horsepower requirements for pumping in one process twelvefold—while reducing capital costs. In conventional design, the factory layout and system of pipes are assumed as given, and the pumps chosen against that background.

...First, Schilham chose to deploy big pipes and small pumps instead of the original design’s small pipes and big pumps. Friction falls as nearly the fifth power of pipe diameter, so making the pipes 50 percent fatter reduces their friction by 86 percent. The system needs less pumping energy—and smaller pumps and motors to push against the friction. If the solution is this easy, why weren’t the pipes originally specified to be big enough? ...Traditional optimization compares the cost of fatter pipe with only the value of the saved pumping energy. This comparison ignores the size, and hence the capital cost, of the [pumping] equipment needed to combat the pipe friction. Schilham found he needn’t calculate how quickly the savings would repay the extra up-front cost of the fatter pipe, because capital cost would fall more for the pumping and drive equipment than it would rise for the pipe, making the efficient system as a whole cheaper to construct.

Second, Schilham laid out the pipes first and then installed the equipment, in reverse order from how pumping systems are conventionally installed. Normally, equipment is put in some convenient and arbitrary spot, and the pipe fitter is then instructed to connect point A to point B. The pipe often has to go through all sorts of twists and turns to hook up equipment that’s too far apart, turned the wrong way, mounted at the wrong height, and separated by other devices installed in between.... By laying out the pipes before placing the equipment that the pipes connect, Schilham was able to make the pipes short and straight rather than long and crooked. That enabled him to exploit their lower friction by making the pumps, motors, inverters and electricals even smaller and cheaper. [945]
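The fifth-power claim in the passage above is a one-line calculation. The helper function below is only an illustration of the arithmetic, not an engineering model:

    # Checking the fifth-power friction claim: friction varies roughly as
    # diameter**-5, so a 50% fatter pipe cuts friction by about 87% (the
    # text's 86% reflects the exponent being only "nearly" five).

    def relative_friction(diameter_ratio, exponent=5.0):
        """Friction in the fatter pipe relative to the original pipe."""
        return diameter_ratio ** -exponent

    print(round(1 - relative_friction(1.5), 2))  # 0.87, i.e. ~87% less friction

A steep exponent like this is what makes the whole-system trade work: a modest increase in pipe cost buys a near-order-of-magnitude drop in the friction the pumps must fight.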

Vinay Gupta described some of the specific efficiencies involved in productive recursion that combine to reduce the alternative economy’s costs by an order of magnitude. [946] The most important efficiency comes from distributed infrastructure, which provides

the same class of services that are provided by centralized systems like the water and power grids, but without the massive centralized investments in physical plant. For example, dry toilets and solar panels can provide high quality services household by household without a grid.

The digital revolution and network organization interact with distributed infrastructure to remove most of the administrative and other transaction costs involved in getting the technologies to the people who can benefit from them. It is, in other words, governed by the rules of Raymond’s Bazaar, which Robb made the basis of his “open source insurgency.”

Distributed infrastructure also benefits from “economies of agility,” as opposed to the enormous capital outlays in conventional blockbuster investments that must frequently be abandoned as “sunk costs” when the situation changes or funding stops. “...[H]alf a dam is no dam at all, but 500 of 1000 small projects is half way to the goal.” And distributed infrastructure projects manage to do without the enormous administrative and overhead costs of conventional organizations, which we saw described by Paul Goodman in Chapter Two; most of the organization and planning are done by those with the technical knowledge and sweat equity, who are directly engaged in the project and reacting to the situation on the ground.

And finally, Gupta argues, distributed finance—microcredit—interacts with distributed infrastructure and network organization to heighten the advantages of agility and low overhead still further.

We also saw, in Chapter Five, the ways that modular design and the forms of stigmergic organization facilitated by open-source design contribute to lower costs. Modular design is a way of getting more bang for the R&D buck by maximizing use of a given innovation across an entire product ecology, and at the same time building increased redundancy into the system through interchangeable parts. [947] And stigmergic organization with open-source designs eliminates barriers to widespread use of the most efficient existing designs.

Malcolm Gladwell’s “David vs. Goliath” analysis of military history is an excellent illustration of the economies of agility. Victory goes to the bigger battalions about seven times out of ten—when Goliath outnumbers David ten to one, that is. But when the smaller army, outnumbered ten to one, acknowledges the fact and deliberately chooses unconventional tactics that target Goliath’s weaknesses, it actually wins about six times out of ten. “When underdogs choose not to play by Goliath’s rules, they win...” Guerrilla fighters from J.E.B. Stuart to T. E. Lawrence to Ho Chi Minh have learned, as General Maurice de Saxe put it, that victory is about legs rather than arms. As Lawrence wrote, “Our largest available resources were the tribesmen, men quite unused to formal warfare, whose assets were movement, endurance, individual intelligence, knowledge of the country, courage.” [948]

Another good example is what the U.S. military (analyzing Chinese asymmetric warfare capabilities) calls “Assassin’s Maces”: “anything which provides a cheap means of countering an expensive weapon.” One such is the black box that transmits ten thousand signals on the same frequency used by SAM missiles, and thus overwhelms American air-to-surface missiles that target SAM radio signals. The Chinese, apparently, work from the assumption that the U.S. develops countermeasures to “Assassin’s Mace” weapons, and deliberately make it easier for American intelligence to acquire older such weapons as a form of disinformation; there is good reason to believe the Chinese military can work around American countermeasures much more quickly, and cheaply, than the U.S. can develop them. [949]

A recent example of “Assassin’s Mace” technology is SkyGrabber, an off-the-shelf software product that costs $26. Insurgents in Afghanistan use it to capture video feeds from U.S. military drones. The Pentagon has known about the problem since the Balkan wars, but—get this—didn’t bother spending the money to encrypt the feed because they “assumed local adversaries wouldn’t know how to exploit it.” [950] In our discussion of networked resistance in Chapter Three, if you recall, we saw that the music industry assumed its DRM only had to be good enough to thwart the average user, because the geeks who could crack it would be too few to have a significant economic impact. But as Cory Doctorow pointed out, it takes only one geek to figure it out and then explain it to everybody else. It’s called “stigmergic organization.” Well, here’s Dat Ole Debbil stigmergy again, and the Pentagon’s having about as much fun with it as the record companies. John Robb describes the clash of organizational cultures:

This event isn’t an aberration. It is an inevitable development, one that will only occur more and more often. Why? Military cycles of development and deployment take decades due to the dominance of a lethargic, bureaucratic, and bloated military industrial complex. Agility isn’t in the DNA of the system nor will it ever be (my recent experience with a breakthrough and inexpensive information warfare system my team built, is yet another example of how FAIL the military acquisition system is). In contrast, vast quantities of cheap/open/easy technologies (commercial and open source) are undergoing rapid rates of improvement. Combined with tinkering networks that can repurpose them to a plethora of unintended needs (like warfare), this development path becomes an inexorable force. The delta (a deficit from the perspective of the status quo, an advantage for revisionists) between the formal and the informal will only increase as early stage networks that focus specifically on weapons/warfare quickly become larger, richer, etc. (this will happen as they are combined with the economic systems of more complex tribal/community “Darknets”). [951]

In theory, it’s fairly obvious what the U.S. national security establishment needs to do. All the assorted “Fourth Generation Warfare” doctrines are pretty much agreed on that. It has to reconfigure itself as a network, more decentralized and agile than the network it’s fighting, so that it can respond quickly to intelligence and small autonomous units can “swarm” enemy targets from many directions at once. [952] The problem is, it’s easier said than done. Al Qaeda has one huge advantage over the U.S. national security establishment: Osama bin Laden is simply unable to interfere with the operations of local Al Qaeda cells in the way that American military bureaucracies interfere with the operations of military units. No matter what 4GW doctrine calls for, no matter what the slogans and buzzwords at the academies and staff colleges say, it will be impossible to do any of it so long as the military bureaucracy exists, because military bureaucracies are constitutionally incapable of restraining themselves from interference. Robb describes the problem, quoting Jonathan Vaccaro’s op-ed from the New York Times:

In my experience, decisions move through the process of risk mitigation like molasses. When the Taliban arrive in a village, I discovered, it takes 96 hours for an Army commander to obtain necessary approvals to act. In the first half of 2009, the Army Special Forces company I was with repeatedly tried to interdict Taliban. By our informal count, however, we (and the Afghan commandos we worked with) were stopped on 70 percent of our attempts because we could not achieve the requisite 11 approvals in time.

For some units, ground movement to dislodge the Taliban requires a colonel’s oversight. In eastern Afghanistan, traveling in anything other than a 20-ton mine-resistant ambush-protected vehicle requires a written justification, a risk assessment and approval from a colonel, a lieutenant colonel and sometimes a major. These vehicles are so large that they can drive to fewer than half the villages in Afghanistan. They sink into wet roads, crush dry ones and require wide berth on mountain roads intended for donkeys. The Taliban walk to these villages or drive pickup trucks.

The red tape isn’t just on the battlefield. Combat commanders are required to submit reports in PowerPoint with proper fonts, line widths and colors so that the filing system is not derailed. Small aid projects lag because of multimonth authorization procedures. A United States-financed health clinic in Khost Province was built last year, but its opening was delayed for more than eight months while paperwork for erecting its protective fence waited in the approval queue.

Communication with the population also undergoes thorough oversight. When a suicide bomber detonates, the Afghan streets are abuzz with Taliban propaganda about the glories of the war against America. Meanwhile, our messages have to inch through a press release approval pipeline, emerging 24 to 48 hours after the event, like a debutante too late for the ball. [953]

Robb adds his own comments on just how badly the agility-enhancing potential of network technology is sabotaged:

Risk mitigation trumps initiative every time. Careers are more important than victory. Risk evaluation moves upward in the hierarchy. Evaluation of risk takes time, particularly with the paucity of information that can be accessed at positions removed from the conflict....

New communications technology isn’t being used for what it is designed to do (enable decentralized operation due to better informed people on the ground). Instead it is being used to enable more complicated and hierarchical approval processes — more sign offs/approvals, more required processes, and higher level oversight. For example: a general, and his staff, directly commanding a small strike team remotely. [954]

So long as the military bureaucracy exists, it will be impossible to put 4GW ideas into practice without interference from the pointy-haired bosses.

Another example of the same phenomenon is the way the Transportation Security Administration deals with security threats: as the saying goes, by “always planning for the last war.”

First they attacked us with box cutters, so the TSA took away anything even vaguely sharp or pointy. Then they tried (and failed) to hurt us with stuff hidden in their shoes. So the TSA made us take off our shoes at the checkpoint. Then there was a rumor of a planned (but never executed) attack involving liquids, so the TSA decided to take away our liquids. [955]

Distributed infrastructure benefits, as well, from what Robb calls “scale invariance” [956]: the ability of the part, in cases of system disruption, to replicate the whole. Each part conserves the features that define the whole, on the same principle as a hologram. Projects like Open-Source Ecology, [957] once the major components of a local site are in place, can duplicate any of the individual components or duplicate them all to create a second site. The Fab Lab can produce the parts for a steam engine, CEB press, tractor, sawmill, etc., or even the machine tools for another Fab Lab.

Distributist writer John Medaille pointed out, by private email, that the Israelites under the Judges were a good example of superior extraction of value from inputs. At a time when the “more civilized” Philistines dominated most of the fertile valleys of Palestine, the Israelite confederacy stuck to the central highlands. But their “alternative technology,” focused on extracting more productivity from marginal land, enabled them to make more intensive use of what was unusable to the Philistines.

The tribes clung to the hilltops because the valleys were “owned” by the townies (Philistines) and the law of rents was in full operation. The Hebrews were free in the hills, and increasingly prosperous, both because of their freedom and because of new technologies, namely contoured plowing and waterproof cement, which allowed the construction of cisterns to put them through the dry season. [958]

In other words, a new technological regime supplanted a more privileged form of society through superior efficiency, despite being disadvantaged in access to productive inputs. The Hebrews were able to outcompete the dominant social system by making more efficient and intensive use of inputs that were “unusable” with conventional methods of economic organization.

The alternative economy, likewise, has taken for its cornerstone the stone which the builders refused. As I put it in a blog post (in an admittedly grandiose yet nevertheless eminently satisfying passage):

…[T]he owning classes use less efficient forms of production precisely because the state gives them preferential access to large tracts of land and subsidizes the inefficiency costs of large-scale production. Those engaged in the alternative economy, on the other hand, will be making the most intensive and efficient use of the land and capital available to them. So the balance of forces between the alternative and capitalist economy will not be anywhere near as uneven as the distribution of property might indicate. If everyone capable of benefiting from the alternative economy participates in it, and it makes full and efficient use of the resources already available to them, eventually we’ll have a society where most of what the average person consumes is produced in a network of self-employed or worker-owned production, and the owning classes are left with large tracts of land and understaffed factories that are almost useless to them because it’s so hard to hire labor except at an unprofitable price. At that point, the correlation of forces will have shifted until the capitalists and landlords are islands in a mutualist sea—and their land and factories will be the last thing to fall, just like the U.S. Embassy in Saigon. [959]

Soderberg refers to the possibility that increasing numbers of workers will “defect from the labour market” and “establish means of non-waged subsistence,” through efficient use of the waste products of capitalism. [960] The “freegan” lifestyle (less charitably called “dumpster diving”) is one end of a spectrum of such possibilities. At the other end is low-cost recycling and upgrading of used and discarded electronic equipment: for example, the rapid depreciation of computers makes it possible to add RAM to a model a few years old at a small fraction of the cost of a new computer, with almost identical performance.

Reason’s Brian Doherty, in a display of rather convoluted logic, attempted to depict freeganism as proof of capitalism’s virtues:

It’s nice of capitalism to provide such an overflowing cornucopia that the [freegans] of the world can opt out. Wouldn’t it be gracious of them to show some love to the system that manages to keep them alive and thriving without even trying? [961]

To take Doherty’s argument and stand it on its head, consider the amount of waste resulting from the perverse incentives under the Soviet planned economy. In some cases, new refrigerators and other appliances were badly damaged by being roughly thrown off the train and onto a pile at the point of delivery, because the factory got credit simply for manufacturing them, and the railroad got credit for delivering them, under the metrics of the Five Year Plan. Whether they actually worked, or arrived at the retailer in a condition such that someone was willing to buy them, was beside the point. Now, imagine if some handy fellow in the Soviet alternative economy movement had bought up those fridges as factory rejects for a ruble apiece, or just bought them for scrap prices from a junkyard, and then got them in working order at little or no cost. Would Doherty be praising Soviet socialism for its efficiency in producing such a surplus that the Russian freegan could live off the waste?

When the alternative economy is able to make more efficient use of the waste byproducts of state capitalism—waste byproducts that result from the latter’s inefficient use of subsidized inputs—and thereby supplant state capitalism from within by the superior use of its underutilized resources and waste, it is rather perverse to dismiss the alternative economy as just another hobby or lifestyle choice enabled by the enormous efficiencies of corporate capitalism. And the alternative economy is utilizing inputs that would otherwise be waste, and thereby establishing an ecological niche based on the difference between capitalism’s actual and potential efficiencies; so to treat capitalism’s inefficiencies as a mark of efficiency—i.e., how inefficient it can afford to be—is a display of Looking Glass logic.

The alternative economy’s superior extraction of value from waste inputs extends, ultimately, to the entire economy.

If these isolated nodes of self-sufficiency connect, communicate, and interact, then they will enjoy an improved position relative to hierarchal structures.... Additionally, from the perspective of the diagonal, the Diagonal Economy will begin as a complementary structure that is coextensive but out of phase with our current system. However, it will be precisely because it leverages a more efficient information processing structure that it will be able to eventually supplant the substrate hierarchies as the dominant system. [962]

One example of how the alternative economy permits increasingly efficient extraction of value from waste material is the way network technology facilitates repair even within the limits of proprietary design and the planned obsolescence model. In Chapter Two, we considered Julian Sanchez’s account of how Apple’s design practices serve to thwart cheap repair. iFixit is an answer to that problem:

Kyle Wiens and Luke Soules started iFixit (ifixit.com) out of their dorms at Cal Poly in San Luis Obispo, Calif. That was six years ago. Today they have a self-funded business that sells the parts and tools you need to repair Apple equipment. One of their innovations is creating online repair manuals for free that show you how to make the repairs. “Our biggest source of referrals is Apple employees, particularly folks at the Genius Bar,” Wiens says. They refer customers who complain when Apple won’t let them fix an out-of-warranty product. (Apple: “Just buy a new one!”) iFixit will also buy your old Mac and harvest the reusable parts to resell.... If it’s starting to sound like an auto parts franchise, well, Wiens and Soules have been thinking about someday doing for cars what they do for computers and handhelds today. [963]

In other words, the same open-source insurgency model that governs the file-sharing movement is spreading to encompass the development of all kinds of measures for routing around planned obsolescence and the other irrationalities of corporate capitalism. The reason for the quick adaptability of fourth generation warfare organizations, as described by John Robb, is that any innovation developed by a particular cell becomes available to the entire network. And by the same token, in the file-sharing world, it’s not enough that DRM be sufficiently hard to circumvent to deter the average user. The average user need only use Google to benefit from the superior know-how of the geek who has already figured out how to circumvent it. Likewise, once anyone figures out how to circumvent any instance of planned obsolescence, their hardware hack becomes part of a universally accessible repository of knowledge.

As Cory Doctorow notes, cheap technologies which can be modularized and mixed-and-matched for any purpose are just lying around. “...[T]he market for facts has crashed. The Web has reduced the marginal cost of discovering a fact to $0.00.” He cites Robb’s notion that “[o]pen source insurgencies don’t run on detailed instructional manuals that describe tactics and techniques.” Rather, they just run on “plausible premises.” You just put out the plausible premise—the suggestion, based on gut intuition about current technical possibilities, that something can be done (that IEDs can kill enemy soldiers, say)—and then anyone can find out how to do it via the networked marketplace of ideas, with virtually zero transaction costs.

But this doesn’t just work for insurgents — it works for anyone working to effect change or take control of her life. Tell someone that her car has a chip-based controller that can be hacked to improve gas mileage, and you give her the keywords to feed into Google to find out how to do this, where to find the equipment to do it — even the firms that specialize in doing it for you. In the age of cheap facts, we now inhabit a world where knowing something is possible is practically the same as knowing how to do it. This means that invention is now a lot more like collage than like discovery.

Doctorow mentions Bruce Sterling’s reaction to the innovations developed by the protagonists of his (Doctorow’s) Makers : “There’s hardly any engineering. Almost all of this is mash-up tinkering.” Or as Doctorow puts it, it “assembles rather than invents.”

It’s not that every invention has been invented, but we sure have a lot of basic parts just hanging around, waiting to be configured. Pick up a $200 FPGA chip-toaster and you can burn your own microchips. Drag and drop some code-objects around and you can generate some software to run on it. None of this will be as efficient or effective as a bespoke solution, but it’s all close enough for rock-n-roll. [964]

Murray Bookchin anticipated something like this back in the 1970s, writing in Post-Scarcity Anarchism :

Suppose, fifty years ago, that someone had proposed a device which would cause an automobile to follow a white line down the middle of the road, automatically and even if the driver fell asleep.... He would have been laughed at, and his idea would have been called preposterous.... But suppose someone called for such a device today, and was willing to pay for it, leaving aside the question of whether it would actually be of any genuine use whatever. Any number of concerns would stand ready to contract and build it. No real invention would be required. There are thousands of young men in the country to whom the design of such a device would be a pleasure. They would simply take off the shelf some photocells, thermionic tubes, servo-mechanisms, relays, and, if urged, they would build what they call a breadboard model, and it would work. The point is that the presence of a host of versatile, reliable, cheap gadgets, and the presence of men who understand all their cheap ways, has rendered the building of automatic devices almost straightforward and routine. It is no longer a question of whether they can be built, it is a question of whether they are worth building. [965]

D. Seeing Like a Boss

The contrast in agility and learning ability between stigmergic organizations and hierarchies is beautifully brought out by David Pollard:

So Management by SMART Objective [Specific, Measurable, Achievable, Realistic, and Time-Based—Peter Drucker] leads to this ludicrous and dysfunctional dance:

Leaders hire ‘expert’ consultants, or huddle among themselves, or decide by fiat, what the SMART objectives should be for their organization: “increase revenues by 10% and profits by 20% next year by introducing ‘improved’ versions of 15 selected products that can be sold for an average price 25% higher than the old version, and which, through internal efficiencies, cost 15% less per unit to produce”

These leaders then ‘cascade down’ these objectives and command subordinates to come up with SMART business unit plans that will, if successful, collectively achieve these top-level objectives.

The subordinates understand that their success depends on ratcheting up profits, and that the objectives set by the leaders are ridiculous, magical thinking. So they come up with alternative plans to increase profits by 20% through a series of difficult, but realistic, moves. These entail offshoring everything to China, layoffs, pressuring staff to work longer hours for no more money, and, if all else fails, firing people or leaving vacancies unfilled.

The good people in the organization all leave, because they know this short-range thinking is dysfunctional, damaging to the organizations in the longer term, unsustainable, and a recipe for a miserable workplace. Their departure creates more vacancies that aren’t filled, which in the short term reduces costs.

The clueless and the losers, who are left, attempt to pick up the slack. They work harder, find workarounds for the dumbest management decrees, and do their best to achieve these objectives. Those fortunate enough to be in the right market areas in the right economies get promoted into some of the vacant spots left by the good people, but without the commensurate salary increase.

The leaders, as a result, achieve their short-run objectives, award themselves huge bonuses, profit from increases in the value of their stock options, and repeat the whole cycle the next year.

At some point the utter unsustainability of this “management process” becomes apparent. There is a really bad year. The economy is blamed, perhaps. Or the top leaders are fired, and rehired in other organizations suffering from really bad years. Or the company is bought out, or ‘reorganized’ so that all the old objectives and measures no longer apply, and a completely new set is established.

The byproduct is a blizzard of plans, budgets and strategies, which are substantially meaningless. Everyone does ad hoc things to protect their ass and try to make the best of impossible targets and incompetent, arrogant leaders self-deluded about their own brilliance and about their ability to control what is really happening in the organization and the marketplace. There are, however, some things of real value happening in these organizations. None of them are ‘SMART’ so none is recognized or rewarded, and most of these things are actively discouraged. Nevertheless, because most people take pride in what they do, these valuable things happen. They include:

Learning: People learn by making mistakes (that they don’t admit to), and this makes them better at doing their jobs.

Conversations: People share, peer-to-peer, what works and doesn’t work, through mostly informal conversations, and this too makes them better at doing their jobs. These conversations are often surreptitious, since they are not considered ‘productive’ work.

Practice: The more people work at doing a particular task, the better they get at it. Most such practices are substantially workarounds, self-developed ways to do their particular specialized work optimally, despite instructions to the contrary from leaders and published manuals, and despite the burden of reporting SMART data up the hierarchy, which has to be creatively invented and explained so that the practices aren’t disrupted by new orders from the leaders.

Judgement: Through the above improved learning, conversations and practice, people develop good judgement. They make better decisions. The leaders get all the credit for these decisions, but it doesn’t matter.

Trust Relationships: Through peer-to-peer conversations, trust relationships develop. When people trust each other, whole layers of bureaucracy are stripped away. People are left to do what they do well. Unfortunately leaders in large organizations almost never trust their subordinates, so these trust relationships are almost always horizontal, not vertical. Despite this, these relationships profoundly improve productivity.

Professionalism: The net result of all of the above is increased professionalism. People just become more competent.

This is why, in all my years as a manager, I always saw my role as listening and clearing away obstacles my staff were facing, identifying and getting rid of the small percentage who could not be trusted (too ambitious, too self-serving, uncollaborative, secretive or careless), and trusting the rest to do what they do best, and staying out of their way. In recent years I started to lose the heart to do this, but I still tried. The ideal organization is therefore not SMART, but self-organized, trusting (no need to measure results, just practice your craft and the results will inevitably be good), highly conversational, and ultimately collaborative (impossible in large organizations because performance is measured individually not collectively). It’s one where the non-performers are collectively identified by their peers and self-select out by sheer peer pressure. It’s one without hierarchy. It’s agile, resilient and improvisational, because it runs on principles, not rules, and because when issues arise they’re dealt with by the self-organized group immediately, not shelved until someone brings them to the attention of the ‘leaders’. It’s designed for complexity. It’s organic, natural. In my experience, such an organizational model can be replicated, but it doesn’t scale. [966]

Eric Raymond sees the phase transition between forms of social organization as a response to insupportable complexity. The professionalized meritocracies that managed the centralized state and large corporation through the middle of the 20th century were an attempt to manage complexity by applying Weberian and Taylorist rules. And they did a passable job of managing the system for most of that time, he says. But in recent years we’ve reached a level of complexity beyond their capacity to deal with.

The “educated classes” are adrift, lurching from blunder to blunder in a world that has out-complexified their ability to impose a unifying narrative on it, or even a small collection of rival but commensurable narratives. They’re in the exact position of old Soviet central planners, systemically locked into grinding out products nobody wants to buy.

The answer, under these conditions, is to “[a]dapt, decentralize, and harden”—i.e., to reconfigure the system along the stigmergic lines he described earlier in “The Cathedral and the Bazaar”:

Levels of environmental complexity that defeat planning are readily handled by complex adaptive systems. A CAS doesn’t try to plan against the future; instead, the agents in it try lots of adaptive strategies and the successful ones propagate. This is true whether the CAS we’re speaking of is a human immune system, a free market, or an ecology.

Since we can no longer count on being able to plan, we must adapt. When planning doesn’t work, centralization of authority is at best useless and usually harmful. And we must harden: that is, we need to build robustness and the capacity to self-heal and self-defend at every level of the system. I think the rising popular sense of this accounts for the prepper phenomenon. Unlike old-school survivalists, the preppers aren’t gearing up for apocalypse; they’re hedging against the sort of relatively transient failures in the power grid, food distribution, and even civil order that we can expect during the lag time between planning failures and CAS responses.

CAS hardening of the financial system is, comparatively speaking, much easier. Almost trivial, actually. About all it requires is that we re-stigmatize the carrying of debt at more than a very small proportion of assets. By anybody. With that pressure, there would tend to be enough reserve at all levels of the financial system that it would avoid cascade failures in response to unpredictable shocks.

Cycling back to terrorism, the elite planner’s response to threats like underwear bombs is to build elaborate but increasingly brittle security systems in which airline passengers are involved only as victims. The CAS response would be to arm the passengers, concentrate on fielding bomb-sniffers so cheap that hundreds of thousands of civilians can carry one, and pay bounties on dead terrorists. [967]

Compared to the stigmergic organization, a bureaucratic hierarchy is systematically stupid. This was the subject of a recent debate between Roderick Long and Bryan Caplan. Here’s what Long wrote:

Rand describes a “pyramid of ability” operating within capitalism, wherein the dull masses are carried along by the intelligent and enterprising few. “The man at the top,” Rand assures us, “contributes the most to all those below him,” while the “man at the bottom who, left to himself, would starve in his hopeless ineptitude, contributes nothing to those above him, but receives the bonus of all of their brains.” Rand doesn’t say that the top and the bottom always correspond to employers and employees respectively, but she clearly takes that to be the usual situation. And that simply does not correspond with the reality of most people’s everyday experience. If you’ve spent any time at all in the business world, you’ve almost certainly discovered that the reality on the ground resembles the comic-strip Dilbert a lot more than it resembles Rand’s pyramid of ability. In Kevin Carson’s words: as in government, so likewise in business, the “people who regulate what you do, in most cases, know less about what you’re doing than you do,” and businesses generally get things done only to the extent that “rules imposed by people not directly involved in the situation” are treated as “an obstacle to be routed around by the people actually doing the work.” To a considerable extent, then, in the real world we see the people at the “bottom” carrying the people at the “top” rather than vice versa. [968]

Caplan, in challenging this assessment, missed the point. He treated Long’s critique as an attack on the intelligence of the average manager:

But what about the “tons of empirical evidence” that Rand’s pyramid of ability is real? The Bell Curve is a good place to start. Intelligence is one of the strongest — if not the strongest — predictors of income, occupation, and social status. More to the point, simple pencil-and-paper tests of intelligence are the single best predictor of independently measured job performance and trainability. If you want to dig deeper, check out the large literature on why income runs in families. How then can we reconcile first-hand observation with economic theory and statistical fact? It’s easier than it seems. Lots of people think their bosses are stupid because:

The market doesn’t measure merit perfectly, so success is partly luck. As a result, some bosses are unimpressive. (Though almost all of them are smarter than the average rank-and-file worker).

There’s a big contrast effect: If you expect bosses to be in the 99th percentile of ability, but they’re only in the 90th, it’s natural to misperceive them as “stupid.” (Similarly, if someone scores in the 99th percentile on the SAT in math, and the 80th in English, many people will perceive him as “terrible in English.”)

Bosses are much more visible than regular workers, so their flaws and mistakes — even if minor — are quickly noticed. When normal people screw up, there’s usually no one paying attention.

Perhaps most importantly, people over-rate themselves. We like to imagine that we’re so great that we intellectually tower over our so-called “superiors.” Only a small percentage of us are right.

If Rod Long’s point is merely that markets would be even more meritocratic under laissez-faire, I agree. But to deny that actually-existing capitalism is highly meritocratic is misguided. To suggest that the pyramid of ability is actually inverted is just silly. [969]

But the point, as I argued with Caplan, is not that managers are inherently less intelligent or capable as individuals. Rather, it’s that hierarchical organizations are—to borrow that wonderful phrase from Feldman and March—systematically stupid. For all the same Hayekian reasons that make a planned economy unsustainable, no individual is “smart” enough to manage a large, hierarchical organization. Nobody, not Einstein, not John Galt, possesses the qualities to make a bureaucratic hierarchy function rationally. Nobody’s that smart, any more than anybody’s smart enough to run Gosplan efficiently; that’s the whole point. No matter how insightful, resourceful, and prudent managers are as human beings dealing with actual reality, hierarchies by their very nature insulate those at the top from the reality of what’s going on below, and force them to operate in imaginary worlds where all their intelligence becomes useless. No matter how intelligent managers are as individuals, a bureaucratic hierarchy makes their intelligence less usable.

In a networked organization, just the opposite holds: stigmergic organization promotes maximum usability of intelligence.

The fundamental reason for agility, in a self-managed peer network, is the lack of a bureaucratic hierarchy separating the worker from the end-user. The main metric of quality is direct end-user feedback. And in a self-managed peer network, “employee education” follows directly from what workers actually learn by doing their jobs.

In a corporate hierarchy, in contrast, most quality metrics are developed to inform bureaucratic intermediaries who are neither providers nor end-users of the company’s services.

And, much like management metrics of quality, their metrics of employee skill and competence are utterly divorced from reality. At just about every job where I’ve ever worked, for example, “employee education” credits were utterly worthless busy work that had nothing to do with what I actually did.

Steve Herrick, commenting under a blog post of mine, confirmed my impression of the (lack of) value of most “in-service meetings” and “employee education hours,” based on his own experience working in hospitals:

...I work as a medical interpreter. According to the rules, I can’t touch patients (let alone provide care) or computers. However, according to other rules, I have [to] pass tests on sharps disposal, pathogen transmission, proper use of portable computers, etc. [970]

Such nonsense results, of necessity, from a situation in which a bureaucratic hierarchy must develop some metric for assessing the skills or work quality of a labor force whose actual work they know nothing about. When management doesn’t know (in Paul Goodman’s words) “what a good job of work is,” they are forced to rely on arbitrary metrics. Blogger Atrios describes his experience with the phenomenon:

During my summers doing temp office work I was always astounded by the culture of “face time”—the need to be at your desk early and stay late even when there was no work to be done and doing so in no way furthered any company goals. Doing your work and doing it adequately was entirely secondary to looking like you were working hard as demonstrated by your desire to stay at work longer than strictly necessary. [971]

One of his commenters, in considerably more pointed language, added: “If you are a manager who is too stupid to figure out that what you should actually measure is real output then the next best thing is to measure how much time people spend pretending to produce that output.” But in fairness, again, establishing a satisfactory measure of real output that can convey information to those outside the production process, without being gamed by those engaged in the process, in a situation where the interests of the two diverge, is a lot easier said than done.

Most of the constantly rising burden of paperwork exists to give an illusion of transparency and control to a bureaucracy that is out of touch with the actual production process. Most new paperwork is added to compensate for the fact that existing paperwork reflects poorly designed metrics that poorly convey the information they’re supposed to measure. “If we can only design the perfect form, we’ll finally know what’s going on.”

Weberian work rules result of necessity when performance and quality metrics are not tied to direct feedback from the work process itself. A work rule is a metric of work for someone who is neither a creator/provider nor an end user.

In a self-managed process, if we may recur to the terminology of James Scott cited in the previous chapter, work quality is horizontally legible to those directly engaged in it. In a hierarchy, managers are forced to see “in a glass darkly” a process which is necessarily opaque to them because they are not directly engaged in it. They are forced to carry out the impossible task of developing accurate metrics for evaluating the behavior of subordinates, based on the self-reporting of people with whom they have a fundamental conflict of interest. All of the paperwork burden that management imposes on workers reflects an attempt to render legible a set of social relationships that by its nature must be opaque and closed to them, because they are outside of it. Each new form is intended to remedy the heretofore imperfect self-reporting of subordinates. The need for new paperwork is predicated on the assumption that compliance must be verified because those being monitored have a fundamental conflict of interest with those making the policy, and hence cannot be trusted; but at the same time, that paperwork relies on their self-reporting as the main source of information. Every time new evidence is presented that this or that task isn’t being performed to management’s satisfaction, or this or that policy isn’t being followed, despite the existing reams of paperwork, management’s response is to design yet another form. “If you don’t trust me to do the job right without filling out all these forms, why do you trust me to fill out the forms truthfully?”

The difficulties are inherent in the agency problem. Human agency is inalienable. When someone agrees to work under someone else’s direction for a period of time, the situation is comparable to selling a car but remaining in the driver’s seat. There is no magical set of compliance paperwork or quality/performance metrics that will enable management to sit in the driver’s seat of the worker’s consciousness, to exercise direct control over his hands, or to look out through his eyes.

The only solution is to build incentives into the work itself, and into the direct relationships between the worker and customer, so that it is legible to them. It is necessary to create a situation in which creators/providers and end-users are the only parties directly involved in the provision of goods and services, so that metrics of quality are for them as well as of them. Michel Bauwens writes:

The capacity to cooperate is verified in the process of cooperation itself. Thus, projects are open to all comers provided they have the necessary skills to contribute to a project. These skills are verified, and communally validated, in the process of production itself. This is apparent in open publishing projects such as citizen journalism: anyone can post and anyone can verify the veracity of the articles. Reputation systems are used for communal validation. The filtering is a posteriori, not a priori. Anti-credentialism is therefore to be contrasted to traditional peer review, where credentials are an essential prerequisite to participate.

P2P projects are characterized by holoptism. Holoptism is the implied capacity and design of peer to [peer] processes that allows participants free access to all the information about the other participants; not in terms of privacy, but in terms of their existence and contributions (i.e. horizontal information) and access to the aims, metrics and documentation of the project as a whole (i.e. the vertical dimension).

This can be contrasted to the panoptism which is characteristic of hierarchical projects: processes are designed to reserve ‘total’ knowledge for an elite, while participants only have access on a ‘need to know’ basis. However, with P2P projects, communication is not top-down and based on strictly defined reporting rules, but feedback is systemic, integrated in the protocol of the cooperative system. [972]

When you make a sandwich for yourself, or for a member of your family, you don’t need a third-party inspection regime to guarantee that the sandwich is up to snuff, because there is a fundamental unity of interest between the sandwich maker and the sandwich eater. And if the sandwich is substandard, you or your family know it at the first bite. In other words, the process is run directly for the benefit of those engaged in it, and the quality feedback is built directly into the process itself.

It’s only when people are engaged in work with no intrinsic value or meaning to themselves, with which they don’t identify, which they don’t control, and which is for the benefit of people whose interests are fundamentally opposed to their own, that a complicated system of compliance and quality metrics is required to vouch for its quality to third parties removed from the immediate situation. And in such circumstances, because the managerial hierarchy lacks the job-related tacit knowledge required to formulate meaningful metrics or evaluate incoming data, the function of the metrics and data is at best largely symbolic: e.g., elaborate exercises in shining it on, like JCAHO inspections and ISO-9000. At worst, they reduce quality when people who don’t understand the work interfere with those who do. So you wind up with a 300-page manual for making the sandwich, along with numerous other 300-page manuals for vendor specifications—and it still tastes like crap.

A classic example of the counterproductivity of using bureaucratic rules to obstruct the initiative of those directly involved in a situation is the story of a train fire that circulated widely on the Internet (and which, according to Snopes.com, turns out to be legitimate). A faulty bearing caused a wheel on one of the cars to overheat and melt down. The crew, spotting the smoke, stopped the train in compliance with the rules. Unfortunately, it came to rest on a wooden bridge with creosote ties. Still more unfortunately, the management geniuses directing the crew from afar refused to budge on the rules, which prohibited moving the train. As a result, the bridge burned and six burning coal cars dropped into the creek below. [973]

The same principle was illustrated by an anecdote from the Soviet Great Patriotic War (I’m afraid I can’t track down the original source, but it’s too good a story not to relate). A division commander was denied permission to pull his divisional artillery back far enough to be in effective range of a road, and thus to be able to target German armor moving along that road, because he couldn’t convince the political officer that backward movement didn’t constitute “retreat.”

And then there’s the old saw about how the Egyptians lost the 1967 Arab-Israeli War because they literally obeyed the instructions in their Russian field manuals: “retreat into the interior and wait for the first snowfall.”

Rigid hierarchies and rigid work rules only work in a predictable environment. When the environment is unpredictable, the key to success lies with empowerment and autonomy for those in direct contact with the situation. A good example is the Transportation Security Administration’s response to the threat of Al Qaeda attacks. As Matthew Yglesias has argued, “the key point about identifying al-Qaeda operatives is that there are extremely few al-Qaeda operatives so (by Bayes’ theorem) any method you employ of identifying al-Qaeda operatives is going to mostly reveal false positives.” [974] So (this is me talking) when your system for anticipating attacks upstream is virtually worthless, the “last mile” becomes monumentally important: having people downstream capable of recognizing and thwarting the attempt, and with the freedom to use their own discretion in stopping it, when it is actually made.
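Yglesias’s point is the base-rate fallacy in action. As a minimal sketch (the base rate, detection rate, and false-positive rate below are made-up numbers for illustration, not figures from any actual screening system), even a very accurate screening method flags almost nothing but innocents:

```python
# Base-rate illustration of the Bayes' theorem point above.
# All three inputs are hypothetical, chosen purely for illustration.
base_rate = 1e-6             # assumed fraction of travelers who are operatives
sensitivity = 0.99           # assumed P(flagged | operative)
false_positive_rate = 0.01   # assumed P(flagged | innocent)

# Law of total probability: overall chance that a traveler gets flagged.
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)

# Bayes' theorem: chance that a flagged traveler is actually an operative.
posterior = (sensitivity * base_rate) / p_flagged
print(f"P(operative | flagged) = {posterior:.2e}")
# ~9.9e-05: roughly 9,999 of every 10,000 flagged travelers are false positives.
```

Making the test more sensitive barely moves the result; with a base rate this low, the posterior stays tiny unless the false-positive rate can be driven down toward the base rate itself.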

An almost universal problem, when bureaucratic, stovepiped industrial design processes isolate designers from user feedback, is the “gold-plated turd.” Whenever a product is designed by one bureaucracy, for sale to procurement officers in another bureaucracy who are buying it for someone else’s use, a gold-plated turd is almost invariably the result.

A good example from my experience as a hospital worker is the kind of toilet paper dispenser sold to large institutional clients. If you’ve ever used a public restroom or patient restroom in a hospital, you’ve almost certainly encountered one of those Georgia-Pacific monstrosities: a plastic housing that makes it almost impossible to manipulate the roll without breaking your wrist, and so much resistance that, more often than not, you tear the paper rather than turning the spool. And these toilet paper dispensers, seemingly engineered at great effort to perform their functions as badly as possible, sell for $20 or more. An ordinary toilet paper spool, on the other hand—one that actually turns easily and is convenient to use—can probably be bought at Lowe’s or Home Depot for a dollar.

I’ve had similar experiences as a consumer of goods and services, outside of my job. A good example is my experience with the IT officer at the local public library, which I described earlier in the book. I emailed the library about how poorly the newly installed Word 2007 software, and whatever Windows desktop upgrade they’d bought, performed compared to the earlier version of Windows and the Word 2003 they replaced. As Windows products go, Word 2003 is about the best word processing software you can get. It’s got a user interface pretty much the same as that of Open Office, in terms of complexity. In fact, I’d go so far as to say it was as good as Open Office, aside from the $200 price tag and the forced upgrades that open source software is mercifully free of. Word 2007, on the other hand, is a classic gold-plated turd. Its user interface is so complicated and busy that the dashboard actually has to be tabbed to accommodate all the bells and whistles. I told the IT officer that it was a good idea, whenever she found a Windows product that worked acceptably, to hold onto it like grim death and run like hell when offered anything “new and improved” from Redmond. Her response: Word 2007 is the standard “productivity software” choice of major public libraries and corporations all across America. In my follow-up, I told her the very fact that something worked worse than what it replaced, despite being the “standard choice” of pointy-haired bosses all across the country, was an object lesson in the folly of basing one’s software choice on corporate bureaucrats’ “best practices” rather than on feedback from user communities. Never heard back from her, for some reason. Nice lady, though.

Niall Cook, in Enterprise 2.0, contrasts the efficiency of social software outside the enterprise with that of the “enterprise software” in common use by employers. Self-managed peer networks, and individuals meeting their own needs in the outside economy, organize their efforts through social software chosen by the users themselves based on its superior usability for their purposes. And they are free to do so without corporate bureaucracies and their officially defined procedural rules acting as a ball and chain. Enterprise software, in contrast, is chosen by non-users for use by other people of whose needs they know little (at best). Hence enterprise software is frequently a gold-plated turd. Blogs and wikis, and the free, browser-based platforms offered by Google and Mozilla, are a quantum improvement on the proprietary enterprise software that management typically forces on its employees. The kinds of productivity software and social software freely available to individuals in their private lives are far better than the enterprise software that corporate bureaucrats buy for a captive clientele of users—consumer software capabilities amount to “a fully functioning, alternative IT department.” [975] Corporate IT departments, in contrast, “prefer to invest in a suite of tools ‘offered by a major incumbent vendor like Microsoft or IBM’.” System specs are driven by management’s top-down requirements rather than by user needs.

...a small group of people at the top of the organization identify a problem, spend 12 months identifying and implementing a solution, and a huge amount of resources launching it, only then to find that employees don’t or won’t use it because they don’t buy in to the original problem. [976]

Management is inclined “to conduct a detailed requirements analysis with the gestation period of an elephant simply in order to choose a $1,000 social software application.” [977] Employees often wind up using their company credit cards to purchase needed tools online rather than “wait for [the] IT department to build a business case and secure funding.” [978] This is the direct opposite of agility.

As a result of all this, people are more productive away from work than they are at work.

Corporate IT departments are a lot like the IT department at my public library, as recounted above. They are obsessed with security and control, and see the free exchange of information between employees as a threat to that security and control. They also have an affinity for doing business with other bureaucracies like themselves, which means a preference for buying proprietary enterprise software from giant corporations. They select software on pretty much the same basis as a Grandma buying a gift for her granddaughter just entering college: “I just knew it had to be the best, dear, because it’s the latest thing from Microsoft!”

Nascent “Enterprise 2.0” organization within a traditional firm is often forced to fight obstruction from top-down management styles, even in areas where human capital is the main source of value. In corporate cultures obsessed with security and control, management instinctively fights workers’ attempts to choose their own platforms based on usability. Attempts to facilitate information sharing between employees fall afoul of this culture, because employees obviously wouldn’t desire access to information unless they were up to no good. On the outside, peer networks are free to self-organize without interference from hierarchy. As a result, in forms of production where the main source of value is human capital, and the human relationships for sharing knowledge, autonomous outside peer networks have a leg up on corporate hierarchies.

The parallels between Enterprise 2.0 and the military’s doctrines for Fourth Generation Warfare are striking. Those doctrines are an attempt to take advantage of network communications technology and cybernetic information processing capabilities in order to replicate, within a conventional military force, the agility and resilience of networked organizations like Al Qaeda. The problem, as we saw earlier in this chapter, is that interference from the military’s old bureaucratic hierarchies systematically impedes all the possibilities offered by network technology. The basic idea behind the new doctrines is, through the use of networked communications technology, to increase the autonomy and reduce the reaction time of the “boots on the ground” directly engaged in a situation. But as John Robb suggested, military hierarchies wind up seeing the new communications technologies instead as a way of increasing mid-level commanders’ realtime control over operations, and increasing the number of sign-offs required to approve any proposed operation. By the time those engaged in combat operations get the required eleven approvals of higher-ups, and the staff officers have had time to process the information into some kind of unrecognizable scrapple (PowerPoint presentations and all), the immediate situation has changed to the point that their original plan is meaningless anyway.

So the real thing—genuinely independent, self-managed networked resistance movements unimpeded by bureaucratic interference with the natural feedback and reaction mechanisms of a stigmergic organization—is incomparably better than the military hierarchy’s pallid imitations.

Similarly, Enterprise 2.0 is an attempt to replicate, within the boundaries of a corporation, the kinds of networked, stigmergic organization that Raymond wrote about in “The Cathedral and the Bazaar.” But networked producers inside the corporation find themselves thwarted, at every hand, by bureaucratic impediments to their putting immediately into practice their own judgment of what’s necessary based on direct experience of the situation.

What actually happens, when management attempts to “empower” employees by adopting a networked organization within corporate boundaries, is suggested by an anecdote from an HR blog. Management came up with a brilliant idea for reducing the number of round-robin emails selling extra concert tickets and used cars, soliciting rides, etc.: to put an official bulletin board at one convenient central location! But rather than simply mounting a square of corkboard and leaving employees to their own devices in posting notices, management had to come up with an official procedure for advance submission of notices for approval, followed—a week later, if they were lucky and the notice was successfully vetted for all conceivable violations of company policy—by a manager unlocking the glass case with his magic set of keys and posting the ad. Believe it or not, management was puzzled as to why the round-robin emails continued and the bulletin board wasn’t more popular. [979]

This sort of thing is the currency of one school of organization theorists who, as Charles Sabel describes them, assert that

So bounded is the rationality of organizations that they are incapable of learning in the sense of improving decisions by deliberation on experience. Thus the assumption that decision makers ‘survey’ only the first feasible choices immediately accessible to them at the moment of decision, and ‘prefer’ that choice to any other or inaction, yields ‘garbage-can’ models of organizations, in which decisions result from collisions between decision makers and solutions.... The assumption that decision makers can compare only a few current solutions to their problem, and prefer the one that best meets their needs, but cannot draw from this decision any analytic conclusions regarding subsequent choices, turns organized decision making into muddling through.... [980]

To take just one example: in corporate legitimizing rhetoric, management decisions are always based on a rational assessment of the best available information. But when Martha Feldman and James March did case studies of three organizations, they found little relationship between the gathering of information and the policies ostensibly based on it: an almost total disconnect. [981]

Feldman and March did their best to provide a charitable explanation—an explanation, that is, other than “organizations are systematically stupid.” [982] “Systematically stupid” probably comes closest to satisfying Occam’s Razor, and I’d have happily stuck with that explanation. But Feldman and March struggled to find some adaptive purpose in the observed use of information.

The interesting thing, from my perspective, is that most of the “adaptive purposes” they describe reflect precisely what I’d call “systematic stupidity.” They began by surveying more conventional assessments of organizational inefficiency as an explanation for the observed pattern. First, organizations are “unable... to process the information they have. They experience an information glut as a shortage. Indeed, it is possible that the overload contributes to the breakdown in processing capabilities....” Second, “...the information available to organizations is systematically the wrong kind of information. Limits of analytical skill or coordination lead decision makers to collect information that cannot be used.” [983]

Then they made three observations of their own on how organizational structure affects the use of information:

First, ordinary organizational procedures provide positive incentives for underestimating the costs of information relative to its benefits. Second, much of the information in an organization is gathered in a surveillance mode rather than in a decision mode. Third, much of the information used in organizational life is subject to strategic misrepresentations.

Organizations provide incentives for gathering more information than is optimal from a strict decision perspective.... First, the costs and benefits of information are not all incurred at the same place in the organization. Decisions about information are often made in parts of the organization that can transfer the costs to other parts of the organization while retaining the benefits.... Second, post hoc accountability is often required of both individual decision makers and organizations...

Most information that is generated and processed in an organization is subject to misrepresentation....

The decision maker, in other words, must gather excess information in anticipated defense against the possibility that his decision will be second-guessed. [984] By “surveillance mode,” the authors mean that the organization seeks out information not for any specific decision, but rather to monitor the environment for surprises. The lead time for information gathering is longer than the lead time for decisions. Information must therefore be gathered and processed without clear regard to the specific decisions that may be made. [985]

All the incentives mentioned so far seem to result mainly from large size and hierarchy—i.e., to result (again) from “systematic stupidity.” The problem of non-internalization of the costs and benefits of information-gathering by the same actor, of course, falls into the inefficiency costs of large size. The problem of post hoc accountability results from hierarchy. At least part of the problem of surveillance mode is another example of poor internalization: the people gathering the information are different from the ones using it, and are therefore gathering it with a second-hand set of goals which does not coincide with their own intrinsic motives. The strategic distortion of information, as an agency problem, is (again) the result of hierarchy and the poor internalization of costs and benefits in the same responsible actors. In other words, the large, hierarchical organization is “systematically stupid.”

The authors’ most significant contribution in this article is their fourth observation: that the gathering of information serves a legitimizing function in the organization.

Bureaucratic organizations are edifices built on ideas of rationality. The cornerstones of rationality are values regarding decision making.... The gathering of information provides a ritualistic assurance that appropriate attitudes about decision making exist. Within such a scenario of performance, information is not simply a basis for action. It is a representation of competence and a reaffirmation of social virtue. Command of information and information sources enhances perceived competence and inspires confidence. The belief that more information characterizes better decisions engenders a belief that having information, in itself, is good and that a person or organization with more information is better than a person or organization with less. Thus the gathering and use of information in an organization is part of the performance of a decision maker or an organization trying to make decisions intelligently in a situation in which the verification of intelligence is heavily procedural and normative.... Observable features of information use become particularly important in this scenario. When there is no reliable alternative for asserting a decision maker’s knowledge, visible aspects of information gathering and storage are used as implicit measures of the quality and quantity of information possessed and used.... [986]

In other words, when an organization gets too big to have any clear idea how well it is performing the function for which it officially exists, it creates a metric for “success” defined—as we saw in our study of Sloanist organizational pathologies—in terms of the processing of inputs.

But in fairness to management, it’s not the stupidity of the individual; to repeat my point above contra Caplan, it’s the stupidity of the organization. Large, hierarchical organizations are systematically stupid, regardless of how intelligent and competent the people running them are. By definition, nobody is smart enough to run a large, hierarchical organization, just as nobody’s smart enough to centrally plan an economy.

The reality of corporate life is apt to bear a depressing resemblance to the Ministry of Central Services in Brazil, or to “The Feds” in Neal Stephenson’s Snow Crash. “The Feds” in the latter example are the direct successor to the United States government, claiming continued sovereign jurisdiction over the territory of the former U.S., but in fact functioning as simply one of many competing franchise “governments” or networked civil societies in the panarchy that exists following the collapse of most territorial states. Mainly occupying the federal office buildings on what used to be federal property, their primary activity is designing enterprise software for sale to corporations. Their internal governance seems to reflect, in equal parts, the bureaucratic world of Brazil and the typical IT department’s idealized vision of a corporate intranet (not that there’s much difference).

One employee of the Feds shows up for work and logs on, after negotiating the endless series of biometric scans, only to receive a long and excruciatingly detailed memo on the policies governing the unauthorized bringing in of toilet paper from home, sparked by toilet paper shortages in the latest austerity drive.

The memo includes an announcement that “Estimated reading time for this document is 15.62 minutes (and don’t think we won’t check).” Her supervisor’s standard template, in checking up on memo reading times, is something like this:

Less than 10 min.: Time for an employee conference and possible attitude counseling.
10–14 min.: Keep an eye on this employee; may be developing slipshod attitude.
14–15.61 min.: Employee is an efficient worker, may sometimes miss important details.
Exactly 15.62 min.: Smartass. Needs attitude counseling.
15.63–16 min.: Asswipe. Not to be trusted.
16–18 min.: Employee is a methodical worker, may sometimes get hung up on minor details.
More than 18 min.: Check the security videotape, see just what this employee was up to (e.g., possible unauthorized restroom break).

The employee decides, accordingly, to spend between fourteen and fifteen minutes reading the memo. “It’s better for younger workers to spend too long, to show that they’re careful, not cocky. It’s better for older workers to go a little fast, to show good management potential.”

Their actual work is similarly micromanaged:

She is an applications programmer for the Feds. In the old days, she would have written computer programs for a living. Nowadays, she writes fragments of computer programs. These programs are designed by Marietta and Marietta’s superiors in massive week-long meetings on the top floor. Once they get the design down, they start breaking up the problem into tinier and tinier segments, assigning them to group managers, who break them down even more and feed little bits of work to the individual programmers. In order to keep the work done by the individual coders from colliding, it all has to be done according to a set of rules and regulations even bigger and more fluid than the Government procedure manual [even bigger than the rules for reading a toilet paper memo?].

So the first thing [she] does, having read the new subchapter on bathroom tissue pools, is to sign on to a subsystem of the main computer system that handles the particular programming project she’s working on. She doesn’t know what the project is—that’s classified—or what it’s called. She shares it with a few hundred other programmers, she’s not sure exactly who. And every day when she signs on to it, there’s a stack of memos waiting for her, containing new regulations and changes to the rules that they all have to follow when writing code for the project. These regulations make the business with the bathroom tissue seem as simple and elegant as the Ten Commandments.

So she spends until about eleven A.M. reading, rereading, and understanding the new changes in the Project [presumably with recommended reading times, carefully monitored, for each one].... Then she starts going back over all the code she has previously written for the Project and making a list of all the stuff that will have to be rewritten in order to make it compatible with the new specifications. Basically, she’s going to have to rewrite all of her material from the ground up. For the third time in as many months. But hey, it’s a job. [987]

If you think that’s a joke, go back and reread the material in the last section on the rules governing PowerPoint presentations in the U.S. military command in Afghanistan.

E. The Implications of Reduced Physical Capital Costs

The informal and household economy reduces waste by its reliance on “spare cycles” of ordinary capital goods that most people already own. It makes productive use of idle capital assets the average person owns anyway, provides a productive outlet for the surplus labor of the unemployed, and transforms the small surpluses of household production into a ready source of exchange value.

Let’s consider again our example of the home-based microenterprise—the microbrewery or restaurant—from Chapter Five. Buying a brewing vat and a few small fermenters for your basement, using a few tables in an extra room as a public restaurant area, etc., would require a small bank loan for at most a few thousand dollars. And with that capital outlay, you could probably make payments on the debt with the margin from one customer a day. A few customers evenings and weekends, probably found mainly among your existing circle of acquaintances, would enable you to initially shift some of your working hours from wage labor to work in the restaurant, with the possibility of gradually phasing out wage labor altogether or scaling back to part time, as you built up a customer base. In this and many other lines of business (for example a part-time gypsy cab service using a car and cell phone you own anyway), the minimal entry costs and capital outlay mean that the minimum turnover required to pay the overhead and stay in business would be quite modest. As a result, a lot more people would be able to start small businesses for supplementary income and incrementally shift some of their wage work to self-employment, with minimal risk or sunk costs.

The lower the initial capital outlays, and the lower the resulting overhead that must be serviced, the larger the share of its income stream the microenterprise keeps free and clear, regardless of how much business it is able to do. It is under no pressure to “go big or not go at all,” to “get big or get out,” or to engage in large-batch production to minimize unit costs from overhead, because it has virtually no overhead costs. So the microenterprise can ride out prolonged periods of slow business. If the microenterprise is based in a household which owns its living space free and clear and has a garden and well-stocked pantry, the household may be able to afford to go without income during slow spells and live off its savings from busy periods. Even if the household is dependent on some wage labor, the microenterprise in good times can be used as a supplemental source of income with no real cost or risk of the kind that would exist were there overhead to be serviced, therefore enabling a smaller wage income to go further in a household income-pooling unit.
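A back-of-the-envelope sketch makes the overhead argument concrete. All figures below are hypothetical assumptions (the text gives only rough magnitudes), comparing the break-even point of a low-overhead household microenterprise with that of a conventional establishment:

```python
# Break-even under low vs. high fixed overhead.
# Every figure is an illustrative assumption, not data from the text.
margin_per_customer = 10.00  # assumed net margin per customer, dollars

# Household microbrewery/restaurant: a $3,000 loan at 8% over 5 years.
loan, annual_rate, years = 3000.00, 0.08, 5
r = annual_rate / 12
monthly_debt = loan * r / (1 - (1 + r) ** (-12 * years))  # standard loan amortization
print(f"microenterprise overhead: ${monthly_debt:.2f}/mo; "
      f"break-even at {monthly_debt / margin_per_customer:.1f} customers/mo")

# Conventional restaurant: rent, licensing, insurance, staff, debt service.
conventional_overhead = 8000.00  # assumed fixed monthly costs
print(f"conventional overhead: ${conventional_overhead:.2f}/mo; "
      f"break-even at {conventional_overhead / margin_per_customer:.0f} customers/mo")
```

On these assumptions the microenterprise covers its only fixed cost with about six customers a month, while the conventional firm needs eight hundred; that gap is what lets the former ride out slow periods indefinitely.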

That’s why, as we saw in Chapter Two, one of the central functions of so-called “health” and “safety” codes and occupational licensing is to prevent people from using the idle capacity (or “spare cycles”) of things they already own anyway, thereby transforming them into capital goods for productive use. In general, state regulatory measures that increase the minimum level of overhead needed to engage in production will increase the rate of failure for small businesses, with added pressure toward intensified “cutthroat competition.” In the specific case of high burdens of interest-bearing debt, and the pressure to earn a sufficient revenue stream to repay the interest as well as the principal, Tom Greco writes,

As borrowers compete with one another to try to meet their debt obligations in this game of financial “musical chairs,” they are forced to expand their production, sales, and profits.... ...Thus, debt continually mounts up, and businesses and individuals are forced to compete for markets and scarce money in a futile attempt to avoid defaulting on their debts. The system makes it certain that some must fail. [988]

Because the household economy and the microenterprise require few or no capital outlays, their burden of overhead is minuscule. This removes the pressure toward large-batch production. It removes the pressure to get out of business altogether and liquidate one’s assets when business is slow, because there is no overhead to service. Reduced overhead costs reduce the failure rate; they reduce the cost of staying in business indefinitely, enjoying revenue free and clear in good periods and riding out slow ones with virtually no loss. As Borsodi wrote,

Only in the home can the owner of a machine afford the luxury of using it only when he has need of it. The housewife uses her washing machine only an hour or two per week. The laundry has to operate its washing machine continuously. Whether operating or not operating all of its machines, the factory has to earn enough to cover depreciation and obsolescence on them. Office overhead, too, must be earned, whether the factory operates on full time or only on part time. [989]

And a housewife who uses her washing machine to full capacity in a household micro-laundry, with no additional marginal cost besides the price of soap, water, and power, will eat the commercial laundry alive.

F. Strong Incentives and Reduced Agency Costs

We already saw, above, Eric Raymond’s description of how self-selection and incentives work in the Linux “Bazaar” model of open-source development. As Michel Bauwens put it,

the permissionless self-aggregation afforded by the internet, allowed humans to congregate around their passionate pursuits.... It was discovered that when people are motivated by intrinsic positive motivation, they are hyperproductive.... ...[W]hile barely one in five of corporate workers are passionately motivated, one hundred percent of peer producers are, since the system filters out those lacking it! [990]

And Johan Soderberg, likewise:

To a hired programmer, the code he is writing is a means to get a pay check at the end of the month. Any shortcut when getting to the end of the month will do. For a hacker, on the other hand, writing code is an end in itself. He will always pay full attention to his endeavour, or else he will be doing something else. [991]

The alternative economy reduces waste by eliminating the time wasted under the “face time” paradigm. Wage labor and hierarchy are characterized by high degrees of “presenteeism.” Because management is so divorced from the actual production process, it has insufficient knowledge of the work to develop a reliable metric of actual work accomplished. So it is forced to rely on proxies for work accomplished, like the amount of time spent in the office and whether people “look busy.” Workers, who have no intrinsic interest in the work and who get paid for just being there, have no incentive to use their time efficiently.

Matthew Yglesias describes this as “the office illusion,” the equation of “being in the office” with “working”:

Thus, minor questions like am I getting any work done? can tend to slip away. Similarly, when I came into an office every day, I felt like I couldn’t just leave the office because I didn’t want to do any more work, so I would kind of foot-drag on things to make sure whatever task I had stretched out to fill the entire working day. If I’m not in an office, by contrast, I’m acutely aware that I have a budget of tasks that need to be accomplished, that “working” means finishing some of those tasks, and that when the tasks are done, I can go to the gym or go see a movie or watch TV. Thus, I tend to work in a relatively focused, disciplined manner and then go do something other than work rather than slack off. [992]

Under the “face time” paradigm of wage employment at a workplace away from home, there is no trade-off between work and leisure. Anything done at work is “work,” for which one gets paid. There is no opportunity cost to slacking off on the job. In home employment, on the other hand, the trade-off between effort and consumption is clear. The self-employed worker knows how much productive labor is required to support his desired level of consumption, and gets it done so he can enjoy the rest of his life. If his work itself is a consumption good, he still balances it with the rest of his activities in a rational, utility-maximizing manner, because he is the conscious master of his time, and has no incentive to waste time because “I’m here anyway.” Any “work” he does which is comparatively unproductive or unrewarding comes at the expense of more productive or enjoyable ways of spending his time.

At work, on the other hand, all time belongs to the boss. A shift of work is an eight-hour chunk of one’s life, cut off and flushed down the toilet for the money it will bring. And as a general rule, people do not make very efficient use of what belongs to someone else.

J.E. Meade contrasts the utility-maximizing behavior of a self-employed individual to that of a wage employee:

A worker hired at a given hourly wage in an Entrepreneurial firm will have to observe the minimum standard of work and effort in order to keep his job; but he will have no immediate personal financial motive... to behave in a way that will promote the profitability of the enterprise.... [A]ny extra profit due to his extra effort will in the first place accrue to the entrepreneur.... Let us go to the other extreme and consider a one-man Cooperative, i.e. a single self-employed worker who hires his equipment. He can balance money income against leisure and other amenities by pleasing himself over hours of work, holidays, the pace and concentration of work, tea-breaks or the choice of equipment and methods of work which will make his work more pleasant at the cost of profitability. Any innovative ideas which he has, he can apply at once and reap the whole benefit himself. [993]

This is true not only of self-employment in the household sector and of self-managed peer networks, but of self-managed cooperatives in the money economy as well. The latter require far less in the way of front-line managers than do conventional capitalist enterprises. Edward Greenberg contrasts the morale and engagement with work, among the employees of a capitalist enterprise, with that of workers who own and manage their place of employment:

Rather than seeing themselves as a group acting in mutuality to advance their collective interests and happiness, workers in conventional plants perceive their work existence, quite correctly, as one in which they are almost powerless, being used for the advancement and purposes of others, subject to the decisions of higher and more distant authority, and driven by a production process that is relentless....

The general mood of these two alternative types of work settings could not be more sharply contrasting. To people who find themselves in conventional, hierarchically structured work environments, the work experience is not humanly rewarding or enhancing. This seems to be a product of the all-too-familiar combination of repetitious and monotonous labor... and the structural position of powerlessness, one in which workers are part of the raw material that is manipulated, channeled, and directed by an only partly visible managerial hierarchy. Workers in such settings conceive of themselves, quite explicitly, as objects rather than subjects of the production process, and come to approach the entire situation, quite correctly, since they are responding to an objective situation of subordination, as one of a simple exchange of labor for wages. Work, done without a great deal of enthusiasm, is conceived of as intrinsically meaningless, yet necessary for the income that contributes to a decent life away from the workplace. [994]

Greenberg notes a “striking” fact: “the vast difference in the number of supervisors and foremen found in conventional plants as compared with the plywood cooperatives.”

While the latter were quite easily able to manage production with no more than two per shift, and often with only one, the former often requires six or seven. Such a disparity is not uncommon. I discovered in one mill that had recently been converted from a worker-owned to a conventional, privately owned firm that the very first action taken by the new management team was to quadruple the number of line supervisors and foremen. In the words of the general manager of this mill who had also been manager of the mill prior to its conversion,

We need more foremen because, in the old days, the shareholders supervised themselves.... They cared for the machinery, kept their areas picked up, helped break up production bottlenecks all by themselves. That’s not true anymore. We’ve got to pretty much keep on them all of the time. [995]

Workers in a cooperative enterprise put more of themselves into their work, and feel free to share their private knowledge—knowledge that would be exploited far more ruthlessly as a source of information rent in a conventional enterprise. Greenberg quotes a comment by a worker in a plywood co-op that speaks volumes on wage labor’s inefficiency at aggregating distributed knowledge, compared to self-managed labor:

If the people grading off the end of the dryer do not use reasonable prudence and they start mixing the grades too much, I get hold of somebody and I say, now look, this came over to me as face stock and it wouldn’t even make decent back. What the hell’s goin’ on here? [Interviewer: That wouldn’t happen if it were a regular mill?] That wouldn’t happen. [In a regular mill]... he has absolutely no money invested in the product that’s being manufactured.... He’s selling nothing but his time. Any knowledge he has on the side, he is not committed or he is not required to share that. [emphasis added] It took me a little while to get used to this because where I worked before... there was a union and you did your job and you didn’t go out and do something else. Here you get in and do anything to help.... I see somebody needs help, why you just go help them. I also tend to... look around and make sure things are working right a little more than... if I didn’t have anything invested in the company.... I would probably never say anything when I saw something wrong. [996]

G. Reduced Costs of Supporting Rentiers and Other Useless Eaters

The alternative economy reduces waste and increases efficiency by eliminating the burden of supporting a class of absentee investors. By lowering the threshold of capital investment required to enter production, and smoothing the shift from wage employment to self-employment, the informal economy increases efficiency. Because producer-owned property must support only the laborer and his family, the rate of return required to make the employment of land and capital worthwhile is reduced. As a result, fewer productive resources are held out of use and there are more opportunities for productive labor.

The absentee ownership of capital skews investment in a different direction from what it would take in an economy of labor-owned capital, and reduces investment to lower levels. Investments that, in an economy of worker-owned capital, would be justified by the bare fact of making labor less onerous and increasing productivity [997] must, in an economy of rentiers, produce an additional return on the capital to be considered worth making. It is directly analogous to holding out of use vacant land that would enable laborers to subsist comfortably, because the land will not in addition produce a rent over and above the laborer’s subsistence. As Thomas Hodgskin observed in Popular Political Economy,

It is maintained... that labour is not productive, and, in fact, the labourer is not allowed to work, unless, in addition to replacing whatever he uses or consumes, and comfortably subsisting himself, his labour also gives a profit to the capitalist...; or unless his labour produces a great deal more... than will suffice for his own comfortable subsistence. Capitalists becoming the proprietors of all the wealth of the society... act on this principle, and never... will they suffer labourers to have the means of subsistence, unless they have a confident expectation that their labour will produce a profit over and above their own subsistence. This... is so completely the principle of slavery, to starve the labourer, unless his labour will feed his master as well as himself, that we must not be surprised if we should find it one of the chief causes... of the poverty and wretchedness of the labouring classes. [998]

When capital equipment is owned by the same people who make and use it, or made and used by different groups of people who divide the entire product according to their respective labor and costs, it is productive. But when capital equipment is owned by a class of rentiers separate from those who make it or use it, the owners may be said more accurately to impede production rather than “contribute” to it.

If there were only the makers and users of capital to share between them the produce of their co-operating labour, the only limit to productive labour would be, that it should obtain for them and their families a comfortable subsistence. But when in addition to this..., they must also produce as much more as satisfies the capitalist, this limit is much sooner reached. When the capitalist... will allow labourers neither to make nor use instruments, unless he obtains a profit over and above the subsistence of the labourer, it is plain that bounds are set to productive labour much within what Nature prescribes. In proportion as capital in the hands of a third party is accumulated, so the whole amount of profit required by the capitalist increases, and so there arises an artificial check to production and population. The impossibility of the labourer producing all which the capitalist requires prevents numberless operations, such as draining marshes, and clearing and cultivating waste lands; to do which would amply repay the labourer, by providing him with the means of subsistence, though they will not, in addition, give a large profit to the capitalist. In the present state of society, the labourers being in no case the owners of capital, every accumulation of it adds to the amount of profit demanded from them, and extinguishes all that labour which would only procure the labourer his comfortable subsistence. [999]

Hodgskin developed this same theme, as it applied to land, in The Natural and Artificial Right of Property Contrasted:

It is, however, evident, that the labour which would be amply rewarded in cultivating all our waste lands, till every foot of the country became like the garden grounds about London, were all the produce of labour on those lands to be the reward of the labourer, cannot obtain from them a sufficiency to pay profit, tithes, rent, and taxes.... In the same manner as the cultivation of waste lands is checked, so are commercial enterprise and manufacturing industry arrested. Infinite are the undertakings which would amply reward the labour necessary for their success, but which will not pay the additional sums required for rent, profits, tithes, and taxes. These, and no want of soil, no want of adequate means for industry to employ itself, are the causes which impede the exertions of the labourer and clog the progress of society. [1000]

The administrative and transaction costs of the conventional commercial economy have an effect similar to that of rentier incomes: they increase the number of people the laborer must support, in addition to himself, and thereby increase the minimum scale of output required for entering the market. The social economy enables its participants to evade the overhead costs of conventional organization (of the kind we saw skewered by Paul Goodman in Chapter Two), as described by Scott Burns in The Household Economy. The most enthusiastic celebrations of increased efficiencies from division of labor—like those at Mises.Org—tend to rely on illustrations in which, as Burns puts it, “labor can be directly purchased,” or be made the object of direct exchange between the laborers themselves. But in fact,

[m]arketplace labor must not only bear the institutional burden of taxation, it must also carry the overhead costs of organization and the cost of distribution. Even the most direct service organizations charge two and one-half times the cost of labor. The accountant who is paid ten dollars an hour is billed out to clients at twenty-five dollars an hour.... When both the general and the specific overhead burdens are considered, it becomes clear that any productivity that accrues to specialization is vitiated by the overhead burdens it must carry. Consider, for example, what happens when an eight-dollar-an-hour accountant hires an eight-dollar-an-hour service repairman, and vice versa. The repairman is billed out by his company at two and one-half times his hourly wage, or twenty dollars; to earn this money, the accountant must work three hours and twenty minutes, because 25 per cent of his wages are absorbed by taxes. Thus, to be truly economically efficient, the service repairman must be at least three and one-third times as efficient as the accountant at repairing things. [1001]
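Burns’s arithmetic is worth checking, since the whole argument turns on it. Below is a minimal sketch in Python, using only the figures from the quoted passage: the $8 wage, the 2.5× billing multiplier, and the 25% tax bite.

```python
# Worked check of Burns's overhead arithmetic (all figures from the quoted passage).
wage = 8.00                # hourly wage of both the accountant and the repairman
billing_multiplier = 2.5   # service firms bill out labor at roughly 2.5x the wage
tax_rate = 0.25            # share of the accountant's wages absorbed by taxes

billed_price = wage * billing_multiplier   # $20 for one hour of repair work
net_wage = wage * (1 - tax_rate)           # the accountant nets $6 per hour

hours_to_pay = billed_price / net_wage     # hours of work per billed hour bought
print(f"Hours the accountant must work per billed hour: {hours_to_pay:.2f}")
# => 3.33: the repairman must be at least 3 1/3 times as efficient at repairs
# before hiring him beats doing the job yourself.
```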

The same principle applies to exchange, with household and informal arrangements requiring far less administrative overhead than conventional retail. Food buying clubs run out of people’s homes, barter bazaars [1002] and freecycling networks, the imploding transaction costs of aggregating information and putting buyer and seller together on Craigslist—all involve little or no overhead cost. Projects like FreeCycle, in fact, kill two birds with one stone: they simultaneously provide a low-overhead alternative to conventional retail, and maximize the efficiency with which the alternative economy extracts the last drop of value from the waste byproducts of capitalism.

To take just one example, consider the enormous cost of factoring in the apparel industry. Because most large retailers don’t pay their apparel suppliers on time (delays of as much as six months are common), apparel producers must rely on factors to buy their accounts receivable at a heavy discount (“loan shark rates,” in the words of Eric Husman, an engineer who blogs on lean manufacturing issues—typically 15–20%). [1003] The requirement either to absorb several months’ expenses while awaiting payment, or to get timely payment only at a steep discount, is an enormous source of added cost which exerts pressure to make it up on volume through large batch size. Now the large retailers, helpfully, are introducing a new “Supplier Alliance Program,” which amounts to bringing the factoring operation in-house. [1004] That’s right: they actually “lend you the money they owe you” (in Husman’s words). Technically, the retailers aren’t actually lending the money, but rather extending their credit rating to cover your dealings with independent banks. The program is a response to the bankruptcy of several major factors in the recent financial crisis, and the danger that hundreds of vendors would go out of business in the absence of factoring. (Of course actually paying for orders on receipt would be beyond the meager resources of the poor big box chains.)
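To see why Husman reaches for the phrase “loan shark rates,” it helps to annualize the discount. A sketch under stated assumptions: the 15% discount and six-month delay are taken from the ranges quoted above, but actual terms vary from factor to factor.

```python
# Implied annual interest rate of selling a receivable at a discount.
# Figures are illustrative, drawn from the ranges cited in the text.
face_value = 100.0      # what the retailer will eventually pay
discount = 0.15         # the factor buys the receivable at a 15% discount
months_until_paid = 6   # retailers pay as much as six months late

advance = face_value * (1 - discount)        # the producer receives $85 now
periods_per_year = 12 / months_until_paid    # two six-month periods per year
annual_rate = (face_value / advance) ** periods_per_year - 1

print(f"Implied annual rate: {annual_rate:.1%}")  # => 38.4%
# At a 20% discount the implied rate is (1/0.8)**2 - 1, about 56% a year --
# which is why the producer feels pressure to make it up on volume.
```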

For the small apparel producer, in contrast, producing directly for an independent local retailer, for a local barter network, or for networked operations like Etsy, carries little or no overhead. Consider also the number of other industries in which something like the factoring system prevails (i.e., selling you, on credit, the rope to hang yourself with). A good example is the relationship Cargill and ADM have with family farmers: essentially a recreation of the 18th-century putting-out system. Kathleen Fasanella, a consultant to the small apparel industry who specializes—among other things—in applying lean principles to apparel manufacturing, is for this reason an enthusiastic supporter of pull distribution networks (farmers selling at farmers’ markets, craft producers selling on Etsy, etc.). [1005]

The shift to dispersed production in countless micro-enterprises also makes the alternative economy far less vulnerable to state taxation and imposition of artificial levels of overhead. In an economy of large-scale, conventional production, the required scale of capital outlays and resulting visibility of enterprises provides a physical hostage for the state’s enforcement of overhead-raising regulations and “intellectual property” laws.

The conventional enterprise also provides a much larger target for taxation, with much lower costs for enforcement. [1006] But as required physical capital outlays implode, and conventional manufacturing melts into a network of small machine shops and informal/household “hobby” shops, the targets become too small and dispersed to bother with.

This effect of rentier income, by the way, is just another example of a broader phenomenon we have been observing in various guises throughout this book: any increase in the minimum capital outlay or overhead required to carry out a function increases the scale of production necessary to service the fixed costs. Overhead is a baffle that disrupts the flow from effort to output, and has an effect on the productive economy comparable to that of constipation or edema on the human body.
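The baffle metaphor is just break-even arithmetic: the minimum scale of production rises in direct proportion to the fixed costs to be serviced. A minimal sketch, with all figures hypothetical:

```python
# Break-even output rises in direct proportion to fixed overhead.
# All numbers below are hypothetical, purely to illustrate the relationship.
def breakeven_units(fixed_costs, price, unit_cost):
    """Units that must be sold just to service the fixed costs."""
    return fixed_costs / (price - unit_cost)

price, unit_cost = 10.0, 6.0   # $4 of margin on every unit sold
for overhead in (1_000, 10_000, 100_000):
    units = breakeven_units(overhead, price, unit_cost)
    print(f"overhead ${overhead:>7,}: must sell {units:>8,.0f} units")
# Ten times the overhead means ten times the minimum scale of production,
# which is why low-overhead household producers can enter markets that
# high-overhead firms must abandon.
```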

On the Open Manufacturing list, Eric Hunting argues that one of the side-effects of the kind of relocalized flexible manufacturing we examined in Chapter Five is that increasing competition, easy diffusion of new technology and technique, and increasing transparency of cost structure will—between them—arbitrage the rate of profit to virtually zero and squeeze artificial scarcity rents and spot-market profits out of prices almost entirely.

What Open Manufacturing is doing is on the bleeding-edge of a general trend in industrial automation for progressively increasing productivity and production flexibility (mass customization/demand-driven flex production) with systems of decreasing scale and up-front cost. At the same time the economics of manufacturing has used up the potential of Globalization as a means to exploit geographic spot-market bargains in materials and labor costs and is now dealing in a world of increasingly homogeneous materials costs and expensive energy—and therefore transportation—costs. The efficiency of manufacturing logistics really matters now. It no longer makes economic sense to manufacture whole goods in faraway places no matter how cheap the labor is. And—though the executive class remains slow on the uptake as usual—the trend is toward localization of production with increasing flexibility.

So, ironically driven by the profit motive, commercial manufacturing is on a parallel track to the same goal as Open Manufacturing: progressive localization and diversification of production. I foresee this producing a progressive ‘commoditization’ of global economics. In other words, global trade will increasingly be trade of commodities—materials and components—because it no longer makes economic sense to move finished goods around when their transportation is so inefficient. Commodities trade is highly automated because commodities production is highly automated, produces uniform products, and deals in large volumes relative to the number of workers. Production costs are highly quantifiable when the amortized cost of equipment supersedes the human labor overhead and that tends to factor out the variability in that only remaining (and deliberately) ‘fuzzy’ valued commodity. The result is that there is increasing global price capitulation in the value of commodities—largely because it’s increasingly difficult to hide costs, find exclusive geographical spot-market bargains, or maintain exclusive distribution hegemonies.

Trading systems have a very high and steadily increasing quantitative awareness of the costs of everything and the projected demand and production capacity for everything. At a certain point they can algorithmically factor out profit and can start trading commodities for commodities without cash indexed to projected demand/production. Profit in trade is based on divergence in the perception of value between buyer and seller. Scarcity is often a perception created by hiding data—and that’s increasingly hard to do in a world where quantitative analysis trading knows more about an industry than the CEOs do. When everybody knows ahead of time what the concrete values of everything is and you have an actual open market where everyone has alternate sources for just about everything, profit becomes impossible. [1007]

H. The Stigmergic Non-Revolution

Kim Stanley Robinson, in the second volume of his Mars trilogy, made some interesting comments (through the mouth of one of his characters) on the drawbacks of traditional models of revolution:

“...[R]evolution has to be rethought. Look, even when revolutions have been successful, they have caused so much destruction and hatred that there is always some kind of horrible backlash. It’s inherent in the method. If you choose violence, then you create enemies who will resist you forever. And ruthless men become your revolutionary leaders, so that when the war is over they’re in power, and likely to be as bad as what they replaced.” [1008]

Arthur Silber, in similar vein, wrote that “with no exception in history that I can think of, violent revolutions on any scale lead to a state of affairs which is no better and frequently worse than that which the rebels seek to replace.” [1009]

A political movement is useful mainly for running interference, defending safe spaces in which we can build the real revolution—the revolution that matters. To the extent that violence is used, it should not be perceived by the public at large as a way of conquering anything, but as defensive force that raises the cost of government attacks on the counter-economy in a situation where the government is clearly the aggressor. The movement should avoid, at all costs, being seen as an attempt to impose a new “alternative” way of life on the “conventional” public, and instead strive to be seen as a fight to enable everyone to live their own lives the way they want. And even in such cases, non-cooperation and civil disobedience—while taking advantage of the possibilities of exposure that networked culture provides—are likely to be more effective than violent defense.

Rather than focusing on ways to shift the correlation of forces between the state’s capabilities for violence and ours, it makes far more sense to focus on ways to increase our capabilities of living how we want below the state’s radar. The networked forms of organization we’ve examined in Chapter Three and in this chapter are key to that process.

The focus on securing liberty primarily through political organization—organizing “one big movement” to make sure everybody is on the same page, before anyone can put one foot in front of the other—embodies all the worst faults of 20th-century organizational culture. What we need, instead, is to capitalize on the capabilities of network culture.

Network culture, in its essence, is stigmergic: that is, an “invisible hand” effect results from the several efforts of individuals and small groups working independently. Such independent actors may have a view to coordinating their efforts with a larger movement, and take the actions of other actors into account, but they do so without any single coordinating apparatus set over and above their independent authority.

In other words, we need a movement that works like Wikipedia at its best (without the deletionazis), or like open-source developers who independently tailor modular products to a common platform.

The best way to change “the laws,” in practical terms, is to make them irrelevant and unenforceable through counter-institution building and through counter-economic activity outside the state’s control. States claim all sorts of powers that they are utterly unable to enforce. It doesn’t matter what tax laws are on the books if most commerce is in encrypted currency of some kind and invisible to the state. It doesn’t matter how industrial patents enforce planned obsolescence, when a garage factory produces generic replacements and modular accessories for proprietary corporate platforms, and sells to such a small market that the costs of detecting and punishing infringement are prohibitive. It doesn’t matter that local zoning regulations prohibit people doing business out of their homes, when their clientele is so small they can’t be effectively monitored.

Without the ability of governments to enforce their claimed powers, the claimed powers themselves are about as relevant as the edicts of the Emperor Norton. That’s why Charles Johnson argues that it’s far more cost-effective to go directly after the state’s enforcement capabilities than to try to change the law.

In point of fact, if options other than electoral politics are allowed onto the table, then it might very well be the case that exactly the opposite course would be more effective: if you can establish effective means for individual people, or better yet large groups of people, to evade or bypass government enforcement and government taxation, then that might very well provide a much more effective route to getting rid of particular bad policies than getting rid of particular bad policies provides to getting rid of the government enforcement and government taxation. To take one example, consider immigration. If the government has a tyrannical immigration law in place..., then there are two ways you could go about trying to get rid of the tyranny. You could start with the worst aspects of the law, build a coalition, do the usual stuff, get the worst aspects removed or perhaps ameliorated, fight off the backlash, then, a couple election cycles later, start talking about the almost-as-bad aspects of the law, build another coalition, fight some more, and so on, and so forth, progressively whittling the provisions of the immigration law down until finally you have whittled it down to nothing, or as close to nothing as you might realistically hope for. Then, if you have gotten it down to nothing, you can now turn around and say, “Well, since we have basically no restrictions on immigration any more, why keep paying for a border control or internal immigration cops? Let’s go ahead and get rid of that stuff.” And then you’re done. The other way is the reverse strategy: to get rid of the tyranny by first aiming at the enforcement, rather than aiming at the law, by making the border control and internal immigration cops as irrelevant as you can make them. What you would do, then, is to work on building up more or less loose networks of black-market and grey-market operators, who can help illegal immigrants get into the country without being caught out by the Border Guard, who provide safe houses for them to stay on during their journey, who can help them get the papers that they need to skirt surveillance by La Migra, who can hook them up with work and places to live under the table, etc. etc. etc. To the extent that you can succeed in doing this, you’ve made immigration enforcement irrelevant. And without effective immigration enforcement, the state can bluster on as much as it wants about the Evil Alien Invasion; as a matter of real-world policy, the immigration law will become a dead letter. [1010]

It’s a principle anticipated over twenty years ago by Chuck Hammill, in an early celebration of the liberatory potential of network technology:

While I certainly do not disparage the concept of political action, I don’t believe that it is the only, nor even necessarily the most cost-effective path toward increasing freedom in our time. Consider that, for a fraction of the investment in time, money and effort I might expend in trying to convince the state to abolish wiretapping and all forms of censorship—I can teach every libertarian who’s interested how to use cryptography to abolish them unilaterally....

....Suppose this hungry Eskimo never learned to fish because the ruler of his nation-state had decreed fishing illegal....

....However, it is here that technology—and in particular information technology—can multiply your efficacy literally a hundredfold. I say “literally,” because for a fraction of the effort (and virtually none of the risk) attendant to smuggling in a hundred fish, you can quite readily produce a hundred Xerox copies of fishing instructions....

And that’s where I’m trying to take The LiberTech Project. Rather than beseeching the state to please not enslave, plunder or constrain us, I propose a libertarian network spreading the technologies by which we may seize freedom for ourselves....

So, the next time you look at the political scene and despair, thinking, “Well, if 51% of the nation and 51% of this State, and 51% of this city have to turn Libertarian before I’ll be free, then somebody might as well cut my goddamn throat now, and put me out of my misery”—recognize that such is not the case. There exist ways to make yourself free. [1011]

This coincides to a large extent with what Dave Pollard calls “incapacitation”: “rendering the old order unable to function by sapping what it needs to survive.” [1012]

But suppose if, instead of waiting for the collapse of the market economy and the crumbling of the power elite, we brought about that collapse, guerrilla-style, by making information free, by making local communities energy self-sufficient, and by taking the lead in biotech away from government and corporatists (the power elite) by working collaboratively, using the Power of Many, Open Source, unconstrained by corporate allegiance, patents and ‘shareholder expectations’? [1013]

In short, we undermine the old corporate order, not by the people we elect to Washington, or the policies those people make, but by how we do things where we live. A character in Marge Piercy’s Woman on the Edge of Time, describing the revolution that led to her future decentralist utopia, summed it up perfectly. Revolution, she said, was not uniformed parties, slogans, and mass-meetings. “It’s the people who worked out the labor-and-land intensive farming we do. It’s all the people who changed how people bought food, raised children, went to school! ....Who made new unions, withheld rent, refused to go to wars, wrote and educated and made speeches.” [1014]

One of the benefits of stigmergic organization, as we saw in earlier discussions of it, is that individual problems are tackled by the self-selected individuals and groups best suited to deal with them—and that their solutions are then passed on, via the network, to everyone who can benefit from them. DRM may be so hard to crack that only a handful of geeks can do it; but that doesn’t mean, as the music and movie industries had hoped, that “piracy” will be economically irrelevant. When a handful of geeks figure out how to crack DRM today, thanks to stigmergic organization, grandmas will be downloading DRM-free “pirated” music and movies at torrent sites next week.

Each individual innovation in ways of living outside the control of the corporate-state nexus, of the kind mentioned by Pollard and Piercy, creates a demonstration effect: You can do this too! Every time someone figures out a way to produce “pirated” knockoff goods in a microfactory in defiance of a mass-production corporation’s patents, or build a cheap and livable house in defiance of the contractor-written building code, or run a microbakery or unlicensed hair salon out of their home with virtually zero overhead in defiance of local zoning and licensing regulations, they’re creating another hack to the system, and adding it to the shared culture of freedom. And the more they’re able to do business with each other through encrypted currencies and organize the kind of darknet economy described by John Robb, the more the counter-economy becomes a coherent whole opaque to the corporate state.

Statism will ultimately end, not as the result of any sudden and dramatic failure, but as the cumulative effect of a long series of little things. The rising costs of enculturing individuals to the state’s view of the world, and of dissuading a large enough majority of people from disobeying when they’re pretty sure they’re not being watched, will amount to a death of a thousand cuts. More and more of the state’s activities, from the perspective of those running things, will simply cost more (in terms not only of money but of plain mental aggravation) than they’re worth. The decay of ideological hegemony and the decreased feasibility of enforcement will do the same thing to the state that file-sharing is now doing to the RIAA.

One especially important variant of the stigmergic principle is educational and propaganda effort. Even though organized, issue-oriented advocacy groups arguably can have a significant effect on the state, in pressuring the state to cease or reduce suppression of the alternative economy, the best way to maximize bang for the buck in such efforts is simply to capitalize on the potential of network culture: that is, put maximum effort into just getting the information out there, giving the government lots and lots of negative publicity, and then “letting a thousand flowers bloom” when it comes to efforts to leverage it into political action. That being done, the political pressure itself will be organized by many different individuals and groups operating independently, spurred by their own outrage, without even sharing any common antistatist ideology.

In the case of any particular state abuse of power or intervention into the economy, there are likely to be countless subgroups of people who oppose it for any number of idiosyncratic reasons of their own, and not from any single dogmatic principle. If we simply expose the nature of the state action and all its unjust particular effects, it will be leveraged into action by people in numbers many times larger than those of the particular alternative economic movement we are involved in. Even people who do not particularly sympathize with the aims of a counter-economic movement may be moved to outrage if the state’s enforcers can be put in a position of looking like Bull Connor. As John Robb says: “The use of the media to communicate intent and to share innovation with other insurgent groups is a staple of open source insurgency....” [1015] The state and the large corporations are a bunch of cows floundering around in the Amazon. Just get the information out there, and the individual toothy little critters in the school of piranha, acting independently, will take care of the skeletonizing on their own.

A good example, in the field of civil liberties, is what Radley Balko does every day through his own efforts at exposing the cockroaches of law enforcement to the kitchen light—or what the CNN series on gross civil forfeiture abuses in that town in Texas accomplished. When Woodward and Bernstein uncovered Watergate, they didn’t start trying to organize a political movement to capitalize on it. They just published the information, and a firestorm resulted.

This is an example of what Robb calls “self-replication”: “create socially engineered copies of your organization through the use of social media. Basically, this means providing the motivation, knowledge, and focus necessary for an unknown person (external and totally unconnected to your group) to conduct operations that advance your group’s specific goals (or the general goals of the open source insurgency).” [1016]

It’s because of increased levels of general education and the diffusion of more advanced moral standards that countries around the world have had to rename their ministries of war “ministries of defense.” It’s for the same reason that, in the twentieth and twenty-first centuries, governments could no longer launch wars for reasons of naked realpolitik on the model of the dynastic wars of two centuries earlier; rather, they had to manufacture pretexts based on self-defense. Hence the mistreatment of ethnic Germans in Danzig as the pretext for Hitler’s invasion of Poland, and the Tonkin Gulf incident and the Kuwaiti incubator babies as pretexts for American aggressions. That’s not to say that the pretexts had to be very good to fool the general public; but network culture is changing that as well, as witnessed by the contrasting levels of anti-war mobilization in the first and second Gulf wars.

More than one thinker on network culture has argued that network technology and the global justice movements piggybacked on it are diffusing more advanced global moral norms and putting increasing pressure on governments that violate those norms. [1017] Global activism and condemnation of violations of human rights in countries like China and Iran—like American nationwide exposure and boycotts of measures like Arizona’s “papers, please” law—are an increasing source of embarrassment and pressure. NGOs and global civil society are emerging as a powerful countervailing force against both national governments and global corporations. As we saw in the subsection on networked resistance in Chapter Three, governments and corporations frequently can find themselves isolated and exposed in the face of an intensely hostile global public opinion quite suddenly, thanks to networked global actors.

In light of all this, the most cost-effective “political” effort is simply making people understand that they don’t need anyone’s permission to be free. Start telling them right now that the law is unenforceable, and disseminating knowledge as widely as possible on the most effective ways of evading it. Publicize examples of ways we can live our lives the way we want, with institutions of our own making, under the radar of the state’s enforcement apparatus: local currency systems, free clinics, ways to protect squatter communities from harassment, and so on. Educational efforts to undermine the state’s moral legitimacy, educational campaigns to demonstrate the unenforceability of the law, and efforts to develop and circulate means of circumventing state control, are all things best done on a stigmergic basis.

Critics of “digital communism” like Jaron Lanier and Mark Helprin, who condemn network culture for submerging “individual authorial voice” in the “collective,” are missing the point. Stigmergy synthesizes the highest realization of both individualism and collectivism, and represents the most absolute form of each of them, without either being limited or qualified in any way.

Stigmergy is not “collectivist” in the traditional sense, as it was understood in the days when a common effort on any significant scale required a large organization to represent the collective, and the coordination of individual efforts through a hierarchy. But it is the ultimate realization of collectivism, in that it removes the transaction cost of free collective action by many individuals.

It is the ultimate in individualism because all actions are the free actions of individuals, and the “collective” is simply the sum total of several individual actions. Every individual is free to formulate any innovation he sees fit, without any need for permission from the collective. Every individual or voluntary association of individuals is free to adopt the innovation, or not, as they see fit. The extent of adoption of any innovation is based entirely on the unanimous consent of every voluntary grouping that adopts it. Each innovation is modular, and may be adopted into any number of larger projects where it is found useful. Any grouping where there is disagreement over adoption may fork and replicate their project with or without the innovation.

Group action is facilitated with greater ease and lower transaction costs than ever before, but all “group actions” are the unanimous actions of individuals.

I. The Singularity

The cumulative effect of all these superior efficiencies of peer production, and of the informal and household economy, is to create a singularity.

The problem, for capital, is that—as we saw in previous chapters—the miniaturization and cheapness of physical capital, and the emergence of networked means of aggregating investment capital, are rendering capital increasingly superfluous.

The resulting crisis of realization is fundamentally threatening. Not only is capital superfluous in the immaterial realm, but the distinction between the immaterial and material realms is becoming increasingly porous. Material production, more and more, is taking on the same characteristics that caused the desktop computer to revolutionize production in the immaterial realm.

The technological singularity means that labor is ceasing to depend on capital, and on wage employment by capital, for its material support.

For over two centuries, as Immanuel Wallerstein observed, the system of capitalist production based on wage labor has depended on the ability to externalize many of its reproduction functions on the non-monetized informal and household economies, and on organic social institutions like the family which were outside the cash nexus.

Historically, capital has relied upon its superior bargaining power to set the boundary between the money and social economies to its own advantage. The household and informal economies have been allowed to function to the extent that they bear reproduction costs that would otherwise have to be internalized in wages; but they have been suppressed (as in the Enclosures) when they threaten to increase in size and importance to the point of offering a basis for independence from wage labor.

The employing classes’ fear of the subsistence economy made perfect sense. For as Kropotkin asked:

If every peasant-farmer had a piece of land, free from rent and taxes, if he had in addition the tools and the stock necessary for farm labour—Who would plough the lands of the baron? Everyone would look after his own.... If all the men and women in the countryside had their daily bread assured, and their daily needs already satisfied, who would work for our capitalist at a wage of half a crown a day, while the commodities one produces in a day sell in the market for a crown or more? [1018]

“The household as an income-pooling unit,” Wallerstein writes, “can be seen as a fortress both of accommodation to and resistance to the patterns of labor-force allocation favored by accumulators.” Capital has tended to favor severing the nuclear family household from the larger territorial community or extended kin network, and to promote an intermediate-sized income-pooling household. The reason is that too small a household falls so far short as a basis for income pooling that the capitalist is forced to commodify too large a portion of the means of subsistence, i.e. to internalize the cost in wages. [1019] It is in the interest of the employer not to render the worker totally dependent on wage income, because without the ability to carry out some reproduction functions through the production of use value within the household subsistence economy, the worker will be “compelled to demand higher real wages....” [1020] On the other hand, too large a household meant that “the level of work output required to ensure survival was too low,” and “diminished pressure to enter the wage-labor market.” [1021]

It’s only common sense that when there are multiple wage-earners in a household, their dependence on any one job is reduced, and the ability of each member to walk away from especially onerous conditions is increased: “While a family with two or more wage-earners is no less dependent on the sale of labor power in general, it is significantly shielded from the effects of particular unemployment...” [1022] And in fact it is less dependent on the sale of labor power in general, to the extent that the per capita overhead of fixed expenses to be serviced falls as household size increases. And the absolute level of fixed expenses can also be reduced by substituting the household economy for wage employment, in part, as the locus of value creation. As we saw Borsodi put it in the previous chapter, “[a] little money, where wages are joined to the produce of the soil, will go a long way....”
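The income-pooling effect is simple division. In the hypothetical sketch below, fixed household expenses (rent, heat, and the like) grow more slowly than headcount, so each added earner shrinks the overhead any single job must cover:

```python
# Per-earner overhead falls as the income-pooling household grows.
# All figures are hypothetical; shared fixed costs scale sublinearly with size.
def per_earner_overhead(base_fixed, extra_per_member, earners):
    """Monthly fixed expenses divided across the household's wage-earners."""
    fixed = base_fixed + extra_per_member * earners
    return fixed / earners

for earners in (1, 2, 3, 4):
    share = per_earner_overhead(1500, 300, earners)
    print(f"{earners} earner(s): ${share:,.0f} of fixed expenses "
          "per earner per month")
# 1 -> $1,800, 2 -> $1,050, 3 -> $800, 4 -> $675: each member needs less
# wage income to keep the household solvent, so each can more easily
# walk away from a bad job.
```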

The new factor today is a revolutionary shift in competitive advantage from wage labor to the informal economy. The rapid growth of technologies for home production, based on small-scale electrically powered machinery and new forms of intensive cultivation, has radically altered the comparative efficiencies of large- and small-scale production. This was pointed out by Borsodi almost eighty years ago, and the trend has continued since. The current explosion in low-cost manufacturing technology promises to shift competitive advantage in the next decade much more than in the entire previous century.

The practical choice presented to labor by this shift of comparative advantage was ably stated by Marcin Jakubowski, whose Factor E Farm is one of the most notable attempts to integrate open manufacturing and digital fabrication with an open design repository:

Friends and family still harass me. They still keep telling me to ‘get a real job.’ I’ve got a good response now. It is:

Take a look at the last post on the soil pulverizer

Consider ‘getting a real job at $100k,’ a well-paid gig in The System. Tax and expense take it down to $50k, saved, if you’re frugal.

Ok. I can ‘get a real job’, work for 6 months, and then buy a Soil Pulverizer for $25k. Or, I make my own in 2 weeks at $200 cost, and save the world while I’m at it. Which one makes more sense to you? You can see which one makes more sense to me. It’s just economics. [1023]

In other words, how ya gonna keep ‘em down in the factory, when the cost of getting your own garage factory has fallen to two months’ wages?
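Jakubowski’s comparison reduces to a few lines of arithmetic, using only his own figures from the passage above:

```python
# Jakubowski's two routes to a soil pulverizer, using his own figures.
salary = 100_000          # 'a real job at $100k'
net_savings_rate = 0.5    # tax and expense take it down to $50k saved per year
machine_price = 25_000    # market price of a soil pulverizer
diy_cost = 200            # materials cost of building your own
diy_weeks = 2             # his stated build time

months_of_wage_labor = machine_price / (salary * net_savings_rate / 12)
print(f"Buy it:   {months_of_wage_labor:.0f} months of well-paid wage labor")
print(f"Build it: {diy_weeks} weeks and ${diy_cost} in materials")
# => 6 months versus 2 weeks; the garage-factory option costs less than
# 1% of the wage-labor option. 'It's just economics.'
```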

As James O’Connor described the phenomenon in the 1980s, “the accumulation of stocks of means and objects of reproduction within the household and community took the edge off the need for alienated labor.”

Labor-power was hoarded through absenteeism, sick leaves, early retirement, the struggle to reduce days worked per year, among other ways. Conserved labor-power was then expended in subsistence production.... The living economy based on non- and anti-capitalist concepts of time and space went underground in the reconstituted household; the commune; cooperatives; the single-issue organization; the self-help clinic; the solidarity group. Hurrying along the development of the alternative and underground economies was the growth of underemployment... and mass unemployment associated with the crisis of the 1980s. “Regular” employment and union-scale work contracted, which became an incentive to develop alternative, localized modes of production.... ...New social relationships of production and alternative employment, including the informal and underground economies, threatened not only labor discipline, but also capitalist markets.... Alternative technologies threatened capital’s monopoly on technological development... Hoarding of labor-power threatened capital’s domination of production. Withdrawal of labor-power undermined basic social disciplinary mechanisms.... [1024]

More recently, “Eleutheros,” of the How Many Miles from Babylon? blog, described the sense of freedom that results from a capacity for independent subsistence:

...if we padlocked the gate to this farmstead and never had any trafficking with Babylon ever again, we could still grow corn and beans in perpetuity.... What is this low tech, low input, subsistence economy all about, what does it mean to us? It is much like Jack Sparrow’s remark to Elizabeth Swann when... he told her what the Black Pearl really was, it was freedom. Like that to us our centuries old agriculture represents for us a choice. And having a choice is the very essence and foundation of our escape from Babylon.

...To walk away from Babylon, you must have choices.... Babylon, as with any exploitative and controlling system, can only exist by limiting and eliminating your choices. After all, if you actually have choices, you may in fact choose the things that benefit and enhance you and your family rather than things that benefit Babylon. Babylon must eliminate your ability to choose....

So I bring up my corn field in way of illustration of what a real choice looks like. We produce... our staple bread with no input at all from Babylon. So we always have the choice to eat that instead of what Babylon offers. We also buy wheat in bulk and make wheat bread sometimes, but if (when, as it happened this year) the transportation cost or scarcity of wheat makes the price beyond the pale, we can look at it and say, “No, not going there, we will just go home and have our cornbread and beans.” Likewise we sometimes buy food from stands and stores, and on a few occasions we eat out. But we always have the choice, and if we need to, we can enforce that choice for months on end....

Your escape from Babylon begins when you can say, “No, I have a choice. Oh, I can dine around Babylon’s table if I choose, but if the Babylonian terms and conditions are odious, then I don’t have to.” [1025]

And the payoff doesn’t require a total economic implosion. This is a winning strategy even if the money economy and division of labor persist indefinitely to a large extent—as I think they almost surely will—and most people continue to meet a considerable portion of their consumption needs through money purchases. The end-state, after Peak Oil and the other terminal crises of state capitalism have run their course, is apt to bear a closer resemblance to Warren Johnson’s Muddling Toward Frugality and Brian Kaller’s “Return to Mayberry” than to Jim Kunstler’s World Made by Hand. The knowledge that you are debt-free and own your living space free and clear, and that you could keep a roof over your head and food on the table without wage labor indefinitely, if you had to, has an incalculable effect on your bargaining power here and now, even while capitalism persists.

As Ralph Borsodi observed almost eighty years ago, his ability to “retire” on the household economy for prolonged periods of time—and potential employers’ knowledge that he could do so—enabled him to negotiate far better terms for what outside work he did decide to accept. He described, from his own personal experience, the greatly increased bargaining power of labor when the worker has the ability to walk away from the table:

...Eventually income began to go up as I cut down the time I devoted to earning money, or perhaps it would be more accurate to say I was able to secure more for my time as I became less and less dependent upon those to whom I sold my services.... This possibility of earning more, by needing to work less, is cumulative and is open to an immense number of professional workers. It is remarkable how much more appreciative of one’s work employers and patrons become when they know that one is independent enough to decline unattractive commissions. And of course, if the wage-earning classes were generally to develop this sort of independence, employers would have to compete and bid up wages to secure workers instead of workers competing by cutting wages in order to get jobs. [1026]

....Economic independence immeasurably improves your position as a seller of services. It replaces the present “buyer’s market” for your services, in which the buyer dictates terms, with a “seller’s market,” in which you dictate terms. It enables you to pick and choose the jobs you wish to perform and to refuse to work if the terms, conditions, and the purposes do not suit you. The next time you have your services to sell, see if you cannot command a better price for them if you can make the prospective buyer believe that you are under no compulsion to deal with him. [1027]

...[T]he terms upon which an exchange is made between two parties are determined by the relative extent to which each is free to refuse to make the exchange.... The one who was “free” (to refuse the exchange), dictated the terms of the sale, and the one who was “not free” to refuse, had to pay whatever price was exacted from him. [1028]

Colin Ward, in “Anarchism and the informal economy,” envisioned a major shift from wage labor to the household economy:

[Jonathan Gershuny of the Science Policy Research Unit at Sussex University] sees the decline of the service economy as accompanied by the emergence of a self-service economy in the way that the automatic washing machine in the home can be said to supersede the laundry industry. His American equivalent is Scott Burns, author of The Household Economy, with his claim that ‘America is going to be transformed by nothing more or less than the inevitable maturation and decline of the market economy. The instrument for this positive change will be the household—the family—revitalized as a powerful and relatively autonomous productive unit’.

The only way to banish the spectre of unemployment is to break free from our enslavement to the idea of employment.... The first distinction we have to make then is between work and employment. The world is certainly short of jobs, but it has never been, and never will be, short of work.... The second distinction is between the regular, formal, visible and official economy, and the economy of work which is not employment....

...Victor Keegan remarks that ‘the most seductive theory of all is that what we are experiencing now is nothing less than a movement back towards an informal economy after a brief flirtation of 200 years or so with a formal one’. We are talking about the movement of work back into the domestic economy.... [1029]

Burns, whom Ward cited above, saw the formation of communes, the buying of rural homesteads, and other aspects of the back-to-the-land movement as an attempt

to supplant the marketplace entirely. By building their own homes and constructing them to minimize energy consumption, by recycling old cars or avoiding the automobile altogether, by building their own furniture, sewing their own clothes, and growing their own food, they are minimizing their need to offer their labor in the marketplace. They pool it, instead, in the extended household.... [T]he new homesteader can internalize 70–80 per cent of all his needs in the household; his money work is intermittent when it can’t be avoided altogether. [1030]

To reiterate: we’re experiencing a singularity in which it is becoming impossible for capital to prevent the supply of an increasing proportion of the necessities of life from shifting from mass-produced goods purchased with wages to small-scale production in the informal and household sector. The upshot is likely to be something like Vinay Gupta’s “Unplugged” movement, in which the possibility of low-cost, comfortable subsistence off the grid recreates exactly the situation whose prospect motivated the propertied classes to carry out the Enclosures: a situation in which the majority of the public can take wage labor or leave it; in which the average person, if he takes it at all, works only on his own terms, when he needs supplemental income for luxury goods and the like; and in which, even if he considers supplemental income necessary in the long run for an optimal standard of living, he can afford in the short run to quit work and live off his own resources for prolonged periods of time, while negotiating for employment on the most favorable terms. It will be a society in which workers, not employers, have the greater ability to walk away from the table. It will, in short, be the kind of society Wakefield lamented in the colonial world of cheap and abundant land: a society in which labor is hard to get on any terms, and almost impossible to hire at a wage low enough to produce significant profit.

Gupta’s short story “The Unplugged” [1031] related his vision of how such a singularity would affect life in the West.

To “get off at the top” requires millions and millions of dollars of stored wealth. Exactly how much depends on your lifestyle and rate of return, but it’s a lot of money, and it’s volatile depending on economic conditions. A crash can wipe out your capital base and leave you helpless, because all you had was shares in a machine. So we Unpluggers found a new way to unplug: an independent life-support infrastructure and financial architecture—a society within society—which allowed anybody who wanted to “buy out” to “buy out at the bottom” rather than “buying out at the top.” If you are willing to live as an Unplugger does, your cost to buy out is only around three months of wages for a factory worker, the price of a used car. You never need to “work” again—that is, for money which you spend to meet your basic needs.
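The distance between the two buy-outs is easy to quantify. In the sketch below, the 4% “safe withdrawal” rule of thumb and the wage figure are my assumptions, not Gupta’s; only the “three months of wages” is his:

```python
# 'Buy out at the top' vs. 'buy out at the bottom', with assumed figures.
annual_spending = 50_000    # conventional middle-class lifestyle (assumed)
withdrawal_rate = 0.04      # conventional 'safe withdrawal' rule of thumb

buy_out_top = annual_spending / withdrawal_rate    # capital needed: $1.25M
factory_wage_monthly = 2_500                       # assumed factory wage
buy_out_bottom = 3 * factory_wage_monthly          # Gupta: ~3 months' wages

print(f"Top:    ${buy_out_top:,.0f} in volatile financial assets")
print(f"Bottom: ${buy_out_bottom:,.0f} in owned life-support infrastructure")
print(f"Ratio:  {buy_out_top / buy_out_bottom:.0f}x")   # roughly 167x cheaper
```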

The more technical advances lower the capital outlays and overhead for production in the informal and household economy, the more the economic calculus is shifted in the way described by Jakubowski above.

The basic principle of Unplugging was to combine “Gandhi’s Goals” (“self-sufficiency,” or “the freedom that comes from owning your own life support system”) with “Fuller’s Methods” (getting more from less). Such freedom

allows us to disconnect from the national economy as a way of solving the problems of our planet one human at a time. But Gandhi’s goals don’t scale past the lifestyle of a peasant farmer and many westerners view that way of life as unsustainable for them personally.... Fuller’s “do more with less” was a method we could use to attain self-sufficiency with a much lower capital cost than “buy out at the top.” An integrated, whole-systems-thinking approach to a sustainable lifestyle—the houses, the gardening tools, the monitoring systems—all of that stuff was designed using inspiration from Fuller and later thinkers inspired by efficiency. The slack—the waste—in our old ways of life were consuming 90% of our productive labor to maintain. A thousand dollar a month combined fuel bill is your life energy going down the drain because the place you live sucks your life way [sic] in waste heat, which is waste money, which is waste time. Your car, your house, the portion of your taxes which the Government spends on fuel, on electricity, on waste heat... all of the time you spent to earn that money is wasted to the degree those systems are inefficient systems, behind best practices!
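Gupta’s “waste money is waste time” point converts directly into hours of labor. The $1,000 monthly fuel bill is from the story; the take-home wage below is an assumption supplied for illustration:

```python
# Hours of life spent each year servicing an inefficient household's fuel bill.
monthly_fuel_bill = 1_000   # from the story: 'a thousand dollar a month'
net_hourly_wage = 20.0      # assumed take-home wage, for illustration only

annual_cost = monthly_fuel_bill * 12              # $12,000 a year
hours_for_fuel = annual_cost / net_hourly_wage    # hours worked to pay it
weeks = hours_for_fuel / 40                       # in 40-hour work-weeks
print(f"{hours_for_fuel:.0f} hours (~{weeks:.0f} work-weeks) a year "
      "spent earning money that leaves as waste heat")
# => 600 hours, about 15 work-weeks: an inefficient dwelling operates,
# in effect, as a tax on its occupant's labor.
```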

James L. Wilson, in a vignette of family life in the mid-21st century, writes of ordinary people seceding from the wage system and meeting as many of their needs as possible locally, primarily as a response to the price increases from Peak Oil—but in so doing, also regaining control of their lives and ending their dependence on the corporation and the state.

“Well, you see all these people working on their gardens? They used to not be here. People had grass lawns, and would compete with each other for having the greenest, nicest grass. But your gramma came home from the supermarket one day, sat down, and said, ‘That’s it. We’re going to grow our own food.’ And the next spring, she planted a vegetable garden where the grass used to be.

“And boy, were some of the neighbors mad. The Homeowners Association sued her. They said the garden was unsightly. They said that property values would fall. But then, the next year, more people started planting their own gardens.

“And not just their lawns. People started making improvements on their homes, to make them more energy-efficient. They didn’t do it to help the environment, but to save money. People in the neighborhood started sharing ideas and working together, when before they barely ever spoke to each other....

“And people also started buying from farmer’s markets, buying milk, meat, eggs and produce straight from nearby farmers. This was fresher and healthier than processed food. They realized they were better off if the profits stayed within the community than if they went to big corporations far away.

“This is when your gramma, my Mom, quit her job and started a bakery from home. It was actually in violation of the zoning laws, but the people sided with gramma against the government. When the government realized it was powerless to crack down on this new way of life, and the people realized they didn’t have to fear the government, they became free. And so more and more people started working from home. Mommies and Daddies used to have different jobs in different places, but now more and more of them are in business together in their own home, where they’re close to their children instead of putting them in day care.”.... [1032]

We have seen throughout this chapter, in terms of a number of different conceptual models—Robb’s STEMI compression, Ceesay’s economies of agility, Gupta’s distributed infrastructure, and Cravens’ productive recursion—the superiority of the alternative economy to the corporate capitalist economy. All these superiorities can be summarized as the ability to make better use of material inputs than capitalism does, and the ability to make use of the waste inputs of capitalism.

Localized, small-scale economies are the rats in the dinosaurs’ nests. The informal and household economy operates more efficiently than the capitalist economy, and can function on the waste byproducts of capitalism. It is resilient and replicates virally. In an environment in which resources for technological development have been almost entirely diverted toward corporate capitalism, it takes technologies that were developed to serve corporate capitalism, adapts them to small-scale production, and uses them to destroy corporate capitalism. In fact, it’s almost as though the dinosaurs themselves had funded a genetic research lab to breed mammals: “Let’s reconfigure the teeth so they’re better for sucking eggs, and ramp up the metabolism to survive a major catastrophe—like, say, an asteroid collision. Nah, I don’t really know what it would be good for—but what the fuck, the Pangean Ministry of Defense is paying for it!”

To repeat, there are two economies competing: their old economy of bureaucracy, high overhead, enormous capital outlays, and cost-plus markup, and our new economy of agility and low overhead. And in the end... we will bury them.

Appendix: The Singularity in the Third World

If the coming singularity enables the producing classes in the industrialized West to defect from the wage system, in the Third World it may enable them to skip that stage of development altogether. Gupta concluded “The Unplugged” with a hint about how the principle might be applied in the Third World: “We encourage the developing world to Unplug as the ultimate form of Leapfrogging: skip hypercapitalism and anarchocapitalism and democratic socialism entirely and jump directly to Unplugging.”

Gupta envisions a corresponding singularity in the Third World when the cost of an Internet connection, through cell phones and other mobile devices, falls low enough to be affordable by impoverished villagers. At that point, the transaction costs which hampered previous attempts at disseminating affordable intermediate technologies in the Third World, like Village Earth’s Appropriate Technology Library or Schumacher’s Intermediate Technology Development Group, will finally be overcome by digital network technology.

It is inevitable that the network will spread everywhere across the planet, or very nearly so. Already the cell phone has reached 50% of the humans on the planet. As technological innovation transforms the ordinary cell phone into a little computer, and ordinary cell services into connections to the Internet, the population of the internet is going to change from being predominantly educated westerners to being mainly people in poorer countries, and shortly after that, to being predominantly people living on a few dollars a day....

...Most people are very poor, and as the price of a connection to the Internet falls to a level they can afford, as they can afford cell phones now, we’re going to get a chance to really help these people get a better life by finding them the information resources they need to grow and prosper.

Imagine that you are a poor single mother in South America who lives in a village without a clean water source. Your child gets sick now and again from the dirty water, and you feel there is nothing you can do, and worry about their survival. Then one of your more prosperous neighbors gets a new telephone, and there’s a video which describes how to purify water [with a solar purifier made from a two-liter soda bottle]. It’s simple, in your language, and describes all the basic steps without showing anything which requires schooling to understand. After a while, you master the basic practical skills—the year or two of high school you caught before having the child and having to work helps. But then you teach your sisters, and none of the kids get sick as often as they used to… life has improved because of the network. Then comes solar cookers, and improved stoves, and preventative medicine, and better agriculture [earlier Gupta mentions improved green manuring techniques], and diagnosis of conditions which require a doctor’s attention, with a GPS map and calendar of when the visiting doctors will be in town again. [1033]

The revolution is already here, according to a New York Times story. Cell phones, with service plans averaging $5 a month, have already spread to a third of the population of India. That means that mobile phones, with Internet service, have “seeped down the social strata, into slums and small towns and villages, becoming that rare Indian possession to traverse the walls of caste and region and class; a majority of subscribers are now outside the major cities and wealthiest states.” And cell phone connections are mushrooming: the 15 million new connections added in March 2009 alone amount to something like a 45% annual growth rate on the 400 million currently in use—a rate which, if it continues, will mean universal cell phone ownership within five years. [1034]
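The arithmetic behind that projection is worth spelling out. Here is a quick back-of-the-envelope check, taking India’s population as roughly 1.15 billion (an assumed round figure for illustration; the Times story supplies only the subscriber numbers):

\[
\frac{15\ \text{million new connections/month} \times 12}{400\ \text{million in use}} \approx 45\%\ \text{per year}
\]

\[
400 \times 1.45^{t} \ge 1150 \quad \Longrightarrow \quad t \ge \frac{\ln(1150/400)}{\ln 1.45} \approx 2.8\ \text{years}
\]

Even without compounding, adding 180 million connections a year to a 400 million base crosses 1.15 billion in a little over four years, so the five-year horizon holds on either assumption.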

Interestingly, Jeff Vail predicts that this increased connectivity, combined with particularly severe trends toward the hollowing out of the nation-state (see Chapter Three), will make India a pioneer in the early development of the Diagonal Economy (see Chapter Six).

...I do want to give one location-specific example of where I think this trend toward the degradation of the Nation-State construct will be especially severe: India. No, I don’t think India will collapse (though there will be plenty of stories of woe), nor that the state government that occupies most of the geographic territory of “India” will collapse (note that careful wording). Rather, I think that the trend for a disconnection between any abstract notion of “Nation” and a unitary state in India will become particularly pronounced over the course of 2010. This is already largely apparent in India, but look for it to become more so. While Indian business and economy will fare decently well in 2010 from an international trade perspective, the real story will be a rising failure of this success to be effectively distributed by the government outside of a narrow class of urban middle class. It will instead be a rising connectivity and self-awareness of their situation among India’s rural poor, resulting in an increasing push for localized self-sufficiency and resiliency of food production (especially the “tipping” of food forests and perennial polycultures), that will most begin to tear at the relevancy of India’s central state government. In India there is a great potential for the beginnings of the Diagonal Economy to emerge in 2010. [1035]

Bibliography

“100kGarages is Building a MakerBot.” 100kGarages, October 17, 2009 < blog.100kgarages.com >.

“270-day libel case goes on and on...,” Daily Telegraph , June 28, 1996 < www.mcspotlight.org >.

Scott Adams. “Ridesharing in the Future.” Scott Adams Blog , January 21, 2009 < dilbert.com >.

Gar Alperovitz, Ted Howard, and Thad Williamson. “The Cleveland Model.” The Nation , February 11, 2010 < www.thenation.com >.

Lloyd Alter. “Ponoko + ShopBot = 100kGarages: This Changes Everything in Downloadable Design.” Treehugger , September 16, 2009 < www.treehugger.com >.

Oscar Ameringer. “Socialism for the Farmer Who Farms the Farm.” Rip-Saw Series No. 15 (Saint Louis: The National Rip-Saw Publishing Co., 1912).

Beatrice Anarow, Catherine Greener, Vinay Gupta, Michael Kinsley, Joanie Henderson, Chris Page and Kate Parrot, Rocky Mountain Institute. “Whole-Systems Framework for Sustainable Consumption and Production.” Environmental Project No. 807 (Danish Environmental Protection Agency, Ministry of the Environment, 2003), p. 24. < files.howtolivewiki.com >.

Chris Anderson. Free: The Future of a Radical Price (New York: Hyperion, 2009).

Anderson. “In the Next Industrial Revolution, Atoms Are the New Bits.” Wired, January 25, 2010 < www.wired.com >.

Poul Anderson. Orion Shall Rise (New York: Pocket Books, 1983).

Massimo de Angelis. “Branding + Mingas + Coops = Salinas.” the editor’s blog , March 26, 2010 < www.commoner.org.uk >.

John Arquilla and David Ronfeldt. The Advent of Netwar MR-789 (Santa Monica, CA: RAND, 1996) < www.rand.org >.

Arquilla and Ronfeldt. “Fighting the Network War,” Wired , December 2001 < www.wired.com >.

Arquilla and Ronfeldt. “Introduction,” in Arquilla and Ronfeldt, eds., “Networks and Netwars: The Future of Terror, Crime, and Militancy” MR-1382-OSD (Santa Monica: Rand, 2001) < www.rand.org >.

Arquilla and Ronfeldt. Swarming & the Future of Conflict DB-311 (Santa Monica, CA: RAND, 2000), iii < www.rand.org >.

Arquilla, Ronfeldt, Graham Fuller, and Melissa Fuller. The Zapatista “Social Netwar” in Mexico MR-994-A (Santa Monica: Rand, 1998) < www.rand.org >.

Adam Arvidsson. “Review: Cory Doctorow, The Makers.” P2P Foundation Blog , February 24, 2010 < blog.p2pfoundation.net >.

Arvidsson. “The Makers—again: or the need for keynesian management of abundance,” P2P Foundation Blog , February 25, 2010 < blog.p2pfoundation.net >.

Associated Press. “Retail sales fall after Cash for Clunkers ends,” MSNBC, October 14, 2009 < www.msnbc.msn.com >.

Associated Press. “U.S. government fights to keep meatpackers from testing all slaughtered cattle for mad cow,” International Herald-Tribune , May 29, 2007 < www.iht.com >.

Atrios. “Face Time.” Eschaton , July 9, 2005 < atrios.blogspot.com >.

Ronald Bailey. “Post-Scarcity Prophet: Economist Paul Romer on growth, technological change, and an unlimited human future,” Reason , December 2001 < reason.com >.

Gopal Balakrishnan. “Speculations on the Stationary State,” New Left Review , September-October 2009 < www.newleftreview.org >.

Paul Baran and Paul Sweezy. Monopoly Capital: An Essay on the American Economic and Social Order (New York: Monthly Review Press, 1966).

David Barboza. “In China, Knockoff Cellphones are a Hit,” New York Times , April 27, 2009 < www.nytimes.com >.

Taylor Barnes. “America’s ‘shadow economy’ is bigger than you think — and growing.” Christian Science Monitor , November 12, 2009 < features.csmonitor.com >.

Albert Bates. “Ecovillage Roots (and Branches): When, where, and how we re-invented this ancient village concept.” Communities Magazine No. 117 (2003).

Michel Bauwens. “Asia needs a Social Innovation Stimulus plan.” P2P Foundation Blog , March 23, 2009 < blog.p2pfoundation.net >.

Bauwens. “Can the experience economy be capitalist?” P2P Foundation Blog , September 27, 2007 < blog.p2pfoundation.net >.

Bauwens. “Conditions for the Next Long Wave.” P2P Foundation Blog , May 28, 2009 < blog.p2pfoundation.net >.

Bauwens. “Contract manufacturing as distributed manufacturing.” P2P Foundation Blog , September 11, 2008 < blog.p2pfoundation.net >.

Bauwens. “The Emergence of Open Design and Open Manufacturing.” We Magazine , vol. 2 < www.we-magazine.net >.

Bauwens. “The great internet/p2p deflation.” P2P Foundation Blog , November 11, 2009 < blog.p2pfoundation.net >.

Bauwens. “A milestone for distributed manufacturing: 100kGarages.” P2P Foundation Blog , September 19, 2009 < blog.p2pfoundation.net >.

Bauwens. P2P and Human Evolution . Draft 1.994 (Foundation for P2P Alternatives, June 15, 2005) < integralvisioning.org >.

Bauwens. “Phases for implementing peer production: Towards a Manifesto for Mutually Assured Production.” P2P Foundation Forum , August 30, 2008 < p2pfoundation.ning.com >.

Bauwens. “The Political Economy of Peer Production.” CTheory , December 1, 2005 < www.ctheory.net >.

Bauwens. “Strategic Support for Factor e Farm and Open Source Ecology.” P2P Foundation Blog , June 19, 2009 < blog.p2pfoundation.net >.

Bauwens. “The three revolutions in human productivity.” P2P Foundation Blog , November 29, 2009 < blog.p2pfoundation.net >.

Bauwens. “Three Times Exodus, Three Phase Transitions.” P2P Foundation Blog , May 2, 2010 < blog.p2pfoundation.net >.

Bauwens. “What kind of economy are we moving to? 3. A hierarchy of engagement between companies and communities.” P2P Foundation Blog , October 5, 2007 < blog.p2pfoundation.net >.

Robert Begg, Poli Roukova, John Pickles, and Adrian Smith. “Industrial Districts and Commodity Chains: The Garage Firms of Emilia-Romagna (Italy) and Haskovo (Bulgaria).” Problems of Geography (Sofia, Bulgarian Academy of Sciences), 1–2 (2005).

Walden Bello. “Asia: The Coming Fury.” Asia Times Online , February 11, 2009 < www.atimes.com >.

Bello. “Can China Save the World from Depression?” Counterpunch , May 27, 2009 < www.counterpunch.org >.

Bello. “Keynes: A Man for This Season?” Share the World’s Resources , July 9, 2009 < www.stwr.org >.

Bello. “A Primer on Wall Street Meltdown.” MR Zine , October 3, 2008 < mrzine.monthlyreview.org >.

James C. Bennett. “The End of Capitalism and the Triumph of the Market Economy.” From Network Commonwealth: The Future of Nations in the Internet Era (1998, 1999) < www.pattern.com >.

Yochai Benkler. The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven and London: Yale University Press, 2006), pp. 220–223, 227–231.

Edwin Black. “Hitler’s Carmaker: How Will Posterity Remember General Motors’ Conduct? (Part 4).” History News Network , May 14, 2007 < hnn.us >.

“Black Mountain College.” Wikipedia < en.wikipedia.org > (captured March 30, 2009).

David G. Blanchflower and Andrew J. Oswald. “What Makes an Entrepreneur?” < www2.warwick.ac.uk >. Later appeared in Journal of Labor Economics , 16:1 (1998), pp. 26–60.

Murray Bookchin. Post-Scarcity Anarchism (Berkeley, Ca.: The Ramparts Press, 1971).

Ralph Borsodi. The Distribution Age (New York and London: D. Appleton and Company, 1929).

Borsodi. Flight From the City: An Experiment in Creative Living on the Land (New York, Evanston, San Francisco, London: Harper & Row, 1933, 1972).

Borsodi. Prosperity and Security: A Study in Realistic Economics (New York and London: Harper & Brothers Publishers, 1938).

Borsodi. This Ugly Civilization (Philadelphia: Porcupine Press, 1929, 1975).

Kenneth Boulding. Beyond Economics (Ann Arbor: University of Michigan Press, 1968).

Samuel Bowles and Herbert Gintis. “The Crisis of Liberal Democratic Capitalism: The Case of the United States.” Politics and Society 11:1 (1982).

Ben Brangwyn and Rob Hopkins. Transition Initiatives Primer: becoming a Transition Town, City, District, Village, Community or even Island (Version 26—August 12, 2008) < transitionnetwork.org >.

Brad Branan. “Police: Twitter used to avoid DUI checkpoints.” Seattle Times , December 28, 2009 < seattletimes.nwsource.com >.

Gareth Branwyn. “ShopBot Open-Sources Their Code.” Makezine , April 13, 2009 < blog.makezine.com >.

John Brummett. “Delta Solution: Move.” The Morning News of Northwest Arkansas , June 14, 2009 < arkansasnews.com >.

Stewart Burgess. “Living on a Surplus.” The Survey 68 (January 1933).

Scott Burns. The Household Economy: Its Shape, Origins, & Future (Boston: The Beacon Press, 1975).

Bryan Caplan. “Pyramid Power.” EconLog , January 21, 2010 < econlog.econlib.org >.

Kevin Carey. “College for $99 a Month.” Washington Monthly , September/October 2009 < www.washingtonmonthly.com >.

Kevin Carson. “Abundance Creates Utility but Destroys Exchange Value.” P2P Foundation Blog , February 2, 2010 < blog.p2pfoundation.net >.

Carson. “‘Building the Structure of the New Society Within the Shell of the Old.’” Mutualist Blog: Free Market Anti-Capitalism , March 22, 2005 < mutualist.blogspot.com >.

Carson. “The Cleveland Model and Micromanufacturing.” P2P Foundation Blog , April 2, 2010 < blog.p2pfoundation.net >.

Carson. “Cory Doctorow. Makers.” P2P Foundation Blog , October 25, 2009 < blog.p2pfoundation.net >.

Carson. “Daniel Suarez. Daemon and Freedom(TM).” P2P Foundation Blog , April 26, 2010 < blog.p2pfoundation.net >.

Carson. “The People Making ‘The Rules’ are Dumber than You.” Center for a Stateless Society, January 11, 2010 < c4ss.org >.

Carson. Studies in Mutualist Political Economy (Blitzprint, 2004).

Carson. “Three Works on Abundance and Technological Unemployment.” Mutualist Blog , March 30, 2010 < mutualist.blogspot.com >.

“Carter Doctrine.” Wikipedia , accessed December 23, 2009 < en.wikipedia.org >.

“Doug Casey on Unemployment.” LewRockwell.Com, January 22, 2010. Interviewed by Louis James, editor, International Speculator < www.lewrockwell.com >.

Cassander. “It’s Hard Being a Bear (Part Three): Good Economic History.” Steve Keen’s Debtwatch, September 5, 2009 < www.debtdeflation.com >.

Mamading Ceesay. “The Economies of Agility and Disrupting the Nature of the Firm.” Confessions of an Autodidactic Engineer , March 31, 2009 < evangineer.agoraworx.com >.

Alfred D. Chandler, Jr. Inventing the Electronic Century (New York: The Free Press, 2001).

Chandler. Scale and Scope: The Dynamics of Industrial Capitalism (Cambridge and London: The Belknap Press of Harvard University Press, 1990).

Chandler. The Visible Hand: The Managerial Revolution in American Business (Cambridge and London: The Belknap Press of Harvard University Press, 1977).

Aimin Chen. “The structure of Chinese industry and the impact from China’s WTO entry.” Comparative Economic Studies (Spring 2002) < www.entrepreneur.com >.

Chloe. “Important People.” Corporate Whore , September 21, 2007 < web.archive.org >.

“The CloudFab Manifesto.” Ponoko Blog, September 28, 2009 < blog.ponoko.com/2009/09/28/the-cloudfab-manifesto/ >.

“CNC machine v2.0—aka ‘Valkyrie’.” Let’s Make Robots , July 14, 2009 < letsmakerobots.com >.

Moses Coady. Masters of Their Own Destiny: The Story of the Antigonish Movement of Adult Education Through Economic Cooperation (New York, Evanston, and London: Harper & Row, 1939).

Coalition of Immokalee Workers. “Burger King Corp. and Coalition of Immokalee Workers to Work Together.” May 23, 2008 < www.ciw-online.org >.

Tom Coates. “(Weblogs and) The Mass Amateurisation of (Nearly) Everything...” Plasticbag.org, September 3, 2003 < www.plasticbag.org >.

G.D.H. Cole. A Short History of the British Working Class Movement (1789–1947) (London: George Allen & Unwin, 1948).

John R. Commons. Institutional Economics (New York: Macmillan, 1934).

“Community Wealth Building Conference in Cleveland, OH.” GVPT News , February 2007, p. 14 < www.bsos.umd.edu >.

Abe Connally. “Open Source Self-Replicator.” MAKE Magazine , No. 21 < www.make-digital.com >.

Niall Cook. Enterprise 2.0: How Social Software Will Change the Future of Work (Burlington, Vt.: Gower, 2008).

Alan Cooper. The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity (Indianapolis: Sams, 1999).

James Coston, Amtrak Reform Council, 2001. In “America’s long history of subsidizing transportation.” < www.trainweb.org >.

Tyler Cowen. “Was recent productivity growth an illusion?” Marginal Revolution , March 3, 2009 < www.marginalrevolution.com >.

Nathan Cravens. “important appeal: social media and p2p tools against the meltdown.” Open Manufacturing (Google Groups), March 13, 2009 < groups.google.com >.

Cravens. “[p2p-research] simpler way wiki.” P2P Research, April 20, 2009 < listcultures.org >.

Cravens. “Productive Recursion.” Open Source Ecology Wiki < openfarmtech.org >.

Cravens. “Productive Recursion Proven.” Open Manufacturing (Google Groups), March 8, 2009 < groups.google.com >.

Cravens. “The Triple Alliance.” Appropedia: The sustainability wiki < www.appropedia.org/The_Triple_Alliance > (accessed July 3, 2009).

Matthew B. Crawford. “Shop Class as Soulcraft.” The New Atlantis , Number 13, Summer 2006, pp. 7–24 < www.thenewatlantis.com >.

“CubeSpawn, An open source, Flexible Manufacturing System (FMS).” < www.kickstarter.com >.

John Curl. For All the People: Uncovering the Hidden History of Cooperation, Cooperative Movements, and Communalism in America (Oakland, CA: PM Press, 2009).

Fred Curtis. “Peak Globalization: Climate change, oil depletion and global trade.” Ecological Economics Volume 69, Issue 2 (December 15, 2009).

Benjamin Darrington. “Government Created Economies of Scale and Capital Specificity.” (Austrian Student Scholars’ Conference, 2007).

Craig DeLancey. “Openshot.” Analog (December 2006).

Brad DeLong. “Another Bad Employment Report (I-Wish-We-Had-a-Ripcord-to-Pull Department).” Grasping Reality with All Eight Tentacles , October 2, 2009 < delong.typepad.com >.

DeLong. “Jobless Recovery: Quiddity Misses the Point.” J. Bradford DeLong’s Grasping Reality with All Eight Tentacles , October 25, 2009 < delong.typepad.com >.

Karl Denninger. “GDP: Uuuuggghhhh – UPDATED.” The Market Ticker , July 31, 2009 < market-ticker.denninger.net >.

Chris Dillow. “Negative Credibility.” Stumbling and Mumbling , October 12, 2007 < stumblingandmumbling.typepad.com >.

Maurice Dobb. Political Economy and Capitalism: Some Essays in Economic Tradition , 2nd rev. ed. (London: Routledge & Kegan Paul Ltd, 1940, 1960).

Cory Doctorow. “Australian seniors ask Pirate Party for help in accessing right-to-die sites.” Boing Boing , April 9, 2010 < www.boingboing.net >.

Doctorow. “Cheap Facts and the Plausible Premise.” Locus Online , July 5, 2009 < www.locusmag.com >.

Doctorow. Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future (San Francisco: Tachyon Publications, 2008).

Doctorow. “The criticism that Ralph Lauren doesn’t want you to see!” BoingBoing , October 6, 2009 < www.boingboing.net >.

Brian Doherty. “The Glories of Quasi-Capitalist Modernity, Dumpster Diving Division.” Reason Hit & Run Blog , September 12, 2007 < reason.com >.

Dale Dougherty. “What’s in Your Garage?” Make , vol. 18 < www.make-digital.com >.

Steve Dubb, Senior Research Associate, The Democracy Collaborative. “A Report on the Cleveland Community Wealth Building Roundtable December 7 – 8, 2007.” < www.community-wealth.org >.

Deborah Durham-Vichr. “Focus on the DeCSS trial.” CNN.Com, July 27, 2000 < archives.cnn.com >.

Barry Eichengreen and Kevin H. O’Rourke. “A Tale of Two Depressions.” VoxEU.Org , June 4, 2009 < www.voxeu.org >.

Eleutheros. “Choice, the Best Sauce.” How Many Miles from Babylon , October 15, 2008 < milesfrombabylon.blogspot.com >.

Mark Elliott. “Some General Off-the-Cuff Reflections on Stigmergy.” Stigmergic Collaboration , May 21, 2006 < stigmergiccollaboration.blogspot.com >.

Elliott. “Stigmergic Collaboration: The Evolution of Group Work.” M/C Journal, May 2006 < journal.media-culture.org.au >.

Elliott. Stigmergic Collaboration: A Theoretical Framework for Mass Collaboration . Doctoral Dissertation, Centre for Ideas, Victorian College of the Arts, University of Melbourne (October 2007).

Ralph Estes. Tyranny of the Bottom Line: Why Corporations Make Good People Do Bad Things (San Francisco: Berrett-Koehler Publishers, 1996).

Stuart Ewen. Captains of Consciousness: Advertising and the Social Roots of Consumer Culture (New York: McGraw-Hill, 1976).

Kathleen Fasanella. “IP Update: DPPA & Fashion Law Blog.” Fashion Incubator , March 10, 2010 < www.fashion-incubator.com >.

Fasanella. “Selling to Department Stores pt. 1.” Fashion Incubator , August 11, 2009 < www.fashion-incubator.com >.

Martha Feldman and James G. March. “Information in Organizations as Signal and Symbol.” Administrative Science Quarterly 26 (April 1981).

Mike Ferner. “Taken for a Ride on the Interstate Highway System.” MRZine (Monthly Review) June 28, 2006 < mrzine.monthlyreview.org >.

Ken Fisher. “Darknets live on after P2P ban at Ohio U.” Ars Technica , May 9, 2007 < arstechnica.com >.

Joseph Flaherty. “Desktop Injection Molding.” February 1, 2010 < replicatorinc.com >.

Richard Florida. “Are Bailouts Saving the U.S. from a New Great Depression?” Creative Class , March 18, 2009 < www.creativeclass.com >.

Florida. The Rise of the Creative Class (New York: Basic Books, 2002).

Martin Ford. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (CreateSpace, 2009).

John Bellamy Foster and Fred Magdoff. “Financial Implosion and Stagnation: Back to the Real Economy.” Monthly Review , December 2008 < www.monthlyreview.org >.

Justin Fox. “The Great Paving: How the Interstate Highway System helped create the modern economy--and reshaped the FORTUNE 500.” Reprinted from Fortune . CNNMoney.Com, January 26, 2004 < money.cnn.com >.

John Kenneth Galbraith. The New Industrial State (New York: Signet Books, 1967).

John Gall. Systemantics: How Systems Work and Especially How They Fail (New York: Pocket Books, 1975).

Priya Ganapati. “Open Source Hardware Hackers Start P2P Bank.” Wired , March 18, 2009 < www.wired.com >.

Neil Gershenfeld. Fab: The Coming Revolution on Your Desktop—from Personal Computers to Personal Fabrication (New York: Basic Books, 2005), p. 182.

Kathryn Geurin. “Toybox Outlaws.” Metroland Online , January 29, 2009 < www.metroland.net >.

Bruno Giussani. “Open Source at 90 MPH.” Business Week , December 8, 2006 < www.businessweek.com >. See also the OS Car website, < www.theoscarproject.org >.

Malcolm Gladwell. “How David Beats Goliath.” The New Yorker , May 11, 2009 < www.newyorker.com >.

Paul Goodman. Compulsory Miseducation and The Community of Scholars (New York: Vintage Books, 1964, 1966).

Goodman. People or Personnel and Like a Conquered Province (New York: Vintage Books, 1964, 1966).

Paul and Percival Goodman. Communitas: Means of Livelihood and Ways of Life (New York: Vintage Books, 1947, 1960).

David Gordon. “Stages of Accumulation and Long Economic Cycles.” in Terence K. Hopkins and Immanuel Wallerstein, eds., Processes of the World-System (Beverly Hills, Calif.: Sage, 1980), pp. 9–45.

Siobhan Gorman, Yochi J. Dreazen and August Cole. “Insurgents Hack U.S. Drones.” Wall Street Journal , December 17, 2009 < online.wsj.com >.

Thomas Greco. The End of Money and the Future of Civilization (White River Junction, Vermont: Chelsea Green Publishing, 2009).

Greco. Money and Debt: A Solution to the Global Crisis (1990), Part III: Segregated Monetary Functions and an Objective, Global, Standard Unit of Account < circ2.home.mindspring.com >.

Edward S. Greenberg. “Producer Cooperatives and Democratic Theory.” in Robert Jackall and Henry M. Levin, eds., Worker Cooperatives in America (Berkeley, Los Angeles, London: University of California Press, 1984).

Anand Giridharadas. “A Pocket-Size Leveler in an Outsized Land.” New York Times , May 9, 2009 < www.nytimes.com >.

Vinay Gupta. “The Global Village Development Bank: financing infrastructure at the individual, household and village level worldwide.” Draft 2 (March 12, 2009) < vinay.howtolivewiki.com/blog/hexayurt/my-latest-piece-the-global-village-development-bank-1348 >.

Gupta. “The Unplugged.” How to Live Wiki, February 20, 2006 < howtolivewiki.com >.

Gupta. “What’s Going to Happen in the Future.” The Bucky-Gandhi Design Institution , June 1, 2008 < vinay.howtolivewiki.com >.

Ted Hall. “100kGarages is Open: A Place to Get Stuff Made.” Open Manufacturing email list, September 15, 2009 < groups.google.com >.

Hall (ShopBot) and Derek Kelley (Ponoko). “Ponoko and ShopBot announce partnership: More than 20,000 online creators meet over 6,000 digital fabricators.” joint press release, September 16, 2009. Posted on Open Manufacturing email list, September 16, 2009 < groups.google.com >.

David Hambling. “China Looks to Undermine U.S. Power, With ‘Assassin’s Mace’.” Wired , July 2 < www.wired.com >.

Chuck Hammill. “From Crossbows to Cryptography: Techno-Thwarting the State” (Given at the Future of Freedom Conference, November 1987) < www.csua.berkeley.edu/~ranga/papers/crossbows2crypto/crossbows2crypto.pdf >.

Bascha Harris. “A very long talk with Cory Doctorow, part 1.” redhat.com, January 2006 < www.redhat.com >.

Jed Harris. “Capitalists vs. Entrepreneurs.” Anomalous Presumptions , February 26, 2007 < jed.jive.com >.

Gifford Hartman. “Crisis in California: Everything Touched by Capital Becomes Toxic.” Turbulence 5 (2010) < turbulence.org.uk >.

Paul Hartzog. “Panarchy: Governance in the Network Age.” < www.panarchy.com >.

Paul Hawken, Amory Lovins, and L. Hunter Lovins. Natural Capitalism: Creating the Next Industrial Revolution (Boston, New York, London: Little, Brown, and Company, 1999).

Richard Heinberg. Peak Everything: Waking Up to the Century of Declines (Gabriola Island, B.C.: New Society Publishers, 2007).

Heinberg. Powerdown (Gabriola Island, British Columbia: New Society Publishers, 2004).

Martin Hellwig. “On the Economics and Politics of Corporate Finance and Corporate Control.” in Xavier Vives, ed., Corporate Governance: Theoretical and Empirical Perspectives (Cambridge: Cambridge University Press, 2000).

Doug Henwood. Wall Street: How it Works and for Whom (London and New York: Verso, 1997).

Karl Hess. Community Technology (New York, Cambridge, Hagerstown, Philadelphia, San Francisco, London, Mexico City, Sao Paulo, Sydney: Harper & Row, Publishers, 1979).

Hess and David Morris. Neighborhood Power: The New Localism (Boston: Beacon Press, 1975).

Dougald Hine. “Social Media vs the Recession.” Changing the World , January 28, 2009 < otherexcuses.blogspot.com >.

Thomas Hodgskin. Labour Defended Against the Claims of Capital (New York: Augustus M. Kelley, 1969 [1825]).

Hodgskin. The Natural and Artificial Right of Property Contrasted. A Series of Letters, addressed without permission to H. Brougham, Esq. M.P. F.R.S. (London: B. Steil, 1832).

Hodgskin. Popular Political Economy: Four Lectures Delivered at the London Mechanics’ Institution (London: Printed for Charles and William Tait, Edinburgh, 1827).

Joshua Holland. “Let the Banks Fail: Why a Few of the Financial Giants Should Crash.” Alternet , December 15, 2008 < www.alternet.org >.

Holland. “The Spectacular, Sudden Crash of the Global Economy.” Alternet , February 24, 2009 < www.alternet.org >.

Lisa Hoover. “Riversimple to Unveil Open Source Car in London This Month.” Ostatic , June 11, 2009 < ostatic.com >.

Rob Hopkins. The Transition Handbook: From Oil Dependency to Local Resilience (Totnes: Green Books, 2008).

“How to Fire Your Boss: A Worker’s Guide to Direct Action.” < www.iww.org > (originally a Wobbly Pamphlet, it is reproduced in all its essentials at the I.W.W. Website under the heading of “Effective Strikes and Economic Actions”—although the Wobblies no longer endorse it in its entirety).

Ebenezer Howard. To-Morrow: A Peaceful Path to Real Reform . Facsimile of original 1898 edition, with introduction and commentary by Peter Hall, Dennis Hardy and Colin Ward (London and New York: Routledge, 2003).

Bunnie Huang. “Copycat Corolla?” bunnie’s blog , December 13, 2009 < www.bunniestudios.com >.

Huang. “Tech Trend: Shanzhai.” Bunnie’s Blog , February 26, 2009 < www.bunniestudios.com >.

Michael Hudson. “What Wall Street Wants.” Counterpunch , February 11, 2009 < www.counterpunch.org >.

Eric Hunting. “On Defining a Post-Industrial Style (1): from Industrial blobjects to post-industrial spimes.” P2P Foundation Blog , November 2, 2009 < blog.p2pfoundation.net >.

Hunting. “On Defining a Post-Industrial Style (2): some precepts for industrial design.” P2P Foundation Blog , November 3, 2009 < blog.p2pfoundation.net >.

Hunting. “On Defining a Post-Industrial Style (3): Emerging examples.” P2P Foundation Blog , November 4, 2009 < blog.p2pfoundation.net >.

Hunting. “[Open Manufacturing] Re: Roadmap to Post-Scarcity.” Open Manufacturing , January 12, 2010 < groups.google.com >.

Hunting. “[Open Manufacturing] Re:Vivarium.” Open Manufacturing , March 28, 2009 < groups.google.com >.

Hunting. “[Open Manufacturing] Re: Why automate? and opinions on Energy Descent?” Open Manufacturing , September 22, 2008 < groups.google.com >.

Hunting. “Toolbook and the Missing Link.” Open Manufacturing , January 30, 2009 < groups.google.com >.

Samuel P. Huntington, Michael J. Crozier, and Joji Watanuki. The Crisis of Democracy . Report on the Governability of Democracies to the Trilateral Commission: Triangle Paper 8 (New York: New York University Press, 1975).

Jon Husband. “How Hard is This to Understand?” Wirearchy , June 22, 2007 < blog.wirearchy.com >.

Eric Husman. “Human Scale Part II--Mass Production.” Grim Reader blog , September 26, 2006 < www.zianet.com >.

Husman. “Human Scale Part III--Self-Sufficiency” GrimReader blog , October 2, 2006 < www.zianet.com >.

Husman. “Open Source Automobile,” GrimReader , March 3, 2005 < www.zianet.com >.

Tom Igoe. “Idle speculation on the shan zhai and open fabrication.” hello blog , September 4, 2009 < www.tigoe.net >.

Ivan Illich. Deschooling Society (New York, Evanston, San Francisco, London: Harper & Row, 1973).

Illich. Disabling Professions (New York and London: Marion Boyars, 1977).

Illich. “The Three Dimensions of Public Opinion,” in The Mirror of the Past: Lectures and Addresses, 1978–1990 (New York and London: Marion Boyars, 1992).

Illich. Tools for Conviviality (New York, Evanston, San Francisco, London: Harper & Row, 1973).

Illich. Vernacular Values (1980). Online edition courtesy of The Preservation Institute < www.preservenet.com >.

“Ironworkers.” Open Source Ecology Wiki < openfarmtech.org >.

Neil Irwin. “Economic data don’t point to boom times just yet.” Washington Post , April 13, 2010 < www.washingtonpost.com >.

Andrew Jackson. “Recession Far From Over.” The Progressive Economics Forum , August 7, 2009 < www.progressive-economics.ca >.

Ross Jackson. “The Ecovillage Movement.” Permaculture Magazine No. 40 (Summer 2004).

Jane Jacobs. Cities and the Wealth of Nations: Principles of Economic Life (New York: Vintage Books, 1984).

Jacobs. The Economy of Cities (New York: Vintage Books, 1969, 1970).

Marcin Jakubowski. “CEB Proposal—Community Supported Manufacturing.” Factor e Farm weblog , October 23, 2008 < openfarmtech.org >.

Jakubowski. “CEB Prototype II Finished.” Factor e Farm Weblog , August 20, 2009 < openfarmtech.org >.

Jakubowski. “CEB Sales: Rocket Fuel for Post-Scarcity Economic Development?” Factor e Farm Weblog , November 28, 2009 < openfarmtech.org >.

Jakubowski. “Clarifying OSE Vision.” Factor E Farm Weblog , September 8, 2008 < openfarmtech.org >.

Jakubowski. “Exciting Times: Nearing Product Release.” Factor e Farm Weblog , October 10, 2009 < openfarmtech.org >.

Jakubowski. “Factor e Live Distillations—Part 8—Solar Power Generator,” Factor e Farm Weblog , February 3, 2009 < openfarmtech.org >.

Jakubowski. “Get a Real Job!” Factor E Farm Weblog , September 7, 2009 < openfarmtech.org >.

Jakubowski. “Initial Steps to the Open Source Multimachine.” Factor e Farm Weblog , January 26, 2010 < openfarmtech.org >.

Jakubowski. “MicroTrac Completed.” Factor e Farm Weblog , July 7, 2009 < openfarmtech.org >.

Jakubowski. “Moving Forward.” Factor e Farm Weblog , August 20, 2009 < openfarmtech.org >.

Jakubowski. “Open Source Induction Furnace.” Factor e Farm Weblog , December 15, 2009 < openfarmtech.org >.

Jakubowski. “OSE Proposal—Towards a World Class Open Source Research and Development Facility.” v0.12, January 16, 2008 < openfarmtech.org > (accessed August 25, 2009).

Jakubowski. “Power Cube Completed.” Factor e Farm Weblog , June 29, 2009 < openfarmtech.org >.

Jakubowski. “PowerCube on LifeTrak.” Factor e Farm Weblog , April 26, 2010 < openfarmtech.org >.

Jakubowski. “Product.” Factor e Farm Weblog , November 4, 2009 < openfarmtech.org >.

Jakubowski. “Rapid Prototyping for Industrial Swadeshi.” Factor E Farm Weblog , August 10, 2008 < openfarmtech.org >.

Jakubowski. “Soil Pulverizer Annihilates Soil Handling Limits.” Factor e Farm Weblog , September 7, 2009 < openfarmtech.org >.

Jakubowski. “TED Fellows.” Factor e Farm Weblog , September 22, 2009 < openfarmtech.org >.

Jakubowski. “The Thousandth Brick: CEB Field Testing Report.” Factor e Farm Weblog , November 16, 2008 < openfarmtech.org >.

Jeff Jarvis. “When innovation yields efficiency.” BuzzMachine, June 12, 2009 < www.buzzmachine.com >.

“Jay Rogers: I Challenge You to Make Cool Cars,” Alphachimp Studio Inc., November 10, 2009 < www.alphachimp.com >; Local Motors website at < www.local-motors.com >.

Charles Johnson. “Coalition of Immokalee Workers marches in Miami,” Rad Geek People’s Daily, November 30, 2007 < radgeek.com >.

Johnson. “Dump the rentiers off your back” Rad Geek People’s Daily , May 29, 2008 < radgeek.com >.

Johnson. “In which I fail to be reassured.” Rad Geek People’s Daily , January 26, 2008 < radgeek.com >.

Johnson. “Liberty, Equality, Solidarity: Toward a Dialectical Anarchism.” in Roderick T. Long and Tibor R. Machan, eds., Anarchism/Minarchism: Is a Government Part of a Free Country? (Hampshire, UK, and Burlington, Vt.: Ashgate Publishing Limited, 2008).

Johnson. “¡Sí, Se Puede! Victory for the Coalition of Immokalee Workers in the Burger King penny-per-pound campaign.” Rad Geek People’s Daily , May 23, 2008 < radgeek.com >.

H. Thomas Johnson. “Foreword.” William H. Waddell and Norman Bodek, Rebirth of American Industry: A Study of Lean Management (Vancouver, WA: PCS Press, 2005).

Warren Johnson. Muddling Toward Frugality (San Francisco: Sierra Club Books, 1978).

Linda Joseph and Albert Bates. “What Is an ‘Ecovillage’?” Communities Magazine No. 117 (2003).

Matthew Josephson. The Robber Barons: The Great American Capitalists 1861–1901 (New York: Harcourt, Brace & World, Inc., 1934, 1962).

Brian Kaller. “Future Perfect: the future is Mayberry, not Mad Max.” Energy Bulletin , February 27, 2009 (from The American Conservative, August 2008) < www.energybulletin.net >.

Jeffrey Kaplan. “The Gospel of Consumption: And the better future we left behind.” Orion , May/June 2008 < www.orionmagazine.org >.

Raphael Kaplinsky. “From Mass Production to Flexible Specialization: A Case Study of Microeconomic Change in a Semi-Industrialized Economy.” World Development 22:3 (March 1994).

Kevin Kelly. “Better Than Free.” The Technium , January 31, 2008 < www.kk.org/thetechnium/archives/2008/01/better_than_fre.php >.

Marjorie Kelly. “The Corporation as Feudal Estate.” (an excerpt from The Divine Right of Capital) Business Ethics , Summer 2001. Quoted in GreenMoney Journal, Fall 2008 < greenmoneyjournal.com >.

Paul T. Kidd. Agile Manufacturing: Forging New Frontiers (Addison-Wesley Publishing Company, 1994).

Lawrence Kincheloe. “First Dedicated Project Visit Comes to a Close.” Factor e Farm Weblog , October 25, 2009 < openfarmtech.org >.

Kincheloe. “One Month Project Visit: Take Two.” Factor e Farm Weblog , October 4, 2009 < openfarmtech.org >.

Mark Kinney. “In Whose Interest?” (n.d.) < www.appropriate-economics.org >.

Kinsale 2021: An Energy Descent Action Plan . Version 1, 2005. By Students of Kinsale Further Education College. Edited by Rob Hopkins < transitionculture.org >.

Peter Kirwan. “Bad News: What if the money’s not coming back?” Wired.Co.Uk, August 7, 2009 < www.wired.co.uk >.

Ezra Klein. “A Fast Recovery? Or a Slow One?” Washington Post , April 14, 2010 < voices.washingtonpost.com >.

Klein. “Why Labor Matters.” The American Prospect , November 14, 2007 < www.prospect.org >.

Naomi Klein. No Logo (New York: Picador, 1999).

Keith Kleiner. “3D Printing and Self-Replicating Machines in Your Living Room—Seriously.” Singularity Hub , April 9, 2009 < singularityhub.com >.

Thomas L. Knapp. “The Revolution Will Not Be Tweeted,” Center for a Stateless Society, October 5, 2009 < c4ss.org >.

Jennifer Kock. “Employee Sabotage: Don’t Be a Target!” < www.workforce.com >.

Frank Kofsky. Harry Truman and the War Scare of 1948 (New York: St. Martin’s Press, 1993).

Leopold Kohr. The Overdeveloped Nations: The Diseconomies of Scale (New York: Schocken Books, 1978, 1979).

Gabriel Kolko. Confronting the Third World: United States Foreign Policy 1945–1980 (New York: Pantheon Books, 1988).

Kolko. The Triumph of Conservatism: A Reinterpretation of American History 1900–1916 (New York: The Free Press of Glencoe, 1963).

Sam Kornell. “Will Peak Oil Turn Flying into Something Only Rich People Can Afford?” Alternet , May 7, 2010 < www.alternet.org >.

Peter Kropotkin. The Conquest of Bread (New York: Vanguard Press, 1926).

Kropotkin. Fields, Factories and Workshops: or Industry Combined with Agriculture and Brain Work with Manual Work (New York: Greenwood Press, Publishers, 1968 [1898]).

Paul Krugman. “Averting the Worst.” New York Times , August 9, 2009 < www.nytimes.com/2009/08/10/opinion/10krugman.html >.

Krugman. “Double dip warning.” Paul Krugman Blog, New York Times , Dec. 1, 2009 < krugman.blogs.nytimes.com >.

Krugman. “Life Without Bubbles.” New York Times , January 6, 2009 < www.nytimes.com/2008/12/22/opinion/22krugman.html?ref=opinion >.

Krugman. “Use, Delay, and Obsolescence.” The Conscience of a Liberal , February 13, 2009 < krugman.blogs.nytimes.com >.

James Howard Kunstler. “Lagging Recognition.” Clusterfuck Nation , June 8, 2009 < kunstler.com >.

Kunstler. The Long Emergency: Surviving the End of Oil, Climate Change, and Other Converging Catastrophes of the Twenty-First Century (Grove Press, 2006).

Kunstler. “Note: Hope = Truth.” Clusterfuck Nation , April 20, 2009 < jameshowardkunstler.typepad.com >.

Kunstler. World Made by Hand (Grove Press, 2009).

Karim Lakhani. “Communities Driving Manufacturers Out of the Design Space.” The Future of Communities Blog , March 25, 2007 < www.futureofcommunities.com >.

“Lawrence Kincheloe Contract.” OSE Wiki < openfarmtech.org >.

Eli Lake. “Hacking the Regime.” The New Republic , September 3, 2009 < www.tnr.com/article/politics/hacking-the-regime >.

Steve Lawson. “The Future of Music is... Indie!” Agit8 , September 10, 2009 < agit8.org.uk >.

David S. Lawyer. “Are Roads and Highways Subsidized?” March 2004 < www.lafn.org/~dave/trans/econ/highway_subsidy.html >.

William Lazonick. Business Organization and the Myth of the Market Economy (Cambridge: Cambridge University Press, 1991).

John Leland. “Finding in Foreclosure a Beginning, Not an End.” New York Times , March 21, 2010 < www.nytimes.com >.

Jay Leno. “Jay Leno’s 3-D Printer Replaces Rusty Old Parts.” Popular Mechanics , July 2009 < www.popularmechanics.com >.

Daniel S. Levine. Disgruntled: The Darker Side of the World of Work (New York: Berkley Boulevard Books, 1998).

Rick Levine, Christopher Locke, Doc Searls and David Weinberger. The Cluetrain Manifesto: The End of Business as Usual (Perseus Books Group, 2001) < www.cluetrain.com >.

Claude Lewenz. How to Build a Village (Auckland, New Zealand: Village Forum Press and Jackson House Publishing Company, 2007).

Bernard Lietaer. The Future of Money: A New Way to Create Wealth, Work and a Wiser World (London: Century, 2001).

“LifeTrac.” Open Source Ecology wiki < openfarmtech.org >.

Roderick Long. “Free Market Firms: Smaller, Flatter, and More Crowded.” Cato Unbound , Nov. 25, 2008 < www.cato-unbound.org >.

Long. “The Winnowing of Ayn Rand.” Cato Unbound , January 20, 2010 < www.cato-unbound.org >.

“Long-Term Unemployment.” Economist’s View , November 9, 2009 < economistsview.typepad.com >.

Luca. “TeleKommunisten.” (interview with Dmytri Kleiner), ecopolis , May 21, 2007 < www.ecopolis.org >.

Spencer H. MacCallum. “E. C. Riegel on Money.” (January 2008) < www.newapproachtofreedom.info >.

Andrew MacLeod. “Mondragon—Cleveland—Sacramento.” Cooperate and No One Gets Hurt , October 10, 2009 < coopgeek.wordpress.com >.

“McDonald’s Restaurants v Morris & Steele.” Wikipedia < en.wikipedia.org > (accessed December 26, 2009).

Bill McKibben. Deep Economy: The Wealth of Communities and the Durable Future (New York: Times Books, 2007).

Karl Marx. The Poverty of Philosophy. Marx and Engels Collected Works , vol. 6 (New York: International Publishers, 1976).

Harry Magdoff and Paul Sweezy. The End of Prosperity: The American Economy in the 1970s (New York and London: Monthly Review Press, 1977).

Magdoff and Sweezy. The Irreversible Crisis: Five Essays by Harry Magdoff and Paul M. Sweezy (New York: Monthly Review Press, 1988).

“Mahatma Gandhi on Mass Production.” (1936). TinyTech Plants < www.tinytechindia.com/gandhiji2.html >.

Katherine Mangu-Ward. “The Sheriff is Coming! The Sheriff is Coming!” Reason Hit & Run , January 6, 2010 < reason.com >.

“Manufacture Goods, Not Needs.” E. F. Schumacher Society Blog , October 11, 2009 < efssociety.blogspot.com >.

Mike Masnick. “Artificial Scarcity is Subject to Massive Deflation.” Techdirt , June 24, 2009 < techdirt.com >.

Masnick. “How Automakers Abuse Intellectual Property Laws to Force You to Pay More For Repairs.” Techdirt , December 29, 2009 < techdirt.com >.

Masnick. “Yet Another High School Newspaper Goes Online to Avoid District Censorship.” Techdirt , January 15, 200 < www.techdirt.com >.

Jeremy Mason. “Sawmill Development.” Factor e Farm Weblog , January 22, 2009 < openfarmtech.org >.

Mason. “What is Open Source Ecology?” Factor e Farm Weblog , March 20, 2009 < openfarmtech.org >.

Race Matthews. Jobs of Our Own: Building a Stakeholder Society—Alternatives to the Market & the State (Annandale, NSW, Australia: Pluto Press, 1999).

Paul Mattick. “The Economics of War and Peace.” Dissent (Fall 1956).

J.E. Meade. “The Theory of Labour-Managed Firms and Profit Sharing.” in Jaroslav Vanek, ed., Self-Management: Economic Liberation of Man (Hammondsworth, Middlesex, England: Penguin Education, 1975).

Seymour Melman. The Permanent War Economy: American Capitalism in Decline (New York: Simon and Schuster, 1974).

Richard Milne. “Crisis and climate force supply chain shift.” Financial Times , August 9, 2009 < www.ft.com >.

MIT Center for Bits and Atoms. “Fab Lab FAQ.” < fab.cba.mit.edu > (accessed August 31, 2009).

“Monsanto Declares War on ‘rBGH-free’ Dairies.” April 3, 2007 (reprint of Monsanto press release by Organic Consumers Association) < www.organicconsumers.org >.

Dante-Gabryell Monson. “[p2p-research] trends? : ‘Corporate Dropouts.’ towards Open diy ? ...” P2P Research, October 13, 2009 < listcultures.org >.

William Morris. News From Nowhere: or, An Epoch of Rest (1890). Marxists.Org online text < www.marxists.org >.

Jim Motavalli. “Getting Out of Gridlock: Thanks to the Highway Lobby, Now We’re Stuck in Traffic. How Do We Escape?” E Magazine , March/April 2002 < www.emagazine.com >.

“Multimachine.” Wikipedia < en.wikipedia.org > (accessed August 31, 2009); < groups.yahoo.com >.

“Multimachine & Flex Fab--Open Source Ecology.” < openfarmtech.org >.

Lewis Mumford. The City in History: Its Transformations, and Its Prospects (New York: Harcourt, Brace, & World, Inc., 1961).

Mumford. Technics and Civilization (New York: Harcourt, Brace, and Company, 1934).

Charles Nathanson. “The Militarization of the American Economy.” in David Horowitz, ed., Corporations and the Cold War (New York and London: Monthly Review Press, 1969).

Daisy Nguyen. “High tech vehicles pose trouble for some mechanics.” North County Times , December 26, 2009 < nctimes.com >.

David F. Noble. America by Design: Science, Technology, and the Rise of Corporate Capitalism (New York: Alfred A. Knopf, 1977).

Noble. Forces of Production: A Social History of American Automation (New York: Alfred A. Knopf, 1984).

James O’Connor. Accumulation Crisis (New York: Basil Blackwell, 1984).

O’Connor. The Fiscal Crisis of the State (New York: St. Martin’s Press, 1973).

“October 30 2009: An interview with Stoneleigh—the case for deflation.” The Automatic Earth < theautomaticearth.blogspot.com >.

Ohio Employee Ownership Center. “Cleveland Goes to Mondragon.” Owners at Work (Winter 2008–2009) < dept.kent.edu >.

“Open Source Hardware.” P2P Foundation Wiki < www.p2pfoundation.net/Open_Source_Hardware >.

“Open Source Fab Lab.” Open Source Ecology wiki (accessed August 22, 2009) < openfarmtech.org >.

“Organizational Strategy.” Open Source Ecology wiki, February 11, 2009 < openfarmtech.org > (accessed August 28, 2009).

Franz Oppenheimer. “A Post Mortem on Cambridge Economics (Part Three).” The American Journal of Economics and Sociology , vol. 3, no. 1 (1944), pp. 122–123 [115–124].

George Orwell. 1984 . Signet Classics reprint (New York: Harcourt Brace Jovanovich, 1949, 1981).

“Our Big Idea!” 100kGarages site < 100kgarages.com >.

“Pa. bars hormone-free milk labels.” USA Today , November 13, 2007 < www.usatoday.com/news/nation/2007-11-13-milk-labels_N.htm >.

Keith Paton. The Right to Work or the Fight to Live? (Stoke-on-Trent, 1972).

Michael Parenti. “Capitalism’s Self-Inflicted Apocalypse.” Common Dreams , January 21, 2009 < www.commondreams.org >.

David Parkinson. “A coming world that’s ‘a whole lot smaller.’” The Globe and Mail , May 19, 2009 < docs.google.com >.

Michael Perelman. “The Political Economy of Intellectual Property.” Monthly Review , January 2003 < www.monthlyreview.org >.

Tom Peters. The Tom Peters Seminar: Crazy Times Call for Crazy Organizations (New York: Vantage Books, 1999).

Diane Pfeiffer. “Digital Tools, Distributed Making and Design.” Thesis submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for Master of Science in Architecture, 2009.

“PhysicalDesignCo teams up with 100kGarages.” 100kGarages News , October 4, 2009 < blog.100kgarages.com >.

Marge Piercy. Woman on the Edge of Time (New York: Fawcett Columbine, 1976).

Chris Pinchen. “Resilience: Patterns for thriving in an uncertain world.” P2P Foundation Blog , April 17, 2010 < blog.p2pfoundation.net >.

Michael J. Piore and Charles F. Sabel. “Italian Small Business Development: Lessons for U.S. Industrial Policy.” in John Zysman and Laura Tyson, eds., American Industry in International Competition: Government Policies and Corporate Strategies (Ithaca and London: Cornell University Press, 1983).

Piore and Sabel. “Italy’s High-Technology Cottage Industry.” Transatlantic Perspectives 7 (December 1982).

Piore and Sabel. The Second Industrial Divide: Possibilities for Prosperity (New York: HarperCollins, 1984).

“Plowboy Interview.” (Ralph Borsodi), Mother Earth News , March-April 1974 < www.soilandhealth.org >.

David Pollard. “All About Power and the Three Ways to Topple It (Part 1).” How to Save the World , February 18, 2005 < blogs.salon.com >.

Pollard. “All About Power—Part Two.” How to Save the World , February 21, 2005 < blogs.salon.com >.

Pollard. “The Future of Business.” How to Save the World , January 14, 2004 < blogs.salon.com >.

Pollard. “Peer Production.” How to Save the World , October 28, 2005 < blogs.salon.com >.

Pollard. “Replicating (Instead of Growing) Natural Small Organizations.” how to save the world , January 14, 2009 < howtosavetheworld.ca >.

Pollard. “Ten Important Business Trends.” How to Save the World , May 12, 2009 < blogs.salon.com >.

J.A. Pouwelse, P. Garbacki, D.H.J. Epema, and H.J. Sips. “Pirates and Samaritans: a Decade of Measurements on Peer Production and their Implications for Net Neutrality and Copyright.” (The Netherlands: Delft University of Technology, 2008) < www.tribler.org >.

“PR disaster, Wikileaks and the Streisand Effect.” PRdisasters.com, March 3, 2007 < prdisasters.com >.

David L. Prychitko. Marxism and Workers’ Self-Management: The Essential Tension (New York; London; Westport, Conn.: Greenwood Press, 1991).

“Public Service Announcement—Craig Murray, Tim Ireland, Boris Johnson, Bob Piper and Alisher Usmanov…” Chicken Yoghurt , September 20, 2007 < www.chickyog.net >.

Jeff Quackenbush and Jessica Puchala. “Middleville woman threatened with fines for watching neighbors’ kids,” WZZM13.Com, September 24, 2009 < www.wzzm13.com >.

Quiddity. “Job-loss recovery,” uggabugga , October 25, 2009 < uggabugga.blogspot.com >.

John Quiggin. “The End of the Cash Nexus.” Crooked Timber , March 5, 2009 < crookedtimber.org >.

Nick Raaum. “Steam Dreams.” Factor e Farm Weblog , January 22, 2009 < openfarmtech.org >.

Raghuram Rajan and Luigi Zingales. “The Governance of the New Enterprise,” in Xavier Vives, ed., Corporate Governance: Theoretical and Empirical Perspectives (Cambridge: Cambridge University Press, 2000).

Joshua Cooper Ramo. “Jobless in America: Is Double-Digit Unemployment Here to Stay?” Time , September 11, 2009 < www.time.com >.

JP Rangaswami. “Thinking about predictability: More musings about Push and Pull.” Confused of Calcutta , May 4, 2010 < confusedofcalcutta.com >.

Eric S. Raymond. The Cathedral and the Bazaar < catb.org >.

Raymond. “Escalating Complexity and the Collapse of Elite Authority.” Armed and Dangerous , January 5, 2010 < esr.ibiblio.org >.

Eric Reasons. “Does Intellectual Property Law Foster Innovation?” The Tinker’s Mind , June 14, 2009 < blog.ericreasons.com >.

Reasons. “The Economic Reset Button.” The Tinker’s Mind , July 2, 2009 < blog.ericreasons.com >.

Reasons. “Innovative Deflation.” The Tinker’s Mind , July 5, 2009 < blog.ericreasons.com/2009/07/innovative-deflation.html >.

Reasons. “Intellectual Property and Deflation of the Knowledge Economy.” The Tinker’s Mind , June 21, 2009 < blog.ericreasons.com >.

Lawrence W. Reed. “A Tribute to the Polish People,” The Freeman: Ideas on Liberty , October 2009 < www.thefreemanonline.org >.

George Reisman. “Answer to Paul Krugman on Economic Inequality.” The Webzine , March 3, 2006 < thewebzine.com >.

“RepRap Project.” Wikipedia < en.wikipedia.org > (accessed August 31, 2009).

E. C. Riegel. The New Approach to Freedom: together with Essays on the Separation of Money and State . Edited by Spencer Heath MacCallum (San Pedro, California: The Heather Foundation, 1976) < www.newapproachtofreedom.info >.

Riegel. Private Enterprise Money: A Non-Political Money System (1944) < www.newapproachtofreedom.info >.

John Robb. “THE BAZAAR’S OPEN SOURCE PLATFORM.” Global Guerrillas , September 24, 2004 < globalguerrillas.typepad.com >.

Robb. “Below Replacement Level.” Global Guerrillas , February 20, 2009 < globalguerrillas.typepad.com >.

Robb. “An Entrepreneur’s Approach to Resilient Communities.” Global Guerrillas , February 22, 2010 < globalguerrillas.typepad.com >.

Robb. “Fighting an Automated Bureaucracy.” Global Guerrillas , December 8, 2009 < globalguerrillas.typepad.com >.

Robb. “HOLLOW STATES vs. FAILED STATES.” Global Guerrillas , March 24, 2009 < globalguerrillas.typepad.com >.

Robb. “INFOWAR vs. CORPORATIONS.” Global Guerrillas , October 1, 2009 < globalguerrillas.typepad.com >.

Robb. “Onward to a Hollow State.” Global Guerrillas , September 22, 2009 < globalguerrillas.typepad.com >.

Robb. “Resilient Communities and Scale Invariance.” Global Guerrillas , April 16, 2009 < globalguerrillas.typepad.com >.

Robb. “Resilient Communities: Transition Towns.” Global Guerrillas , April 7, 2008 < globalguerrillas.typepad.com >.

Robb. “STANDING ORDER 8: Self-replicate.” Global Guerrillas , June 3, 2009 < globalguerrillas.typepad.com >.

Robb. “STEMI Compression.” Global Guerrillas , November 12, 2008 < globalguerrillas.typepad.com >.

Robb. “Stigmergic Learning and Global Guerrillas.” Global Guerrillas , July 14, 2004 < globalguerrillas.typepad.com >.

Robb. “SUPER EMPOWERMENT: Hack a Predator Drone.” Global Guerrillas , December 17, 2009 < globalguerrillas.typepad.com >.

Robb. “The Switch to Local Manufacturing.” Global Guerrillas , July 8, 2009 < globalguerrillas.typepad.com >.

Robb. “Viral Resilience.” Global Guerrillas , January 12, 2009 < globalguerrillas.typepad.com >.

Robb. “You Are in Control.” Global Guerrillas , January 3, 2010 < globalguerrillas.typepad.com >.

Andy Robinson. “[p2p research] Berardi essay.” P2P Research email list, May 25, 2009 < listcultures.org >.

Kim Stanley Robinson. Green Mars (New York, Toronto, London, Sydney, Auckland: Bantam Books, 1994).

Nick Robinson. “Even Without a Union, Florida Wal-Mart Workers Use Collective Action to Enforce Rights.” Labor Notes , January 2006. Reproduced at Infoshop, January 3, 2006 < www.infoshop.org >.

Janko Roettgers. “The Pirate Bay: Distributing the World’s Entertainment for $3,000 a Month.” NewTeeVee.Com, July 19, 2009 < newteevee.com >.

Paul M. Romer. “Endogenous Technological Change.” (December 1989). NBER Working Paper No. W3210.

Joseph Romm. “McCain’s Cruel Offshore Drilling Hoax.” CommonDreams.Org , July 11, 2008 < www.commondreams.org >.

David F. Ronfeldt. Tribes, Institutions, Markets, Networks P-7967 (Santa Monica: RAND, 1996) < www.rand.org >.

Ronfeldt and Armando Martinez. “A Comment on the Zapatista Netwar.” in Ronfeldt and Arquilla, In Athena’s Camp: Preparing for Conflict in the Information Age (Santa Monica: Rand, 1997).

Murray N. Rothbard. Power and Market: Government and the Economy (Menlo Park, Calif.: Institute for Humane Studies, Inc., 1970).

Jonathan Rowe. “Entrepreneurs of Cooperation,” Yes! , Spring 2006 < www.yesmagazine.org >.

Jeffrey Rubin. “The New Inflation,” StrategEcon (CIBC World Markets), May 27, 2008 < research.cibcwm.com >.

Rubin. Why Your World is About to Get a Whole Lot Smaller: Oil and the End of Globalization (Random House, 2009).

Rubin and Benjamin Tal. “Will Soaring Transport Costs Reverse Globalization?” StrategEcon , May 27, 2008.

Eric Rumble. “Toxic Shocker.” Up! Magazine , January 1, 2007 < www.up-magazine.com >.

Alan Rusbridger. “First Read: The Mutualized Future is Bright,” Columbia Journalism Review , October 19, 2009 < www.cjr.org >.

Douglas Rushkoff. “How the Tech Boom Terminated California’s Economy,” Fast Company , July 10, 2009 < www.fastcompany.com >.

Charles F. Sabel. “A Real-Time Revolution in Routines.” Charles Hecksher and Paul S. Adler, The Firm as a Collaborative Community: Reconstructing Trust in the Knowledge Economy (New York: Oxford University Press, 2006).

Reihan Salam. “The Dropout Economy.” Time , March 10, 2010 < www.time.com specials/packages/printout/0,29239,1971133_1971110_1971126,00.html>.

Kirkpatrick Sale. Human Scale (New York: Coward, McCann, & Geoghegan, 1980).

Julian Sanchez. “Dammit, Apple,” Notes from the Lounge , June 2, 2008 < www.juliansanchez.com >.

“Say No to Schultz Mansion Purchase” Starbucks Union < www.starbucksunion.org >.

F.M. Scherer and David Ross. Industrial Market Structure and Economic Performance . 3rd ed. (Boston: Houghton Mifflin, 1990).

Ron Scherer. “Number of long-term unemployed hits highest rate since 1948.” Christian Science Monitor , January 8, 2010 < www.csmonitor.com >.

E. F. Schumacher. Good Work (New York, Hagerstown, San Francisco, London: Harper & Row, 1979).

Schumacher. Small is Beautiful: Economics as if People Mattered (New York, Hagerstown, San Francisco, London: Harper & Row, Publishers, 1973).

Joseph Schumpeter. History of Economic Analysis . Edited from manuscript by Elizabeth Boody Schumpeter (New York: Oxford University Press, 1954).

Schumpeter. “Imperialism,” in Imperialism, Social Classes: Two Essays by Joseph Schumpeter . Translated by Heinz Norden. Introduction by Bert Hoselitz (New York: Meridian Books, 1955).

Tom Scotney. “Birmingham Wragge team to focus on online comment defamation.” Birmingham Post , October 28, 2009 < www.birminghampost.net >.

James Scott. Seeing Like a State (New Haven and London: Yale University Press, 1998).

Butler Shaffer. Calculated Chaos: Institutional Threats to Peace and Human Survival (San Francisco: Alchemy Books, 1985).

Shaffer. In Restraint of Trade: The Business Campaign Against Competition, 1918–1938 (Lewisburg: Bucknell University Press, 1997).

Laurence H. Shoup and William Minter. “Shaping a New World Order: The Council on Foreign Relations’ Blueprint for World Hegemony, 1939–1945,” in Holly Sklar, ed., Trilateralism: The Trilateral Commission and Elite Planning for World Management (Boston: South End Press, 1980).

Christian Siefkes. From Exchange to Contributions: Generalizing Peer Production into the Physical World Version 1.01 (Berlin, October 2007).

Siefkes. “[p2p-research] Fwd: Launch of Abundance: The Journal of Post-Scarcity Studies, preliminary plans,” Peer to Peer Research List, February 25, 2009 < listcultures.org / pipermail/p2presearch_listcultures.org/2009-February/001555.html>.

Arthur Silber. “An Evil Monstrosity: Thoughts Upon the Death State.” Once Upon a Time , April 20, 2010 < powerofnarrative.blogspot.com >.

Charles Hugh Smith. “End of Work, End of Affluence III: The Rise of Informal Businesses.” Of Two Minds , December 10, 2009 < www.oftwominds.com >.

Smith. “The Future of Manufacturing in the U.S.” oftwominds , February 5, 2010 < charleshughsmith.blogspot.com >.

Smith. “Globalization and China: Neoliberal Capitalism’s Last ‘Fix’,” Of Two Minds , June 29, 2009 < www.oftwominds.com >.

Smith. “The Travails of Small Business Doom the U.S. Economy,” Of Two Minds , August 17, 2009 < charleshughsmith.blogspot.com >.

Smith. “Trends for 2009: The Rise of Informal Work.” Of Two Minds , December 30, 2009 < www.oftwominds.com >.

Smith. “Unemployment: The Gathering Storm.” Of Two Minds , September 26, 2009 < charleshughsmith.blogspot.com >.

Smith. “Welcome to America’s Lost Decade(s),” Of Two Minds , September 18, 2009 < charleshughsmith.blogspot.com >.

Smith. “What if the (Debt Based) Economy Never Comes Back?” Of Two Minds , July 2, 2009 < www.oftwominds.com >.

Johan Soderberg. Hacking Capitalism: The Free and Open Source Software Movement (New York and London: Routledge, 2008).

“Solar Turbine—Open Source Ecology.” < openfarmtech.org > Accessed January 5, 2009.

Donna St. George. “Pew report shows 50-year high point for multi-generational family households.” Washington Post , March 18, 2010 < www.washingtonpost.com article/2010/03/18/AR2010031804510.html>.

L. S. Stavrianos. The Promise of the Coming Dark Age (San Francisco: W. H. Freeman and Co., 1976).

Barry Stein. Size, Efficiency, and Community Enterprise (Cambridge: Center for Community Economic Development, 1974).

Anton Steinpilz. “Destructive Creation: BuzzMachine’s Jeff Jarvis on Internet Disintermediation and the Rise of Efficiency.” Generation Bubble , June 12, 2009 < generationbubble.com / 2009/ 06/ 12/destructive-creation-buzzmachines-jeff-jarvis-on-internet-disintermediation-and-the-rise-of-efficiency/>.

Neal Stephenson. Snow Crash (Westminster, Md.: Bantam Dell Pub Group, 2000).

Bruce Sterling. “The Power of Design in your exciting new world of abject poverty.” Wired: Beyond the Beyond , February 21, 2010 < www.wired.com >.

“Stigmergy.” Wikipedia < en.wikipedia.org > (accessed September 29, 2009).

Carin Stillstrom and Mats Jackson. “The Concept of Mobile Manufacturing.” Journal of Manufacturing Systems 26:3–4 (July 2007) < www.sciencedirect.com >.

David Streitfeld. “Rock Bottom for Decades, but Showing Signs of Life.” New York Times , February 1, 2009 < www.nytimes.com >.

Dan Strumpf. “Exec Says Toyota Prepared for GM Bankruptcy.” Associated Press , April 8, 2009 < abcnews.go.com >.

Daniel Suarez. Daemon (Signet, 2009).

Suarez. Freedom(TM) (Dutton, 2010).

Kevin Sullivan. “As Economy Plummets, Cashless Bartering Soars on the Internet.” Washington Post , March 14, 2009 < www.washingtonpost.com AR2009031303035_pf.html>.

“Supply Chain News: Walmart Joins Kohl’s in Offering Factoring Program to Apparel Suppliers.” Supply Chain Digest , November 17, 2009 < www.scdigest.com 09-11-17-2.PHP?cid=2954&ctype=conte>.

Vin Suprynowicz. “Schools guarantee there can be no new Washingtons.” Review Journal , February 10, 2008 < www.lvrj.com >.

Paul Sweezy. “Competition and Monopoly.” Monthly Review (May 1981).

Joseph Tainter. The Collapse of Complex Societies (Cambridge, New York, New Rochelle, Melbourne, Sydney: Cambridge University Press, 1988).

Don Tapscott and Anthony D. Williams. Wikinomics: How Mass Collaboration Changes Everything (New York: Portfolio, 2006).

“Telekommunisten: The Revolution is Coming.” < telekommunisten.net > Accessed October 19, 2009.

Clive Thompson. “The Dream Factory.” Wired , September 2005 < www.wired.com / wired/archive/13.09/fablab_pr.html>.

E. P. Thompson. The Making of the English Working Class (New York: Vintage Books, 1963, 1966).

Thoreau. “More on the swarthy threat to our precious carry-on fluids.” Unqualified Offerings , December 26, 2009 < highclearing.com >.

“Torch Table Build.” Open Source Ecology wiki (accessed August 22, 2009) < openfarmtech.org >.

Ted Trainer. “Local Currencies.” (September 4, 2008), The Simpler Way < ssis.arts.unsw.edu.au >.

Trainer. “The Transition Towns Movement; its huge significance and a friendly criticism.” (We) can do better, July 30, 2009 < candobetter.org >.

Trainer. “We Need More Than LETS.” The Simpler Way < ssis.arts.unsw.edu.au D11WeNdMreThLETS2p.html>.

Gul Tuysuz. “An ancient tradition makes a little comeback.” Hurriyet Daily News , January 23, 2009 < www.hurriyet.com.tr >.

Dylan Tweney. “DIY Freaks Flock to ‘Hacker Spaces’ Worldwide.” Wired , March 29, 2009 < www.wired.com >.

“Uh, oh, higher jobless rates could be the new normal.” New York Daily News , October 23, 2009 < www.nydailynews.com >.

United States Participation in the Multilateral Development Banks in the 1980s . Department of the Treasury (Washington, DC: 1982).

Bob Unruh. “Food co-op hit by SWAT raid fights back.” WorldNetDaily , December 24, 2008 < www.wnd.com >.

Unruh. “SWAT raid on food co-op called ‘entrapment’.” WorldNetDaily , December 26, 2008 < www.wnd.com >.

“U.S. Suffering Permanent Destruction of Jobs.” Washington’s Blog , October 5, 2009 < www.washingtonsblog.com >.

Jonathan J. Vaccaro. “The Next Surge—Counterbureaucracy.” New York Times , December 7, 2009 < www.nytimes.com >.

Jeff Vail. “2010—Predictions and Catabolic Collapse.” Rhizome , January 4, 2010 < www.jeffvail.net >.

Vail. “The Design Imperative.” JeffVail.Net , April 8, 2007 < www.jeffvail.net 04/design-imperative.html>.

Vail. “Diagonal Economy 1: Overview.” JeffVail.Net , August 24, 2009 < www.jeffvail.net / 2009/08/diagonal-economy-1-overview.html>.

Vail. “The Diagonal Economy 5: The Power of Networks.” Rhizome , December 21, 2009 < www.jeffvail.net >.

Vail. “Five Geopolitical Feedback-Loops in Peak Oil.” JeffVail.Net , April 23, 2007 < www.jeffvail.net >.

Vail. “Re-Post: Hamlet Economy.” Rhizome , July 28, 2008 < www.jeffvail.net >.

Vail. A Theory of Power (iUniverse, 2004) < www.jeffvail.net >.

Vail. “What is Rhizome?” JeffVail.Net , January 28, 2008 < www.jeffvail.net >.

Lyman P. van Slyke. “Rural Small-Scale Industry in China.” in Richard C. Dorf and Yvonne L. Hunter, eds., Appropriate Visions: Technology, the Environment and the Individual (San Francisco: Boyd & Fraser Publishing Company, 1978).

“Venture Communism.” P2P Foundation Wiki < p2pfoundation.net > (accessed August 8, 2009).

Chris Vernon. “Peak Coal—Coming Soon?” The Oil Drum: Europe , April 5, 2007 < europe.theoildrum.com >.

William Waddell. “But You Can’t Fool All the People All the Time.” Evolving Excellence , August 25, 2009 < www.evolvingexcellence.com >.

Waddell. “The Irrelevance of the Economists.” Evolving Excellence , May 6, 2009 < www.evolvingexcellence.com >.

Waddell and Norman Bodek. The Rebirth of American Industry: A Study of Lean Management (Vancouver, WA: PCS Press, 2005).

“Wal-Mart Nixes ‘Open Availability’ Policy.” Business & Labor Reports (Human Resources section), June 16, 2005 < hr.blr.com >.

Jesse Walker. “The Satellite Radio Blues: Why is XM Sirius on the verge of bankruptcy?” Reason , February 27, 2009 < reason.com >.

Tom Walker. “The Doppelganger Effect.” EconoSpeak , January 2, 2010 < econospeak.blogspot.com doppelg-effect.html>.

Todd Wallack. “Beware if your blog is related to work.” San Francisco Chronicle , January 25, 2005 < www.sfgate.com >.

Immanuel Wallerstein. “Household Structures and Labor-Force Formation in the Capitalist World Economy.” in Joan Smith, Immanuel Wallerstein, Hans-Dieter Evers, eds., Households and the World Economy (Beverly Hills, London, New Delhi: Sage Publications, 1984).

Wallerstein and Joan Smith. “Households as an institution of the world-economy.” in Smith and Wallerstein, eds., Creating and Transforming Households: The constraints of the world-economy (Cambridge; New York; Oakleigh, Victoria; Paris: Cambridge University Press, 1992).

Colin Ward. “Anarchism and the informal economy.” The Raven No. 1 (1987).

Ward. Anarchy in Action (London: Freedom Press, 1982).

“What are we working on?” 100kGarages , January 8, 2010 < blog.100kgarages.com >.

“What is an Ecovillage?” Gaia Trust website < www.gaia.org >.

“What is an Ecovillage?” (sidebar), Agnieszka Komoch, “Ecovillage Enterprise.” Permaculture Magazine No. 32 (Summer 2002).

“What is the relationship between RepRap and Makerbot?” Hacker News < news.ycombinator.com >.

“What’s Digital Fabrication?” 100kGarages website < 100kgarages.com / digital_fabrication.html>.

“What’s Next for 100kGarages?” 100kGarages News , February 10, 2010 < blog.100kgarages.com >.

Shawn Wilbur. “Re: [Anarchy-List] Turnin’ rebellion into money (or not... your choice).” Anarchy list, July 17, 2009 < lists.anarchylist.org >.

Wilbur. “Taking Wing: Corvus Editions.” In the Libertarian Labyrinth , July 1, 2009 < libertarian-labyrinth.blogspot.com >; Corvus Distribution website < www.corvusdistribution.org >.

Wilbur. “Who benefits most economically from state centralization.” In the Libertarian Labyrinth , December 9, 2008 < libertarian-labyrinth.blogspot.com >.

Chris Williams. “Blogosphere shouts ‘I’m Spartacus’ in Usmanov-Murray case: Uzbek billionaire prompts Blog solidarity.” The Register , September 24, 2007 < www.theregister.co.uk >.

William Appleman Williams. The Contours of American History (Cleveland and New York: The World Publishing Company, 1961).

Williams. The Tragedy of American Diplomacy (New York: Dell Publishing Company, 1959, 1962).

Frank N. Wilner. “Give truckers an inch, they’ll take a ton-mile: every liberalization has been a launching pad for further increases—trucking wants long combination vehicle restrictions dropped.” Railway Age , May 1997 < findarticles.com >.

James L. Wilson. “Standard of Living vs. Quality of Life.” The Partial Observer , May 29, 2008 < www.partialobserver.com >.

James P. Womack and Daniel T. Jones. Lean Thinking: Banish Waste and Create Wealth in Your Corporation (Simon & Schuster, 1996).

Womack, Jones, and Daniel Roos. The Machine That Changed the World (New York: Macmillan Publishing Company, 1990).

Nicholas Wood. “The ‘Family Firm’—Base of Japan’s Growing Economy.” The American Journal of Economics and Sociology , vol. 23 no. 3 (1964).

Matthew Yglesias. “The Elusive Post-Bubble Economy,” Yglesias/ThinkProgress.Org , December 22, 2008 < yglesias.thinkprogress.org >.

Yglesias. “The Office Illusion.” Matthew Yglesias , September 1, 2007 < matthewyglesias.theatlantic.com >.

Yglesias. “Too Much Information.” Matthew Yglesias , December 28, 2009 < yglesias.thinkprogress.org >.

Luigi Zingales. “In Search of New Foundations.” The Journal of Finance , vol. lv, no. 4 (August 2000).

Andrea Zippay. “Organic food co-op raid sparks case against health department, ODA.” FarmAndDairy.Com, December 19, 2008 < www.farmanddairy.com >.

Ethan Zuckerman. “Samuel Bowles introduces Kudunomics.” My Heart’s in Accra , November 17, 2009 < www.ethanzuckerman.com >.

[1] Kevin Carson, “Industrial Policy: New Wine in Old Bottles,” C4SS Paper No. 1 (1st Quarter 2009) < c4ss.org >.

[2] Carson, “MOLOCH: Mass Production Industry as a Statist Construct,” C4SS Paper No. 3 (July 2009) < c4ss.org / content/888>; “The Decline and Fall of Sloanism,” C4SS Paper No. 4 (August 2009) < c4ss.org studies>; “The Homebrew Industrial Revolution,” C4SS Paper No. 5 (September 2009) < c4ss.org >; “Resilient Communities and Local Economies,” C4SS Paper No. 6 (4th Quarter 2009) < c4ss.org >; “The Alternative Economy as a Singularity,” C4SS Paper No. 7 (4th Quarter 2009) < c4ss.org >.

[3] Charles Johnson, “Scratching By: How Government Creates Poverty as We Know It,” The Freeman: Ideas on Liberty, December 2007 < www.thefreemanonline.org >.

[4] Johnson comment under Roderick Long, “Amazon versus the Market,” Austro-Athenian Empire, December 13, 2009 < aaeblog.com >.

[5] Jesse Walker, “Five Faces of Jerry Brown,” The American Conservative, November 1, 2009 < www.amconmag.com >.

[6] Lewis Mumford, Technics and Civilization (New York: Harcourt, Brace, and Company, 1934), pp. 14–15.

[7] Ibid., p. 112.

[8] Ibid., p. 68.

[9] Ibid., p. 134.

[10] Ibid., p. 113.

[11] Ibid., pp. 114–115.

[12] Ibid., pp. 159, 161.

[13] Ibid., p. 90.

[14] Ibid., p. 224.

[15] William Waddell and Norman Bodek, The Rebirth of American Industry: A Study of Lean Management (Vancouver, WA: PCS Press, 2005), pp. 119–121.

[16] Mumford, Technics and Civilization, p. 110.

[17] Ibid., pp. 214, 221.

[18] Paul and Percival Goodman, Communitas: Means of Livelihood and Ways of Life (New York: Vintage Books, 1947, 1960), p. 156.

[19] Peter Kropotkin, Fields, Factories and Workshops or Industry Combined with Agriculture and Brain Work with Manual Work (New York: Greenwood Press, Publishers, 1968 [1898]), pp. 154, 179–180.

[20] William Morris, News From Nowhere: or, An Epoch of Rest (1890). Marxists.Org online text < www.marxists.org >.

[21] Mumford, Technics and Civilization, pp. 224–225.

[22] In the case of flour, according to Borsodi, the cost of custom-milled flour from a local mill was about half that of flour from a giant mill in Minneapolis, and flour from a small electric household mill was cheaper still. Prosperity and Security: A Study in Realistic Economics (New York and London: Harper & Brothers Publishers, 1938), pp. 178–181.

[23] Ibid., pp. 388–389.

[24] Mumford, Technics and Civilization, pp. 258–259.

[25] Mumford, Technics and Civilization, p. 212.

[26] Ralph Borsodi, This Ugly Civilization (Philadelphia: Porcupine Press, 1929, 1975), p. 65.

[27] Mumford, Technics and Civilization, p. 118.

[28] Ibid., p. 118.

[29] Ibid., p. 143.

[30] Lewis Mumford, The City in History: Its Transformations, and Its Prospects (New York: Harcourt, Brace, & World, Inc., 1961), pp. 333–34.

[31] Borsodi, This Ugly Civilization, pp. 60–61.

[32] Borsodi, Prosperity and Security, p. 182.

[33] Michael J. Piore and Charles F. Sabel, The Second Industrial Divide: Possibilities for Prosperity (New York: HarperCollins, 1984), pp. 4–6, 19.

[34] Mumford, Technics and Civilization, pp. 212–13.

[35] Ibid., p. 215.

[36] Ibid., p. 236.

[37] Ibid., p. 264.

[38] Ibid., p. 265.

[39] Ibid., p. 266.

[40] Ibid., p. 267.

[41] Ibid., p. 265.

[42] Ibid., p. 264.

[43] Alfred D. Chandler, Jr., The Visible Hand: The Managerial Revolution in American Business (Cambridge and London: The Belknap Press of Harvard University Press, 1977), p. 8.

[44] Ibid., p. 11.

[45] William Lazonick, Business Organization and the Myth of the Market Economy (Cambridge: Cambridge University Press, 1991), pp. 198–226.

[46] Chandler, The Visible Hand, p. 79.

[47] Ibid., pp. 79, 96–121.

[48] Ibid., p. 209.

[49] Ibid., p. 235.

[50] Ibid., p. 240.

[51] Ivan Illich, “The Three Dimensions of Public Opinion,” in The Mirror of the Past: Lectures and Addresses, 1978–1990 (New York and London: Marion Boyars, 1992), p. 84; Tools for Conviviality (New York, Evanston, San Francisco, London: Harper & Row, 1973), pp. xxii-xxiii, 1–2, 3, 6–7, 84–85; Disabling Professions (New York and London: Marion Boyars, 1977), p. 28.

[52] Chandler, The Visible Hand, p. 215.

[53] Ibid., p. 363.

[54] Ibid., p. 287.

[55] Ibid., p. 376.

[56] Piore and Sabel, pp. 66–67.

[57] Matthew Josephson, The Robber Barons: The Great American Capitalists 1861–1901 (New York: Harcourt, Brace & World, Inc., 1934, 1962), pp. 77–78.

[58] Murray N. Rothbard, Power and Market: Government and the Economy (Menlo Park, Calif.: Institute for Humane Studies, Inc., 1970), p. 70.

[59] Josephson, pp. 83–84.

[60] Piore and Sabel, pp. 66–67.

[61] Josephson, pp. 250–251.

[62] Ibid., p. 253.

[63] Ibid., p. 265.

[64] Ibid., p. 251.

[65] Ibid., p. 252.

[66] David F. Noble, America by Design: Science, Technology, and the Rise of Corporate Capitalism (New York: Alfred A. Knopf, 1977), p. 5.

[67] Ibid., p. 9.

[68] Ibid., pp. 9–10.

[69] Ibid., pp. 11–12.

[70] Ibid., p. 12.

[71] Ibid., p. 12.

[72] Ibid., p. 91.

[73] Ibid., p. 92.

[74] Ibid., pp. 93–94.

[75] Alfred Chandler, Jr., Inventing the Electronic Century (New York: The Free Press, 2001).

[76] Noble, America by Design, p. 16.

[77] Ibid., p. 91.

[78] Ibid., p. 89.

[79] Ibid., p. 95.

[80] Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston, New York, London: Little, Brown, and Company, 1999), p. 81.

[81] Lewis Mumford, Technics and Civilization (New York: Harcourt, Brace, and Company, 1934), pp. 396–397.

[82] Michael J. Piore and Charles F. Sabel, The Second Industrial Divide: Possibilities for Prosperity (New York: HarperCollins, 1984), p. 50.

[83] Ibid., p. 49.

[84] Ibid., p. 54.

[85] Ibid., p. 15.

[86] Ralph Borsodi, This Ugly Civilization (Philadelphia: Porcupine Press, 1929, 1975), pp. 64–65.

[87] Ibid., p. 126.

[88] “Manufacture Goods, Not Needs,” E. F. Schumacher Society Blog, October 11, 2009 < efssociety.blogspot.com >.

[89] John Kenneth Galbraith, The New Industrial State (New York: Signet Books, 1967), p. 16.

[90] Ibid., p. 28.

[91] Ibid., p. 31.

[92] Ibid., pp. 34–35.

[93] Ibid., pp. 210–212.

[94] Alfred D. Chandler, Jr., The Visible Hand: The Managerial Revolution in American Business (Cambridge and London: The Belknap Press of Harvard University Press, 1977), p. 6.

[95] Ibid., pp. 6–7.

[96] Ibid., p. 241.

[97] Ibid., p. 287.

[98] Ibid., p. 244.

[99] Ibid., p. 412.

[100] William H. Waddell and Norman Bodek, Rebirth of American Industry: A Study of Lean Management (Vancouver, WA: PCS Press, 2005), p. 75.

[101] Ibid., p. 140.

[102] William Waddell, “The Irrelevance of the Economists,” Evolving Excellence, May 6, 2009 < www.evolvingexcellence.com >. Paul T. Kidd anticipated much of Waddell’s and Bodek’s criticism in Agile Manufacturing: Forging New Frontiers (Wokingham, England; Reading, Mass.; Menlo Park, Calif.; New York; Don Mills, Ontario; Amsterdam; Bonn; Sydney; Singapore; Tokyo; Madrid; San Juan; Paris; Mexico City; Seoul; Taipei: Addison-Wesley Publishing Company, 1994), especially Chapter Four.

[103] Waddell and Bodek, p. 98.

[104] Raphael Kaplinsky, “From Mass Production to Flexible Specialization: A Case Study of Microeconomic Change in a Semi-Industrialized Economy,” World Development 22:3 (March 1994), p. 346.

[105] Waddell and Bodek, p. 122.

[106] Ibid., p. 119.

[107] Ibid., p. xx.

[108] Ibid., pp. 112–114.

[109] Lovins et al., Natural Capitalism, pp. 129–30.

[110] Waddell and Bodek, pp. 89, 92.

[111] Ibid., pp. 122–123.

[112] Ibid., p. 39.

[113] Hawken et al., pp. 129–130.

[114] Ibid., pp. 128–129.

[115] Ibid., p. 129.

[116] James P. Womack and Daniel T. Jones, Lean Thinking: Banish Waste and Create Wealth in Your Corporation (New York: Simon and Schuster, 1996), p. 60.

[117] Lovins et al., Natural Capitalism, p. 127.

[118] James P. Womack, Daniel T. Jones, Daniel Roos, The Machine That Changed the World (New York: Macmillan Publishing Company, 1990), p. 80.

[119] Mumford, Technics and Civilization, p. 196.

[120] Michael Parenti, “Capitalism’s Self-Inflicted Apocalypse,” Common Dreams, January 21, 2009 < www.commondreams.org >.

[121] Mumford, Technics and Civilization, p. 347.

[122] Ibid., p. 241.

[123] F.M. Scherer and David Ross, Industrial Market Structure and Economic Performance. 3rd ed. (Boston: Houghton Mifflin, 1990), p. 97.

[124] Barry Stein, Size, Efficiency, and Community Enterprise (Cambridge: Center for Community Economic Development, 1974), p. 41.

[125] Ibid., p. 43.

[126] Ibid., p. 44.

[127] Ibid., p. 58.

[128] Galbraith, The New Industrial State, p. 37.

[129] See Kevin Carson, Organization Theory: A Libertarian Perspective (Booksurge, 2008), Chapter Four.

[130] Galbraith, New Industrial State, p. 38.

[131] Ibid., p. 39.

[132] Ibid., pp. 50–51.

[133] Martin Hellwig, “On the Economics and Politics of Corporate Finance and Corporate Control,” in Xavier Vives, ed., Corporate Governance: Theoretical and Empirical Perspectives (Cambridge: Cambridge University Press, 2000), pp. 100–101.

[134] Ralph Estes, Tyranny of the Bottom Line: Why Corporations Make Good People Do Bad Things (San Francisco: Berrett-Koehler Publishers, 1996), p. 51.

[135] Hellwig, pp. 101–102, 113.

[136] Doug Henwood, Wall Street: How It Works and for Whom (London and New York: Verso, 1997), p. 3.

[137] Piore and Sabel, pp. 70–71.

[138] Hellwig, pp. 114–115.

[139] Ibid., p. 117.

[140] Henwood, Wall Street, pp. 154–155.

[141] Galbraith, The New Industrial State, pp. 39–40.

[142] Ibid., pp. 41–42.

[143] Piore and Sabel, p. 58.

[144] Ibid., p. 65.

[145] Ibid., p. 132.

[146] Paul Baran and Paul Sweezy, Monopoly Capitalism: An Essay in the American Economic and Social Order (New York: Monthly Review Press, 1966), pp. 93–94.

[147] Paul Goodman, People or Personnel, in People or Personnel and Like a Conquered Province (New York: Vintage Books, 1964, 1966), p. 58.

[148] Gabriel Kolko. The Triumph of Conservatism: A Reinterpretation of American History 1900–1916 (New York: The Free Press of Glencoe, 1963), p. 3.

[149] Chandler, The Visible Hand, p. 316.

[150] Ibid., p. 331.

[151] Paul Sweezy. “Competition and Monopoly,” Monthly Review (May 1981), pp. 1–16.

[152] Kolko, Triumph of Conservatism.

[153] Ibid., p. 5.

[154] Ibid., p. 58.

[155] Ibid., p. 129.

[156] Ibid., pp. 98–108. In the 1880s, repeated scandals involving tainted meat had resulted in U.S. firms being shut out of several European markets. The big packers had turned to the government to inspect exported meat. By organizing this function jointly, through the state, they removed quality inspection as a competitive issue between them, and the government provided a seal of approval in much the same way a trade association would. The problem with this early inspection regime was that only the largest packers were involved in the export trade, which gave a competitive advantage to the small firms that supplied only the domestic market. The main effect of Roosevelt’s Meat Inspection Act was to bring the small packers into the inspection regime, and thereby end the competitive disability it imposed on large firms. Upton Sinclair simply served as an unwitting shill for the meat-packing industry.

[157] Butler Shaffer, Calculated Chaos: Institutional Threats to Peace and Human Survival (San Francisco: Alchemy Books, 1985), p. 143.

[158] Associated Press, “U.S. government fights to keep meatpackers from testing all slaughtered cattle for mad cow,” International Herald-Tribune, May 29, 2007 < www.iht.com >. “Monsanto Declares War on ‘rBGH-free’ Dairies,” April 3, 2007 (reprint of Monsanto press release by Organic Consumers Association) < www.organicconsumers.org >. “Pa. bars hormone-free milk labels,” USA Today, November 13, 2007 < www.usatoday.com >.

[159] Kolko, The Triumph of Conservatism, p. 268.

[160] Ibid., p. 275.

[161] Butler Shaffer, In Restraint of Trade: The Business Campaign Against Competition, 1918–1938 (Lewisburg: Bucknell University Press, 1997).

[162] Ibid., pp. 82–84.

[163] Kolko, Triumph of Conservatism, p. 287.

[164] James O’Connor, The Fiscal Crisis of the State (New York: St. Martin’s Press, 1973), pp. 6–7.

[165] Ibid., p. 24.

[166] Ibid., p. 24.

[167] Paul Baran and Paul Sweezy, Monopoly Capitalism: An Essay in the American Economic and Social Order (New York: Monthly Review Press, 1966), p. 108.

[168] Ibid., pp. 128–129.

[169] Ibid., p. 131.

[170] Stuart Ewen, Captains of Consciousness: Advertising and the Social Roots of Consumer Culture (New York: McGraw-Hill, 1976), pp. 163, 171–172.

[171] Jeffrey Kaplan, “The Gospel of Consumption: And the better future we left behind,” Orion, May/June 2008 < www.orionmagazine.org >.

[172] John Hagel III, John Seely Brown, and Lang Davison, The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion, quoted in JP Rangaswami, “Thinking about predictability: More musings about Push and Pull,” Confused of Calcutta, May 4, 2010 < confusedofcalcutta.com >.

[173] Paul and Percival Goodman, Communitas: Means of Livelihood and Ways of Life (New York: Vintage Books, 1947, 1960), pp. 188–89.

[174] Eric Rumble, “Toxic Shocker,” Up! Magazine, January 1, 2007 < www.up-magazine.com >.

[175] Baran and Sweezy, Monopoly Capital, p. 124.

[176] Ralph Borsodi, The Distribution Age (New York and London: D. Appleton and Company, 1929), pp. 217, 228.

[177] Ibid., p. 110.

[178] Quoted in Ibid., pp. 160–61.

[179] Ibid., p. v.

[180] Ibid., p. 4.

[181] Ibid., pp. 112–113.

[182] Ibid., p. 136.

[183] Ibid., p. 247.

[184] Ibid., pp. 83–84.

[185] Ibid., p. 84.

[186] Ibid., p. 162.

[187] Ibid., pp. 216–17.

[188] Stein, Size, Efficiency, and Community Enterprise, p. 79.

[189] Advertising and Selling Fortnightly, February 25, 1925, in Borsodi, The Distribution Age, pp. 159–60.

[190] Stuart Chase and F. J. Schlink, The New Republic, December 30, 1925, in Ibid., p. 204.

[191] Naomi Klein, No Logo (New York: Picador, 1999), p. 14.

[192] Chandler, The Visible Hand, p. 411.

[193] Borsodi, The Distribution Age, pp. 42–43.

[194] William Appleman Williams, The Tragedy of American Diplomacy (New York: Dell Publishing Company, 1959, 1962), pp. 21–22.

[195] Williams, The Contours of American History (Cleveland and New York: The World Publishing Company, 1961).

[196] Laurence H. Shoup and William Minter, “Shaping a New World Order: The Council on Foreign Relations’ Blueprint for World Hegemony, 1939–1945,” in Holly Sklar, ed., Trilateralism: The Trilateral Commission and Elite Planning for World Management (Boston: South End Press, 1980), pp. 135–56.

[197] “Now the price that brings the maximum monopoly profit is generally far above the price that would be fixed by fluctuating competitive costs, and the volume that can be marketed at that maximum price is generally far below the output that would be technically and economically feasible.... [The trust] extricates itself from this dilemma by producing the full output that is economically feasible, thus securing low costs, and offering in the protected domestic market only the quantity corresponding to the monopoly price—insofar as the tariff permits; while the rest is sold, or “dumped,” abroad at a lower price....” --Joseph Schumpeter, “Imperialism,” in Imperialism, Social Classes: Two Essays by Joseph Schumpeter. Translated by Heinz Norden. Introduction by Bert Hoselitz (New York: Meridian Books, 1955), pp. 79–80. Joseph Stromberg, by the way, did an excellent job of integrating this thesis, generally identified with the historical revisionism of the New Left, into the theoretical framework of Mises and Rothbard, in “The Role of State Monopoly Capitalism in the American Empire,” Journal of Libertarian Studies Volume 15, no. 3 (Summer 2001), pp. 57–93. Available online at < www.mises.org >.

[198] Gabriel Kolko, Confronting the Third World: United States Foreign Policy 1945–1980 (New York: Pantheon Books, 1988), p. 120.

[199] United States Participation in the Multilateral Development Banks in the 1980s. Department of the Treasury (Washington, DC: 1982), p. 9.

[200] L. S. Stavrianos, The Promise of the Coming Dark Age (San Francisco: W. H. Freeman and Co., 1976), p. 42.

[201] Baran and Sweezy, pp. 146–147.

[202] Ibid., p. 219.

[203] George Orwell, 1984. Signet Classics reprint (New York: Harcourt Brace Jovanovich, 1949, 1981), p. 157.

[204] Baran and Sweezy, pp. 173–174.

[205] Jim Motavalli, “Getting Out of Gridlock: Thanks to the Highway Lobby, Now We’re Stuck in Traffic. How Do We Escape?” E Magazine, March/April 2002 < www.emagazine.com >.

[206] Mike Ferner, “Taken for a Ride on the Interstate Highway System,” MRZine (Monthly Review) June 28, 2006 < mrzine.monthlyreview.org >.

[207] Justin Fox, “The Great Paving: How the Interstate Highway System helped create the modern economy--and reshaped the FORTUNE 500.” Reprinted from Fortune. CNNMoney.Com, January 26, 2004 < money.cnn.com >.

[208] Edwin Black, “Hitler’s Carmaker: How Will Posterity Remember General Motors’ Conduct? (Part 4)” History News Network, May 14, 2007 < hnn.us >.

[209] Ferner, “Taken for a Ride.”

[210] Ibid.

[211] Frank N. Wilner, “Give truckers an inch, they’ll take a ton-mile: every liberalization has been a launching pad for further increases — trucking wants long combination vehicle restrictions dropped,” Railway Age, May 1997 < findarticles.com >.

[212] David S. Lawyer, “Are Roads and Highways Subsidized?” March 2004 < www.lafn.org >.

[213] James Coston, Amtrak Reform Council, 2001, in “America’s long history of subsidizing transportation” < www.trainweb.org >.

[214] Frank Kofsky, Harry Truman and the War Scare of 1948, (New York: St. Martin’s Press, 1993).

[215] Noble, America by Design, pp. 6–7.

[216] Charles Nathanson, “The Militarization of the American Economy,” in David Horowitz, ed., Corporations and the Cold War (New York and London: Monthly Review Press, 1969), p. 214.

[217] David F. Noble, Forces of Production: A Social History of American Automation (New York: Alfred A. Knopf, 1984), pp. 5–6.

[218] Ibid., p. 6.

[219] Baran and Sweezy, Monopoly Capitalism, p. 220.

[220] “The Militarization of the American Economy,” p. 208.

[221] Ibid., p. 230.

[222] Ibid., p. 230.

[223] Ibid., pp. 222–25.

[224] Noble, Forces of Production, p. 5.

[225] Ibid., p. 7.

[226] Ibid., pp. 7–8.

[227] Ibid., pp. 47–48.

[228] Ibid., p. 50.

[229] Ibid., p. 52.

[230] Ibid., pp. 8–9.

[231] Ibid., p. 47.

[232] Ibid., pp. 48–49.

[233] Ibid., pp. 60–61.

[234] Ibid., p. 213.

[235] Nathanson, “The Militarization of the American Economy,” p. 208.

[236] Seymour Melman, The Permanent War Economy: American Capitalism in Decline (New York: Simon and Schuster, 1974), p. 11.

[237] Ibid., p. 21.

[238] Paul Mattick, “The Economics of War and Peace,” Dissent (Fall 1956), p. 377.

[239] Ibid., pp. 378–379.

[240] Chandler, The Visible Hand, p. 487.

[241] William Lazonick, Business Organization and the Myth of the Market Economy (Cambridge: Cambridge University Press, 1991).

[242] Alfred D. Chandler, Jr., Inventing the Electronic Century (New York: The Free Press, 2001), pp. 13–49.

[243] Alan Cooper’s The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity (Indianapolis: Sams, 1999) is an excellent survey of the tendency of American industry to produce gold-plated turds without regard to the user.

[244] Quoted in Stein, Size, Efficiency, and Community Enterprise, p. 55.

[245] John Gall, Systemantics: How Systems Work and Especially How They Fail (New York: Pocket Books, 1975), p. 74.

[246] Alfred Chandler, Scale and Scope: The Dynamics of Industrial Capitalism (Cambridge and London: The Belknap Press of Harvard University Press, 1990), p. 262.

[247] Paul Goodman, People or Personnel, pp. 114–115.

[248] Ibid., pp. 94–122.

[249] Ibid., pp. 102–104.

[250] Ibid., pp. 107–110.

[251] Ibid., pp. 110–111.

[252] Ibid., p. 105.

[253] Ibid., p. 106; “Black Mountain College,” Wikipedia < en.wikipedia.org > (captured March 30, 2009).

[254] Janko Roettgers, “The Pirate Bay: Distributing the World’s Entertainment for $3,000 a Month,” NewTeeVee.Com, July 19, 2009 < newteevee.com >.

[255] Ivan Illich, Tools for Conviviality (New York, Evanston, San Francisco, London: Harper & Row, 1973), pp. 52–53.

[256] Illich, Energy and Equity (1973), Chapter Six (online edition courtesy of Ira Woodhead and Frank Keller) < www.cogsci.ed.ac.uk >.

[257] Illich, Tools for Conviviality, p. 54.

[258] Illich, Vernacular Values (1980), “Part One: The Three Dimensions of Social Choice,” online edition courtesy of The Preservation Institute < www.preservenet.com >.

[259] Leopold Kohr, The Overdeveloped Nations: The Diseconomies of Scale (New York: Schocken Books, 1978, 1979), pp. 27–28.

[260] Goodman, Compulsory Miseducation, in Compulsory Miseducation and The Community of Scholars (New York: Vintage Books, 1964, 1966), p. 108.

[261] Illich, Disabling Professions (New York and London: Marion Boyars, 1977), p. 28.

[262] Illich, Tools for Conviviality, p. 9.

[263] E. F. Schumacher, Small is Beautiful: Economics as if People Mattered (New York, Hagerstown, San Francisco, London: Harper & Row, Publishers, 1973), p. 38.

[264] Goodman, People or Personnel, p. 70.

[265] Ibid., p. 70.

[266] Ibid., p. 120.

[267] Goodman, The Community of Scholars, in Compulsory Miseducation and The Community of Scholars, p. 241.

[268] Goodman, People or Personnel, p. 120.

[269] Ibid., p. 117.

[270] Kenneth Boulding, Beyond Economics (Ann Arbor: University of Michigan Press, 1968), p. 75.

[271] Kohr, The Overdeveloped Nations, pp. 36–37.

[272] Thomas Hodgskin, Popular Political Economy: Four Lectures Delivered at the London Mechanics’ Institution (London: Printed for Charles and William Tait, Edinburgh, 1827), pp. 33–34.

[273] Kevin Carson, Studies in Mutualist Political Economy (Blitzprint, 2004), p. 79.

[274] Maurice Dobb, Political Economy and Capitalism: Some Essays in Economic Tradition, 2nd rev. ed. (London: Routledge & Kegan Paul Ltd, 1940, 1960), p. 66.

[275] Thorstein Veblen, The Place of Science in Modern Civilization and other Essays, p. 352, in John R. Commons, Institutional Economics (New York: Macmillan, 1934), p. 664.

[276] Matthew B. Crawford, “Shop Class as Soulcraft,” The New Atlantis, Number 13, Summer 2006, pp. 7–24 < www.thenewatlantis.com >.

[277] Julian Sanchez, “Dammit, Apple,” Notes from the Lounge, June 2, 2008 < www.juliansanchez.com >.

[278] Eric Hunting, “On Defining a Post-Industrial Style (1): from Industrial blobjects to post-industrial spimes,” P2P Foundation Blog, November 2, 2009 < blog.p2pfoundation.net >.

[279] Daisy Nguyen, “High tech vehicles pose trouble for some mechanics,” North County Times, December 26, 2009 < nctimes.com >.

[280] Mike Masnick, “How Automakers Abuse Intellectual Property Laws to Force You to Pay More For Repairs,” Techdirt, December 29, 2009 < techdirt.com >.

[281] Tom Peters, The Tom Peters Seminar: Crazy Times Call for Crazy Organizations (New York: Vintage Books, 1999), p. 10.

[282] Ibid., pp. 10–11.

[283] Ibid., p. 11.

[284] Ibid., p. 12.

[285] Michael Perelman, “The Political Economy of Intellectual Property,” Monthly Review, January 2003 < www.monthlyreview.org >.

[286] Kathryn Geurin, “Toybox Outlaws,” Metroland Online, January 29, 2009 < www.metroland.net >.

[287] Kathleen Fasanella, “IP Update: DPPA & Fashion Law Blog,” Fashion Incubator, March 10, 2010 < www.fashion-incubator.com >.

[288] Quoted by Charles Hugh Smith, in “The Travails of Small Business Doom the U.S. Economy,” Of Two Minds, August 17, 2009 < charleshughsmith.blogspot.com >.

[289] Jeff Quackenbush, Jessica Puchala, “Middleville woman threatened with fines for watching neighbors’ kids,” WZZM13.Com, September 24, 2009 < www.wzzm13.com >.

[290] Vin Suprynowicz, “Schools guarantee there can be no new Washingtons,” Review Journal, February 10, 2008 < www.lvrj.com >.

[291] Bob Unruh, “Food co-op hit by SWAT raid fights back,” WorldNetDaily, December 24, 2008 < www.wnd.com >.

[292] Bob Unruh, “SWAT raid on food co-op called ‘entrapment’,” WorldNetDaily, December 26, 2008 < www.wnd.com >. See also Andrea Zippay, “Organic food co-op raid sparks case against health department, ODA,” FarmAndDairy.Com, December 19, 2008 < www.farmanddairy.com >.

[293] Roderick Long, “Free Market Firms: Smaller, Flatter, and More Crowded,” Cato Unbound, Nov. 25, 2008 < www.cato-unbound.org >.

[294] John Curl, For All the People: Uncovering the Hidden History of Cooperation, Cooperative Movements, and Communalism in America (Oakland, CA: PM Press, 2009).

[295] Paul Baran and Paul Sweezy, Monopoly Capital: An Essay in the American Economic and Social Order (New York: Monthly Review Press, 1966) p. 240.

[296] William Waddell and Norman Bodek, Rebirth of American Industry: A Study of Lean Management (Vancouver, WA: PCS Press, 2005) p. 94.

[297] Harry Magdoff and Paul M. Sweezy, “Capitalism and the Distribution of Income and Wealth,” Magdoff and Sweezy, The Irreversible Crisis: Five Essays by Harry Magdoff and Paul M. Sweezy (New York: Monthly Review Press, 1988), p. 38.

[298] John F. Walker and Harold G. Vatter, “Stagnation—Performance and Policy: A Comparison of the Depression Decade with 1973–1974,” Journal of Post Keynesian Economics, Summer 1986, in Magdoff and Sweezy, “Stagnation and the Financial Explosion,” Magdoff and Sweezy, The Irreversible Crisis, pp. 12–13.

[299] Magdoff and Sweezy, “Stagnation and the Financial Explosion,” p. 13.

[300] Piore and Sabel, The Second Industrial Divide, p. 184.

[301] Magdoff and Sweezy, “Capitalism and the Distribution of Income and Wealth,” p. 31.

[302] Ibid., p. 39.

[303] Martin Hellwig, “On the Economics and Politics of Corporate Finance and Corporate Control,” in Xavier Vives, ed., Corporate Governance: Theoretical and Empirical Perspectives (Cambridge: Cambridge University Press, 2000), pp. 114–115.

[304] Magdoff and Sweezy, “Capitalism and the Distribution of Income and Wealth,” p. 32.

[305] Ibid., p. 33.

[306] Walden Bello, “A Primer on Wall Street Meltdown,” MR Zine, October 3, 2008 < mrzine.monthlyreview.org >.

[307] Ibid.

[308] Walden Bello, “Asia: The Coming Fury,” Asia Times Online, February 11, 2009 < www.atimes.com >.

[309] Joshua Holland, “The Spectacular, Sudden Crash of the Global Economy,” Alternet, February 24, 2009 < www.alternet.org >.

[310] Walden Bello, “Can China Save the World from Depression?” Counterpunch, May 27, 2009 < www.counterpunch.org >.

[311] John Bellamy Foster and Fred Magdoff, “Financial Implosion and Stagnation: Back to the Real Economy,” Monthly Review, December 2008 < www.monthlyreview.org >.

[312] Magdoff and Sweezy, “Stagnation and the Financial Explosion,” pp. 13–14.

[313] Joshua Holland, “Let the Banks Fail: Why a Few of the Financial Giants Should Crash,” Alternet, December 15, 2008 < www.alternet.org >.

[314] Magdoff and Sweezy, “Stagnation and the Financial Explosion,” p. 23.

[315] Charles Hugh Smith, “Globalization and China: Neoliberal Capitalism’s Last ‘Fix’,” Of Two Minds, June 29, 2009 < www.oftwominds.com >.

[316] Barry Eichengreen and Kevin H. O’Rourke, “A Tale of Two Depressions,” VoxEU.Org, June 4, 2009 < www.voxeu.org >.

[317] Paul Krugman, “Averting the Worst,” New York Times, August 9, 2009 < www.nytimes.com >.

[318] Karl Denninger, “GDP: Uuuuggghhhh – UPDATED,” The Market Ticker, July 31, 2009 < market-ticker.denninger.net >.

[319] Cassander, “It’s Hard Being a Bear (Part Three): Good Economic History,” Steve Keen’s Debtwatch, September 5, 2009 < www.debtdeflation.com >.

[320] “October 30 2009: An interview with Stoneleigh — The case for deflation,” The Automatic Earth < theautomaticearth.blogspot.com >.

[321] Walden Bello, “Keynes: A Man for This Season?” Share the World’s Resources, July 9, 2009 < www.stwr.org >.

[322] James Kunstler, “Note: Hope = Truth,” Clusterfuck Nation, April 20, 2009 < jameshowardkunstler.typepad.com >.

[323] Michael Hudson, “What Wall Street Wants,” Counterpunch, February 11, 2009 < www.counterpunch.org > (see also expanded version, “Obama’s Awful Financial Recovery Plan,” Counterpunch, February 12, 2009 < www.counterpunch.org >).

[324] Charles Hugh Smith, “Welcome to America’s Lost Decade(s),” Of Two Minds, September 18, 2009 < charleshughsmith.blogspot.com >.

[325] David Rosenberg, Lunch with Dave, September 4, 2009 < www.scribd.com >.

[326] Paul Krugman, “Double dip warning,” Paul Krugman Blog, New York Times, Dec. 1, 2009 < krugman.blogs.nytimes.com >.

[327] Paul Krugman, “Life Without Bubbles,” New York Times, January 6, 2009 < www.nytimes.com >.

[328] Despite exuberance in the press over Cash for Clunkers, auto sales went flat—in fact reaching a low for the year—as soon as the program ended. Associated Press, “Retail sales fall after Cash for Clunkers ends,” MSNBC, October 14, 2009 < www.msnbc.msn.com >.

[329] Paul Krugman, “Use, Delay, and Obsolescence,” The Conscience of a Liberal, February 13, 2009 < krugman.blogs.nytimes.com >.

[330] John Robb, “Below Replacement Level,” Global Guerrillas, February 20, 2009 < globalguerrillas.typepad.com >.

[331] Peter Kirwan, “Bad News: What if the money’s not coming back?” Wired.Co.Uk, August 7, 2009 < www.wired.co.uk >.

[332] Richard Florida, “Are Bailouts Saving the U.S. from a New Great Depression?” Creative Class, March 18, 2009 < www.creativeclass.com >.

[333] Ellen Byron, “Tide Turns ‘Basic’ for P&G in Slump,” WSJ online, August 6, 2009 < online.wsj.com >; in William Waddell, “But You Can’t Fool All the People All the Time,” Evolving Excellence, August 25, 2009 < www.evolvingexcellence.com >.

[334] Naomi Klein, No Logo (New York: Picador, 2000, 2002), pp. 12–14.

[335] Matthew Yglesias, “The Elusive Post-Bubble Economy,” Yglesias/ThinkProgress.Org, December 22, 2008 < yglesias.thinkprogress.org >.

[336] David Gordon, “Stages of Accumulation and Long Economic Cycles,” in Terence K. Hopkins and Immanuel Wallerstein, eds., Processes of the World-System (Beverly Hills, Calif.: Sage, 1980), pp. 9–45.

[337] Michel Bauwens, “Conditions for the Next Long Wave,” P2P Foundation Blog, May 28, 2009 < blog.p2pfoundation.net >.

[338] Greenspan remarks from 1980, quoted by Magdoff and Sweezy, “The Great Malaise,” in Magdoff and Sweezy, The Irreversible Crisis, pp. 58–60.

[339] Joshua Cooper Ramo, “Jobless in America: Is Double-Digit Unemployment Here to Stay?” Time, September 11, 2009 < www.time.com >.

[340] Brad DeLong, “Another Bad Employment Report (I-Wish-We-Had-a-Ripcord-to-Pull Department),” Grasping Reality with All Eight Tentacles, October 2, 2009 < delong.typepad.com >.

[341] Ibid.

[342] “U.S. Suffering Permanent Destruction of Jobs,” Washington’s Blog, October 5, 2009 < www.washingtonsblog.com >

[343] “Long-Term Unemployment,” Economist’s View, November 9, 2009 < economistsview.typepad.com >.

[344] Ron Scherer, “Number of long-term unemployed hits highest rate since 1948,” Christian Science Monitor, January 8, 2010 < www.csmonitor.com >.

[345] Quiddity, “Job-loss recovery,” uggabugga, October 25, 2009 < uggabugga.blogspot.com >.

[346] DeLong, “Jobless Recovery: Quiddity Misses the Point,” J. Bradford DeLong’s Grasping Reality with All Eight Tentacles, October 25, 2009 < delong.typepad.com >.

[347] Ezra Klein, “A Fast Recovery? Or a Slow One?” Washington Post, April 14, 2010 < voices.washingtonpost.com >.

[348] Neil Irwin, “Economic data don’t point to boom times just yet,” Washington Post, April 13, 2010 < www.washingtonpost.com >.

[349] Harry Magdoff and Paul Sweezy, The End of Prosperity: The American Economy in the 1970s (New York and London: Monthly Review Press, 1977), pp. 95, 120–121.

[350] Ibid., p. 96.

[351] Smith, “Unemployment: The Gathering Storm,” Of Two Minds, September 26, 2009 < charleshughsmith.blogspot.com >.

[352] “Uh, oh, higher jobless rates could be the new normal,” New York Daily News, October 23, 2009 < www.nydailynews.com >.

[353] “Carter Doctrine,” Wikipedia, accessed December 23, 2009 < en.wikipedia.org >.

[354] Rob Hopkins, The Transition Handbook: From Oil Dependency to Local Resilience (Totnes: Green Books, 2008), p. 23.

[355] Chris Vernon, “Peak Coal—Coming Soon?” The Oil Drum: Europe, April 5, 2007 < europe.theoildrum.com >.

[356] Ibid.

[357] Richard Heinberg, Peak Everything: Waking Up to the Century of Declines (Gabriola Island, B.C.: New Society Publishers, 2007), p. 12.

[358] Joseph Romm, “McCain’s Cruel Offshore Drilling Hoax,” CommonDreams.Org, July 11, 2008 < www.commondreams.org >.

[359] Richard Heinberg, Powerdown (Gabriola Island, British Columbia: New Society Publishers, 2004), pp. 27–28.

[360] Jeff Vail, “Five Geopolitical Feedback-Loops in Peak Oil,” JeffVail.Net, April 23, 2007 < www.jeffvail.net >.

[361] Hopkins, The Transition Handbook, p. 22.

[362] Jeff Rubin, Why Your World is About to Get a Whole Lot Smaller: Oil and the End of Globalization (Random House, 2009), p. 220.

[363] Warren Johnson, Muddling Toward Frugality (San Francisco: Sierra Club Books, 1978).

[364] James Howard Kunstler, The Long Emergency: Surviving the End of Oil, Climate Change, and Other Converging Catastrophes of the Twenty-First Century (Grove Press, 2006); Kunstler, World Made by Hand (Grove Press, 2009).

[365] Brian Kaller, “Future Perfect: the future is Mayberry, not Mad Max,” Energy Bulletin, February 27, 2009 (from The American Conservative, August 2008) < www.energybulletin.net >.

[366] David Parkinson, “A coming world that’s ‘a whole lot smaller,’” The Globe and Mail, May 19, 2009 < docs.google.com >.

[367] Jeffrey Rubin, “The New Inflation,” StrategEcon (CIBC World Markets), May 27, 2008 < research.cibcwm.com >.

[368] Jeffrey Rubin and Benjamin Tal, “Will Soaring Transport Costs Reverse Globalization?” StrategEcon, May 27, 2008, p. 4.

[369] Richard Milne, “Crisis and climate force supply chain shift,” Financial Times, August 9, 2009 < www.ft.com >. See also Fred Curtis, “Peak Globalization: Climate change, oil depletion and global trade,” Ecological Economics Volume 69, Issue 2 (December 15, 2009).

[370] Sam Kornell, “Will Peak Oil Turn Flying into Something Only Rich People Can Afford?” Alternet, May 7, 2010 < www.alternet.org >.

[371] James O’Connor, The Fiscal Crisis of the State (New York: St. Martin’s Press, 1973), p. 106.

[372] Ibid., pp. 109–110.

[373] Ibid., p. 8.

[374] Ibid., p. 9.

[375] Illich, Disabling Professions (New York and London: Marion Boyars, 1977), p. 30.

[376] Illich, Deschooling Society (New York, Evanston, San Francisco, London: Harper & Row, 1973).

[377] John Robb, “Onward to a Hollow State,” Global Guerrillas, September 22, 2009 < globalguerrillas.typepad.com >.

[378] Robb, “HOLLOW STATES vs. FAILED STATES,” Global Guerrillas, March 24, 2009 < globalguerrillas.typepad.com >.

[379] Lawrence W. Reed, “A Tribute to the Polish People,” The Freeman: Ideas on Liberty, October 2009 < www.thefreemanonline.org >.

[380] James Howard Kunstler, “Lagging Recognition,” Clusterfuck Nation, June 8, 2009 < kunstler.com >

[381] Kunstler, The Long Emergency, pp. 264–265.

[382] Illich, Tools for Conviviality (New York, Evanston, San Francisco, London: Harper & Row, 1973), p. 103.

[383] Piore and Sabel, Second Industrial Divide, p. 48.

[384] Ibid., p. 192.

[385] Piore and Sabel, “Italian Small Business Development: Lessons for U.S. Industrial Policy,” in John Zysman and Laura Tyson, eds., American Industry in International Competition: Government Policies and Corporate Strategies (Ithaca and London: Cornell University Press, 1983), p. 397.

[386] Piore and Sabel, Second Industrial Divide, p. 207.

[387] Ibid., p. 218.

[388] Piore and Sabel, “Italian Small Business Development,” pp. 397–398.

[389] Piore and Sabel, “Italy’s High-Technology Cottage Industry,” Transatlantic Perspectives 7 (December 1982), p. 7.

[390] Eric Hunting, private email, August 4, 2008.

[391] Andy Robinson, “[p2p research] CAD files at the Pirate Bay? (Follow up),” October 28, 2009 < listcultures.org >.

[392] Ibid.

[393] Andy Robinson, “[p2p research] Berardi essay,” P2P Research email list, May 25, 2009 < listcultures.org >.

[394] Piore and Sabel, Second Industrial Divide, pp. 226–227.

[395] David Pollard, “Ten Important Business Trends,” How to Save the World, May 12, 2009 < blogs.salon.com >.

[396] Dan Strumpf, “Exec Says Toyota Prepared for GM Bankruptcy,” Associated Press, April 8, 2009 < abcnews.go.com >.

[397] Don Tapscott and Anthony D. Williams, Wikinomics: How Mass Collaboration Changes Everything (New York: Portfolio, 2006), p. 231.

[398] Tapscott and Williams, pp. 217–218.

[399] David Barboza, “In China, Knockoff Cellphones are a Hit,” New York Times, April 27, 2009 < www.nytimes.com >.

[400] Tapscott and Williams, pp. 221–222.

[401] Bunnie Huang, “Copycat Corolla?” bunnie’s blog, December 13, 2009 < www.bunniestudios.com >.

[402] Klein, No Logo, p. 203.

[403] Michel Bauwens, P2P and Human Evolution. Draft 1.994 (Foundation for P2P Alternatives, June 15, 2005) < integralvisioning.org >. Although I’ve read Wark, his abstruse postmodern style generally obfuscates what Bauwens summarizes with great clarity.

[404] Michel Bauwens, “Can the experience economy be capitalist?” P2P Foundation Blog, September 27, 2007 < blog.p2pfoundation.net >. Joseph Tainter’s thesis, that the collapse of complex societies results from the declining marginal productivity of increases in complexity or expansion, is relevant here; The Collapse of Complex Societies (Cambridge, New York, New Rochelle, Melbourne, Sydney: Cambridge University Press, 1988). In particular, he echoes Bauwens’ thesis that classical civilization failed as a result of the inability to continue extensive addition of inputs through territorial expansion. As we will see shortly below, it is the inability to capture sufficient marginal returns on new increments of capital investment and innovation, in an era of “Free,” that is destroying the existing economic system.

[405] Soderberg, Hacking Capitalism, pp. 144–145.

[406] Cory Doctorow, “Happy Meal Toys versus Copyright: How America Chose Hollywood and Wal-Mart, and Why It’s Doomed Us, and How We Might Survive Anyway,” in Doctorow, Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future (San Francisco: Tachyon Publications, 2008), p. 39.

[407] Ronald Bailey, “Post-Scarcity Prophet: Economist Paul Romer on growth, technological change, and an unlimited human future,” Reason, December 2001 < reason.com >.

[408] Manuel Castells, The Rise of the Network Society (Blackwell Publishers, 1996), pp. 203–204.

[409] Paul M. Romer, “Endogenous Technological Change” (December 1989). NBER Working Paper No. W3210.

[410] Jeff Jarvis, “When innovation yields efficiency,” BuzzMachine, June 12, 2009 < www.buzzmachine.com / 2009/06/ 12/when-innovation-yields-efficiency/>.

[411] Anton Steinpilz, “Destructive Creation: BuzzMachine’s Jeff Jarvis on Internet Disintermediation and the Rise of Efficiency,” Generation Bubble, June 12, 2009 < generationbubble.com >.

[412] Eric Reasons, “Does Intellectual Property Law Foster Innovation?” The Tinker’s Mind, June 14, 2009 < blog.ericreasons.com >.

[413] Reasons, “Intellectual Property and Deflation of the Knowledge Economy,” The Tinker’s Mind, June 21, 2009 < blog.ericreasons.com >.

[414] Reasons, “The Economic Reset Button,” The Tinker’s Mind, July 2, 2009 < blog.ericreasons.com >.

[415] Reasons, “Innovative Deflation,” The Tinker’s Mind, July 5, 2009 < blog.ericreasons.com >.

[416] Mike Masnick, “Artificial Scarcity is Subject to Massive Deflation,” Techdirt < techdirt.com 20090624/ 0253385345.shtml>.

[417] Reasons comment under Ibid., “The glass is twice the size it needs to be” < techdirt.com >.

[418] Comment under Michel Bauwens, “The great internet/p2p deflation,” P2P Foundation Blog, November 11, 2009 < blog.p2pfoundation.net >.

[419] “Doug Casey on Unemployment,” LewRockwell.Com, January 22, 2010. Interviewed by Louis James, editor, International Speculator < www.lewrockwell.com >.

[420] Tom Walker, “The Doppelganger Effect,” EconoSpeak, January 2, 2010 < econospeak.blogspot.com / 2010/01/ doppelg-effect.html>.

[421] P. M. Lawrence, private email, January 25, 2010. Lawrence subsequently requested I add the following explanatory material: people might not understand just how you can use the idea of a “fixed” value in intermediate calculations on the way to getting a better description of how it really does vary, so he asked that the footnote refer readers to more detail, particularly on these areas:

- Successive relaxation (see en.wikipedia.org ). Related topics include “accelerated convergence” (see en.wikipedia.org ), which can be combined directly with that in successive over-relaxation (see en.wikipedia.org ).

- The method of perturbations (see en.wikipedia.org ), which states: “This general procedure is a widely used mathematical tool in advanced sciences and engineering: start with a simplified problem and gradually add corrections that make the formula that the corrected problem matches closer and closer to the formula that represents reality.” (Successive relaxation is applying that general approach in one particular area.) The part of his email I cut read: “oversimplifying the technique just a little, as an engineering approximation you assume it’s fixed, then you run it through the figures in a circular way to get a new contradictory value – and that’s the value it changes to, after a corresponding time step; repeat indefinitely for a numerical model, or work out the time dependent equations that match that and solve them analytically.” The first simplification, that is, is to pretend that the value is constant (as in a “lump of labour,” say); since the whole point is to use an incorrect description to get to a better description, “incorrect” doesn’t mean “invalid” – and, over a short enough term, even that first simplification of being fixed can be useful and meaningful, as people really do have to get through those very short terms.

- Simultaneous differential equations, rigidly coupled and otherwise.

Lawrence brought some of these issues out in an unpublished letter to the Australian Financial Review, written 6.7.98, available at users.beagle.com.au .
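The procedure Lawrence describes (treat the quantity as fixed, compute the new value it implies, step part of the way toward it, repeat) is ordinary fixed-point iteration with relaxation. The following is a minimal illustrative sketch in Python, not anything from Lawrence's letter; the update function f and the relaxation factor omega are hypothetical stand-ins:

    import math

    def relax(f, x0, omega=0.5, tol=1e-10, max_steps=1000):
        # Successive relaxation: repeat x <- (1 - omega)*x + omega*f(x).
        # At each step the current x is treated as "fixed" while the new
        # implied value f(x) is computed; x then moves only part of the
        # way toward it, mimicking the time-stepped correction described
        # in the note above. (Illustrative sketch, not Lawrence's code.)
        x = x0
        for _ in range(max_steps):
            x_new = (1 - omega) * x + omega * f(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Example: the fixed point of cos(x), i.e. the x where x = cos(x).
    print(relax(math.cos, 1.0))  # roughly 0.739085

With omega = 1 this reduces to plain fixed-point iteration; values between 0 and 1 damp the swing between the "fixed" guess and the contradictory new value, which is the sense in which an admittedly incorrect constant-value assumption converges on the true, varying behavior.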

[422] Richard Florida, The Rise of the Creative Class (New York: Basic Books, 2002), p. 36.

[423] Ibid., p. 6.

[424] Ibid., pp. 26–27.

[425] J.A. Pouwelse, P. Garbacki, D.H.J. Epema, and H.J. Sips, “Pirates and Samaritans: a Decade of Measurements on Peer Production and their Implications for Net Neutrality and Copyright” (The Netherlands: Delft University of Technology, 2008) < www.tribler.org >, p. 20.

[426] Ibid., p. 15.

[427] Ken Fisher, “Darknets live on after P2P ban at Ohio U,” Ars Technica, May 9, 2007 < arstechnica.com >.

[428] Girlintraining comment under Soulskill, “Your Rights Online,” Slashdot, January 9, 2010 < yro.slashdot.org >.

[429] Bascha Harris, “A very long talk with Cory Doctorow, part 1,” redhat.com, January 2006 < www.redhat.com >.

[430] Doctorow, “Microsoft DRM Research Talk,” in Content, pp. 7–8.

[431] Doctorow, “It’s the Information Economy, Stupid,” Ibid., p. 60.

[432] Doctorow, “Why is Hollywood Making a Sequel to the Napster Wars?” in Content, p. 47.

[433] Bauwens, P2P and Human Evolution.

[434] Bauwens, “Can the experience economy be capitalist?”

[435] Douglas Rushkoff, “How the Tech Boom Terminated California’s Economy,” Fast Company, July 10, 2009 < www.fastcompany.com >.

[436] Michel Bauwens, “Asia needs a Social Innovation Stimulus plan,” P2P Foundation Blog, March 23, 2009 < blog.p2pfoundation.net >.

[437] George Reisman, “Answer to Paul Krugman on Economic Inequality,” The Webzine, March 3, 2006 < thewebzine.com >.

[438] Gopal Balakrishnan, “Speculations on the Stationary State,” New Left Review, September-October 2009 < www.newleftreview.org >.

[439] Balakrishnan, in Ibid., points to an interesting parallel between national accounting in the Soviet bloc and the neoliberal West: ...During the heyday of Reaganism, official Western opinion had rallied to the view that the bureaucratic administration of things was doomed to stagnation and decline because it lacked the ratio of market forces, coordinating transactions through the discipline of competition. Yet it was not too long after the final years of what was once called socialism that an increasingly debt- and speculation-driven capitalism began to go down the path of accounting and allocating wealth in reckless disregard of any notionally objective measure of value. The balance sheets of the world’s greatest banks are an imposing testimony to the breakdown of standards by which the wealth of nations was once judged. In their own ways, both bureaucratic socialism and its vastly more affluent neo-liberal conqueror concealed their failures with increasingly arbitrary tableaux économiques. By the 80s the GDR’s reported national income was revealed to be a statistical artifact that grossly inflated its cramped standards of living. But in the same decade, an emerging circuit of global imbalances was beginning to generate considerable problems for the measurement of capitalist wealth. The coming depression may reveal that the national economic statistics of the period of bubble economics were fictions, not wholly unlike those operative in the old Soviet system.

[440] Chris Anderson, Free: The Future of a Radical Price (New York: Hyperion, 2009), pp. 129–130.

[441] Niall Cook, Enterprise 2.0: How Social Software Will Change the Future of Work (Burlington, Vt.: Gower, 2008), p. 24.

[442] Charles Hugh Smith, “What if the (Debt Based) Economy Never Comes Back?” Of Two Minds, July 2, 2009 < www.oftwominds.com >.

[443] James C. Bennett, “The End of Capitalism and the Triumph of the Market Economy,” from Network Commonwealth: The Future of Nations in the Internet Era (1998, 1999) < www.pattern.com >.

[444] Samuel P. Huntington, Michel Crozier, Joji Watanuki, The Crisis of Democracy. Report on the Governability of Democracies to the Trilateral Commission: Triangle Paper 8 (New York: New York University Press, 1975), pp. 105–106.

[445] Ibid., p. 92.

[446] Ibid., pp. 7–8.

[447] Ibid., pp. 113–115.

[448] Ibid., pp. 7–8.

[449] Mark Elliott, “Stigmergic Collaboration: The Evolution of Group Work,” M/C Journal, May 2006 < journal.media-culture.org.au >.

[450] Ibid.

[451] Mark Elliott, “Some General Off-the-Cuff Reflections on Stigmergy,” Stigmergic Collaboration, May 21, 2006 < stigmergiccollaboration.blogspot.com >.

[452] Mark Elliott, Stigmergic Collaboration: A Theoretical Framework for Mass Collaboration. Doctoral Dissertation, Centre for Ideas, Victorian College of the Arts, University of Melbourne (October 2007), pp. 9–10.

[453] John Arquilla and David Ronfeldt, The Advent of Netwar MR-789 (Santa Monica, CA: RAND, 1996) < www.rand.org >.

[454] David F. Ronfeldt, Tribes, Institutions, Markets, Networks P-7967 (Santa Monica: RAND, 1996) < www.rand.org >.

[455] John Arquilla, David Ronfeldt, Graham Fuller, and Melissa Fuller, The Zapatista “Social Netwar” in Mexico MR-994-A (Santa Monica: Rand, 1998) < www.rand.org >.

[456] David Ronfeldt and Armando Martinez, “A Comment on the Zapatista Netwar,” in Ronfeldt and Arquilla, In Athena’s Camp: Preparing for Conflict in the Information Age (Santa Monica: Rand, 1997), pp. 369–371.

[457] Klein, No Logo, pp. 393–395.

[458] Arquilla and Ronfeldt, Swarming & the Future of Conflict DB-311 (Santa Monica, CA: RAND, 2000), p. iii < www.rand.org >.

[459] Ibid., p. 39.

[460] Ibid., pp. 50–52.

[461] John Arquilla and David Ronfeldt, “Introduction,” in Arquilla and Ronfeldt, eds., “Networks and Netwars: The Future of Terror, Crime, and Militancy” MR-1382-OSD (Santa Monica: Rand, 2001) < www.rand.org >, p. ix.

[462] Jeff Vail, A Theory of Power (iUniverse, 2004) < www.jeffvail.net >.

[463] Eric S. Raymond, The Cathedral and the Bazaar < catb.org >.

[464] Cory Doctorow, “Australian seniors ask Pirate Party for help in accessing right-to-die sites,” Boing Boing, April 9, 2010 < www.boingboing.net >.

[465] John Robb, “THE BAZAAR’S OPEN SOURCE PLATFORM,” Global Guerrillas, September 24, 2004 < globalguerrillas.typepad.com >.

[466] Thomas L. Knapp, “The Revolution Will Not Be Tweeted,” Center for a Stateless Society, October 5, 2009 < c4ss.org >.

[467] Katherine Mangu-Ward, “The Sheriff is Coming! The Sheriff is Coming!” Reason Hit & Run, January 6, 2010 < reason.com >; Brad Branan, “Police: Twitter used to avoid DUI checkpoints,” Seattle Times, December 28, 2009 < seattletimes.nwsource.com/2010618380_twitterdui29.html >.

[468] Eli Lake, “Hacking the Regime,” The New Republic, September 3, 2009 < www.tnr.com >.

[469] Ibid.

[470] Doctorow, “It’s the Information Economy, Stupid,” p. 60.

[471] “McDonald’s Restaurants v Morris & Steele,” Wikipedia < en.wikipedia.org > (accessed December 26, 2009).

[472] Klein, No Logo, p. 330.

[473] Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven and London: Yale University Press, 2006), pp. 220–223.

[474] Ibid., pp. 227–231.

[475] “PR disaster, Wikileaks and the Streisand Effect,” PRdisasters.com, March 3, 2007 < prdisasters.com >.

[476] Deborah Durham-Vichr, “Focus on the DeCSS trial,” CNN.Com, July 27, 2000 < archives.cnn.com >.

[477] Chris Williams, “Blogosphere shouts ‘I’m Spartacus’ in Usmanov-Murray case: Uzbek billionaire prompts Blog solidarity,” The Register, September 24, 2007 < www.theregister.co.uk >.

[478] “Public Service Announcement—Craig Murray, Tim Ireland, Boris Johnson, Bob Piper and Alisher Usmanov…” Chicken Yoghurt, September 20, 2007 < www.chickyog.net >.

[479] Doctorow, “The criticism that Ralph Lauren doesn’t want you to see!” BoingBoing, October 6, 2009 < www.boingboing.net >.

[480] Alan Rusbridger, “First Read: The Mutualized Future is Bright,” Columbia Journalism Review, October 19, 2009 < www.cjr.org >.

[481] John Robb, “INFOWAR vs. CORPORATIONS,” Global Guerrillas, October 1, 2009 < globalguerrillas.typepad.com >.

[482] Mike Masnick, “Yet Another High School Newspaper Goes Online to Avoid District Censorship,” Techdirt, January 15, 200 < www.techdirt.com >.

[483] Klein, No Logo, pp. 279–437.

[484] Ibid., p. 281.

[485] Ibid., p. 351.

[486] Ibid., p. 285.

[487] Ibid., p. 288.

[488] Ibid., p. 281.

[489] Ibid., pp. 349–350.

[490] Ibid., p. 351.

[491] Ibid., p. 353.

[492] Ibid., p. 294.

[493] “How to Fire Your Boss: A Worker’s Guide to Direct Action” < www.iww.org > (originally a Wobbly Pamphlet, it is reproduced in all its essentials at the I.W.W. Website under the heading of “Effective Strikes and Economic Actions”—although the Wobblies no longer endorse it in its entirety).

[494] “Markets are Conversations,” in Rick Levine, Christopher Locke, Doc Searls and David Weinberger, The Cluetrain Manifesto: The End of Business as Usual (Perseus Books Group, 2001) < www.cluetrain.com >.

[495] “95 theses,” in Ibid.

[496] “Chapter One. Internet Apocalypso,” in Ibid.

[497] “Chapter Four. Markets Are Conversations,” in Ibid.

[498] Ibid.

[499] Tapscott and Williams, p. 271.

[500] Luigi Zingales, “In Search of New Foundations,” The Journal of Finance, vol. lv, no. 4 (August 2000), pp. 1627–1628.

[501] “Wal-Mart Nixes ‘Open Availability’ Policy,” Business & Labor Reports (Human Resources section), June 16, 2005 < hr.blr.com >.

[502] Nick Robinson, “Even Without a Union, Florida Wal-Mart Workers Use Collective Action to Enforce Rights,” Labor Notes, January 2006. Reproduced at Infoshop, January 3, 2006 < www.infoshop.org >.

[503] Ezra Klein, “Why Labor Matters,” The American Prospect, November 14, 2007 < www.prospect.org/csnc/blogs/ezraklein_archive?month=11&year=2007&base_name=why_labor_matters >.

[504] “Say No to Schultz Mansion Purchase,” Starbucks Union < www.starbucksunion.org >.

[505] Charles Johnson, “Coalition of Immokalee Workers marches in Miami,” Rad Geek People’s Daily, November 30, 2007 < radgeek.com >.

[506] Coalition of Immokalee Workers, “Burger King Corp. and Coalition of Immokalee Workers to Work Together,” May 23, 2008 < www.ciw-online.org >. Charles Johnson, “¡Sí, Se Puede! Victory for the Coalition of Immokalee Workers in the Burger King penny-per-pound campaign,” Rad Geek People’s Daily, May 23, 2008 < radgeek.com >.

[507] Jennifer Kock, “Employee Sabotage: Don’t Be a Target!” < www.workforce.com >.

[508] Tom Scotney, “Birmingham Wragge team to focus on online comment defamation,” Birmingham Post, October 28, 2009 < www.birminghampost.net >.

[509] Todd Wallack, “Beware if your blog is related to work,” San Francisco Chronicle, January 25, 2005 < www.sfgate.com >.

[510] “270-day libel case goes on and on...,” Daily Telegraph, June 28, 1996 < www.mcspotlight.org >.

[511] Jon Husband, “How Hard is This to Understand?” Wirearchy, June 22, 2007 < blog.wirearchy.com/_archives/2007/6/22/3040833.html >.

[512] Chris Dillow, “Negative Credibility,” Stumbling and Mumbling, October 12, 2007 < stumblingandmumbling.typepad.com >.

[513] Originally a series of posts at P2P Foundation Blog. All four parts are linked at < mutualist.blogspot.com >.

[514] < blog.p2pfoundation.net >

[515] Adam Arvidsson, “Review: Cory Doctorow, The Makers,” P2P Foundation Blog, February 24, 2010 < blog.p2pfoundation.net >.

[516] Kevin Carson, “Cory Doctorow. Makers,” P2P Foundation Blog, October 25, 2009 < blog.p2pfoundation.net >.

[517] < blog.p2pfoundation.net >.

[518] Ralph Borsodi, Flight From the City: An Experiment in Creative Living on the Land (New York, Evanston, San Francisco, London: Harper & Row, 1933, 1972), pp. 10–15.

[519] Ibid., pp. 17–19.

[520] Borsodi, This Ugly Civilization (Philadelphia: Porcupine Press, 1929, 1975), pp. 34–38.

[521] Borsodi, Prosperity and Security: A Study in Realistic Economics (New York and London: Harper & Brothers Publishers, 1938), p. 172.

[522] Ibid., p. 181.

[523] Borsodi, This Ugly Civilization, pp. 56–57.

[524] Ibid., p. 187.

[525] Ibid., p. 78.

[526] Ibid., p. 90.

[527] Michael J. Piore and Charles F. Sabel, The Second Industrial Divide: Possibilities for Prosperity (New York: HarperCollins, 1984), p. 47.

[528] Ibid., p. 37.

[529] Ibid., p. 47.

[530] Robert Begg, Poli Roukova, John Pickles, and Adrian Smith, “Industrial Districts and Commodity Chains: The Garage Firms of Emilia-Romagna (Italy) and Haskovo (Bulgaria),” Problems of Geography (Sofia, Bulgarian Academy of Sciences), 1–2 (2005), p. 162.

[531] James P. Womack, Daniel T. Jones, and Daniel Roos, The Machine That Changed the World (New York, Toronto, London, Sydney: The Free Press, 1990 and 2007), p. 22.

[532] Ibid., pp. 24–25.

[533] Ibid., pp. 25–26.

[534] Ibid., p. 78.

[535] Ibid., p. 33.

[536] Ibid., p. 51.

[537] Ibid., p. 52.

[538] Waddell and Bodek, pp. 119–122.

[539] Piore and Sabel, “Italian Small Business Development: Lessons for U.S. Industrial Policy,” in John Zysman and Laura Tyson, eds., American Industry in International Competition: Government Policies and Corporate Strategies (Ithaca and London: Cornell University Press, 1983).

[540] Ibid., pp. 392–393.

[541] Ibid., p. 394.

[542] Ibid., p. 394.

[543] Piore and Sabel, “Italy’s High-Technology Cottage Industry,” Transatlantic Perspectives 7 (December 1982), p. 6.

[544] Piore and Sabel, Second Industrial Divide, pp. 29–30.

[545] Piore and Sabel, “Italian Small Business Development,” pp. 400–401.

[546] Piore and Sabel, Second Industrial Divide, p. 32.

[547] Bunnie Huang, “Tech Trend: Shanzhai,” Bunnie’s Blog, February 26, 2009 < www.bunniestudios.com >.

[548] Comment under ibid. < www.bunniestudios.com >.

[549] David Barboza, “In China, Knockoff Cellphones are a Hit,” New York Times, April 28, 2009 < www.nytimes.com >.

[550] Piore and Sabel, p. 30.

[551] Ibid., p. 36.

[552] Ibid., p. 31.

[553] “Plowboy Interview” (Ralph Borsodi), Mother Earth News, March-April 1974 < www.soilandhealth.org >.

[554] Murray Bookchin, Post-Scarcity Anarchism (Berkeley, Calif.: The Ramparts Press, 1971), pp. 110–111.

[555] Kirkpatrick Sale, Human Scale (New York: Coward, McCann, & Geoghegan, 1980), pp. 409–410.

[556] Eric Husman, “Human Scale Part II--Mass Production,” Grim Reader blog, September 26, 2006 < www.zianet.com >.

[557] Piore and Sabel, p. 218.

[558] Ibid., p. 260.

[559] Ibid., p. 277.

[560] H. Thomas Johnson, “Foreword,” William H. Waddell and Norman Bodek, Rebirth of American Industry: A Study of Lean Management (Vancouver, WA: PCS Press, 2005), p. xxi.

[561] Husman, “Human Scale Part III—Self-Sufficiency,” GrimReader blog, October 2, 2006 < www.zianet.com >.

[562] James P. Womack and Daniel T. Jones, Lean Thinking: Banish Waste and Create Wealth in Your Corporation (Simon & Schuster, 1996), p. 43. In addition, recycling’s slow takeoff may reflect a cost structure determined by the kind of standard, high-overhead bureaucratic organization which we saw dissected by Paul Goodman in Chapter Two. As recounted by Karl Hess and David Morris in Neighborhood Power, a neighborhood church group which set up a recycling center operated by local residents found they could sort out trash themselves and receive $20–50 a ton (this was in the mid-70s). Karl Hess and David Morris, Neighborhood Power: The New Localism (Boston: Beacon Press, 1975), p. 139.

[563] Womack, Lean Thinking, p. 64.

[564] Ibid., p. 244.

[565] Husman, “Open Source Automobile,” GrimReader, March 3, 2005 < www.zianet.com >.

[566] Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston, New York, London: Little, Brown and Company, 1999), pp. 129–130.

[567] Piore and Sabel, p. 209.

[568] Christian Siefkes, “[p2p-research] Fwd: Launch of Abundance: The Journal of Post-Scarcity Studies, preliminary plans,” Peer to Peer Research List, February 25, 2009 < listcultures.org >.

[569] Piore and Sabel, The Second Industrial Divide, pp. 117–118.

[570] Ibid., pp. 120–121.

[571] Kirkpatrick Sale, Human Scale (New York: Coward, McCann, & Geoghegan, 1980), p. 406.

[572] Colin Ward, Anarchy in Action (London: Freedom Press, 1982), p. 94.

[573] Keith Paton, The Right to Work or the Fight to Live? (Stoke-on-Trent, 1972), in Ward, Anarchy in Action, pp. 108–109.

[574] Karl Hess, Community Technology, pp. 96–97.

[575] < techshop.ws />.

[576] Karl Hess, Community Technology (New York, Cambridge, Hagerstown, Philadelphia, San Francisco, London, Mexico City, Sao Paulo, Sydney: Harper & Row, Publishers, 1979), pp. 96–98.

[577] Jane Jacobs, The Economy of Cities (New York: Vintage Books, 1969, 1970).

[578] E. F. Schumacher, Good Work (New York, Hagerstown, San Francisco, London: Harper & Row, 1979), pp. 80–83.

[579] Jacobs, Economy of Cities, pp. 63–64.

[580] Karl Hess and David Morris, Neighborhood Power: The New Localism (Boston: Beacon Press, 1975), p. 69.

[581] Don Tapscott and Anthony D. Williams, Wikinomics: How Mass Collaboration Changes Everything (New York: Portfolio, 2006), p. 213.

[582] Hess and Morris, p. 142.

[583] Jacobs, Cities and the Wealth of Nations: Principles of Economic Life (New York: Vintage Books, 1984), p. 38.

[584] Ibid., p. 83.

[585] Nicholas Wood, “The ‘Family Firm’—Base of Japan’s Growing Economy,” The American Journal of Economics and Sociology, vol. 23 no. 3 (1964), p. 316.

[586] Ibid., p. 319.

[587] Ibid., p. 317.

[588] Ibid., p. 318.

[589] Paul Goodman, People or Personnel, in People or Personnel and Like a Conquered Province (New York: Vintage Books, 1965, 1967, 1968), p. 95.

[590] Lyman P. van Slyke, “Rural Small-Scale Industry in China,” in Richard C. Dorf and Yvonne L. Hunter, eds., Appropriate Visions: Technology, the Environment and the Individual (San Francisco: Boyd & Fraser Publishing Company, 1978), pp. 193–194.

[591] Ibid., p. 196.

[592] Aimin Chen, “The structure of Chinese industry and the impact from China’s WTO entry,” Comparative Economic Studies (Spring 2002) < www.entrepreneur.com >.

[593] Hess and Morris, Neighborhood Power, p. 127.

[594] Piore and Sabel, p. 261.

[595] Johan Soderberg, Hacking Capitalism: The Free and Open Source Software Movement (New York and London: Routledge, 2008), p. 2.

[596] Luigi Zingales, “In Search of New Foundations,” The Journal of Finance, vol. lv, no. 4 (August 2000), pp. 1641–1642.

[597] Don Tapscott and Anthony D. Williams, Wikinomics: How Mass Collaboration Changes Everything (New York: Portfolio, 2006), pp. 239–267.

[598] Chapter Five, “The Hyperlinked Organization,” in Rick Levine, Christopher Locke, Doc Searls and David Weinberger, The Cluetrain Manifesto: The End of Business as Usual (Perseus Books Group, 2001) < www.cluetrain.com/index.html >.

[599] Niall Cook, Enterprise 2.0: How Social Software Will Change the Future of Work (Burlington, Vt.: Gower, 2008).

[600] Tom Peters, The Tom Peters Seminar: Crazy Times Call for Crazy Organizations (New York: Vintage Books, 1994), p. 35.

[601] Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven and London: Yale University Press, 2006), p. 179.

[602] Ibid., p. 188.

[603] Ibid., pp. 212–13.

[604] Ibid., pp. 32–33.

[605] Ibid., p. 54.

[606] Steve Lawson, “The Future of Music is... Indie!” Agit8, September 10, 2009 < agit8.org.uk >.

[607] Tom Coates, “(Weblogs and) The Mass Amateurisation of (Nearly) Everything...” Plasticbag.org, September 3, 2003 < www.plasticbag.org/amateurisation_of_nearly_everything >.

[608] Jesse Walker, “The Satellite Radio Blues: Why is XM Sirius on the verge of bankruptcy?,” Reason, February 27, 2009 < reason.com >.

[609] < www.straighterline.com />.

[610] < www.straighterline.com >.

[611] Kevin Carey, “College for $99 a Month,” Washington Monthly, September/October 2009 < www.washingtonmonthly.com >.

[612] < www.straighterline.com >.

[613] < smarthinking.com >.

[614] Carey, “College for $99 a Month.”

[615] Daniel S. Levine, Disgruntled: The Darker Side of the World of Work (New York: Berkley Boulevard Books, 1998), p. 160.

[616] Zingales, “In Search of New Foundations,” p. 1641.

[617] Ibid., p. 1641.

[618] Raghuram Rajan and Luigi Zingales, “The Governance of the New Enterprise,” in Xavier Vives, ed., Corporate Governance: Theoretical and Empirical Perspectives (Cambridge: Cambridge University Press, 2000), pp. 211–212.

[619] Marjorie Kelly, “The Corporation as Feudal Estate” (an excerpt from The Divine Right of Capital) Business Ethics, Summer 2001. Quoted in GreenMoney Journal, Fall 2008 < greenmoneyjournal.com >.

[620] David L. Prychitko, Marxism and Workers’ Self-Management: The Essential Tension (New York; London; Westport, Conn.: Greenwood Press, 1991), p. 121n.

[621] “Open Source Hardware,” P2P Foundation Wiki < www.p2pfoundation.net >.

[622] Karim Lakhani, “Communities Driving Manufacturers Out of the Design Space,” The Future of Communities Blog, March 25, 2007 < www.futureofcommunities.com >.

[623] Vinay Gupta, “Facilitating International Development Through Free/Open Source” < guptaoption.com >. Quoted from Beatrice Anarow, Catherine Greener, Vinay Gupta, Michael Kinsley, Joanie Henderson, Chris Page and Kate Parrot, Rocky Mountain Institute, “Whole-Systems Framework for Sustainable Consumption and Production,” Environmental Project No. 807 (Danish Environmental Protection Agency, Ministry of the Environment, 2003), p. 24 < files.howtolivewiki.com >.

[624] < www.p2pfoundation.net >.

[625] < hexayurt.com />.

[626] Michel Bauwens, “What kind of economy are we moving to? 3. A hierarchy of engagement between companies and communities,” P2P Foundation Blog, October 5, 2007 < blog.p2pfoundation.net >.

[627] Marcin Jakubowski, “Clarifying OSE Vision,” Factor E Farm Weblog, September 8, 2008 < openfarmtech.org >.

[628] Dave Pollard, “Peer Production,” How to Save the World, October 28, 2005 < blogs.salon.com >.

[629] Bruno Giussani, “Open Source at 90 MPH,” Business Week, December 8, 2006 < www.businessweek.com >. See also the OS Car website, < www.theoscarproject.org />.

[630] Lisa Hoover, “Riversimple to Unveil Open Source Car in London This Month,” Ostatic, June 11, 2009 < ostatic.com >.

[631] Craig DeLancey, “Openshot,” Analog, December 2006, pp. 64–74.

[632] “LifeTrac,” Open Source Ecology wiki < openfarmtech.org >.

[633] Tapscott and Williams, pp. 219–220.

[634] Ibid., p. 222.

[635] Christian Siefkes, From Exchange to Contributions: Generalizing Peer Production into the Physical World, Version 1.01 (Berlin, October 2007), pp. 104–105.

[636] Hunting comment under Michel Bauwens, “Phases for implementing peer production: Towards a Manifesto for Mutually Assured Production,” P2P Foundation Forum, August 30, 2008 < p2pfoundation.ning.com >.

[637] Eric Hunting, “[Open Manufacturing] Re: Why automate? and opinions on Energy Descent?” Open Manufacturing, September 22, 2008 < groups.google.com >.

[638] Hunting, “[Open Manufacturing] Re:Vivarium,” Open Manufacturing, March 28, 2009 < groups.google.com >.

[639] Hunting, “On Defining a Post-Industrial Style (1): from Industrial blobjects to post-industrial spimes,” P2P Foundation Blog, November 2, 2009 < blog.p2pfoundation.net >.

[640] Hunting, “On Defining a Post-Industrial Style (2): some precepts for industrial design,” P2P Foundation Blog, November 3, 2009 < blog.p2pfoundation.net >.

[641] Hunting, “On Defining a Post-Industrial Style (3): Emerging examples,” P2P Foundation Blog, November 4, 2009 < blog.p2pfoundation.net >.

[642] “Jay Rogers: I Challenge You to Make Cool Cars,” Alphachimp Studio Inc., November 10, 2009 < www.alphachimp.com >; Local Motors website at < www.local-motors.com >.

[643] Michel Bauwens, “Contract manufacturing as distributed manufacturing,” P2P Foundation Blog, September 11, 2008 < blog.p2pfoundation.net >.

[644] John Robb, “Stigmergic Learning and Global Guerrillas,” Global Guerrillas, July 14, 2004 < globalguerrillas.typepad.com >.

[645] “Stigmergy,” Wikipedia < en.wikipedia.org > (accessed September 29, 2009).

[646] Bauwens, “The Political Economy of Peer Production,” CTheory, December 2005 < www.ctheory.net >.

[647] Priya Ganapati, “Open Source Hardware Hackers Start P2P Bank,” Wired, March 18, 2009 < www.wired.com >.

[648] David G. Blanchflower and Andrew J. Oswald, “What Makes an Entrepreneur?” < www2.warwick.ac.uk >. Later appeared in Journal of Labor Economics, 16:1 (1998), pp. 26–60.

[649] Ibid., p. 2.

[650] Ibid., p. 28.

[651] Ibid., p. 3.

[652] Jed Harris, “Capitalists vs. Entrepreneurs,” Anomalous Presumptions, February 26, 2007 < jed.jive.com >.

[653] Charles Johnson, “Dump the rentiers off your back,” Rad Geek People’s Daily, May 29, 2008 < radgeek.com >.

[654] Marcin Jakubowski, “OSE Proposal—Towards a World Class Open Source Research and Development Facility,” v0.12, January 16, 2008 < openfarmtech.org > (accessed August 25, 2009).

[655] Quoted in Diane Pfeiffer, “Digital Tools, Distributed Making and Design.” Thesis submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for Master of Science in Architecture, 2009, p. 36.

[656] Chris Anderson, “In the Next Industrial Revolution, Atoms Are the New Bits,” Wired, January 25, 2010 < www.wired.com >.

[657] Neil Gershenfeld, Fab: The Coming Revolution on Your Desktop—From Personal Computers to Personal Fabrication (New York: Basic Books, 2005), pp. 14–15.

[658] Pfeiffer, “Digital Tools,” pp. 33–35.

[659] Charles Hugh Smith, “The Future of Manufacturing in the U.S.,” oftwominds, February 5, 2010 < charleshughsmith.blogspot.com >.

[660] Tom Igoe, “Idle speculation on the shan zhai and open fabrication,” hello blog, September 4, 2009 < www.tigoe.net >.

[661] Ibid.

[662] Joseph Flaherty, “Desktop Injection Molding,” Replicator, February 1, 2010 < replicatorinc.com/desktop-injection-molding >.

[663] Igoe, op. cit.

[664] Michel Bauwens, post to Institute for Distributed Creativity email list, May 7, 2007 < lists.thing.net >.

[665] Chris Anderson, Free: The Future of a Radical Price (New York: Hyperion, 2009), p. 241.

[666] Soderberg, Hacking Capitalism, pp. 185–186.

[667] MIT Center for Bits and Atoms, “Fab Lab FAQ” < fab.cba.mit.edu > (accessed August 31, 2009).

[668] “Multimachine,” Wikipedia < en.wikipedia.org > (accessed August 31, 2009); < groups.yahoo.com >.

[669] “Multimachine & Flex Fab--Open Source Ecology” < openfarmtech.org >.

[670] < smari.yaxic.org > (note in quoted text).

[671] < reprap.org > (note in quoted text).

[672] < www.makingthings.com > (note in quoted text).

[673] Jakubowski, “OSE Proposal.”

[674] < groups.yahoo.com >.

[675] < opensourcemachine.org >.

[676] Jakubowski, “OSE Proposal.”

[677] Marcin Jakubowski, “Rapid Prototyping for Industrial Swadeshi,” Factor E Farm Weblog, August 10, 2008 < openfarmtech.org >. “Open Source Fab Lab,” Open Source Ecology wiki (accessed August 22, 2009) < openfarmtech.org >.

[678] Open source CNC code is being developed by Smari McCarthy of the Iceland Fab Lab, < smari.yaxic.org >.

[679] Jakubowski, “OSE Proposal.”

[680] RepRap site < reprap.org >; “RepRap Project,” Wikipedia < en.wikipedia.org > (accessed August 31, 2009).

[681] < makerbot.com />

[682] Keith Kleiner, “3D Printing and Self-Replicating Machines in Your Living Room—Seriously,” Singularity Hub, April 9, 2009 < singularityhub.com >.

[683] “What is the relationship between RepRap and Makerbot?” Hacker News < news.ycombinator.com >.

[684] Jay Leno, “Jay Leno’s 3-D Printer Replaces Rusty Old Parts,” Popular Mechanics, July 2009 < www.popularmechanics.com >.

[685] < www.desktopfactory.com />.

[686] Jakubowski, “OSE Proposal.”

[687] Ibid.

[688] “CNC machine v2.0 — aka ‘Valkyrie’,” Let’s Make Robots, July 14, 2009 < letsmakerobots.com >.

[689] Jakubowski, “OSE Proposal.”

[690] < www.cubespawn.com />.

[691] “CubeSpawn, An open source, Flexible Manufacturing System (FMS)” < www.kickstarter.com >.

[692] < p2pfoundation.net >.

[693] < diylilcnc.org />.

[694] < www.bigbluesaw.com >.

[695] < www.emachineshop.com /> (see also <www.barebonespcb.com/!BB1.asp>).

[696] Clive Thompson, “The Dream Factory,” Wired, September 2005 < www.wired.com >.

[697] “The CloudFab Manifesto,” Ponoko Blog, September 28, 2009 < blog.ponoko.com >.

[698] Carin Stillstrom and Mats Jackson, “The Concept of Mobile Manufacturing,” Journal of Manufacturing Systems 26:3–4 (July 2007) < www.sciencedirect.com >.

[699] Kevin Kelly, “Better Than Free,” The Technium, January 31, 2008 < www.kk.org/better_than_fre.php >.

[700] Roderick Long, “Free Market Firms: Smaller, Flatter, and More Crowded,” Cato Unbound, November 25, 2008 < www.cato-unbound.org >.

[701] Comment under Shawn Wilbur, “Who benefits most economically from state centralization,” In the Libertarian Labyrinth, December 9, 2008 < libertarian-labyrinth.blogspot.com >.

[702] Shawn Wilbur, “Taking Wing: Corvus Editions,” In the Libertarian Labyrinth, July 1, 2009 < libertarian-labyrinth.blogspot.com >; Corvus Distribution website < www.corvusdistribution.org >.

[703] Shawn Wilbur, “Re: [Anarchy-List] Turnin’ rebellion into money (or not... your choice),” email to Anarchy List, July 17, 2009 < lists.anarchylist.org >.

[704] Steve Herrick, private email, December 10, 2009.

[705] Scott Adams, “Ridesharing in the Future,” Scott Adams Blog, January 21, 2009 < dilbert.com/ridesharing_in_the_future/ >.

[706] Michel Bauwens, “Asia needs a Social Innovation Stimulus plan,” P2P Foundation Blog, March 23, 2009 < blog.p2pfoundation.net >.

[707] Tyler Cowen, “Was recent productivity growth an illusion?” Marginal Revolution, March 3, 2009 < www.marginalrevolution.com >.

[708] John Quiggin, “The End of the Cash Nexus,” Crooked Timber, March 5, 2009 < crookedtimber.org >.

[709] Michel Bauwens, “Three Times Exodus, Three Phase Transitions,” P2P Foundation Blog, May 2, 2010 < blog.p2pfoundation.net >.

[710] James O’Connor, Accumulation Crisis (New York: Basil Blackwell, 1984), pp. 184–186.

[711] Samuel Bowles and Herbert Gintis, “The Crisis of Liberal Democratic Capitalism: The Case of the United States,” Politics and Society 11:1 (1982), pp. 79–84.

[712] Dante-Gabryell Monson, “[p2p-research] trends ? : “Corporate Dropouts” towards Open diy ? ...” P2P Research, October 13, 2009 < listcultures.org >.

[713] Andrew Jackson, “Recession Far From Over,” The Progressive Economics Forum, August 7, 2009 < www.progressive-economics.ca >.

[714] Taylor Barnes, “America’s ‘shadow economy’ is bigger than you think — and growing,” Christian Science Monitor, November 12, 2009 < features.csmonitor.com >.

[715] Charles Hugh Smith, “End of Work, End of Affluence III: The Rise of Informal Businesses,” Of Two Minds, December 10, 2009 < www.oftwominds.com >.

[716] Smith, “Trends for 2009: The Rise of Informal Work,” Of Two Minds, December 30, 2009 < www.oftwominds.com >.

[717] Marcin Jakubowski, “Clarifying OSE Vision,” Factor e Farm Weblog, September 8, 2008 < openfarmtech.org >.

[718] Jeremy Mason, “What is Open Source Ecology?” Factor e Farm Weblog, March 20, 2009 < openfarmtech.org >.

[719] “Organizational Strategy,” Open Source Ecology wiki, February 11, 2009 < openfarmtech.org > (accessed August 28, 2009).

[720] Marcin Jakubowski, “CEB Proposal—Community Supported Manufacturing,” Factor e Farm weblog, October 23, 2008 < openfarmtech.org >.

[721] Jakubowski, “Power Cube Completed,” Factor e Farm Weblog, June 29, 2009 < openfarmtech.org >.

[722] Jakubowski, “PowerCube on LifeTrak,” Factor e Farm Weblog, April 26, 2010 < openfarmtech.org >.

[723] Jakubowski, “CEB Phase 1 Done,” Factor e Farm Weblog, December 26, 2007 < openfarmtech.org >.

[724] Jakubowski, “The Thousandth Brick CEB Field Testing Report,” Factor e Farm Weblog, November 16, 2008 < openfarmtech.org >.

[725] Jakubowski, “CEB Prototype II Finished,” Factor e Farm Weblog, August 20, 2009 < openfarmtech.org >.

[726] Jakubowski, “Soil Pulverizer Annihilates Soil Handling Limits,” Factor e Farm Weblog, September 7, 2009 < openfarmtech.org >.

[727] Jakubowski, “Exciting Times: Nearing Product Release,” Factor e Farm Weblog, October 10, 2009 < openfarmtech.org >.

[728] Jakubowski, “Product,” Factor e Farm Weblog, November 4, 2009 < openfarmtech.org >.

[729] Jakubowski, “CEB Sales: Rocket Fuel for Post-Scarcity Economic Development?” Factor e Farm Weblog, November 28, 2009 < openfarmtech.org >.

[730] Jakubowski, “MicroTrac Completed,” Factor e Farm Weblog, July 7, 2009 < openfarmtech.org >.

[731] Jakubowski, “Rapid Prototyping for Industrial Swadeshi,” Factor e Farm Weblog, August 10, 2008 < openfarmtech.org >.

[732] “Open Source Fab lab,” Open Source Ecology wiki (accessed August 22, 2009) < openfarmtech.org >.

[733] Marcin Jakubowski, “Moving Forward,” Factor e Farm Weblog, August 20, 2009 < openfarmtech.org >; “Lawrence Kincheloe Contract,” OSE Wiki < openfarmtech.org >; “Torch Table Build,” Open Source Ecology wiki (accessed August 22, 2009) < openfarmtech.org >.

[734] Lawrence Kincheloe, “First Dedicated Project Visit Comes to a Close,” Factor e Farm Weblog, October 25, 2009 < openfarmtech.org > (see especially comment no. 5 under the post).

[735] Abe Connally, “Open Source Self-Replicator,” MAKE Magazine, No. 21 < www.make-digital.com >.

[736] Jakubowski, “CEB Sales”; “Ironworkers,” Open Source Ecology Wiki < openfarmtech.org >. Accessed December 10, 2009.

[737] Jakubowski, “Open Source Induction Furnace,” Factor e Farm Weblog, December 15, 2009 < openfarmtech.org >.

[738] Jakubowski, “Initial Steps to the Open Source Multimachine,” Factor e Farm Weblog, January 26, 2010 < openfarmtech.org >.

[739] Jakubowski, “OSE Proposal—Towards a World Class Open Source Research and Development Facility” v0.12, January 16, 2008 < openfarmtech.org > (accessed August 25, 2009).

[740] < www.aipengineering.com >.

[741] < opensourcemachine.org />.

[742] See Extruder_doc.pdf at < www.fastonline.org >.

[743] Jakubowski, “OSE Proposal” [Note—OSE later decided to replace the boundary layer turbine with a simple steam engine as their primary heat engine. Also “Babington oil burner, compressed fuel gas production, and fuel alcohol production have now been superseded by pelletized biomass-fueled steam engines.” (Marcin Jakubowski, private email, January 22, 2010)]

[744] “Solar Turbine—Open Source Ecology” < openfarmtech.org >.

[745] Marcin Jakubowski, “Factor e Live Distillations—Part 8—Solar Power Generator,” Factor e Farm Weblog, February 3, 2009 < openfarmtech.org >.

[746] Nick Raaum, “Steam Dreams,” Factor e Farm weblog, January 22, 2009 < openfarmtech.org >.

[747] Jeremy Mason, “Sawmill Development,” Factor e Farm weblog, January 22, 2009 < openfarmtech.org >.

[748] Jakubowski, “OSE Proposal.”

[749] Ibid.

[750] “Organizational Strategy.”

[751] Jakubowski, “TED Fellows,” Factor e Farm Weblog, September 22, 2009 < openfarmtech.org >.

[752] Lawrence Kincheloe, “One Month Project Visit: Take Two,” Factor e Farm Weblog, October 4, 2009 < openfarmtech.org >.

[753] < www.shopbottools.com />.

[754] < www.ponoko.com />.

[755] “What’s Digital Fabrication?” 100kGarages website < 100kgarages.com >.

[756] Ted Hall (ShopBot) and Derek Kelley (Ponoko), “Ponoko and ShopBot announce partnership: More than 20,000 online creators meet over 6,000 digital fabricators,” joint press release, September 16, 2009. Posted on Open Manufacturing email list, September 16, 2009 < groups.google.com >.

[757] 100kGarages founder Ted Hall, “100kGarages is Open: A Place to Get Stuff Made,” Open Manufacturing email list, September 15, 2009 < groups.google.com >.

[758] “Our Big Idea!” 100kGarages site < 100kgarages.com >.

[759] Gareth Branwyn, “ShopBot Open-Sources Their Code,” Makezine, April 13, 2009 < blog.makezine.com >.

[760] “What’s Digital Fabrication?”

[761] “100kGarages is Building a MakerBot,” 100kGarages, October 17, 2009 < blog.100kgarages.com >.

[762] “What are we working on?” 100kGarages, January 8, 2010 < blog.100kgarages.com >.

[763] “What’s Next for 100kGarages?” 100kGarages News, February 10, 2010 < blog.100kgarages.com >.

[764] John Robb, “The Switch to Local Manufacturing,” Global Guerrillas, July 8, 2009 < globalguerrillas.typepad.com >.

[765] Lloyd Alter, “Ponoko + ShopBot = 100kGarages: This Changes Everything in Downloadable Design,” Treehugger, September 16, 2009 < www.treehugger.com >.

[766] Eric Hunting, “Toolbook and the Missing Link,” Open Manufacturing, January 30, 2009 < groups.google.com >.

[767] Michel Bauwens, “A milestone for distributed manufacturing: 100kGarages,” P2P Foundation Blog, September 19, 2009 < blog.p2pfoundation.net >.

[768] Alter, op. cit.

[769] Bauwens, “The Emergence of Open Design and Open Manufacturing,” We Magazine, vol. 2 < www.we-magazine.net >.

[770] < www.physicaldesignco.com />.

[771] “PhysicalDesignCo teams up with 100kGarages,” 100kGarages News, October 4, 2009 < blog.100kgarages.com >.

[772] Quoted in Michel Bauwens, “Strategic Support for Factor e Farm and Open Source Ecology,” P2P Foundation Blog, June 19, 2009 < blog.p2pfoundation.net >.

[773] John Robb, “Viral Resilience,” Global Guerrillas, January 12, 2009 < globalguerrillas.typepad.com >.

[774] Jeff Vail, “Diagonal Economy 1: Overview,” JeffVail.Net, August 24, 2009 < www.jeffvail.net >.

[775] Paul and Percival Goodman, Communitas: Means of Livelihood and Ways of Life (New York: Vintage Books, 1947, 1960), p. 170.

[776] Ralph Borsodi, Flight from the City: An Experiment in Creative Living on the Land (New York, Evanston, San Francisco, London: Harper & Row, 1933, 1972), p. 147.

[777] Karl Marx, The Poverty of Philosophy, Marx and Engels Collected Works, vol. 6 (New York: International Publishers, 1976).

[778] Leopold Kohr, The Overdeveloped Nations: The Diseconomies of Scale (New York: Schocken Books, 1977), p. 110.

[779] Ebenezer Howard, To-Morrow: A Peaceful Path to Real Reform. Facsimile of original 1898 edition, with introduction and commentary by Peter Hall, Dennis Hardy and Colin Ward (London and New York: Routledge, 2003), pp. 100, 102 [facsimile pp. 77–78].

[780] “Mahatma Gandhi on Mass Production” (1936), TinyTech Plants < www.tinytechindia.com > (punctuation in original).

[781] L. S. Stavrianos, The Promise of the Coming Dark Age (San Francisco: W. H. Freeman and Company, 1976), p. 41.

[782] Bill McKibben, Deep Economy: The Wealth of Communities and the Durable Future (New York: Times Books, 2007), p. 165.

[783] E. P. Thompson, The Making of the English Working Class (New York: Vintage Books, 1963, 1966), p. 790.

[784] G.D.H. Cole, A Short History of the British Working Class Movement (1789–1947) (London: George Allen & Unwin, 1948), p. 76.

[785] Ibid., p. 78.

[786] Ibid., pp. 793–794.

[787] Ibid., pp. 78–79.

[788] Ibid., p. 76.

[789] Thompson, Making of the English Working Class, p. 791.

[790] John Curl, For All the People: Uncovering the Hidden History of Cooperation, Cooperative Movements, and Communalism in America (Oakland, CA: PM Press, 2009), p. 4.

[791] Ibid., p. 33.

[792] Ibid., p. 34.

[793] Ibid., pp. 35, 47.

[794] Ibid., p. 77.

[795] Ibid., p. 107. The fate of the KofL cooperatives, resulting from the high capitalization requirements for production, is a useful contrast to the potential for small-scale production today. The economy today is experiencing a revolution as profound as the corporate transformation of the late 19th century. The main difference today is that, for material reasons, the monopolies on which corporate rule depends are becoming unenforceable. Another revolution, based on P2P and micromanufacturing, is sweeping society on the same scale as did the corporate revolution of 150 years ago. But the large corporations today are in the same position that the Grange and Knights of Labor were in the Great Upheaval back then, fighting a desperate, futile rearguard action, and doomed to be swept under by the tidal wave of history.

The worker cooperatives organized in the era of artisan labor paralleled, in many ways, the forms of work organization that are arising today. Networked organization, crowdsourced credit and the implosion of capital outlays required for physical production, taken together, are recreating the same conditions that made artisan cooperatives feasible in the days before the factory system. In the artisan manufactories that prevailed into the early 19th century, most of the physical capital required for production was owned by the work force; artisan laborers could walk out and essentially take the firm with them in all but name. Likewise, today, the collapse of capital outlay requirements for production in the cultural and information fields (software, desktop publishing, music, etc.) has created a situation in which human capital is the source of most book value for many firms; consequently, workers are able to walk out with their human capital and form “breakaway firms,” leaving their former employers as little more than hollow shells. And the rise of cheap garage manufacturing machinery (a Fab Lab with homebrew CNC tools costing maybe two months’ wages for a semi-skilled worker) is, in its essence, a return to the days when low physical capital costs made worker cooperatives a viable alternative to wage labor.

The first uprising against corporate power, in the late 19th century, was defeated by the need for capital. The present one will destroy the old system by making capital superfluous.

[796] Howard, To-Morrow, pp. 32, 42 [facsimile pp. 13, 20–21].

[797] Ibid., pp. 108, 110 [facsimile pp. 85–86].

[798] Colin Ward, Commentator’s introduction to Ibid., p. 3.

[799] Ibid., p. 28 [facsimile p. 10].

[800] Ibid., p. 14 [facsimile p. 34].

[801] Ralph Borsodi, The Nation, April 19, 1933; reproduced in Flight From the City, pp. 154–59. Incidentally, the New Town project in Great Britain was similarly sabotaged, first under the centralizing social-democratic tendencies of Labour after WWII, and then by Thatcherite looting (er, “privatization”) in the 1980s. Ward commentary, Howard, To-Morrow, p. 45.

[802] Editorial by Walter Locke in The Dayton News, quoted by Borsodi in Flight From the City, pp. 170–71.

[803] Jonathan Rowe, “Entrepreneurs of Cooperation,” Yes!, Spring 2006 < www.yesmagazine.org >.

[804] J. Stewart Burgess, “Living on a Surplus,” The Survey 68 (January 1933), p. 6.

[805] Bernard Lietaer, The Future of Money: A New Way to Create Wealth, Work and a Wiser World (London: Century, 2001), p. 148. On pp. 151–157, he describes examples from all over the world, including “several thousand examples of local scrip from every state in the Union.”

[806] Kevin Sullivan, “As Economy Plummets, Cashless Bartering Soars on the Internet,” Washington Post, March 14, 2009 < www.washingtonpost.com >.

[807] Charles Johnson, “Liberty, Equality, Solidarity: Toward a Dialectical Anarchism,” in Roderick T. Long and Tibor R. Machan, eds., Anarchism/Minarchism: Is a Government Part of a Free Country? (Hampshire, UK, and Burlington, Vt.: Ashgate Publishing Limited, 2008). Quoted from textfile provided by author.

[808] Donna St. George, “Pew report shows 50-year high point for multi-generational family households,” Washington Post, March 18, 2010 < www.washingtonpost.com >.

[809] John Robb, “You Are In Control,” Global Guerrillas, January 3, 2010 < globalguerrillas.typepad.com/globalguerrillas/2010/01/you-are-in-control.html >. For a wonderful fictional account of the growth of a society of resilient communities linked in a darknet, and its struggle with the host society, I strongly recommend two novels by Daniel Suarez: Daemon (Signet, 2009), and its sequel Freedom(TM) (Dutton, 2010). I reviewed them here: Kevin Carson, “Daniel Suarez: Daemon and Freedom,” P2P Foundation Blog, April 26, 2010 < blog.p2pfoundation.net >.

[810] Poul Anderson, Orion Shall Rise (New York: Pocket Books, 1983).

[811] John Robb, “An Entrepreneur’s Approach to Resilient Communities,” Global Guerrillas, February 22, 2010 < globalguerrillas.typepad.com >.

[812] Reihan Salam, “The Dropout Economy,” Time, March 10, 2010 < www.time.com >.

[813] James Scott, Seeing Like a State (New Haven and London: Yale University Press, 1998).

[814] Ibid., pp. 64–73.

[815] Ethan Zuckerman, “Samuel Bowles Introduces Kudunomics,” My Heart’s in Accra, November 17, 2009 < www.ethanzuckerman.com >.

[816] See, for example, Roderick Long and Charles Johnson, “Libertarian Feminism: Can This Marriage Be Saved?” May 1, 2005 < charleswjohnson.name >; Johnson, “Libertarianism Through Thick and Thin,” Rad Geek People’s Daily, October 3, 2008 < radgeek.com >; Matt MacKenzie, “Exploitation: A Dialectical Anarchist Perspective,” Upaya: Skillful Means to Liberation, March 20, 2007 < upaya.blogspot.com >. (link defunct—retrieved through Internet Archive).

[817] Claire Wolfe, “Insanity, the Job Culture, and Freedom,” Loompanics Catalog 2005 < www.loompanics.com >.

[818] Gary Chartier, private email, January 15, 2010. The discussion took place in the context of my remarks on Michael Taylor’s book Community, Anarchy and Liberty (Cambridge, UK: Cambridge University Press, 1982). To put the references to the Sabbath and other issues of personal morality in context, Chartier is from a Seventh Day Adventist background and teaches at a university affiliated with that denomination.

[819] Taylor, pp. 161–164 (see note immediately above).

[820] Lietaer, p. 112.

[821] Ibid., pp. 23–24.

[822] Joseph Schumpeter, History of Economic Analysis. Edited from manuscript by Elizabeth Boody Schumpeter (New York: Oxford University Press, 1954), p. 1114.

[823] Ibid., p. 717.

[824] Thomas Hodgskin, Labour Defended Against the Claims of Capital (New York: Augustus M. Kelley, 1969 [1825]), pp. 36–40.

[825] Hodgskin, Popular Political Economy: Four Lectures Delivered at the London Mechanics’ Institution (New York: Augustus M. Kelley, 1966 [1827]), p. 247.

[826] Hodgskin, Labour Defended, p. 71.

[827] Franz Oppenheimer, “A Post Mortem on Cambridge Economics (Part Three),” The American Journal of Economics and Sociology, vol. 3, no. 1 (1944), pp. 122–123 [115–124].

[828] Oscar Ameringer, “Socialism for the Farmer Who Farms the Farm,” Rip-Saw Series No. 15 (Saint Louis: The National Rip-Saw Publishing Co., 1912).

[829] Schumpeter, History of Economic Analysis, p. 1114.

[830] Ibid., p. 717.

[831] E. C. Riegel, Private Enterprise Money: A Non-Political Money System (1944), Introduction < www.newapproachtofreedom.info >.

[832] Ibid., Chapter Seven < www.newapproachtofreedom.info >.

[833] Riegel, The New Approach to Freedom: together with Essays on the Separation of Money and State. Edited by Spencer Heath MacCallum (San Pedro, California: The Heather Foundation, 1976), Chapter Four < www.newapproachtofreedom.info >.

[834] Riegel, “The Money Pact,” in Ibid. < www.newapproachtofreedom.info >.

[835] Spencer H. MacCallum, “E. C. Riegel on Money” (January 2008) < www.newapproachtofreedom.info >.

[836] Thomas Greco, Money and Debt: A Solution to the Global Crisis (1990), Part III: Segregated Monetary Functions and an Objective, Global, Standard Unit of Account < circ2.home.mindspring.com >.

[837] Greco, The End of Money and the Future of Civilization (White River Junction, Vermont: Chelsea Green Publishing, 2009), p. 82.

[838] Ibid., p. 102.

[839] Ibid., pp. 106–107.

[840] Ibid., p. 134.

[841] Greco, The End of Money, pp. 139–141.

[842] Karl Hess and David Morris, Neighborhood Power: The New Localism (Boston: Beacon Press, 1975), pp. 154–155.

[843] Greco, The End of Money, p. 116.

[844] Ibid., p. 158.

[845] Ted Trainer, “Local Currencies” (September 4, 2008), The Simpler Way < ssis.arts.unsw.edu.au >.

[846] Trainer, “We Need More Than LETS,” The Simpler Way < ssis.arts.unsw.edu.au >.

[847] Trainer, “The Transition Towns Movement; its huge significance and a friendly criticism,” (We) can do better, July 30, 2009 < candobetter.org >.

[848] Trainer, “We Need More Than LETS.”

[849] Greco, The End of Money, p. 81.

[850] Lietaer, pp. 207–209.

[851] John Brummett, “Delta Solution: Move,” The Morning News of Northwest Arkansas, June 14, 2009 < arkansasnews.com >.

[852] Race Matthews, Jobs of Our Own: Building a Stakeholder Society—Alternatives to the Market & the State (Annandale, NSW, Australia: Pluto Press, 1999), pp. 125–172.

[853] Ibid., pp. 151–152; p. 47.

[854] Ibid., pp. 173–190.

[855] Massimo de Angelis, “Branding + Mingas + Coops = Salinas,” the editor’s blog, March 26, 2010 < www.commoner.org.uk >.

[856] < www.evergreencoop.com />

[857] Gar Alperovitz, Ted Howard, and Thad Williamson, “The Cleveland Model,” The Nation, February 11, 2010 < www.thenation.com >.

[858] Andrew MacLeod, “Mondragon—Cleveland—Sacramento,” Cooperate and No One Gets Hurt, October 10, 2009 < coopgeek.wordpress.com >; Ohio Employee Ownership Center, “Cleveland Goes to Mondragon,” Owners at Work (Winter 2008–2009), pp. 10–12 < dept.kent.edu >.

[859] Alperovitz et al., “The Cleveland Model.”

[860] < www.evergreencoop.com >

[861] < www.evergreencoop.com >

[862] < www.evergreencoop.com >

[863] Alperovitz et al., “The Cleveland Model.”

[864] < www.community-wealth.org >.

[865] < www.community-wealth.org >.

[866] < www.community-wealth.org >.

[867] “Community Wealth Building Conference in Cleveland, OH,” GVPT News, February 2007, p. 14 < www.bsos.umd.edu >.

[868] See Chapter One, Appendix A, “Economy of Scale in Development Economics,” in Kevin Carson, Organization Theory: A Libertarian Perspective (Booksurge, 2008), pp. 24 et seq.

[869] Keith Taylor, who is doing dissertation work on how wind farms relate to alternative models of economic development. The structure of refundable tax credits for “green energy” investment, in particular, massively empowers conventional corporate wind farms against electric power cooperatives. Making credits conditional on paying at least some taxes seems at first glance to be a fairness issue, ensuring that only people who pay taxes can get credits, and thus making refundable credits a bit less welfare-like. But the ostensible fairness is only superficial: Once the threshold of paying any taxes at all is triggered, the scale of the credit need bear no proportion at all to the amount of taxes paid. So a refundable credit which is available only to for-profit, tax-paying entities is equivalent to a $20 million welfare check that’s available to anyone who paid a dollar in taxes, but not to the unemployed. And the refundable green energy investment tax credits are in effect a massive subsidy that is available only to for-profit corporations. Likewise, the Obama administration’s “smart grid” policies are suited primarily to the interests of corporate wind farm mega-projects, situated far from the point of consumption, like those T. Boone Pickens is so busy promoting.
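To make the threshold arithmetic in the note above concrete, here is a minimal illustration in Python (mine, not Taylor's; the figures and the function name are hypothetical): eligibility for the credit is a step function of taxes paid, while the credit's size bears no relation to them.

    # Sketch only: the step-function character of a "must pay some tax"
    # eligibility rule for a refundable credit. Figures are hypothetical.
    def refundable_credit(taxes_paid, credit=20_000_000):
        """Full credit to any entity that paid any tax at all; zero otherwise."""
        return credit if taxes_paid > 0 else 0

    print(refundable_credit(taxes_paid=1))  # $1 of taxes paid -> the full $20,000,000
    print(refundable_credit(taxes_paid=0))  # a non-taxpaying cooperative -> $0

One dollar of tax liability flips the entire credit on, which is why the ostensible fairness of the threshold is only superficial.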

[870] David Streitfeld, “Rock Bottom for Decades, but Showing Signs of Life,” New York Times, February 1, 2009 < www.nytimes.com >.

[871] Sam Kronick, “[Open Manufacturing] Re: How will laws be changed just by the existence of self-sufficient people?” Open Manufacturing, January 16, 2010 < groups.google.com >.

[872] Kronick, “[Open Manufacturing] Regenerating Braddock (was Re: How will laws be changed ...),” Open Manufacturing, January 17, 2010 < groups.google.com >.

[873] Kevin Carson, “The Cleveland Model and Micromanufacturing,” P2P Foundation Blog, April 6, 2010 < blog.p2pfoundation.net >.

[874] Jeff Vail, “Re-Post Hamlet Economy,” Rhizome, July 28, 2008 < www.jeffvail.net >.

[875] Vail, “The Design Imperative,” JeffVail.Net, April 8, 2007 < www.jeffvail.net >.

[876] Albert Bates, “Ecovillage Roots (and Branches): When, where, and how we re-invented this ancient village concept,” Communities Magazine No. 117 (2003).

[877] Ross Jackson, “The Ecovillage Movement,” Permaculture Magazine No. 40 (Summer 2004), p. 25.

[878] Bates, “Ecovillage Roots (and Branches).”

[879] Ross Jackson, “The Ecovillage Movement.”

[880] “What is an Ecovillage?” Gaia Trust website < www.gaia.org >.

[881] Bates, “Ecovillage Roots (and Branches).”

[882] “What is an Ecovillage?” (sidebar) in Agnieszka Komoch, “Ecovillage Enterprise,” Permaculture Magazine No. 32 (Summer 2002), p. 38.

[883] Jackson, p. 26.

[884] Jackson, p. 28.

[885] Jackson, p. 29.

[886] Linda Joseph and Albert Bates, “What Is an ‘Ecovillage’?” Communities Magazine No. 117 (2003).

[887] < gen.ecovillage.org >.

[888] < gen.ecovillage.org >.

[889] Joseph and Bates.

[890] John Robb, “Resilient Communities: Transition Towns,” Global Guerrillas, April 7, 2008 < globalguerrillas.typepad.com >.

[891] < transitiontowns.org />.

[892] Ben Brangwyn and Rob Hopkins, Transition Initiatives Primer: becoming a Transition Town, City, District, Village, Community or even Island (Version 26—August 12, 2008) < transitionnetwork.org >.

[893] Rob Hopkins, The Transition Handbook: From Oil Dependency to Local Resilience (Green Books) < transitiontowns.org >.

[894] Ibid., p. 10.

[895] Kinsale 2021: An Energy Descent Action Plan, Version 1 (2005). By Students of Kinsale Further Education College. Edited by Rob Hopkins < transitionculture.org >.

[896] Claude Lewenz, How to Build a Village (Auckland, New Zealand: Village Forum Press and Jackson House Publishing Company, 2007), p. 73.

[897] Ibid., p. 77.

[898] See also the Global Villages site maintained by Franz Nahrada, another leading figure in the movement. < www.globalvillages.info >.

[899] Luca, “TeleKommunisten” (interview with Dmytri Kleiner), ecopolis, May 21, 2007 < www.ecopolis.org/telekommunisten/ >.

[900] “Venture Communism,” P2P Foundation Wiki < p2pfoundation.net > (accessed August 8, 2009).

[901] “Telekommunisten: The Revolution is Coming” < telekommunisten.net > (accessed October 19, 2009).

[902] < www.dialstation.com />.

[903] See, for example, Mark Kinney’s pamphlet “In Whose Interest?” (n.d.) < www.appropriate-economics.org >. It briefly sets forth a view of money much like Greco’s. His work is quoted several times in Greco’s body of work.

[904] Reed Kinney, private email.

[905] Reed Kinney, personal email, April 8, 2010.

[906] Dougald Hine, “Social Media vs the Recession,” Changing the World, January 28, 2009 < otherexcuses.blogspot.com >.

[907] Nathan Cravens, “The Triple Alliance,” Appropedia: The sustainability wiki < www.appropedia.org/The_Triple_Alliance > (accessed July 3, 2009).

[908] Dylan Tweney, “DIY Freaks Flock to ‘Hacker Spaces’ Worldwide,” Wired, March 29, 2009 < www.wired.com >.

[909] < www.nycresistor.com >.

[910] Nathan Cravens, “important appeal: social media and p2p tools against the meltdown,” Open Manufacturing (Google Groups), March 13, 2009 < groups.google.com >.

[911] Ibid.

[912] Gifford Hartman, “Crisis in California: Everything Touched by Capital Turns Toxic,” Turbulence 5 (2010) < turbulence.org.uk >.

[913] Sam Putman, “Walkable Community Networks for Spontaneous Gift Economy Development and Happiness,” Open Manufacturing, March 20, 2010 < groups.google.com >.

[914] John Leland, “Finding in Foreclosure a Beginning, Not an End,” New York Times, March 21, 2010 < www.nytimes.com >.

[915] Nathan Cravens, “[p2p-research] simpler way wiki,” P2P Research, April 20, 2009 < listcultures.org >.

[916] Johan Soderberg, Hacking Capitalism: The Free and Open Source Software Movement (New York and London: Routledge, 2008), pp. 141–142.

[917] Michael J. Piore and Charles F. Sabel, The Second Industrial Divide: Possibilities for Prosperity (New York: HarperCollins, 1984), pp. 226–227.

[918] Soderberg, Hacking Capitalism, pp. 142–143.

[919] David Pollard, “The Future of Business,” How to Save the World, January 14, 2004 < blogs.salon.com >.

[920] Tom Peters, The Tom Peters Seminar: Crazy Times Call for Crazy Organizations (New York: Vintage Books, 1994), pp. 29–30.

[921] Ralph Borsodi, Prosperity and Security (New York and London: Harper & Brothers, 1938), p. 241.

[922] Borsodi, This Ugly Civilization (Philadelphia: Porcupine Press, 1929, 1975), p. 99.

[923] Ibid., p. 337.

[924] Ibid., p. 352.

[925] Bruce Sterling, “The Power of Design in your exciting new world of abject poverty,” Wired: Beyond the Beyond, February 21, 2010 < www.wired.com >.

[926] John Robb, “STEMI Compression,” Global Guerrillas blog, November 12, 2008 < globalguerrillas.typepad.com >.

[927] Mamading Ceesay, “The Economies of Agility and Disrupting the Nature of the Firm,” Confessions of an Autodidactic Engineer, March 31, 2009 < evangineer.agoraworx.com >.

[928] Jeff Vail, “What is Rhizome?” JeffVail.Net, January 28, 2008 < www.jeffvail.net >.

[929] Nathan Cravens, “Productive Recursion Proven,” Open Manufacturing (Google Groups), March 8, 2009 < groups.google.com >.

[930] Cravens, “Productive Recursion,” Open Source Ecology Wiki < openfarmtech.org >.

[931] Cravens, “Productive Recursion Proven.”

[932] Neil Gershenfeld, Fab: The Coming Revolution on Your Desktop—from Personal Computers to Personal Fabrication (New York: Basic Books, 2005), p. 182.

[933] Ibid., pp. 185–187.

[934] Ibid., p. 164.

[935] Ibid., p. 88.

[936] Marcin Jakubowski, “OSE Proposal—Towards a World Class Open Source Research and Development Facility,” v0.12, January 16, 2008 < openfarmtech.org >.

[937] Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston, New York, and London: Little, Brown and Company, 1999), pp. 113–124.

[938] Ibid., p. 121.

[939] Ibid., pp. 65, 117.

[940] Eric S. Raymond, The Cathedral and the Bazaar < catb.org >.

[941] Hawken et al., Natural Capitalism, p. 90.

[942] Ibid., p. 114.

[943] Ibid., pp. 119–120.

[944] Ibid., p. 122.

[945] Ibid., pp. 116–117.

[946] Vinay Gupta, “The Global Village Development Bank: financing infrastructure at the individual, household and village level worldwide,” Draft 2 (March 12, 2009) < vinay.howtolivewiki.com >.

[947] Jonathan Dugan, for example, stresses Redundancy and Modularity as two of the central principles of resilience. Chris Pinchen, “Resilience: Patterns for thriving in an uncertain world,” P2P Foundation Blog, April 17, 2010 < blog.p2pfoundation.net >.

[948] Malcolm Gladwell, “How David Beats Goliath,” The New Yorker, May 11, 2009 < www.newyorker.com >.

[949] David Hambling, “China Looks to Undermine U.S. Power, With ‘Assassin’s Mace’,” Wired, July 2 < www.wired.com >.

[950] Siobhan Gorman, Yochi J. Dreazen and August Cole, “Insurgents Hack U.S. Drones,” Wall Street Journal, December 17, 2009 < online.wsj.com >.

[951] John Robb, “SUPER EMPOWERMENT: Hack a Predator Drone,” Global Guerrillas, December 17, 2009 < globalguerrillas.typepad.com >.

[952] John Arquilla and David Ronfeldt, “Fighting the Network War,” Wired, December 2001 < www.wired.com/wired/archive/9.12/netwar.html >.

[953] Jonathan J. Vaccaro, “The Next Surge—Counterbureaucracy,” New York Times, December 7, 2009 < www.nytimes.com >.

[954] Robb, “Fighting an Automated Bureaucracy,” Global Guerrillas, December 8, 2009 < globalguerrillas.typepad.com >.

[955] Thoreau, “More on the swarthy threat to our precious carry-on fluids,” Unqualified Offerings, December 26, 2009 < highclearing.com >.

[956] Robb, “Resilient Communities and Scale Invariance,” Global Guerrillas, April 16, 2009 < globalguerrillas.typepad.com >.

[957] See Chapter Five.

[958] John Medaille, personal email to author, January 28, 2009.

[959] Kevin Carson, “’Building the Structure of the New Society Within the Shell of the Old,’” Mutualist Blog: Free Market Anti-Capitalism, March 22, 2005 < mutualist.blogspot.com >.

[960] Soderberg, Hacking Capitalism, p. 172.

[961] Brian Doherty, “The Glories of Quasi-Capitalist Modernity, Dumpster Diving Division,” Reason Hit & Run Blog, September 12, 2007 < reason.com >.

[962] Jeff Vail, “The Diagonal Economy 5: The Power of Networks,” Rhizome, December 21, 2009 < www.jeffvail.net >.

[963] Dale Dougherty, “What’s in Your Garage?” Make, vol. 18 < www.make-digital.com >.

[964] Cory Doctorow, “Cheap Facts and the Plausible Premise,” Locus Online, July 5, 2009 < www.locusmag.com >.

[965] Murray Bookchin, “Toward a Liberatory Technology,” in Post-Scarcity Anarchism (Berkeley, Calif.: The Ramparts Press, 1971), pp. 49–50.

[966] David Pollard, “Replicating (Instead of Growing) Natural Small Organizations,” how to save the world, January 14, 2009 < howtosavetheworld.ca >.

[967] Eric Raymond, “Escalating Complexity and the Collapse of Elite Authority,” Armed and Dangerous, January 5, 2010 < esr.ibiblio.org >.

[968] Roderick Long, “The Winnowing of Ayn Rand,” Cato Unbound, January 20, 2010 < www.cato-unbound.org >.

[969] Bryan Caplan, “Pyramid Power,” EconLog, January 21, 2010 < econlog.econlib.org >.

[970] Comment under Carson, “The People Making ‘The Rules’ are Dumber than You,” Center for a Stateless Society, January 11, 2010 < c4ss.org >.

[971] Atrios, “Face Time,” Eschaton, July 9, 2005 < atrios.blogspot.com >.

[972] Michel Bauwens, “The Political Economy of Peer Production,” Ctheory.net, December 1, 2005 < www.ctheory.net >.

[973] “A Bridge Too Far: Train Sets Bridge on Fire,” Snopes.Com < www.snopes.com >.

[974] Matthew Yglesias, “Too Much Information,” Matthew Yglesias, December 28, 2009 < yglesias.thinkprogress.org/archives/2009/12/too-much-information.php >.

[975] Niall Cook, Enterprise 2.0: How Social Software Will Change the Future of Work (Burlington, Vt.: Gower, 2008), p. 91.

[976] Ibid., p. 93.

[977] Ibid., p. 95.

[978] Ibid., p. 96.

[979] Chloe, “Important People,” Corporate Whore, September 21, 2007 < corporatewhore.us > (archived at web.archive.org).

[980] Charles F. Sabel, “A Real-Time Revolution in Routines,” in Charles Heckscher and Paul S. Adler, The Firm as a Collaborative Community: Reconstructing Trust in the Knowledge Economy (New York: Oxford University Press, 2006), pp. 110–111.

[981] Martha S. Feldman and James G. March, “Information in Organizations as Signal and Symbol,” Administrative Science Quarterly 26 (April 1981).

[982] Ibid., p. 174.

[983] Ibid., p. 175.

[984] Ibid., pp. 175–176.

[985] Ibid., p. 176.

[986] Ibid., pp. 177–178.

[987] Neal Stephenson, Snow Crash (Westminster, Md.: Bantam Dell Pub Group, 2000).

[988] Thomas Greco, The End of Money and the Future of Civilization (White River Junction, Vt.: Chelsea Green Publishing, 2009), p. 55.

[989] Borsodi, This Ugly Civilization, p. 126.

[990] Michel Bauwens, “The three revolutions in human productivity,” P2P Foundation Blog, November 29, 2009 < blog.p2pfoundation.net >.

[991] Johan Soderberg, Hacking Capitalism, p. 26.

[992] Matthew Yglesias, “The Office Illusion,” Matthew Yglesias, September 1, 2007 < matthewyglesias.theatlantic.com >.

[993] J.E. Meade, “The Theory of Labour-Managed Firms and Profit Sharing,” in Jaroslav Vanek, ed., Self-Management: Economic Liberation of Man (Harmondsworth, Middlesex, England: Penguin Education, 1975), p. 395.

[994] Edward S. Greenberg, “Producer Cooperatives and Democratic Theory,” in Robert Jackall and Henry M. Levin, eds., Worker Cooperatives in America (Berkeley, Los Angeles, London: University of California Press, 1984), p. 185.

[995] Ibid., p. 193.

[996] Ibid., p. 191.

[997] Thomas Hodgskin, Popular Political Economy: Four Lectures Delivered at the London Mechanics’ Institution (New York: Augustus M. Kelley, 1966 [1827]), pp. 255–256.

[998] Ibid., pp. 51–52.

[999] Ibid., pp. 243–244.

[1000] Hodgskin, “Letter the Eighth: Evils of the Artificial Right of Property,” The Natural and Artificial Right of Property Contrasted. A Series of Letters, addressed without permission to H. Brougham, Esq. M.P. F.R.S. (London: B. Steil, 1832) < oll.libertyfund.org >.

[1001] Scott Burns, The Household Economy: Its Shape, Origins, & Future (Boston: The Beacon Press, 1975), pp. 163–164.

[1002] Gul Tuysuz, “An ancient tradition makes a little comeback,” Hurriyet Daily News, January 23, 2009 < www.hurriyet.com.tr >.

[1003] Eric Husman, private email, November 18, 2009; Kathleen Fasanella, “Selling to Department Stores pt. 1,” Fashion Incubator, August 11, 2009 < www.fashion-incubator.com >.

[1004] “Supply Chain News: Walmart Joins Kohl’s in Offering Factoring Program to Apparel Suppliers,” Supply Chain Digest, November 17, 2009 < www.scdigest.com >.

[1005] Kathleen Fasanella, private email, November 19, 2009. Fasanella wrote the best-known book in the industry on how to start an apparel company: The Entrepreneur’s Guide to Sewn Product Manufacturing (Apparel Technical Svcs, 1998). Eric Husman also happens to be her husband.

[1006] See, for example, Benjamin Darrington, “Government Created Economies of Scale and Capital Specificity” (Austrian Student Scholars’ Conference, 2007), pp. 6–7 < agorism.info >.

[1007] Eric Hunting, “Re: Roadmap to Post-Scarcity,” Open Manufacturing, January 12, 2010 < groups.google.com >.

[1008] Kim Stanley Robinson, Green Mars (New York, Toronto, London, Sydney, Auckland: Bantam Books, 1994), p. 309.

[1009] Arthur Silber, “An Evil Monstrosity: Thoughts on the Death State,” Once Upon a Time, April 20, 2010 < powerofnarrative.blogspot.com >.

[1010] Charles Johnson, “In which I fail to be reassured,” Rad Geek People’s Daily, January 26, 2008 < radgeek.com >.

[1011] Chuck Hammill, “From Crossbows to Cryptography: Techno-Thwarting the State” (Given at the Future of Freedom Conference, November 1987) <www.csua.berkeley.edu/~ranga/papers/crossbows2crypto/crossbows2crypto.pdf>.

[1012] David Pollard, “All About Power and the Three Ways to Topple It (Part 1),” How to Save the World, February 18, 2005 < blogs.salon.com >.

[1013] Pollard, “All About Power—Part Two,” How to Save the World, February 21, 2005 < blogs.salon.com >.

[1014] Marge Piercy, Woman on the Edge of Time (New York: Fawcett Columbine, 1976), p. 190.

[1015] John Robb, “Links: 2 APR 2010,” Global Guerrillas, April 2, 2010 < globalguerrillas.typepad.com >.

[1016] John Robb, “STANDING ORDER 8: Self-replicate,” Global Guerrillas, June 3, 2009 < globalguerrillas.typepad.com >.

[1017] Paul Hartzog, “Panarchy: Governance in the Network Age,” < www.panarchy.com >.

[1018] Peter Kropotkin, The Conquest of Bread (New York: Vanguard Press, 1926), pp. 36–37.

[1019] Immanuel Wallerstein, “Household Structures and Labor-Force Formation in the Capitalist World Economy,” in Joan Smith, Immanuel Wallerstein, Hans-Dieter Evers, eds., Households and the World Economy (Beverly Hills, London, New Delhi: Sage Publications, 1984), pp. 20–21.

[1020] Wallerstein and Joan Smith, “Households as an institution of the world-economy,” in Smith and Wallerstein, eds., Creating and Transforming Households: The constraints of the world-economy (Cambridge; New York; Oakleigh, Victoria; Paris: Cambridge University Press, 1992), p. 16.

[1021] Wallerstein, “Household Structures,” p. 20.

[1022] Samuel Bowles and Herbert Gintis, “The Crisis of Liberal Democratic Capitalism: The Case of the United States,” Politics and Society 11:1 (1982), p. 83.

[1023] Marcin Jakubowski, “Get a Real Job!” Factor E Farm Weblog, September 7, 2009 < openfarmtech.org >.

[1024] James O’Connor, Accumulation Crisis (New York: Basil Blackwell, 1984), pp. 184–186.

[1025] Eleutheros, “Choice, the Best Sauce,” How Many Miles from Babylon, October 15, 2008 < milesfrombabylon.blogspot.com >.

[1026] Borsodi, Flight From the City: An Experiment in Creative Living on the Land (New York, Evanston, San Francisco, London: Harper & Row, 1933, 1972), p. 100.

[1027] Borsodi, p. 335.

[1028] Ibid., p. 403.

[1029] Colin Ward, “Anarchism and the informal economy,” The Raven No. 1 (1987), pp. 27–28.

[1030] Burns, The Household Economy, p. 47.

[1031] Vinay Gupta, “The Unplugged,” How to Live Wiki, February 20, 2006 < howtolivewiki.com >.

[1032] James L. Wilson, “Standard of Living vs. Quality of Life,” The Partial Observer, May 29, 2008 < www.partialobserver.com >.

[1033] Vinay Gupta, “What’s Going to Happen in the Future,” The Bucky-Gandhi Design Institution, June 1, 2008 < vinay.howtolivewiki.com >.

[1034] Anand Giridharadas, “A Pocket-Size Leveler in an Outsized Land,” New York Times, May 9, 2009 < www.nytimes.com >.

[1035] Jeff Vail, “2010—Predictions and Catabolic Collapse,” Rhizome, January 4, 2010 < www.jeffvail.net >.

  • Meeting abstracts
  • Open access
  • Published: 13 September 2018

Proceedings of the 4th IPLeiria’s International Health Congress

Leiria, Portugal. 11-12 May 2018

BMC Health Services Research, volume 18, Article number: 684 (2018)


Keynote lectures

S1 The role of practice-based research in stimulating educational innovation in healthcare

Sandra Hasanefendic ([email protected]), Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands.

Practice-based research is not uncommon in healthcare. In fact, the way nurses and doctors train is through extensive and intensive practice [1]. In other words, practice-based research has been used to gain new knowledge partly by means of practice and the outcomes of that practice [2]. Practice-based research networks have also been gaining in importance in healthcare as ways of addressing research questions informed by practicing clinicians; they aim to gather data and improve existing practices in primary care [3]. However, practice-based research is not only about gaining new knowledge via practice and improving existing practices.

In this presentation/paper I explain and highlight the role of practice-based research as an instrument for educational innovation in healthcare sciences.

I use interview excerpts and examples of healthcare-related projects at different universities of applied sciences in the Netherlands and Germany (known as polytechnics in Portugal) to advance the role of practice-based research in educational innovation. This type of research is an integral part of teaching and curricular assignments in healthcare settings in the Netherlands and Germany, particularly at universities of applied sciences. I emphasize how practice-based research can improve and enrich the curricula while, at the same time, building the necessary skills of future healthcare professionals and improving practices in existing healthcare institutions.

I show that practice-based research is in fact short-term, problem-oriented research which serves educational purposes by upgrading students’ and teachers’ skills and knowledge of the profession and of the dynamics of the work environment; it also has the potential to improve company products or design solutions while contributing to local and regional innovation in professions and profession-related institutions [4-5]. Its role is multidimensional and dialectic insofar as it serves a multitude of goals and is accomplished in dialogue among relevant stakeholders [6]. Practical suggestions for healthcare educators and practitioners on designing their curricula to incorporate the basic elements of practice-based research are also offered in this presentation/paper.

Conclusions

Practice-based research is more than knowledge acquisition via practice. Its role and goals expand to enriching educational curricula with a more comprehensive engagement of external and professional stakeholders, at the same time contributing to student soft and professional skill development and solving stakeholder problems or optimizing services and products at local or regional levels.

1. Westfall JM, Mold J, Fagnan L. Practice-based research—“Blue Highways” on the NIH roadmap. JAMA. 2007;297(4):403-406.

2. Andrews JE, Pearce KA, Ireson C, Love MM. Information-seeking behaviors of practitioners in a primary care practice-based research network (PBRN). Journal of the Medical Library Association, 2005;93(2):206.

3. Hartung DM, Guise JM, Fagnan LJ, Davis MM, Stange KC. Role of practice-based research networks in comparative effectiveness research. Journal of comparative effectiveness research. 2012;1(1):45-55.

4. Frederik H, Hasanefendic S, Van der Sijde P. Professional field in the accreditation process: examining information technology programmes at Dutch Universities of Applied Sciences. Assessment & Evaluation in Higher Education. 2017, 42(2): 208-225.

5. Hasanefendic S. Responding to new policy demands: A comparative study of Portuguese and Dutch non-university higher education organizations. [Doctoral Thesis]. Vrije Universiteit Amsterdam, the Netherlands. 2018.

6. Hasanefendic S, Heitor M, Horta H. Training students for new jobs: The role of technical and vocational higher education and implications for science policy in Portugal. Technological Forecasting and Social Change. 2016; 113: 328-340.

Practice-based research, Short term, Problem oriented, Healthcare, Universities of applied sciences.

S2 Is sexuality a right for all? Sexual revolution in old age

Francisco J. Hernández-Martínez ([email protected]), Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria, Spain.

“Don’t you think your grandmother has sex? What about old gay people? Why does a kiss between two elders strike us as tender, yet we do not think of it as erotic?” (interview, Ricardo Iacub, 2018). It still unsettles us, and what do we do with it? Do we let it pass? Do we encourage them?

Throughout the centuries, sex has been postulated as the impulse that gives life to people. This word, of Latin origin, has always aroused much interest in society and at all stages of life; but it must be differentiated from “sexuality”, which encompasses various aspects, among them sex, identities and gender roles, eroticism, pleasure, intimacy, reproduction and sexual orientation [1-6]. Sexuality is a vital dimension present in all stages of life, at least from adolescence onward. It contributes significantly to health and quality of life and is, moreover, a right recognized by international organizations such as the World Health Organization (WHO) [4, 7-9]. Despite this, old age has traditionally been considered a stage in which sexual needs are absent, in which people are no longer interested in, or capable of, leading an active sexual life [3-8, 11]. Masters and Johnson, two famous American sexologists, argued that older people should fight against the false belief that “sexual incompetence is a natural component of the aging process”. This belief limits access to sexuality through fear of failure and the notion that sex at this age is no longer proper, or may be sick or perverse. The same authors pointed out that many of their patients had gone to priests, rabbis, doctors or psychologists and had received the answer “it is logical at their age” [3, 7, 10].

Studies carried out in our country and internationally show that the majority of the elderly, especially those who have a partner, remain sexually active to a very advanced age [6-9]. The keys to maintaining and enjoying a quality sexual life in old age should be recognized at a social level. Among other things, we should free ourselves of prejudices and stereotypes that condemn the elderly to a lack of desire, or that associate sexuality in old age with something dirty or morally condemnable; stop associating sexuality exclusively with youth; and not assume that the problems or difficulties that may appear are irreversible barriers. Age influences the decrease in sexual activity and interest, but not in satisfaction. Sex and sexuality have been shown to play an important role in healthy and full aging [1-3, 6-9, 11]. Taking these premises into account, the presentation will report the results of a study conducted in the Canary Islands among people over 65 years of age, users of senior centers, whose main objective was to obtain data on sexual activity and sexuality, and on whether age-related pathologies have affected their sexual relations. Against these prejudices, older adults need, want and seek some kind of loving exchange: “Old people want and need to talk about sex”, and young people too need to realize that we have a lifetime in which to continue enjoying and experimenting with our sexuality.

1. Baudracco CP, Romero M. Derecho e Igualdad para la comunidad trans se llama Ley de Identidad de Género. En Derecho a la Identidad. Ley de Identidad de Género y Ley de Atención Integral de la Salud para Personas Trans. Buenos Aires: Federación Argentina LGBTTI; 2011.

2. Uchôa YS, Costa DA, Silva Jr. IAD, Saldanha ST, Silva ES, Freitas WMTM, Soares SS. A sexualidade sob o olhar da pessoa idosa. Rev. Bras. Geriatr. Gerontol. 2016;19(6):939-949.

3. Iacub R. Erótica y Vejez. Perspectivas de Occidente. Ed. Paidós S.A.C.F. Buenos Aires. 2006.

4. Foucault M. Historia de la sexualidad. Buenos Aires: Ed. Siglo Veintiuno; 2010.

5. Grosman CP, et al. Los adultos mayores y la efectividad de sus derechos. Nuevas realidades en el derecho de familia. Rubinzal-Culzoni Editores. Buenos Aires; 2015.

6. Guadarrama R, Ortiz Zaragoza MC, Moreno Castillo YC, González Pedraza AA. Características de la actividad sexual de los adultos mayores y su relación con su calidad de vida. Revista de Especialidades Médico-Quirúrgicas. 2010;15(2):72-79.

7. Hernandez-Martínez FJ, Jiménez-Díaz JF, Rodriguéz-de-Vera BC, Quintana-Montesdeoca MP, García-Caballero A, Rodrigues A. Tapersex en ancianos: ¿Existe el placer después de? En libro de Actos del XX Congreso de la Sociedad Española de Geriatría y Gerontología. Valladolid. 2013.

8. López Sánchez F. Sexualidad y afectos en la vejez. Madrid: Ediciones Pirámide; 2012.

9. Martín Hernández M, Renteria Díaz P, Sardiñas Llerenas E. Estados clínicos y autopercepción de la sexualidad en ancianos con enfoque de género. Revista Cubana de Enfermería. 2009;25(1-2).

10. Masters W, Johnson V. Incompatibilidad sexual humana, Inter Médica, Buenos Aires; 1976.

11. Pérez Martínez VT. Sexualidad humana: una mirada desde el adulto mayor. Revista Cubana de Medicina General Integral. 2008;24(1).

Active Aging, Sexuality, Elderly, Sexual activity and benefits.

S3 Promoting independent living in frail older adults by improving cognition and gait ability and using assistive products – MIND&GAIT Project

João Apóstolo ([email protected]), The Health Sciences Research Unit: Nursing, Portugal Centre for Evidence Based Practice: a Joanna Briggs Institute Centre of Excellence, Nursing School of Coimbra, 3000-232 Coimbra, Portugal.

Frail older adults are more susceptible to falls, fractures, disability, dependency, hospitalization and institutionalization [1]. Physical and cognitive decline associated with frailty potentiates the development of geriatric syndromes and leads to a decrease in self-care, greater depressive vulnerability and a decrease in quality of life [2]. Adapted physical exercise and cognitive stimulation allow the maintenance of physical and cognitive capacities, which is reflected in an improvement in the functional status of the elderly and a reduction in associated comorbidities [3].

To promote independent living in frail older adults by improving cognition and gait ability and using assistive products.

We plan to develop a combined intervention composed of a digital cognitive stimulation program and an adapted physical exercise program. An auto-blocking kit mechanism for rolling walkers is also being developed as an assistive product to be used during the physical exercise program. A randomized controlled trial will be conducted to test the efficacy of the combined intervention in frail older adults. At the same time, a web platform will be developed and used as a repository, providing the digital intervention materials and results.

Through the implementation of a multidisciplinary strategy, significant benefits are expected in preventing and slowing the physical and cognitive decline of frail older adults. It is hoped that, for frail older adults, the combined intervention and its digital components will be synonymous with autonomy and improved quality of life, contributing to active aging. The project, being based and tested in clinical practice, will guide health professionals, caregivers and the general public in promoting the independence of this population.

Cognitive interventions and physical exercise have an impact on cognitive decline, a condition that assumes greater importance since it is related to frailty in older adults. This multidisciplinary strategy gives older adults the opportunity to take an active role in their own health through the spontaneous performance of the cognitive and physical exercises available on the web platform. The components of the combined intervention will allow better reintegration of this population into today’s society. By promoting research policies among educational institutions and health service delivery institutions, the MIND&GAIT project will make health care for the frail elderly population more accessible to professionals, caregivers and the general public.

Trial Registration

NCT03390478

Acknowledgements

The current abstract is presented on behalf of a research group. It is part of the MIND&GAIT project (Promoting independent living in frail older adults by improving cognition and gait ability and using assistive products), a Portuguese project supported by COMPETE 2020 under the Scientific and Technological Research Support System, in the co-promotion phase. We acknowledge The Health Sciences Research Unit: Nursing (UICISA: E) of the Nursing School of Coimbra, the Polytechnic Institute of Leiria, the Polytechnic of Santarém, the Polytechnic of Coimbra, and the other members, institutions and students involved in the project.

1. ApĂłstolo J, Holland C, O'Connell MD, Feeney J, Tabares-Seisdedos R, Tadros G et al. Mild cognitive decline. A position statement of the Cognitive Decline Group of the European Innovation Partnership for Active and Healthy Ageing (EIPAHA). Maturitas. 2016;83:83-93.

2. Apóstolo J, Cooke R, Bobrowicz-Campos E, Santana S, Marcucci M, Cano A et al. Effectiveness of interventions to prevent pre-frailty and frailty progression in older adults: a systematic review. JBI Database of Systematic Reviews and Implementation Reports. 2018;16(1):140–232.

3. Mewborn CM, Lindbergh CA, Miller LS. Cognitive interventions for cognitively healthy, mildly impaired and mixed samples of older adults: a systematic review and meta-analysis of randomized-controlled trials. Neuropsychology Rev. 2017;27(4):403-439.

Aged, Cognitive decline, Cognitive stimulation, Frailty, Physical exercise.

S4 Electronic health records in Portugal

Cristiana Maia ([email protected]), Serviços Partilhados do Ministério da Saúde, 1050-189 Lisboa, Portugal.

In the digital transformation era, there is an increasing need for systems capable of offering functionalities that give the user quicker and easier access to healthcare-related information. These digital services aim to provide access to more information, allowing users to make better-informed decisions.

In Portugal’s National Health Service Portal (SNS Portal, www.sns.gov.pt), several digital services are already available; the Citizen’s Area aggregates these services for the user.

The Citizen’s Area’s main objective is to facilitate communication and interaction between citizens, professionals and health institutions, allowing access to information in an integrated way and providing better healthcare. Simple and accessible to all users, this area allows personal health information to be accessed in one place at any time, thus avoiding unnecessary commuting. It offers access monitoring and permission policy configuration, allowing citizens to view their access history and configure access permissions to their health information, thus increasing control and management of their own personal health information.

Health literacy is actively promoted through multiple initiatives, in dedicated areas accessible from the SNS Portal and the Citizen’s Area.

Citizen’s Area, SNS Portal, Healthcare, Digital Services, Literacy.

S5 Economic crisis and inequalities in the Southern European health systems

Mauro Serapioni ([email protected]), Centro de Estudos Sociais, Universidade de Coimbra, 3000-104 Coimbra, Portugal.

Despite the overall increase in living standards and the introduction of universal health systems, many studies have identified persistent inequalities in all industrialized countries. In the Southern European countries, namely Greece, Italy, Portugal and Spain, although the reforms of the 1970s and 1980s introduced universal national health services, social inequalities in health only became a critical issue in the late 1990s. The issue of health inequalities became a priority from 2010-2011, when (although with different degrees of severity) the four countries began to feel the first effects of the financial crisis. Various studies have identified the impact of the economic crisis on the most vulnerable population groups, with increasing rates of mental health disorders and a rise in suicides.

After a brief contextualization of the welfare state in southern European countries and a characterization of health systems in Greece, Spain, Italy and Portugal, the main health inequalities are described, identifying the potential inequity induced by the reform processes undertaken and the current austerity policies implemented.

The study resulted from a non-systematic literature review based on the scoping review methodology. A total of 74 publications were analysed.

Results and Discussion

The analysis highlighted common characteristics and trends in the Southern European health systems, as well as some significant differences between them. In all four countries, the social gradient (particularly in education, income and work status) is the principal determinant of health inequalities. Another key aspect is the steady increase in out-of-pocket health spending as a percentage of total health spending in all four countries, most markedly in Greece and Portugal. The analysis identified potential inequalities induced by the reform processes, as a result of new relations between the public and private sectors in service provision. Another example of how health systems produce inequalities is the rising proportion of users’ health expenses covered by co-payments and user fees. Geographic inequality in health is another critical issue observed in all four Southern European countries. Finally, the recent debate in the international literature on the relationship between different welfare state regimes and health inequalities will be discussed.

The crisis and austerity policies have greatly increased the level of dissatisfaction with healthcare provision in these countries.

Health Inequalities, Health Systems, Economic Recession, Southern European Countries.

S6 CBmeter - a new medical device for early screening of metabolic diseases

Maria P Guarino 1,2,3, Gabriel Brito 1,4, Marlene Lages 1, Rui Fonseca-Pinto 1,4, Nuno Lopes 1,4; 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Chronic Diseases Research Center, NOVA Medical School, 1150-082 Lisbon, Portugal; 4 School of Technology and Management, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal. Correspondence: Maria P Guarino ([email protected]).

Type 2 diabetes mellitus (T2DM) is a highly prevalent disease worldwide which is asymptomatic in about 44% of patients, making it critical to search for new means of early diagnosis. Recent studies have demonstrated that the etiology of this disease may be associated with alterations in the function of the carotid body (CB), a chemosensor organ located at the bifurcation of the carotid artery. In animal models of metabolic syndrome, the CBs were observed to be overactivated, underlying diseases such as obesity, hypertension and T2DM. This discovery provided a new paradigm in the neuroendocrinology field, suggesting that assessing the function of the CBs has predictive value for the development of metabolic diseases. Despite this, it is not common in clinical practice to look at the CBs as organs associated with endocrine dysfunction, probably because no user-friendly, portable medical device exists that diagnoses the function of the CBs.

The general aim of this work is to develop a novel device that evaluates the function of the carotid bodies - a CBmeter. We are also developing a standard test meal to be used as a physiological dynamic test during CBmeter utilization.

This medical device will synchronously assess several physiological variables: heart rate, respiratory rate, blood pressure variation, arterial pulse oximetry and circulating glucose, as well as the physiological responses to hyperoxia and meal ingestion. The results obtained will be analyzed using MATLAB in order to develop an algorithm with predictive value for the early diagnosis of metabolic diseases. We are also developing a standard mixed test meal to assess post-prandial glucose excursions with the CBmeter. The work is currently in the prototype development phase.
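To illustrate the kind of feature extraction such an algorithm might build on, below is a minimal sketch in Python/NumPy for summarizing a post-meal glucose curve into an excursion magnitude and a latency time. The function name, the 5-minute baseline window and the 2-SD threshold are illustrative assumptions, not the project’s actual MATLAB algorithm.

```python
import numpy as np

def glucose_excursion(t_min, glucose_mg_dl, baseline_window_min=5.0):
    """Summarize a post-meal interstitial glucose curve.

    t_min          -- sample times in minutes since meal ingestion
    glucose_mg_dl  -- interstitial glucose readings (mg/dl)
    Returns (excursion, latency): peak rise above the pre-excursion
    baseline, and minutes until glucose first exceeds baseline + 2 SD.
    """
    t = np.asarray(t_min, dtype=float)
    g = np.asarray(glucose_mg_dl, dtype=float)
    baseline = g[t <= baseline_window_min]
    base_mean, base_sd = baseline.mean(), baseline.std(ddof=1)
    excursion = g.max() - base_mean
    above = np.nonzero(g > base_mean + 2.0 * base_sd)[0]
    latency = float(t[above[0]]) if above.size else float("nan")
    return excursion, latency
```

On a sampled curve such as the one reported in the pilot test below, this would return an excursion in mg/dl and a latency in minutes.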

A preliminary pilot test performed with the prototype revealed that all the proposed variables can be assessed with the CBmeter. The standardized test meal used in the pilot test produced a glucose excursion curve that stabilized 30 minutes after ingestion, making it suitable for metabolic evaluation with the CBmeter. Interstitial glucose variation was 16.6 mg/dl with a latency time of 21 min. Heart rate did not vary significantly after meal ingestion.

The CBmeter prototype is currently being optimized for use in a medical device clinical trial with healthy volunteers. The mixed meal developed has proven suitable for determining variations in CB-related cardiorespiratory parameters in healthy volunteers.

Project funded by FCT/SAICT-POL/23278/2016

Carotid body, Diabetes, Early diagnosis, Medical device.

S7 Help to care for users and caregivers: Help2care

Maria dos Anjos Coelho Rodrigues Dixe 1,2 ([email protected]); 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal.

Several studies show that family members providing care to their relatives need to acquire abilities that enable them to perform competently, with health care professionals playing an indispensable role in their training [1]. Empowering caregivers can help reduce health care costs and improve the quality of life of both user and caregiver [2], their mental health [3] and their satisfaction with care [4]. Continued support for caregivers can help them make decisions in less serious health situations and use fewer health services [5].

The main aims are: to construct assessment instruments to evaluate patients’ and caregivers’ needs and abilities concerning self-care; to develop a support manual accessible to all caregivers; to make videos demonstrating techniques and task procedures to support the caregiver in the caring process; to develop a digital platform (website and app) where all the developed resources will be available, supporting the care transition from the hospital to the residence and integrating professionals from the hospital and from primary healthcare services; and to empower health professionals to use the caregivers’ and users’ self-care empowerment model.

This project will include the participation of students, teachers, researchers and stakeholders throughout, using an action research methodology: as the materials are developed, acceptance by the target population will be tested, justifying any corrections needed before moving to the next step, in a process consistent with action and learning research. Population: dependent patients diagnosed with a chronic illness and total or partial dependency, admitted to the hospital and requiring a caregiver after discharge; informal caregivers of dependent family members who meet the criteria laid out; and health professionals. To evaluate the patients’ and caregivers’ needs and capacity concerning self-care, we will construct the assessment instruments (activity 1). During the pilot test period we will use two kinds of metrics: qualitative metrics available at http://garyperlman.com/quest/, and quantitative monitoring metrics for the use of the mobile app, including retention rate, churn rate, daily active users (DAU), daily sessions per DAU and stickiness, as well as access statistics per module/feature on the app.
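As an illustration of how such monitoring metrics might be computed from raw app event logs, here is a minimal sketch in Python. The event format, the function names and the DAU/MAU definition of stickiness are assumptions for illustration, not the project’s actual analytics pipeline.

```python
from collections import defaultdict
from datetime import datetime

def daily_active_users(events):
    """Count distinct users per calendar day from (user_id, timestamp) events."""
    users_by_day = defaultdict(set)
    for user_id, ts in events:
        users_by_day[ts.date()].add(user_id)
    return {day: len(users) for day, users in users_by_day.items()}

def stickiness(dau_by_day, monthly_active_users):
    """Stickiness as commonly defined: average DAU divided by MAU."""
    avg_dau = sum(dau_by_day.values()) / len(dau_by_day)
    return avg_dau / monthly_active_users

# Example: two users on day one, one returning user on day two.
events = [
    ("ana", datetime(2018, 5, 11, 9, 0)),
    ("rui", datetime(2018, 5, 11, 10, 30)),
    ("ana", datetime(2018, 5, 12, 8, 15)),
]
dau = daily_active_users(events)  # {2018-05-11: 2, 2018-05-12: 1}
print(stickiness(dau, monthly_active_users=2))  # 0.75
```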

The main outputs will be a training model of caregivers and users for self-care, comprising: a caregivers’ support manual; a digital platform; and a manual with the empowerment model to be used by health professionals.

The current abstract is presented on behalf of a research group. It is part of the Help2care project (Help to care for users and caregivers), a Portuguese project supported by COMPETE 2020 under the Scientific and Technological Research Support System, in the co-promotion phase. We acknowledge the Polytechnic of Leiria, the Polytechnic of Santarém, the Polytechnic of Castelo Branco, the Centro Hospitalar de Leiria, and the other members, institutions and students involved in the project.

1. Clarke DJ, Hawkins R, Sadler E, Harding G, McKevitt C, Godfrey M, Dickerson J, Farrin AJ, Kalra L, Smithard D, Forster A. Introducing structured caregiver training in stroke care: findings from the TRACS process evaluation study. BMJ Open. 2014;4:1-10.

2. Cheng HY, Chair SY, Chau JP. The effectiveness of psychosocial interventions for stroke family caregivers and stroke survivors: A systematic review and meta-analysis. Patient Education and Counseling. 2014;95:30-44.

3. Legg LA, Quinn TJ, Mahmood F, Weir CJ, Tierney J, Stott DJ, Smith LN, Langhorne P. Non-pharmacological interventions for caregivers of stroke survivors. The Cochrane Database of Systematic Reviews. 2014;10:CD008179. doi: 10.1002/14651858.

4. Bakas T, Farran CJ, Austin JK, Given BA, Johnson EA, Williams LS. Content Validity and Satisfaction With a Stroke Caregiver Intervention Program. Journal of Nursing Scholarship. 2009;41(4):368-375.

5. Pierce L, Steiner VL, Khuder SA, Govoni AL, Horn LJ. The effect of a Web-based stroke intervention on carers' well-being and survivors' use of healthcare services. Disability and Rehabilitation. 2009;31(20):1676-1684.

Transitions of care, Caregivers, Self-care, Users.

S8 TeenPower: e-Empowering teenagers to prevent obesity

Pedro Sousa 1,2 ([email protected]).

Adolescent obesity has reached epidemic proportions, making it urgent to find effective prevention strategies. The core components of classic prevention programs have been unable to obtain the desired adherence. The solution may involve more extensive and frequent contact with the healthcare team and the use of alternative communication channels and interactive, dynamic technologies with adolescents. TeenPower is a transdisciplinary, practice-based action research project that aims to develop innovative interventions to promote healthy behaviors. The project is promoted by the polytechnics of Leiria, Santarém and Castelo Branco and the Município de Leiria (city council), as well as local schools and primary healthcare stakeholders, key partners in the development phase and in the implementation of the intervention program.

The main goal is the development, implementation and evaluation of a program for the promotion of healthy behaviors and the prevention of obesity in adolescence, based on e-therapy and sustained by the case management methodology. The project is directed at the cognitive-behavioral empowerment of adolescents, through increased and interactive contact between adolescents and a multidisciplinary healthcare team. The use of Information and Communication Technologies (ICT) in the intervention can optimize resources and maximize impact, as a complement to conventional approaches.

The project includes the development of three complementary studies: (S1) evaluation of adolescents’ health status and cognitive-behavioral indicators; (S2) usability evaluation of the TeenPower platform and mobile app; and (S3) implementation of, and evaluation of adherence to, the TeenPower intervention program. Participants will be recruited from the school groups of Leiria, Santarém and Castelo Branco, aged between 12 and 16, with easy access to the internet and a smartphone/tablet (inclusion criteria). The intervention includes behavioral, nutritional and physical activity counselling (online and face-to-face psycho-educative sessions). The e-therapeutic platform and mobile app (TeenPower) include educational resources, self-monitoring, social support, interactive training modules and motivational tools. In addition to the case manager, the program will also have the direct support of an interdisciplinary team (nurse, nutritionist, exercise physiologist, among others).

Expected results include the delivery of the TeenPower intervention program, including an interactive application (web and mobile); scientific papers, communications and reports; and workshops and conferences. We will evaluate adolescents’ health status and cognitive-behavioral indicators, TeenPower usability, and program adherence.

A positive evaluation of the intervention program will stimulate the inclusion of ICT in the promotion of salutogenic behaviors and overweight prevention, creating technological interfaces that allow intervention parameters to be customized and facilitate monitoring and tracking.

The current abstract is presented on behalf of a research group, the TeenPower research team. It is part of the project TeenPower: e-Empowering teenagers to prevent obesity, co-funded by FEDER (European Regional Development Fund) under the Portugal 2020 Program, through COMPETE 2020 (Competitiveness and Internationalization Operational Program). We acknowledge the Polytechnic Institutes of Leiria, Santarém and Castelo Branco, the Municipality of Leiria (city council), and the other members, institutions and students involved in the project.

Adolescents, Obesity, Health promotion, e-health, Mobile.

S9 The Early Warning System for Basic School - SAPIE-EB in the promotion of school success, psychological health and career development

Pedro Cordeiro, Paula Paixão, Faculdade de Psicologia e de Ciências da Educação, Universidade de Coimbra, 3000-115 Coimbra, Portugal. Correspondence: Pedro Cordeiro ([email protected]).

We present the “Sistema de Alerta Precoce do Insucesso Escolar no Ensino Básico (SAPIE-EB)”, an early warning system that flags students’ risk of school failure and ill-being early, systematically monitors students’ progress, and empirically assesses the educational impact of interventions in the dimensions of academic success, psychological health and career development. The SAPIE-EB is a user-friendly system that converts students’ raw data, already available at schools, into knowledge, providing easily delivered and intuitive reports on school failure, dropout and interventions, thereby deepening knowledge about their causes and explanatory processes. The SAPIE-EB will be tested in 75 Portuguese basic schools. With its implementation, school retention is expected to fall by about 3% over a two-year period. Longitudinal quasi-experimental research designs will be used to test the efficacy of the SAPIE-EB.
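Purely as an illustration of the flagging step in an early warning system of this kind, here is a minimal sketch in Python. The indicators, thresholds and risk levels are invented for the example and are not the actual SAPIE-EB model.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    absences_pct: float    # share of classes missed this term (%)
    failed_subjects: int   # subjects currently below passing grade
    grade_trend: float     # change in mean grade vs. previous term

def flag_risk(record: StudentRecord) -> str:
    """Toy threshold rule in the spirit of an early warning system:
    each warning indicator that fires raises the risk level."""
    warnings = (
        (record.absences_pct > 10.0)
        + (record.failed_subjects >= 2)
        + (record.grade_trend < 0.0)
    )
    return ("low", "moderate", "high", "high")[warnings]

print(flag_risk(StudentRecord(absences_pct=12.0, failed_subjects=1, grade_trend=-0.4)))
# -> "high" (two of the three indicators fire)
```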

Early Warning Systems, SAPIE-EB, School Success, Psychological Health, Career development.

Oral Communications

O1 Nursing professionals victims of verbal abuse by their coworkers from the same work unit

Maiara Bordignon, Inês Monteiro, School of Nursing, University of Campinas, 13083-887 Campinas, São Paulo, Brazil. Correspondence: Maiara Bordignon ([email protected]).

Violence among professionals in health teams has been explored over the years, with considerable attention to nursing [1-5]. The literature has also highlighted the impact of horizontal violence on the individual, the unit and the institution, such as its negative influence on job satisfaction and possible harm to the safety culture, as well as to the wellbeing of professionals [3-5].

To present the frequency with which nursing professionals suffered verbal abuse perpetrated by coworkers from the same work unit during the last year, and the professional categories involved in the abuse.

A cross-sectional study performed with a sample of 267 nursing professionals – registered nurses and nursing assistants/technicians – working in emergency units in Brazil. The experience of verbal abuse at work suffered by nursing professionals in the last 12 months was assessed using questions from a questionnaire about verbal abuse [6]. Data were analyzed descriptively to identify frequencies and the professionals involved in the abuse. The study was authorized by the institutions and approved by the Ethical Research Board of the university.

Among the victims of verbal abuse, 23 (15%) nursing professionals reported that the last verbal abuse suffered came from coworkers in the same work unit, excluding abuse perpetrated by a boss or supervisor. At least 21 (91%) cases occurred in the emergency units that were part of the study; in 18 of these, the professionals indicated the perpetrator’s profession, revealing that in nine (50%) cases the abuse was perpetrated by a coworker with the same profession, mostly registered nurse to registered nurse (5; 56%). When the perpetrator’s and the victim’s professions were different (9; 50%), the abuse occurred most frequently from nursing technician to registered nurse (3; 33%). A doctor was involved in at least one situation of abuse in the emergency units studied, directed at a nursing technician. There were no reports of verbal abuse perpetrated by non-medical professionals outside the nursing team.

Our study showed that in almost all instances verbal abuse occurred among the nursing staff themselves. Organizational policies and strategies focusing on the factors that contribute to violence in health care teams need to be structured to prevent violence among professionals, which represents a challenge for health management.

The authors are grateful for funding from grant #2016/06128-7, São Paulo Research Foundation (FAPESP), the National Council for Scientific and Technological Development (CNPq) and the Coordination for the Improvement of Higher Education Personnel (CAPES), Brazil.

1. Duffy E. Horizontal violence: a conundrum for nursing. Collegian. 1995;2(2):5-17.

2. McKenna BG, Smith NA, Poole SJ, Coverdale JH. Horizontal violence: experiences of Registered Nurses in their first year of practice. J Adv Nurs. 2003;42(1):90-96.

3. Longo J, Cassidy L, Sherman R. Charge nurses’ experiences with horizontal violence: implications for leadership development. J Contin Educ Nurs. 2016;47(11):493-499.

4. Purpora C, Blegen MA. Job satisfaction and horizontal violence in hospital staff registered nurses: the mediating role of peer relationships. J Clin Nurs. 2015;24(15-16):2286-94.

5. Armmer F, Ball C. Perceptions of horizontal violence in staff nurses and intent to leave. Work. 2015;51(1):91-7.

6. Bordignon M, Monteiro MI. Apparent validity of a questionnaire to assess workplace violence. Acta Paul Enferm. 2015;28(6):601-8.

Workplace violence, Nurses, Nursing, Emergency Nursing.

O2 Social representations of violence on the elderly: an injustice and a badness

Felismina RP Mendes 1,2, Otília Zangão 1, Tatiana Mestre 1; 1 Escola Superior de Enfermagem S. João de Deus, Universidade de Évora, 7004-516 Évora, Portugal; 2 Centro de Investigação em Desporto, Saúde e Desenvolvimento Humano, Universidade de Évora, 7004-516 Évora, Portugal. Correspondence: Otília Zangão ([email protected]).

In contemporary society, ageing is a phenomenon that marks all developed societies, and Portugal is one of the most aged countries in Europe, currently showing a life expectancy at birth of 81.3 years, an average value in EU terms [1]. Social representations give access to lay forms of thought, fundamental for understanding social phenomena and their consequences, and for the construction of scientific knowledge itself [2]. Social representations guide “the behaviors and the practices and, in this way, justify the positions taken and the behaviors” [3]. Analyzing the social representations of violence on the elderly, through the current and past conceptions and daily practices of the elderly, gives us access to the dominant constructions in society about the social phenomenon of violence and the way it is socially and individually expressed by its main actors.

To analyze the social representations of a group of elderly people about violence on the elderly and the reasons why this violence occurs.

Exploratory and descriptive research with a qualitative approach, supported by the Theory of Social Representations. The study included 237 elderly people aged 65-96 years from the project “Ageing Safely in Alentejo” of the University of Évora. The free word association technique was used, and data were processed with qualitative data analysis software. All ethical procedures for human research were followed: all necessary authorizations for the study were requested, including the informed consent of the elderly, and the anonymity and confidentiality of the responses obtained were guaranteed.

In the social representations of violence on the elderly, the word most evoked by the elderly was injustice, followed by mistreatment, badness, bad, lack of respect, sadness, horrible and abandonment. In the social representations about the reasons that lead to violence on the elderly, words such as lack of respect, lack of education and badness were predominant. These terms refer to the social devaluation of the elderly and of their role in today’s society, as in the representations about violence.

The social representations of these elderly people about violence and its reasons point to the stereotypes associated with the ageism prevalent in our society, where the social devaluation of the elderly dominates daily conceptions and practices.

This study was carried out under the ESACA - Ageing Safely in Alentejo - Ref: ALT20-03-0145-FEDER-000007, financed by Alentejo 2020, Portugal 2020 and EU.

1. PORDATA. Esperança de vida à nascença: total e por sexo - 2015 [Internet]. 2003 [cited 2017 Out 23]. p. 1–4. Available from: https://www.pordata.pt/Europa/Esperança+de+vida+à+nascença+total+e+por+sexo-1260.

2. Dantas, M., Abrão, F., Freitas, C. & Oliveira, D. Representações sociais do HIV/AIDS por profissionais de saúde em serviços de referência. Revista Gaúcha Enfermagem. [periódico na Internet]. 2014 [cited 2017 Set 05]; 35 (4): 94-100. Available from http://seer.ufrgs.br/index.php/RevistaGauchadeEnfermagem/article/view/45860/32387.

3. Mendes, F., Zangão, M., Gemido, M., & Serra, I. Representações sociais dos estudantes de enfermagem sobre assistência hospitalar e atenção primária. Revista Brasileira de Enfermagem. [periódico na Internet]. 2015 [cited 2017 Set 18]; 69 (2): 343-350. Available from http://www.redalyc.org/pdf/2670/267045808018.pdf.

Social representations, Violence, Elderly, Elderly health, Discrimination.

O3 Relationship between the use of new technologies and musculoskeletal symptoms in children and adolescents

Paula C Santos 1,2, Sofia Lopes 1,3,4, Rosa Oliveira 1, Helena Santos, Jorge Mota 2, Cristina Mesquita 1,4; 1 Department of Physiotherapy, School of Allied Health Technologies, Polytechnic Institute of Porto, 4050-313 Porto, Portugal; 2 Research Centre in Physical Activity, Health and Leisure, Faculty of Sport, University of Porto, 4050-313 Porto, Portugal; 3 Escola Superior de Saúde de Vale de Sousa, 4585-116 Gandra, Portugal; 4 Centro de Estudos do Movimento e Atividade Humana, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal. Correspondence: Paula C Santos ([email protected]).

Childhood and adolescence are determinant periods for musculoskeletal development, and the attitudes and habits adopted during these periods can have repercussions in adult life. The increasing use of technologies is becoming more worrying due to the sustained and prolonged postures adopted while using these devices and the consequent impact on musculoskeletal health.

This study analyzes the relationship between the use of new technologies and musculoskeletal symptoms (MSS) in children and adolescents.

Cross-sectional study with a sample of 460 students aged between 10 and 18 years. Data were collected through a questionnaire that included the Nordic Musculoskeletal Questionnaire.

98.5% of students reported using a mobile phone, 84.3% a laptop and 52.4% a tablet. Only 50.0% of mobile phone users, 48.5% of laptop users and 31.1% of tablet users considered that they maintained a correct posture while using these technologies. Individuals with MSS spent more time using new technologies than individuals without MSS. There were differences between children and adolescents in daily use time (min/day) of mobile phones and laptops: 102.6 ± 121.47 vs 205.8 ± 175.89 (p < 0.001) and 74.0 ± 78.08 vs 117.9 ± 127.26 (p < 0.001), respectively.

Most students use new technologies in their daily lives, with fewer than half of them considering that they use these technologies with a correct posture. Individuals with MSS also spent more time using new technologies than individuals without MSS. The time of use of new technologies increases with age.

New technologies, Musculoskeletal symptoms, Children and adolescents.

O4 An ecological approach to fall risk factors for preventive interventions design: a pilot study

Jorge Bravo 1, Hugo Rosado 1, Felismina Mendes 3, Catarina Pereira 2; 1 Nursing Department, São João de Deus Superior Nursing School, University of Évora, 7000-811 Évora, Portugal; 2 Health Sciences and Human Development Center, Health and Sports Department, Science and Technology School, University of Évora, 7000-671 Évora, Portugal; 3 Health Sciences and Human Development Center, São João de Deus Superior Nursing School, University of Évora, 7000-811 Évora, Portugal. Correspondence: Jorge Bravo ([email protected]).

Recent literature reinforces that interventions for fall prevention should include multimodal training [1]. However, even multimodal training tends to address single physical, cognitive or environmental-hazard variables separately. An ecological approach to explaining phenomena such as fall occurrence underlines not only the cumulative effect of isolated variables but also the interactions between different variables.

To reduce a set of correlated variables to a smaller number that may explain fall occurrence.

187 older adults aged 65 to 96 years were assessed for fall risk factors. Principal component analysis (PCA) was performed including data from the 6-minute walk test (6MWT) [2], Gait Scale [3], Fullerton Advanced Balance Scale (FAB) [4], body composition - fat body mass percentage (FBM %), Mini-Mental State Examination (MMSE) [5], Environmental Hazards Scale (EH) [6], health conditions (HC), timed up and go test (TUG) [2] and the Epworth Sleepiness Scale (ESS) [7]. Factors with eigenvalues of at least 1.0 were retained and a varimax rotation was used to produce interpretable factors. A binary logistic regression using the forward stepwise (conditional) technique was performed to identify the most significant components explaining fall occurrence. Receiver operating characteristic (ROC) curves were used to assess the discriminative ability of the logistic model.
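A minimal sketch of this kind of analysis pipeline, in Python with scikit-learn, is shown below. It is illustrative only: the data matrix X and outcome y are placeholders, components are retained by the Kaiser criterion but left unrotated, and a plain logistic fit stands in for the authors’ varimax rotation and forward stepwise selection.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.preprocessing import StandardScaler

def pca_fall_model(X, y):
    """X: n x p matrix of fall-risk measures; y: fall occurrence (0/1)."""
    Xz = StandardScaler().fit_transform(X)
    pca = PCA().fit(Xz)
    # Kaiser criterion: retain components with eigenvalue >= 1.0
    k = int(np.sum(pca.explained_variance_ >= 1.0))
    scores = pca.transform(Xz)[:, :k]
    model = LogisticRegression().fit(scores, y)
    prob = model.predict_proba(scores)[:, 1]
    auc = roc_auc_score(y, prob)
    # cut-off maximizing sensitivity + specificity (Youden's J)
    fpr, tpr, thresholds = roc_curve(y, prob)
    cutoff = thresholds[np.argmax(tpr - fpr)]
    return model, auc, cutoff
```

The cut-off step mirrors the way the reported 0.206 threshold trades off sensitivity and specificity on the ROC curve.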

Three principal components were identified. In component 1, the dominant variables concerned physical and cognitive fitness (6MWT, Gait Scale, FAB, MMSE, TUG); in component 2, the dominant variables concerned health and environmental conditions (FBM %, EH, HC); and in component 3, the dominant variable concerned alertness (ESS). These components cumulatively explained 37%, 56% and 70% of the variance in fall occurrence. Logistic regression selected components 1 (OR: 0.527; 95% CI: 0.328–0.845) and 2 (OR: 1.614; 95% CI: 1.050–2.482) as predictive of falls. The cut-off yielding the maximal sensitivity and specificity for predicting fall occurrence was 0.206 (specificity = 72.7%, sensitivity = 47.7%), and the area under the ROC curve was 0.660 (95% CI: 0.564-0.756).

This pilot study showed that multiple correlated variables for fall risk assessment can be reduced to three uncorrelated components characterized by physical and cognitive fitness, health and environmental conditions, and alertness. The first two were the main determinants of falls. Recommendations: interventions for fall prevention should privilege multimodal training that includes tasks simultaneously working on physical fitness, cognitive fitness and alertness, considering participants’ specific health and environmental conditions.

NCT03446352

1. Hafström A, Malmström EM, Terdèn J, Fransson PA, Magnusson M. Improved balance confidence and stability for elderly after 6 weeks of a multimodal self-administered balance-enhancing exercise program: a randomized single arm crossover study. Gerontology and Geriatric Medicine 2016;2:2333721416644149.

2. Rikli RE, Jones CJ. Development and validation of a functional fitness test for community-residing older adults. Journal of Aging and Physical Activity 1999; 7(2): 129-61.

3. Tinetti ME. Performance-Oriented Assessment of Mobility Problems in Elderly Patients. Journal of the American Geriatrics Society 1986; 34(2): 119-26.

4. Rose DJ, Lucchese N, Wiersma LD. Development of a multidimensional balance scale for use with functionally independent older adults. Archives of physical medicine and rehabilitation 2006; 87(11): 1478-85.

5. Guerreiro M, Silva AP, Botelho MA, Leitão O, Castro-Caldas A, Garcia C. Adaptação à população portuguesa da tradução do Mini Mental State Examination (MMSE). Revista Portuguesa de Neurologia 1994; 1(9): 9-10.

6. Tinetti ME, Speechley M. Prevention of Falls among the Elderly. New England Journal of Medicine 1989; 320(16): 1055-9.

7. Johns MW. A new method for measuring daytime sleepiness: the Epworth sleepiness scale. Sleep 1991; 14(6): 540-5.

Principal component analysis, Falling risk, Physical fitness, Cognitive fitness, Environmental hazards.

O5 Relationship between smartphone use and musculoskeletal symptoms in adolescents

Paula C Santos1,2, Cristina Mesquita1, Rosa Oliveira, Raquel Azevedo, Sofia Lopes1,3; 1Department of Physiotherapy, School of Allied Health Technologies, Polytechnic Institute of Porto, 4050-313 Porto, Portugal; 2Research Centre in Physical Activity, Health and Leisure, Faculty of Sport, University of Porto, 4050-313 Porto, Portugal; 3North Polytechnic Institute of Health, 4585-116 Gandra, Portugal.

Adolescents are increasingly dependent on technology, in particular the smartphone, and this dependence can reach the point of compromising physical well-being. Intensive smartphone use may contribute to decreased physical activity and generate musculoskeletal symptoms (MMS).

To verify the existence of a relationship between smartphone use and: 1) MMS; 2) vigorous, moderate and sedentary physical activity.

An observational, analytical, cross-sectional study was conducted on a sample of 834 adolescents from five schools in the regions of Viseu, Vila Real and Porto. Data were collected through online questionnaires administered via the Qualtrics program, covering the sociodemographic characterization of the sample, health-related behavioral habits and the use of new technologies. Musculoskeletal symptoms were evaluated with the Portuguese version of the Nordic Musculoskeletal Questionnaire (NMQ) and physical activity with the International Physical Activity Questionnaire (IPAQ).

Adolescents who used the smartphone for longer reported MMS in the cervical (p < 0.001), thoracic (p = 0.017), lumbar (p < 0.001), shoulder (p < 0.001), wrist/hand (p = 0.003) and knee (p = 0.013) regions. Adolescents who practiced more vigorous physical activity used the smartphone less (p = 0.023), and those who spent more time in sedentary activity used it more (p = 0.008).

Adolescents who spend more time on smartphones report more MMS. Smartphone use is associated with a more sedentary lifestyle, whereas adolescents who practice vigorous physical activity use it less.

Smartphone, Physical Activity, Musculoskeletal Symptoms.

O6 Functional fitness and cognitive performance in independent older adults – fallers and non-fallers: an exploratory study

Jorge Bravo1, Hugo Rosado1, Felismina Mendes2, Catarina Pereira3; 1São João de Deus Superior Nursing School, University of Évora, 7000-811 Évora, Portugal; 2Health Sciences and Human Development Center, São João de Deus Superior Nursing School, University of Évora, 7000-811 Évora, Portugal; 3Health Sciences and Human Development Center, Health and Sports Department, Science and Technology School, University of Évora, 7000-671 Évora, Portugal.

Current research reinforces the importance of multimodal exercise programs for fall prevention; however, it remains unclear which physical and cognitive components should be included in such programs.

This exploratory study aims to identify the associations between functional fitness (FF) and cognitive performance (CP) in independent older adults, comparing fallers and non-fallers.

63 males and 124 females (65-96 years) were selected based on the criterion of moderate or high functional independence (≥18 points) determined by responses to the 12-item Composite Physical Functioning Scale [1]. FF was assessed by the Senior Fitness Test battery [2]. A composite Z-score was created based on the individual scores for each fitness item. CP was assessed by the Mini-Mental State Examination adapted for the Portuguese population [3]. Descriptive statistics were calculated for all outcome measurements and comparisons were performed using independent-samples t-tests. Multiple regression analyses were performed to test associations between FF and CP.
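
As a companion to the methods above, the following is a minimal sketch of how a composite fitness Z-score, a fallers vs. non-fallers comparison and a regression of CP on FF can be computed. All data, column names and test items are hypothetical illustrations, not the study dataset.

```python
# A minimal sketch of the composite Z-score and analysis steps on synthetic data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "chair_stand": rng.normal(14, 4, 187),    # illustrative fitness items
    "arm_curl": rng.normal(16, 4, 187),
    "six_min_walk": rng.normal(480, 80, 187),
    "mmse": rng.normal(27, 2, 187),           # cognitive performance
    "faller": rng.integers(0, 2, 187),        # 1 = faller (synthetic)
})

# Composite FF Z-score: standardize each fitness item, then average.
items = ["chair_stand", "arm_curl", "six_min_walk"]
df["ff_z"] = df[items].apply(stats.zscore).mean(axis=1)

# Independent-samples t-test comparing fallers and non-fallers.
t, p = stats.ttest_ind(df.loc[df.faller == 1, "ff_z"],
                       df.loc[df.faller == 0, "ff_z"])
print(f"t = {t:.2f}, p = {p:.3f}")

# Regression of cognitive performance on functional fitness (non-fallers).
print(smf.ols("mmse ~ ff_z", data=df.loc[df.faller == 0]).fit().summary())
```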

T-test comparisons showed that females were more flexible than males (p < 0.05), and that males were taller and heavier than females (p < 0.05). No differences were observed between fallers and non-fallers in this sample. Multiple regression analyses were performed to understand the association of FF with CP in fallers and non-fallers. Agility was negatively associated with the MMSE score (p < 0.05) in fallers and non-fallers; however, after adjusting for gender, age and education, this association was no longer significant for non-fallers. Lower body strength showed positive associations (p < 0.05) with the MMSE score exclusively in non-fallers, regardless of adjustment. Likewise, upper body strength was positively associated with the MMSE score in non-fallers after adjusting for age, gender and education (p < 0.05). On the other hand, upper body flexibility showed negative associations with the MMSE score (p < 0.05); however, this association did not remain significant after adjusting for gender, age and education.

Independent older adults with higher agility scores were more likely to have better CP, whether fallers or non-fallers. Body strength, particularly lower body strength, was associated with higher CP in non-faller older adults, independently of age, gender and education. This exploratory study broadens the spectrum of research on multimodal programs by suggesting that agility and strength training should be included in exercise prescription for fall prevention, in order to promote CP.

This study was funded by Horizon 2020, Portugal 2020 (ALT20-030145-FEDER-000007).

1. Rikli RE, Jones CJ. The reliability and validity of a 6-minute walk test as a measure of physical endurance in older adults. Journal of aging and physical activity 1998; 6(4): 363-75.

2. Rikli RE, Jones CJ. Development and validation of a functional fitness test for community-residing older adults. Journal of aging and physical activity 1999; 7(2): 129-61.

3. Guerreiro M, Silva AP, Botelho MA, Leitão O, Castro-Caldas A, Garcia C. Adaptação à população portuguesa da tradução do Mini Mental State Examination (MMSE). Revista Portuguesa de Neurologia 1994; 1(9): 9-10.

Aging, Physical fitness, Accidental falls, Cognitive aging.

O7 Association between endurance of the trunk extensor muscles and the risk of falling in community-dwelling older adults

Sofia Flora, Ana Tavares, Joana Ferreira, Nuno Morais; School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal. Correspondence: Sofia Flora ([email protected]).

Falls in the elderly are a serious health problem and the result of a complex interaction between individual and environmental risk factors. Balance is considered a key factor for higher falling risk in this population [1, 2]; thus, assessment and preventive/rehabilitation programs targeting the balance control system are currently a clinical guideline [1]. Programs commonly include strength/power training of the lower limbs and trunk muscles and postural control exercises [1, 3]. It has recently been shown that older adults reach muscle fatigue prematurely during upright stance tasks [2], and that fatigue leads to poor balance control [4]. Muscular endurance therefore appears to play an important role in the efficiency of the balance control system, particularly during long-lasting functional tasks. However, the association between muscle endurance and balance control measures has been overlooked, especially for the trunk muscles, despite its potential to help clinicians and researchers comprehensively screen falling risk factors and tailor interventions accordingly.

The main purpose of this cross-sectional study was to determine the association between endurance of the trunk extensor muscles and the risk of falls in the elderly, considering possible co-factors such as age and BMI.

Community-dwelling adults ≥ 65 years were recruited from senior universities in the Centre region of Portugal. Exclusion criteria included severe physical/cognitive limitations that would prevent subjects from performing the testing protocol. Falling risk/balance was assessed using the Berg Balance Scale (BBS, score 0–56). Muscle performance was measured with the trunk extensor endurance test (in seconds). Simple and multiple linear regression analyses, using SPSS (v20), were conducted to estimate the effects of muscle endurance, BMI and age on balance control. Statistical significance was set at 0.05.
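
A hedged sketch of the incremental regression modelling described above follows; it uses statsmodels rather than SPSS, and the synthetic data and coefficients are illustrative only, not the study measurements.

```python
# Incremental regression of the BBS score on endurance, BMI and age
# (synthetic data; variable names are assumptions for illustration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 59
df = pd.DataFrame({
    "endurance_s": rng.normal(60, 25, n),   # trunk extensor endurance (seconds)
    "bmi": rng.normal(28, 4.6, n),
    "age": rng.normal(71, 5, n),
})
df["bbs"] = (40 + 0.1 * df.endurance_s - 0.3 * df.bmi
             - 0.1 * df.age + rng.normal(0, 3, n)).clip(0, 56)

# Fit the models step by step, reporting adjusted R^2 at each step.
for formula in ("bbs ~ endurance_s",
                "bbs ~ endurance_s + bmi",
                "bbs ~ endurance_s + bmi + age"):
    fit = smf.ols(formula, data=df).fit()
    print(f"{formula}: adj. R^2 = {fit.rsquared_adj:.3f}, p = {fit.f_pvalue:.3f}")
```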

Fifty-nine volunteers (44 females, age = 71 ± 5 years, height = 1.60 ± 0.09 m, mass = 71.67 ± 14.35 kg, BMI = 28.02 ± 4.62 kg/m²) were included in the study. The largest correlations with the BBS score were found for muscle endurance (ρ = 0.379) and BMI (ρ = -0.335). Muscle endurance alone predicted 7% of the BBS score (adjusted r² = 0.070, p = 0.024). When combined with BMI, muscle endurance accounted for ~16% (adjusted r² = 0.162, p = 0.003) of the total variance of the BBS score. Unsurprisingly, when age was added to the previous model the predictive capacity increased, reaching ~21% (adjusted r² = 0.214, p = 0.01).

Endurance of the trunk extensor muscles and BMI predicted approximately 16% of the BBS score. Since these are modifiable factors, it is recommended that they be routinely included in the screening of falling risk factors in the elderly and addressed accordingly in preventive programs.

1. Phelan EA, Mahoney JE, Voit JC, Stevens JA. Assessment and management of fall risk in primary care settings. Med Clin North Am. 2015 Mar;99(2):281–93.

2. Pizzigalli L, Micheletti Cremasco M, Mulasso A, Rainoldi A. The contribution of postural balance analysis in older adult fallers: A narrative review. Journal of Bodywork and Movement Therapies. 2016 Apr;20(2):409–17.

3. Granacher U, Gollhofer A, Hortobágyi T, Kressig RW, Muehlbauer T. The Importance of Trunk Muscle Strength for Balance, Functional Performance, and Fall Prevention in Seniors: A Systematic Review. Sports Med. 2013 Apr 9;43(7):627–41.

4. Papa EV, Garg H, Dibble LE. Acute Effects of Muscle Fatigue on Anticipatory and Reactive Postural Control in Older Individuals. Journal of Geriatric Physical Therapy. 2015;38(1):40–8.

Muscle Endurance, Balance, Elderly, Risk of falls, Trunk Extensor Muscles.

O8 ICF Core Set for Obstructive Pulmonary Diseases: validation of the environmental factors component through the perspective of patients with asthma

Cristina Jácome1,2, Susan M Lage3, Ana Oliveira2, Augusto G Araújo4, Danielle AG Pereira3, Verônica F Parreira3; 1Centro de Investigação em Tecnologias e Serviços de Saúde, Faculdade de Medicina, Universidade do Porto, 4200-319 Porto, Portugal; 2Laboratório de Investigação e Reabilitação Respiratória, Escola Superior de Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal; 3Universidade Federal de Minas Gerais, 31270-901 Belo Horizonte, Minas Gerais, Brazil; 4Hospital Carlos Chagas, 35900-595 Itabira, Minas Gerais, Brazil. Correspondence: Cristina Jácome ([email protected]).

To optimize a patient-oriented approach in asthma management, health professionals need to consider all aspects of the patient's context (physical, social and attitudinal). This context can be assessed using the Environmental Factors component of the International Classification of Functioning, Disability and Health (ICF) Core Set for Obstructive Pulmonary Diseases (OPD). The categories included in the Environmental Factors component were selected by respiratory experts and have been validated from the perspective of physicians, physiotherapists and patients with chronic obstructive pulmonary disease. However, validation from the perspective of patients with asthma is essential to allow a more widespread application of the ICF in this population.

This study aimed to validate the Environmental Factors component of the Comprehensive and Brief versions of the ICF Core Set for OPD from the perspective of patients with asthma.

A cross-sectional qualitative study was conducted with outpatients with asthma using semi-structured individual interviews. Qualitative data were analysed through the meaning condensation procedure by two researchers with expertise in the ICF.

Thirty-five participants (26 females; 41 ± 13 years) were included. Eight (35%) of the categories contained in the Environmental Factors component of the Comprehensive version of the ICF Core Set for OPD and 4 (100%) of those contained in the Brief version were confirmed by the participants. Additionally, 5 second-level categories (Products and technology for employment; Flora and fauna; Natural environment and human-made changes to environment, unspecified; Domesticated animals; Support and relationships, unspecified) and 13 third-level categories (Food; Drugs; General products and technology for personal use in daily living; General products and technology for employment; Assistive products and technology for employment; Design, construction and building products and technology for gaining access to facilities in buildings for private use; Plants; Animals; Temperature; Humidity; Indoor air quality; Outdoor air quality; Health services) not included in the Core Set were identified.

The Environmental factors component of the Brief ICF Core Set for OPD was fully supported by the perspective of patients with asthma, contrasting with only one third of the categories of the Comprehensive version. The categories included in the ICF Core Set that were not confirmed by the participants and the additional categories that were raised need to be further investigated in order to develop an instrument tailored to patients’ needs. This will promote more patient-centred assessments and rehabilitation interventions.

Asthma, International Classification of Functioning, Disability and Health, Environmental factors, Patient’s perspective.

O9 Attachment, self-compassion and mental health in the use of the internet to establish intimate relationships

Sónia C Simões1,2, Vanessa Vieira1, Mariana Marques1, Laura Lemos1; 1Instituto Superior Miguel Torga, 3000-132 Coimbra, Portugal; 2Centro de Estudos da População, Economia e Sociedade, 4150-171 Porto, Portugal. Correspondence: Sónia C Simões ([email protected]).

Currently, to our knowledge, there is no research with Portuguese samples comparing the levels of attachment, self-compassion and psychopathological symptoms in subjects who use and do not use the Internet to establish intimate relationships.

The present study aimed to investigate how individuals, who use the Internet to establish intimate relationships, differ psychologically from individuals who do not use the Internet for this purpose.

We used the following scales: Experiences in Close Relationships (ERP), Self-Compassion Scale (SELFCS), Brief Symptom Inventory (BSI) and a short sociodemographic questionnaire. The sample consisted of 350 individuals, of whom 284 used social networks to establish intimate relationships (I) and 66 did not use the Internet for this purpose (NI), with mean ages of 29.90 (SD = 7.41) in the I group and 30.72 (SD = 8.26) in the NI group. In both groups, the majority of the sample was single (I: 83.5% vs. NI: 68.2%), heterosexual (I: 79.6% vs. NI: 93.9%) and was attending or had attended higher education (I: 66.5% vs. NI: 60.6%).

We found that men used the Internet to establish intimate relationships more than women, and that the individuals who used the Internet most for this purpose had a higher number of short-term intimate relationships than those who did not. We also found that individuals without a romantic relationship (single, separated/divorced or widowed) were the ones who resorted to this type of online service. No differences between groups were found regarding attachment, self-compassion or psychopathology. However, it is important to highlight the stronger associations between psychopathological symptoms (global severity index and BSI dimensions) and both attachment and self-compassion in the group that did not use the Internet to establish intimate relationships, compared to the group that reported using the Internet with this intention.

The results show the importance of deepening research on the use of the Internet to establish intimate relationships, since there are no studies in this area in Portugal.

Self-compassion, Attachment, Psychopathology, Intimate relationships, Online dating.

O10 Obsessive-compulsive symptomatology: its relation with alexithymia, traumatic experiences and psychopathological symptoms

Sónia C Simões1,2, Timóteo Areosa1, Helena Espírito-Santo1, Laura Lemos1.

Alexithymia has been reported more frequently in subjects with obsessive-compulsive disorder (OCD), since they have a hard time recognizing and describing their own emotions. It should also be noted that many individuals with OCD report experiencing traumatic situations. However, to our knowledge there are no Portuguese studies on the relationship between OCD (or obsessive-compulsive symptomatology), traumatic experiences and alexithymia, justifying the relevance of this study.

The following goals were outlined: 1) to study the relationship between psychopathological symptoms, alexithymia and traumatic experiences in clinical and non-clinical samples; 2) to study and compare traumatic experiences, levels of alexithymia and psychopathological symptoms in clinical and non-clinical samples; 3) to examine the relations of variables such as age, gender, marital status and education with the presence or absence of obsessive-compulsive symptomatology, in order to identify potential confounding factors.

The total sample comprised 115 individuals aged between 18 and 64 years (M = 31.50; SD = 10.61). To create the two comparison groups, the Maudsley Obsessive Compulsive Inventory (MOCI) cut-off point was used, with scores above 10 indicating the presence of obsessive-compulsive symptomatology. The clinical group had 40 subjects, aged between 18 and 49 years (M = 27.03; SD = 7.68), and the non-clinical group had 75 subjects, aged between 18 and 60 years (M = 33.89; SD = 11.21). The research protocol included: MOCI, Traumatic Experiences Checklist (TEC), Toronto Alexithymia Scale and the Brief Symptom Inventory.

The clinical sample presented more psychopathological symptomatology and higher values of alexithymia than the non-clinical sample. No differences were found between groups in the presence of traumatic experiences, but the clinical sample presented higher scores for sexual abuse and trauma in the family of origin. Finally, there were more, and stronger, statistically significant associations among the studied variables in the non-clinical sample than in the clinical sample, especially for the TEC.

Traumatic experiences and alexithymia, in particular, may be factors associated with the onset of obsessive-compulsive symptomatology; a future line of study is to understand whether they are also risk factors for developing OCD.

Obsessive-compulsive symptomatology, Traumatic experiences, Alexithymia.

O11 Loss of group immunity against measles

João MG Frade1,2, Carla Nunes3, João R Mesquita4, Maria SJ Nascimento5, Guilherme Gonçalves1; 1Multidisciplinary Unit for Biomedical Research, Institute of Biomedical Sciences Abel Salazar, University of Porto, 4050-313 Porto, Portugal; 2Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3Public Health Research Centre, National School of Public Health, NOVA University, 1600-560 Lisboa, Portugal; 4Agrarian Superior School, Polytechnic Institute of Viseu, 3500-606 Viseu, Portugal; 5Laboratory of Microbiology, Department of Biological Sciences, Faculty of Pharmacy, University of Porto, 4050-313 Porto, Portugal. Correspondence: João MG Frade ([email protected]).

Vaccination coverage rates higher than 95% contribute to the so-called group immunity effect with regard to measles vaccination [1]. However, to guarantee such immunity, it is important that those vaccine coverage rates also correspond to levels of seropositivity of measles antibodies (specific IgG antibodies levels > 150 mIU/ml) higher than 95% [2].

This study intended to evaluate from which moment, after vaccination with MMR II (triple viral vaccine against mumps, measles and rubella), 95% of individuals have specific IgG antibodies (Anti-Measles-IgG) <150 mIU/ml.

A cross-sectional study was conducted on 190 individuals, born in Portugal after 1990, with records of vaccine history documented in the Individual Record of Vaccination (IRV) and in the Individual Health Bulletin (IHB). Specific IgG antibodies to measles virus (Anti-Measles-IgG) were measured using the commercial immunoassay Siemens EnzygnostÂŽAnti-Measles Virus/IgG.

Data were grouped into three birth cohorts: born between 1990 and 1993, between 1994 and 1995, and between 2001 and 2004. Those born between 2001 and 2004 presented the highest levels of protection against measles; fewer than 2% of these individuals had protection levels below 150 mIU/ml. The cohort born between 1994 and 1995 presented the lowest protection against the disease, with more than 50% of individuals below the protection threshold (150 mIU/ml). The cohort born between 1990 and 1993 was intermediate, with more than 50% of individuals above the protection threshold but a significant percentage of seronegative individuals. ANOVA and Tukey's multiple comparison analysis showed a statistically significant difference among the 3 birth cohorts (p < 0.001). Mathematical modelling showed that 9 years after individuals received MMR II, more than 95% no longer presented specific IgG antibodies against measles virus (Anti-Measles-IgG) above 150 mIU/ml (p < 0.0001).
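
The cohort comparison described above (one-way ANOVA followed by Tukey's multiple comparisons) can be sketched as follows; the simulated antibody titres and cohort sizes are invented for illustration and do not reproduce the study data.

```python
# One-way ANOVA and Tukey's HSD across three birth cohorts (synthetic titres).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
titre = np.concatenate([rng.lognormal(5.0, 1.0, 60),   # born 1990-1993
                        rng.lognormal(4.5, 1.0, 60),   # born 1994-1995
                        rng.lognormal(7.0, 1.0, 70)])  # born 2001-2004
cohort = np.repeat(["1990-1993", "1994-1995", "2001-2004"], [60, 60, 70])

# Global test of cohort differences, then pairwise Tukey comparisons.
groups = [titre[cohort == c] for c in np.unique(cohort)]
f, p = stats.f_oneway(*groups)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
print(pairwise_tukeyhsd(titre, cohort))
```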

The time elapsed since the last MMR II vaccination seems to be associated with protection against measles. Nine years after MMR II, more than 95% of individuals are seronegative for specific IgG antibodies to measles virus.

1. Gonçalves G, Frade J, Nunes C, Mesquita JR, Nascimento MSJ. Persistence of measles antibodies, following changes in the recommended age for the second dose of MMR vaccine in Portugal. Vaccine 2015; 33: 5057-63.

2. Gonçalves G, Nunes C, Mesquita JR, Nascimento MSJ, Frade J. Measles antibodies in cord blood in Portugal. Possible consequences for the recommended age of vaccination. Vaccine 2016; 34: 2750-57.

Immunity, Measles, Vaccination.

O12 Code Stroke in an emergency department - evaluation of results after 7 years of protocol implementation

Ilda Barreira1, Matilde Martins2, Leonel Preto2, Norberto Silva1, Pedro Preto3; 1Serviço de Urgência, Unidade Local de Saúde do Nordeste, 5301-852 Bragança, Portugal; 2Departamento de Enfermagem, Escola Superior de Saúde, Instituto Politécnico de Bragança, 5300-146 Bragança, Portugal; 3Serviço de Ortotraumatologia, Unidade Local de Saúde do Nordeste, 5301-852 Bragança, Portugal. Correspondence: Leonel Preto ([email protected]).

Fibrinolysis reduces mortality and disability after an ischemic stroke, and its benefits are documented with level of evidence I [1]. The major goal of the Code Stroke (CS) is to treat eligible cases with fibrinolysis, within the therapeutic window of 4.5 hours after symptom onset [2]. Thus, an emergency department must operate efficient mechanisms to receive, diagnose, treat or transfer patients with stroke [3].

The main objective was to evaluate the results of CS protocol implementation in the Emergency Department (ED) of a hospital in the north of Portugal. As secondary objectives we aimed to: (I) characterize the patients in terms of sociodemographic and clinical variables; (II) calculate the activation rate of the CS protocol and the rate of fibrinolysis.

Retrospective descriptive analysis, using data from the Manchester triage system and other secondary sources of information, of all patients with ischemic stroke, haemorrhagic stroke and transient ischemic attack (TIA) admitted to the Emergency Department between January 1, 2010 and December 31, 2016. Sociodemographic data, care times, cardiovascular risk factors and other clinical variables were collected. Statistical analysis was performed by ANOVA, at the 0.05 significance level.

In the 7 years analysed, 1,200 patients with cerebrovascular disease were admitted to the ED. Among these patients, 63.0% presented ischemic stroke, 17.3% haemorrhagic stroke and 19.8% TIA. The population was predominantly male (54.8%) and had a mean age of 77.4 (± 11.2) years. The Stroke Code was activated 431 times, covering 37.2% (n = 282) of ischemic strokes, and 18.4% (n = 52) of these patients received thrombolytic therapy. Door-to-needle time was, on average, 69.5 minutes. The mean (± SD) NIHSS (National Institutes of Health Stroke Scale) score was 14.8 (± 5.2) before treatment, decreasing to 11.8 (± 6.0) at two hours post-fibrinolysis (p < 0.05). For all patients (N = 1,200), we obtained the following prevalence of risk factors: hypertension (64.7%), dyslipidaemia (30.3%), diabetes (26.5%), atrial fibrillation (23.3%), obesity (12.9%), smoking (6.3%) and ischemic heart disease (5.9%). The 24-hour mortality rate was 0.9% for ischemic stroke, 10.6% for haemorrhagic stroke and 0% for TIA.

High protocol activation rates were obtained for acute ischemic stroke, but only 52 patients met the criteria for fibrinolysis. The advanced age and comorbidity of patients with ischemic disease, together with their predominantly rural origin, may have influenced the therapeutic window and the eligibility criteria for fibrinolysis.

1. Jauch EC, Saver JL, Adams HP, Bruno A, Connors JJ, Demaerschalk BM, et al. Guidelines for the early management of patients with acute ischemic stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2013;44(3):870-947.

2. Baldereschi M, Piccardi B, Di Carlo A, Lucente G, Guidetti D, Consoli D, et al. Relevance of prehospital stroke code activation for acute treatment measures in stroke care: a review. Cerebrovasc Dis. 2012;34(3):182-90.

3. Alonso de Leciñana M, Egido JA, Casado I, Ribó M, Dávalos A, Masjuan J, et al. Guidelines for the treatment of acute ischaemic stroke. Neurologia. 2014;29(2):102-22.

Stroke, Emergency Service Hospital, Fibrinolysis, Outcome and Process Assessment.

O13 SEMantic and PRAgmatic assessment platform for school-age children

Dulce Tavares, Eileen S Kay; Escola Superior de Saúde de Alcoitão, 2649-506 Alcabideche, Portugal. Correspondence: Dulce Tavares ([email protected]).

Semantic and pragmatic skills develop throughout life and are essential for school and social learning. Upon entering school, learning to read and write develops in two large areas of knowledge: the first involves recognizing and decoding the written symbols of words and developing vocabulary; the second supports the understanding of what is read, through inferential capacities and non-literal interpretation. Students with reading comprehension difficulties often go unnoticed: it is easier to detect a child who reads slowly, syllable by syllable, or with mistakes, than one who reads fluently but without understanding the content. These difficulties only become evident when questions are asked about the text or when it is necessary to understand questions in subjects such as mathematics or science. Thus, success in meeting the National Curricular Plan can be compromised.

Material was developed to evaluate semantic and pragmatic skills in school-aged children. In semantics, aspects of syntagmatic and paradigmatic relations (lexical field, synonymy and antonymy) and paronymy are evaluated. In pragmatics, competences such as inferences and the comprehension of idioms and proverbs are evaluated. This material will be placed on a platform that can be consulted and used by the different professionals working with children. The items that constitute this material took into account the stages of language development and school level. The lexicon used is in the domain of European Portuguese.

The 756 children assessed attended public and private schools in Portugal. The results show a progressive development of the children's lexical competences, with significant differences between the different age groups in all tests. There were no significant differences between females and males except in the paronym test. The socio-professional level of the child's family proved to be a differentiating factor of lexical competence, with significant differences observed in all tests regardless of the child's age.

The authors concluded that it is of great importance to analyse lexical competence regarding the aspects of its organization, as it enables students to deal successfully with academic tasks, improving literacy, and allows professionals to act in a systematic and productive way when intervening with children with language disorders. Given the complexity and innovation of the pragmatic skills assessment (in European Portuguese), this work remains under development.

Semantics, Pragmatics, Assessment, School age.

O14 Quality of Life in Portugal – what factors can determine the QoL in people with Intellectual Disabilities and a great need of supports?

António Rodrigo, Sofia Santos, Fernando Gomes; Faculty of Human Kinetics, University of Lisbon, 1495-687 Cruz Quebrada, Portugal. Correspondence: António Rodrigo ([email protected]).

In Portugal, the Quality of Life (QoL) concept has become increasingly relevant, moving from a vision that focuses only on the person's limitations to one that emphasizes the quality of interactions between personal characteristics and environmental demands, within a socioecological model. This new paradigm changes the approach to evaluating and planning individualized supports for adults with Intellectual and Developmental Disability (IDD). Research shows an emerging interest in analysing which personal and environmental factors have an impact on the QoL of persons with IDD. Therefore, our main goal was to analyse how individual characteristics influence the QoL of people with intellectual disability and a great need of supports.

The Portuguese version of the Escala San Martín, which focuses on 8 QoL domains (Self-determination, Emotional Well-Being, Physical Well-Being, Material Well-Being, Rights, Personal Development, Social Inclusion and Interpersonal Relations), was applied to 293 individuals with intellectual disabilities over 18 years old (32.31 ± 8.29 years), 128 females and 165 males. All participants were institutionalized. The dependent variables were the domain/QoL total scores and the independent variables were gender, diagnosis, age, comorbidities and taking medication. A comparative study was carried out using either independent-samples t-tests or one-way analysis of variance (ANOVA).

When comparing gender, age and medication consumption, no significant differences were found, with all groups presenting similar mean QoL scores. Regarding comorbidities, significant differences were found in the physical well-being (p < 0.001), rights (p < 0.001) and social inclusion (p = 0.001) domains, with higher mean QoL scores for those without comorbidities. Significant differences were also found regarding diagnosis, in all domains except material well-being, with higher mean scores in individuals with a mild intellectual disability diagnosis compared to those with a severe or profound ID diagnosis.

Information about the personal factors with an impact on QoL will help to meet challenges and will allow more adjusted person-centred planning. Discussion and implications for practice will be presented.

Individualized supports, Intellectual and Developmental Disability, Quality of Life, Escala San Martín.

O15 Relationship between the levels of functional capacity and family functionality and depression in the elderly

Andréia WB Silva, Akemi Izumi, Giovanna G Vietta, Márcia R Kretzer; Universidade do Sul de Santa Catarina, 88137-270 Palhoça, Santa Catarina, Brazil. Correspondence: Andréia WB Silva ([email protected]).

Depression is one of the most prevalent mental health problems among the elderly [1]. Functional limitations and changes in family dynamics are important risk factors for the onset of depression [1,2].

To analyse the relation between levels of functional capacity and family functionality and depression in the elderly.

A cross-sectional study was conducted including 138 elderly individuals. The presence of depression, the capacity to perform Basic Activities of Daily Living (BADL) and Instrumental Activities of Daily Living (IADL), and family functionality were assessed, respectively, by the Geriatric Depression Scale (GDS), the Katz Index of Independence in Activities of Daily Living, the Lawton Scale and the family APGAR. The Statistical Package for the Social Sciences (SPSS) version 18.0 was used to enter and analyse the data (p < 0.05). The study was approved by the Research Ethics Committee of UNISUL.

The most prevalent characteristics were age between 60 and 69 years (62.3%), female gender (52.9%), white ethnicity (87.0%), up to 8 years of schooling (75.8%) and being retired (80.4%). 67.4% of the elderly did not have a spouse, and 14.5% lived alone. Depression had a prevalence of 43.5%, of whom 88.3% were mildly depressive and 11.7% severely depressive. There was a high frequency of hypertension (64.5%), diabetes mellitus (37.7%), osteoarthritis (39.1%), heart failure (28.3%), chronic obstructive pulmonary disease (15.9%) and asthma (9.4%). When evaluating functional capacity, 1.4% and 12.3% of the participants were classified as dependent for BADL and IADL, respectively. Family dysfunction was observed in 12.3% of the elderly (5.1% moderate dysfunction and 7.2% high dysfunction). When testing associations between depression and sociodemographic characteristics, the results showed statistical differences for gender and marital status: women were 1.538 times more likely to have depression than men (p = 0.031), and individuals who had a spouse were 1.580 times more likely to suffer from the disease (p = 0.018). When associating depression with other comorbidities, arterial hypertension was 1.652 times more prevalent (p = 0.024). Statistical differences were also identified for heart failure (PR = 1.941, p = 0.001) and asthma (PR = 1.923, p = 0.012). Functional capacity for BADL and IADL did not differ statistically. Family dysfunction was significantly associated with depression, which was 1.969 times more frequent in dysfunctional families (p = 0.003).

Depression in the elderly is associated with the female gender, having a spouse, cardiovascular and respiratory morbidities and family dysfunction.

1. Lima AMP, Ramos JLS, Bezerra IMP, Rocha R, Batista HHMT, Pinheiro WR. Depressão em idosos: uma revisão sistemática da literatura. Rev Epidemiol Controle Infecç. 2016;6(2):97-103.

2. Bretanha AF, Fachinni LA, Nunes BP, Munhoz TN, Tomazi E, Thumé E. Sintomas depressivos em idosos residentes em áreas de abrangência das Unidades Básicas de Saúde da zona urbana de Bagé, RS. Rev Bras Epidemiol. 2015;18(1):1-12.

Depression in the elderly, Functional capacity, Family functionality.

O16 Microbiological evaluation of hotel units' swimming pools

Diana Assembleia, Teresa Moreira, António Araújo, Cecília Rodrigues, Marlene Mota, Manuela Amorim; Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal. Correspondence: Manuela Amorim ([email protected]).

Swimming pools are currently operated by public and private entities for sports, recreational and therapeutic activities [1]. For this reason, it is essential to guarantee the chemical and microbiological quality of the pool water, since it may be the origin of several pathologies [2].

The present research aimed to analyse data from the microbiological evaluation of indoor and outdoor swimming pool waters of hotel units in mainland Portugal and Madeira in 2016, in order to verify the water quality.

A cross-sectional descriptive study was performed using database records from a laboratory in the north of the country. The microbiological parameters studied to characterize the indoor and outdoor swimming pool waters included CFU/mL of viable microorganisms at 37 ºC/24 h, CFU/100 mL of total coliforms, CFU/100 mL of Escherichia coli, CFU/100 mL of Enterococcus spp., CFU/100 mL of Pseudomonas aeruginosa, CFU/100 mL of total Staphylococcus and CFU/100 mL of coagulase-producing Staphylococcus. The samples were classified as conforming or non-conforming according to the reference intervals indicated in Regulatory Circular nº 14/DA of 21/08/2009 of the Direção-Geral da Saúde [1].

Of the indoor pools analysed (n = 610), 25.09% (n = 153) were classified as non-conforming, with microorganisms viable at 37 ºC being the most frequent cause of nonconformity (n = 105), followed by total coliforms and total Staphylococcus (n = 42 each). Of the outdoor pools (n = 1,982), 29.92% (n = 593) were classified as non-conforming, again with microorganisms viable at 37 ºC as the most frequent cause (n = 420), followed by total coliforms (n = 154).

Indoor swimming pools had a lower frequency of nonconformities than outdoor swimming pools; the ambient temperature and the presence of soil around the pool influence the microbiological quality of the water [2]. These results also suggest that water treatment is not fully effective, indicating water pollution, with hygiene practices being another factor that influences the microbiological quality of the water [3]. The determination of these parameters is useful when microbiological monitoring is carried out regularly.

1. Direcção-Geral da Saúde. Normativa Circular Nº 14/DA de 21/08/2009. Programa de Vigilância Sanitária de Piscinas; 2009.

2. Rebelo H, Rodrigues R, Grossinho J, Almeida C, Silva M, Silva C, et al. Avaliação da qualidade da água de piscinas: estudo de alguns parâmetros bacteriológicos e físico-químicos. Boletim Epidemiológico Observações. 2014, 3(4):3-5.

3. World Health Organization. Guidelines for safe recreational water environments. Swimming pools and similar environments. Geneva: World Health Organization; 2006.

Microbiological evaluation, Microbiological quality, Swimming pool waters, Fecal contamination indicators.

O17 Normative values of functionality and quality of life of Portuguese healthy older people

Cátia Paixão1, Sara Almeida1,2, Alda Marques1,2; 1Respiratory Research and Rehabilitation Laboratory, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 2Institute for Research in Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal. Correspondence: Alda Marques ([email protected]).

The older population is increasing worldwide [1]. Since the average life expectancy is currently 71.4 years at birth and the healthy life expectancy is only 63.1 years, there is a need to enhance the focus on public health to promote health and healthy ageing [2, 3]. Measures of functionality and health-related quality of life (HRQoL) have been identified as predictors of healthy aging [4-6]. However, to interpret results from those measures, and compare them within a population or across populations, normative data are necessary [7-9].

To establish age and gender-related normative values for the Five Times Sit to Stand Test (5 STS), 10 Meter Walk Test (10MWT), and World Health Organization Quality of Life-Bref (WHOQoL-Bref) for Portuguese healthy older people.

An exploratory cross-sectional study was conducted. Participants were recruited from the community. Functionality was assessed with the 5STS [4, 10] and the 10MWT [5, 11], and health-related quality of life (HRQoL) with the WHOQoL-Bref (scores: 0-20 and 0-100) [6]. Descriptive statistics were used to determine normative scores by age decade (60-69; 70-79; 80-89 years) and gender. Differences between age groups and genders were explored with multiple comparison tests using the Bonferroni correction.
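
A minimal sketch of how such age- and gender-stratified normative tables and Bonferroni-corrected comparisons can be produced is shown below; the data frame, column names and values are synthetic assumptions, not the study sample.

```python
# Normative table (mean, SD per age decade and gender) plus Bonferroni-adjusted
# pairwise comparisons between decades, on synthetic data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "age": rng.integers(60, 90, 118),
    "gender": rng.choice(["F", "M"], 118, p=[0.67, 0.33]),
    "five_sts": rng.normal(13, 5, 118),   # 5STS time in seconds (illustrative)
})
df["decade"] = pd.cut(df.age, [59, 69, 79, 89], labels=["60-69", "70-79", "80-89"])

# Normative values stratified by age decade and gender.
print(df.groupby(["decade", "gender"], observed=True)["five_sts"]
        .agg(["mean", "std", "count"]).round(1))

# Pairwise decade comparisons with a Bonferroni correction.
pairs = [("60-69", "70-79"), ("60-69", "80-89"), ("70-79", "80-89")]
for a, b in pairs:
    t, p = stats.ttest_ind(df.loc[df.decade == a, "five_sts"],
                           df.loc[df.decade == b, "five_sts"])
    print(f"{a} vs {b}: adjusted p = {min(p * len(pairs), 1.0):.3f}")
```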

118 older people (76.2 ± 8.9 years; n = 79, 66.9% female) participated in this study. Mean scores of the 5STS (9.4 ± 3.5 s; 13.0 ± 4.9 s; 16.7 ± 6.7 s) and 10MWT (5.4 ± 2.1 s; 6.5 ± 3.1 s; 12.4 ± 5.9 s) increased with age. Mean scores of the WHOQoL-Bref domains on the 0-20 scale decreased with age: physical health (15.9 ± 2.6; 15.1 ± 2.2; 13.6 ± 2.3), psychological (15.6 ± 2.6; 15.0 ± 2.3; 13.9 ± 1.9), social relationships (15.8 ± 2.8; 14.6 ± 2.4; 13.5 ± 2.4), environment (16.4 ± 2.3; 16.0 ± 2.3; 15.1 ± 1.6). Similar findings were observed on the 0-100 scale: physical health (74.6 ± 16.4; 69.3 ± 13.5; 60.4 ± 14.2), psychological (72.6 ± 16.4; 68.4 ± 14.5; 61.8 ± 12.1), social relationships (73.6 ± 17.6; 66.4 ± 15.2; 59.6 ± 15.2) and environment (77.6 ± 2.3; 74.9 ± 14.6; 69.4 ± 10.2). Females showed worse results in all measures. Mean scores of most measures differed significantly among age decades and between genders (p < 0.05).

This study provided normative values of 5STS, 10MWT and WHOQoL-Bref for the Portuguese healthy older people. These data may improve the utility of these measures for health professionals to screen people and develop tailored interventions to improve functionality and HRQoL in this population. Normative values of WHOQoL-Bref will also allow identifying vulnerable groups and describing the profile of HRQoL in Portuguese healthy older people living in the community.

This work was partially supported by Programa Operacional de Competitividade e Internacionalização (COMPETE), through the Fundo Europeu de Desenvolvimento Regional (FEDER), and by Fundação para a Ciência e a Tecnologia (FCT) under the project UID/BIM/04501/2013.

1. WHO. Global health and aging. Geneva: World Health Organization. 2011.

2. Healthy Ageing: Keystone for a Sustainable Europe - EU Health Policy in the Context of Demographic Change. European Commission. 2007.

3. Fuchs J, Scheidt-Nave C, Hinrichs T, Mergenthaler A, Stein J, Riedel-Heller SG, et al. Indicators for healthy ageing—a debate. International journal of environmental research and public health. 2013;10(12):6630-44.

4. Marques A, Almeida S, Carvalho J, Cruz J, Oliveira A, JĂĄcome C. Reliability, Validity, and Ability to Identify Fall Status of the Balance Evaluation Systems Test, Mini-Balance Evaluation Systems Test, and Brief-Balance Evaluation Systems Test in Older People Living in the Community. Arch Phys Med Rehabil. 2016;97(12):2166-73.

5. Marques A, Cruz J, Quina S, RegĂŞncio M, JĂĄcome C. Reliability, Agreement and Minimal Detectable Change of the Timed Up & Go and the 10-Meter Walk Tests in Older Patients with COPD. COPD: Journal of Chronic Obstructive Pulmonary Disease. 2016;13(3):279-87.

6. Huang T, Wang W. Comparison of three established measures of fear of falling in community-dwelling older adults: psychometric testing. Int J Nurs Stud. 2009;46(10):1313-9.

7. Rothstein JM, Echternach JL. Primer on measurement: an introductory guide to measurement issues, featuring the American Physical Therapy Association's standards for tests and measurements in physical therapy practice: Amer Physical Therapy Assn; 1993.

8. Bohannon RW, Andrews AW. Normal walking speed: a descriptive meta-analysis. Physiotherapy. 2011;97(3):182-9.

9. Mitrushina M, Boone KB, Razani J, D'Elia LF. Handbook of normative data for neuropsychological assessment: Oxford University Press; 2005.

10. Bohannon R. Reference values for the five-repetition sit-to-stand test: a descriptive meta-analysis of data from elders. Percept Mot Skills. 2006;103(1):215-22.

11. Bohannon R. Comfortable and maximum walking speed of adults aged 20-79 years: reference values and determinants. Age Ageing. 1997;26(1):15-9.

Normative values, Functionality, Quality of Life, Portuguese healthy older people.

O18 Relationship between balance and functionality, gait speed, physical activity and quality of life in community-dwelling older people

Sara Almeida1,2, Cátia Paixão1, Alda Marques1,2.

Balance is a modifiable risk factor for falls, which represent a major public health problem for healthy ageing [1]. Predictors of healthy ageing in older people (i.e., functionality, gait speed, physical activity (PA) and health-related quality of life (HRQoL)) have been correlated with balance measures [2-5]. However, most balance measures do not assess the different components of balance, hindering the design of interventions. To overcome this difficulty, the Balance Evaluation Systems Test (BESTest) [6] and its short versions [7, 8], new comprehensive measures of balance, were developed. Nevertheless, the relationship of the BESTest [6] and its short versions [7, 8] with functionality, gait speed, physical activity and health-related quality of life in older people living in the community is still unknown.

To explore the relationship of the BESTest, Mini-BESTest and Brief-BESTest with functionality, gait speed, PA and HRQoL in community-dwelling older people.

An exploratory cross-sectional study was conducted. Community-dwelling older people (> 60 years) were recruited. Balance was assessed with the BESTest, Mini-BESTest and Brief-BESTest, functionality with the 5STS [9], gait speed with the 10MWT [10], PA with the Brief-PA questionnaire [11] and HRQoL with the WHOQoL-Bref [12]. Descriptive statistics were used to characterize the sample. Correlations were explored with the Spearman correlation coefficient and, by convention, interpreted as negligible (0.00-0.30), low (0.30-0.50), moderate (0.50-0.70), high (0.70-0.90) or very high (0.90-1.00) [13].
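
The correlation analysis above can be sketched as follows, pairing each Spearman coefficient with the interpretation bands of [13]; the data and variable names are synthetic illustrations, not the study measurements.

```python
# Spearman correlations between a balance score and several outcomes,
# labelled with the conventional interpretation bands (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
bestest = rng.normal(75, 10, 118)
outcomes = {
    "5STS (s)": -0.6 * bestest + rng.normal(0, 8, 118),
    "10MWT (s)": -0.7 * bestest + rng.normal(0, 6, 118),
    "WHOQoL physical": 0.5 * bestest + rng.normal(0, 9, 118),
}

def band(r):
    r = abs(r)
    if r < 0.30: return "negligible"
    if r < 0.50: return "low"
    if r < 0.70: return "moderate"
    if r < 0.90: return "high"
    return "very high"

for name, values in outcomes.items():
    rho, p = stats.spearmanr(bestest, values)
    print(f"{name}: rho = {rho:.2f} ({band(rho)}), p = {p:.3f}")
```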

One hundred and eighteen older people living in the community (76.2 ± 8.9 years; n = 79, 66.9% female) participated in this study. On average, participants were overweight, with high body mass index (male: 26.9 ± 4.2 kg/m²; female: 26.8 ± 4.3 kg/m²) and fat-free mass (male: 29.5 ± 6.3%; female: 37.6 ± 6.2%). The BESTest, Brief-BESTest and Mini-BESTest were: I) low and negatively correlated with intense (-0.34; -0.37; -0.32, respectively) and moderate (-0.37; -0.37; -0.35, respectively) PA; II) moderately and negatively correlated with the 5STS (-0.51; -0.61; -0.59, respectively); III) moderately to highly and negatively correlated with the 10MWT (-0.69; -0.77; -0.78); and IV) negligibly to moderately and positively correlated with the WHOQoL-Bref domains (I-Physical health: 0.46; 0.57; 0.53; II-Psychological: 0.47; 0.52; 0.53; III-Social relationships: 0.32; 0.36; 0.28; IV-Environment: 0.46; 0.51; 0.46).

This study shows that the BESTest and its short versions are related to functionality, gait speed and HRQoL in community-dwelling older people. Higher correlations were found for the short versions, especially with the functionality measures. This is useful for clinical practice, since these versions are simpler, require less material and are quicker to apply.

This work was partially supported by Programa Operacional de Competitividade e Internacionalização (COMPETE), through the Fundo Europeu de Desenvolvimento Regional (FEDER), and by Fundação para a Ciência e a Tecnologia (FCT) under the project UID/BIM/04501/2013.

1. WHO. Falls: fact sheet. Geneva; 2016.

2. Iwakura M, Okura K, Shibata K, Kawagoshi A, Sugawara K, Takahashi H, et al. Relationship between balance and physical activity measured by an activity monitor in elderly COPD patients. Int J Chron Obstruct Pulmon Dis. 2016;11:1505-14.

3. Ozcan A, Donat H, Gelecek N, Ozdirenc M, Karadibak D. The relationship between risk factors for falling and the quality of life in older adults. BMC Public Health. 2005;5:90.

4. Spagnuolo DL, Jürgensen SP, Iwama AM, Dourado VZ. Walking for the Assessment of Balance in Healthy Subjects Older than 40 Years. Gerontology. 2010;56(5):467-73.

5. Nilsagård Y, Andreasson M, Carling A, Vesterlin H. Examining the validity and sensitivity to change of the 5 and 10 sit-to-stand tests in people with multiple sclerosis. Physiotherapy Research International. 2017;22(4):e1681.

6. Horak F, Wrisley D, Frank J. The Balance Evaluation Systems Test (BESTest) to Differentiate Balance Deficits. Phys Ther. 2009;89(5):484-98.

7. Franchignoni F, Horak F, Godi M, Nardone A, Giordano A. Using psychometric techniques to improve the Balance Evaluation Systems Test: the mini-BESTest. Journal of rehabilitation medicine. 2010;42(4):323-31.

8. Padgett P, Jacobs J, Kasser S. Is the BESTest at its best? A suggested brief version based on interrater reliability, validity, internal consistency, and theoretical construct. Phys Ther. 2012;92(9):1197-207.

9. Goldberg A, Chavis M, Watkins J, Wilson T. The five-times-sit-to-stand test: validity, reliability and detectable change in older females. Aging Clin Exp Res. 2012;24(4):339-44.

10. Peters D, Fritz S, Krotish D. Assessing the Reliability and Validity of a Shorter Walk Test Compared With the 10-Meter Walk Test for Measurements of Gait Speed in Healthy, Older Adults. J Geriatr Phys Ther. 2013;36(1):24-30.

11. Marshall A, Smith B, Bauman A, Kaur S. Reliability and validity of a brief physical activity assessment for use by family doctors. Br J Sports Med. 2005;39(5):294-7.

12. Kluthcovsky A, Kluthcovsky F. O WHOQOL-bref, um instrumento para avaliar qualidade de vida: uma revisão sistemática. Revista de Psiquiatria do Rio Grande do Sul. 2009;31(3).

13. Mukaka M. A guide to appropriate use of Correlation coefficient in medical research. Malawi Med J. 2012;24(3):69-71.

Correlations, Balance, Healthy ageing predictors, Older people.

O19 Trends of hospitalization for chronic obstructive pulmonary disease in Brazil from 1998 to 2016

Bárbara O Gama, Andréia WB Silva, Fabiana O Gama, Giovanna G Vietta, Márcia R Kretzer; University of Southern Santa Catarina, Campus Pedra Branca, 88137-270 Palhoça, Santa Catarina, Brazil. Correspondence: Bárbara O Gama ([email protected]).

Chronic Obstructive Pulmonary Disease (COPD) is a major public health problem. In Brazil, it is the fifth largest cause of hospitalization in the public health system when analysing patients over 40 years of age.

To analyse the trend of hospitalization for COPD in Brazil from 1998 to 2016.

Trend analyses of hospitalization for COPD were based on data from the Sistema de Informação Hospitalar of the Departamento de Informática do Sistema Único de Saúde (DATASUS). Simple linear regression analysis was performed, with p < 0.05. The study was approved by the Ethics Committee of the Universidade do Sul de Santa Catarina (UNISUL).
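
As an illustration of the trend method used here and in the following abstracts, the sketch below fits a simple linear regression of annual hospitalization rates on calendar year, where the slope (beta) estimates the average annual change; the rates are invented for illustration, not DATASUS data.

```python
# Simple linear regression of annual rates on year: the slope is the
# average annual change (synthetic rates, not DATASUS data).
import numpy as np
from scipy import stats

years = np.arange(1998, 2017)
rates = (np.linspace(166.2, 56.7, years.size)
         + np.random.default_rng(6).normal(0, 4, years.size))

res = stats.linregress(years, rates)
print(f"beta = {res.slope:.3f} per year, p = {res.pvalue:.4f}")
```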

In the period, 3,403,536 hospitalizations for COPD were recorded in Brazil, with a strong downward trend in rates (β = -6.257, p < 0.001), from 166.2/100,000 inhabitants in 1998 to 56.7/100,000 inhabitants in 2016. Among the Brazilian regions, the highest hospitalization rates were in the southern region, from 461.8/100,000 inhabitants in 1998 to 133.1/100,000 inhabitants in 2016, followed by the central-west region, from 222.2/100,000 to 61.6/100,000 inhabitants. The region with the lowest hospitalization rates was the northeast, from 69.6/100,000 to 36.7/100,000 inhabitants. The largest decrease occurred in the southern region (β = -19.4). The trend is decreasing in both sexes, with the largest reduction in males (β = -6.976), who had the highest admission rates at the beginning and end of the period (180.7 and 60.6/100,000 inhabitants, respectively). All age groups showed a significant downward tendency, with the largest decreases in the age groups above 60 years. In the age group of 80 years or more, rates fell from 3,370.1/100,000 in 1998 to 1,535.3/100,000 inhabitants in 2016 (β = -101.198); in females of this age group, the reduction was from 2,089.1/100,000 to 548.4/100,000 inhabitants (β = -84.372).

The trend of hospitalization for COPD in Brazil is decreasing. The southern region has the highest rates, as does the male sex. The age groups of 60 years and older, in both sexes, present the highest hospitalization rates, which increase with age. The results indicate a change in the profile of this disease, which can be attributed to greater coverage of the family health strategy, better monitoring of diagnosed cases and free access to medicines dispensed by the SUS (Sistema Único de Saúde), which reduce exacerbations.

Chronic Obstructive Pulmonary Disease, Hospitalization, Trends.

O20 Temporal trend of hospitalization for acute myocardial infarction in the southern Brazilian states from 2008 to 2016

Jessica M Okabe, Bárbara O Gama, Pedro F Simão, Márcia Kretzer, Giovanna G Vietta, Fabiana O Gama. Correspondence: Jessica M Okabe ([email protected]).

Acute myocardial infarction (AMI) is responsible for high hospitalization rates in the southern regions of Brazil and represents one of the major causes of morbidity and mortality.

To analyse the temporal trend of hospitalization for AMI in the southern Brazilian states from 2008 to 2016.

Ecological time-series study of hospitalization for AMI, with data from the Hospital Information System provided by the Department of Informatics of the Unified Health System (ICD-10 code I21.9) for the resident population of the states of Paraná (PR), Santa Catarina (SC) and Rio Grande do Sul (RS), according to sex and age group. Simple linear regression was performed, with p < 0.05. The study was approved by the Research Ethics Committee of the Southern University of Santa Catarina.

In the analysed period, there were 154,828 hospitalizations in the southern region. There was an upward trend in rates, with an average annual increase of 4.261 hospitalizations for AMI per 100,000 inhabitants: a rate of 82.65/100,000 inhabitants was registered at the beginning of the period and 118.67/100,000 inhabitants at the end. The same trend was observed in the three southern states. Paraná went from a rate of 15.57/100,000 in 2008 to 24.89/100,000 in 2016 (β = 1.064; p = 0.002). In Santa Catarina the rate rose from 17.11/100,000 in 2008 to 28.23/100,000 in 2016 (β = 1.159; p < 0.001). Rio Grande do Sul presented the highest rates among the states, from 21.98/100,000 in 2008 to 30.57/100,000 in 2016 (β = 1.156; p < 0.001). Both sexes had an upward trend (male β = 15.732, p < 0.001; female β = 8.553, p < 0.001), varying from 279.32 (2008) to 418.46/100,000 inhabitants (2016) among men, and from 165.76 to 238.05/100,000 inhabitants among women. The age groups of 40-49 years and 70-79 years, in both sexes, presented an upward tendency in hospitalization rates. In males aged 40 to 49 years, the hospitalization rate increased from 368.56/100,000 in 2008 to 475.21/100,000 in 2016 (β = 10.553; p = 0.01); between 70 and 79 years there was an increase from 10,791.40/100,000 to 12,458.46/100,000 in the same period (β = 160.084; p = 0.04). In females aged 40 to 49 years, the hospitalization rate increased from 168.11/100,000 in 2008 to 210.63/100,000 in 2016 (β = 5.184; p = 0.02); between 70 and 79 years there was an increase from 4,452.09/100,000 to 5,130.12/100,000 in the same period (β = 78.868; p = 0.02).

The study showed an upward trend in hospitalization rates for AMI in the southern region, in both sexes and in the age groups above 30 years. Males present the highest rates.

Acute myocardial infarction, Trend, Hospitalization.

O21 Temporal trend of the incidence of tuberculosis in the state of Santa Catarina from 2001 to 2015

Rafaela F Abreu, Bárbara O Gama, Pedro F Simão, Giovanna G Vietta, Márcia Kretzer, Fabiana O Gama. Correspondence: Rafaela F Abreu ([email protected]).

The World Health Organization (WHO) has declared tuberculosis (TB) a global public health emergency. TB control is a priority in Brazil.

To analyse the temporal trend of the incidence of tuberculosis in the State of Santa Catarina from 2001 to 2015.

Ecological study of time series of TB incidence trends selected from the SINAN (Information System for Notifiable Diseases) of the Ministry of Health in the population residing in the State, by sex, age group and macro-regions. Simple linear regression was performed, p < 0.05. Study approved by the Research Ethics Committee of the Southern University of Santa Catarina.

From 2001 to 2015, 30,213 TB cases were confirmed in Santa Catarina, with a steady trend in incidence rates: 31.2/100,000 inhabitants in 2001 and 32.0/100,000 inhabitants in 2015 (p = 0.27). Males presented the highest rates, with a strong upward tendency of 0.456 per year, from 42.24/100,000 inhabitants in 2001 to 47.55/100,000 inhabitants in 2015 (p < 0.001). In males aged 0 to 19 years and 20 to 29 years, incidence rates increased significantly, by 0.159 and 0.606 per year, respectively (p = 0.02). The 40 to 49 years age group, in turn, showed a decreasing trend, with a reduction of 1.292 in the incidence rate per year (p = 0.001). In females, there was a reduction of 0.802 in the rate per year in the 20 to 29 years age group (p = 0.003). The macro-regions of the Midwest, Foz do Rio Itajaí and Planalto Norte presented a reduction in TB incidence rates; in the macro-regions of Greater Florianópolis and South, the trend was increasing (p < 0.05).

TB incidence rates in Santa Catarina are stationary, with a growing trend in males: increasing in the male age groups up to 29 years and decreasing between 40 and 49 years, and decreasing in the female age group from 20 to 29 years. Macro-regions located along the coast show an increasing tendency, while macro-regions located in the centre-west of the state show a decreasing trend.

Tuberculosis, Trend, Incidence.

O22 Trauma, impulsivity, suicidality and binge eating

Ana C Ribeiro, Mariana Marques, Sandra Soares, Pedro Correia, Cidália Alves, Paula Silva, Laura Lemos, Sónia Simões, Instituto Superior Miguel Torga, 3000-132 Coimbra, Portugal, Correspondence: Ana C Ribeiro ([email protected]).

Binge eating is a public health problem with physical and psychological effects throughout life. Several studies have explored the association between particular variables (e.g. shame) and binge eating symptoms, but it is important to continue exploring the contribution of other correlates.

To explore the association of traumatic experiences, impulsivity and suicidality with binge eating symptoms, and their predictive role for these symptoms.

421 subjects from the general population and college students (women, n = 300, 71.3%) completed the Traumatic Events Checklist, the Binge Eating Scale, the Barratt Impulsiveness Scale and the Suicidality Scale.

The point prevalence of binge eating symptoms was similar to that found in recent national studies, with severe binge eating in 2.6% of the total sample (3.3% in women). In both genders, the suicidality total score and body mass index (BMI) were associated with the binge eating total score. Only in women did this score correlate with the sexual and family trauma total scores and with the total score of traumatic events. In men, the suicidality total score was associated with the family trauma total score and with the total score of traumatic events; in women, it also correlated with the sexual trauma total score. In men, the binge eating total score was associated with attentional impulsivity (one of the first-order impulsivity factors); in women, it was associated with all the first-order impulsivity factors (attentional, motor and non-planning impulsivity) and with all the second-order impulsivity factors (psychological attention, cognitive instability, motor, self-control and cognitive complexity), with the exception of perseverance. In women, attentional impulsivity was particularly associated with the sexual and family trauma total scores and with the total score of traumatic experiences. In women, the BMI, suicidality and attentional impulsivity total scores were the predictors of the binge eating total score.

In a sample from the general population and college students, the predictive role of BMI, suicidality and attentional impulsivity scores for binge eating symptoms, mainly in women, is salient and of importance for future interventions; traumatic events (a more distal correlate) revealed significant associations but did not predict these symptoms.

Traumatic events, Impulsiveness, Suicidality, Binge eating.

O23 Computerised respiratory sounds during acute exacerbations of chronic obstructive pulmonary disease

Ana Oliveira 1,2,3, Patrícia Rebelo 2,3, Lília Andrade 4, Carla Valente 4, Alda Marques 2,3, 1 Faculty of Sports, University of Porto, 4200-450 Porto, Portugal; 2 Respiratory Research and Rehabilitation Laboratory, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 3 Institute for Research in Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal; 4 Pulmonology Department, Baixo Vouga Hospital Center, 3810-501 Aveiro, Portugal, Correspondence: Ana Oliveira ([email protected]).

Timely treatment and adequate monitoring of acute exacerbations of chronic obstructive pulmonary disease (AECOPD) have been shown to reduce hospital admissions and recovery time while improving patients' quality of life [1]. Nevertheless, this is challenging, as AECOPD diagnosis/monitoring relies exclusively on patients' reports of symptom worsening [2]. AECOPD are characterised by increased airway inflammation and obstruction, abnormal bronchial mucus and air trapping, which result in changes in lung acoustics [2,3]. Thus, changes in respiratory mechanics related to AECOPD may be successfully monitored by respiratory sounds, namely adventitious respiratory sounds (ARS, crackles and wheezes) [3]. Nevertheless, little is known about ARS changes during the time course of AECOPD.

To evaluate ARS changes during the time course of AECOPD.

25 non-hospitalised patients with AECOPD (16 males, 70.0 ± 9.8 yrs, FEV1 54.2 ± 20.6% predicted) were enrolled. Patients were treated with pharmacological therapy. ARS at the anterior and posterior right/left chest were simultaneously recorded at hospital presentation (T1) and at weeks 3 (T3) and 8 (T8). ARS (number of crackles and wheeze occupation rate, %Wh) were processed, per respiratory phase, using validated algorithms [4,5]. Differences were examined with Friedman and Cochran tests, with Bonferroni-corrected pairwise comparisons.
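
A hedged sketch of this kind of repeated-measures comparison, assuming per-patient summaries at the three time points; the data are simulated for illustration only, and the pairwise step uses Wilcoxon signed-rank tests as one plausible post hoc choice (the binary wheeze-presence comparison would use Cochran's Q analogously, e.g. statsmodels' cochrans_q).

```python
# Friedman test across T1/T3/T8 for a continuous outcome (e.g. number of
# inspiratory crackles), followed by Bonferroni-corrected pairwise
# Wilcoxon signed-rank tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t1 = rng.gamma(2.0, 1.0, 25)          # crackle counts at hospital presentation
t3 = t1 * rng.uniform(0.6, 1.1, 25)   # week 3
t8 = t1 * rng.uniform(0.3, 0.9, 25)   # week 8

stat, p = stats.friedmanchisquare(t1, t3, t8)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")

pairs = [("T1 vs T3", t1, t3), ("T1 vs T8", t1, t8), ("T3 vs T8", t3, t8)]
alpha = 0.05 / len(pairs)  # Bonferroni correction
for label, a, b in pairs:
    _, pw = stats.wilcoxon(a, b)
    print(f"{label}: p = {pw:.4f}, significant at corrected alpha: {pw < alpha}")
```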

Significant differences were found in the number of inspiratory crackles (0.6 [0.1-2.2] vs. 0.5 [0.1-2.5] vs. 0.3 [0.0-0.9]; p = 0.008) at T1, T3 and T8 at the posterior chest; namely, participants presented more inspiratory crackles (p = 0.013) at T1 than at T8. Similar results were found for inspiratory %Wh (0.0 [0.0-12.3] vs. 0.0 [0.0-0.0] vs. 0.0 [0.0-0.0]; p = 0.019); namely, participants presented significantly more inspiratory %Wh at T1 than at T3 (p = 0.006). A significantly higher number of participants presenting inspiratory wheezes was found at T1 than at T3 at the anterior chest (%Wh: 10 vs. 2 vs. 5; p = 0.017), and a trend toward significance was found at the posterior chest (%Wh: 10 vs. 3 vs. 4; p = 0.052). No differences were found for the remaining variables.

Crackles and wheezes seem to be sensitive for monitoring the course of AECOPD. Inspiratory crackles seem to persist until 15 days after the exacerbation (i.e., the approximate time needed to resolve an AECOPD [6]), whilst inspiratory %Wh decreased significantly after this period. This information may allow further advances in the monitoring of patients with COPD across all clinical and non-clinical settings, as respiratory sounds are simple, non-invasive, population-specific and nearly universally available. Further studies with larger samples, including data collected before the AECOPD, are needed to confirm these findings.

1. Wilkinson TM, Donaldson GC, Hurst JR, Seemungal TA, and Wedzicha JA. Early therapy improves outcomes of exacerbations of chronic obstructive pulmonary disease. Am J Respir Crit Care Med 169: 1298-1303, 2004.

2. The Global Initiative for Chronic Obstructive Lung Disease. Global Strategy for Diagnosis, Management, and Prevention of Chronic Obstructive Pulmonary Disease—2017 Report. The Global Initiative for Chronic Obstructive Lung Disease, Inc.; 2017.

3. Gavriely N, Nissan M, Cugell DW, Rubin AH. Respiratory health screening using pulmonary function tests and lung sound analysis. Eur Respir Rev. 1994;7(1):35–42.

4. Pinho C, Oliveira A, JĂĄcome C, Rodrigues JM, Marques A. Integrated approach for automatic crackle detection based on fractal dimension and box filtering. IJRQEH. 2016;5(4):34-50.

5. Taplidou SA, Hadjileontiadis LJ. Wheeze detection based on time-frequency analysis of breath sounds. Comput Biol Med. 2007;37(8):1073-83.

6. Seemungal TA, Donaldson GC, Bhowmik A, Jeffries DJ, Wedzicha, JA. Time course and recovery of exacerbations in patients with chronic obstructive pulmonary disease. Am J Respir Crit Care Med. 2000; 161(5): 1608-1613.

Chronic Obstructive Pulmonary Disease, Acute exacerbations, Computerised respiratory sounds, Crackles, Wheezes.

O24 Trauma, self-disgust and binge eating

Sandra Soares, Mariana Marques, Ana C Ribeiro, Pedro Correia, Cidália Alves, Paula Silva, Helena E Santo, Laura Lemos, Correspondence: Sandra Soares ([email protected]).

Binge eating disorder is finally recognized in the current Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Additionally, international and national studies have explored correlates of binge eating symptoms, but it is important to evaluate the role of other variables in these symptoms in the general population.

To explore the association and predictive role of traumatic experiences and of self-disgust in binge eating symptoms, and to explore the possible mediating role of self-disgust in the relation between traumatic experiences and those symptoms.

421 subjects from the general population and college students (women, n = 300, 71.3%) completed the Traumatic Events Checklist, the Binge Eating Scale and the Multidimensional Self-disgust scale.

We found binge eating (BE) values similar to those from other national studies: mild to moderate BE (women: 6.3%; men: 5.0%) and severe BE (women: 3.3%; men: 0.8%). In men, the BE total score positively correlated with the defensive activation, cognitive-emotional and avoidance dimensions of self-disgust. Body mass index (BMI) correlated positively with the BE total score and defensive activation (self-disgust) and negatively with family trauma. In women, the BE total score was positively associated with all self-disgust dimensions. Sexual trauma, family trauma, the total of traumatic events and BMI were positively associated with the BE total score and all the self-disgust dimensions. In a hierarchical multiple regression analysis, BMI, the total of traumatic events and the cognitive-emotional dimension of self-disgust predicted the BE total score. The cognitive-emotional (self-disgust) dimension fully mediated the relation between traumatic events and the BE total score.

In a sample from the general population and college students, BE values were similar to those from national studies. In women, sexual trauma, family trauma and total traumatic experiences (as well as all self-disgust dimensions) were associated with BE, and a higher BMI was associated with higher BE levels. In future interventions focusing on BE in women, it seems important to consider the role of cognitive-emotional self-disgust in the relation between BE occurrence and distal traumatic events.

Traumatic events, Self-disgust, Binge eating.

O25 New paediatric screening procedures: health promotion in primary care

Marisa Lousada 1,2, Ana P Mendes 3,4, Helena Loureiro 1,5, Graça Clemêncio 6, Elsa Melo 1,5, Ana RS Valente 1, 1 School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 2 Center for Health Technology and Services Research, University of Aveiro, 3810-193 Aveiro, Portugal; 3 Health Sciences School, Polytechnic Institute of Setúbal, 2914-503 Setúbal, Portugal; 4 Centro Interdisciplinar de Investigação Aplicada em Saúde, Polytechnic Institute of Setúbal, 2914-503 Setúbal, Portugal; 5 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 6 ACES Baixo Vouga, 3804-502 Aveiro, Portugal, Correspondence: Marisa Lousada ([email protected]).

Screening procedures do not identify the specific disorder but allow quick identification of children who may need a detailed assessment in speech therapy. Screening instruments are usually administered by different health professionals (e.g. paediatricians, nurses). The Child Health Program for primary care in Portugal determined that all 5-year-old children should be screened by nurses and general practitioners to determine whether they present typical development suitable for school requirements. This screening is usually implemented through the Mary Sheridan test, and there is no speech-language screening test used in primary care. Recently, a speech and language screening test (RALF) was validated for Portuguese children in kindergartens, with excellent levels of specificity, sensitivity and reliability. RALF aims to quickly identify (in 5 minutes) children who may be at risk of speech-language impairment and need to be referred for an in-depth assessment by a speech-language therapist.

This study aims to implement a new screening procedure in primary health care contributing to best practices. Specifically, the study aims to identify children with speech-language disorder that are undiagnosed due to the absence of a known condition such as neurological, hearing or cognitive impairment.

Ethical approval was granted by the Ethics Committee (UICISA) (ref. 14/2016). A sociocultural questionnaire was completed by caregivers to collect information about the child's background (e.g., mother tongue; neurological, hearing or cognitive disorder) and the child's family background. Selection criteria included Portuguese as the native language and the absence of a language disorder secondary to a known condition. The sample comprised 37 children whose parents returned informed consents. The screening was applied by 10 nurses in the Global Health Examination of 5-year-old children in 2 health care centres.

Twenty-one percent of children failed the screening. This illustrates the high level of speech-language difficulties (without any other associated condition) and is consistent with previous research. The children who failed the screening have already been referred to speech-language services for a detailed assessment.

This study highlights the importance of the implementation of a screening procedure in primary health care contributing to best practices.

Study supported by FEDER through POCI-01-0145-FEDER-007746 and FCT via CINTESIS, R&D Unit (ref. UID/IC/4255/2013).

Screening, Speech and language, Health promotion.

O26 Practices on using wearables during aquatic activities

Henrique P Neiva 1,2, Luís Faíl 1, Mário C Marques 1,2, Maria H Gil 1,2, Daniel A Marinho 1,2, 1 Department of Sport Sciences, University of Beira Interior, 6201-001 Covilhã, Portugal; 2 Research Center in Sports Sciences, Health Sciences and Human Development, 6201-001 Covilhã, Portugal, Correspondence: Henrique P Neiva ([email protected]).

Several studies have described the use of different sensors to detect daily activity, movement and sleep patterns, and physical activity [1]. These are easily available to all those interested in tracking physical activity and progress to improve physical fitness and health-related parameters [1]. However, little is known about people's knowledge of this equipment, especially in activities with specific constraints, such as those performed in water.

The purpose of this study was to characterize Portuguese practices on the use of wearable technology during aquatic activities.

Swimming pools from the interior region of Portugal were selected randomly and their users completed a questionnaire consisting of 33 questions. The first part focused on the characterization of their motivations and usual in-water activities, and the second focused on their views on the value of the wearable technology, its use and suggestions for future development of those devices according to aquatic activities.

Ten swimming pools were accessed, and 418 questionnaires were completed by people ranging from 18 to 79 years old. About 79% of these subjects had heard about wearables for sport, but 65% had never used them during exercise. At the time of the inquiry, 24% still used them, and 11% had given up using them, mainly because of lack of interest or because the devices did not work well underwater. Among the non-users, most reported that they had not had the opportunity (53%), considered them not useful (17%), or complained about the financial cost (15%). However, most (74%) would be interested in trying this type of equipment during aquatic activities. Interestingly, 71% did not consider that they would do more exercise after acquiring the equipment. Of the subjects using wearables, only a few (n = 24) used them during in-water exercise.

For the future, respondents suggested that the devices should be more comfortable, more reliable and water resistant, with longer battery life; besides the usual feedback provided, they would also like to see technical corrections supported by the technology. People seemed to know about the existence of wearables to monitor physical activity but are still reluctant because of their underwater reliability, cost, and the opportunity to try them. These results evidence a need to improve these technological devices according to users' needs and the activities performed, and suggestions were made for the future development of devices for use during in-water exercise.

NanoSTIMA: Macro-to-Nano Human Sensing Towards Integrated Multimodal Health Monitoring and Analytics, NORTE-01-0145-FEDER-000016, co-financed by FEDER-NORTE2020.

1. Chambers R, Gabbett TJ, Cole MH, Beard A. The use of wearable microsensors to quantify sport-specific movements. Sports Med. 2015, 45(7): 1065-1081.

Technology, In-water activities, Sensors.

O27 Does the recall of caregiver eating messages exacerbate the pathogenic impact of shame on eating and weight-related difficulties?

Sara Oliveira, Cláudia Ferreira, Cognitive and Behavioural Centre for Research and Intervention, University of Coimbra, 3000-115 Coimbra, Portugal, Correspondence: Sara Oliveira ([email protected]).

The central role of caregiver eating messages (restriction of food intake and pressure to eat) in an individual's later eating behaviour, body image and weight status has been recognized [1-3]. Additionally, shame is a painful emotion [4] also associated with the development and maintenance of body image and eating-related difficulties [5,6], namely inflexible eating and concerns and maladaptive attitudes regarding body weight and shape [7].

The main aim of the present study was to test whether recalling caregiver eating messages [3] moderates the association of external shame [8] with inflexible eating rules [7] and with concerns and maladaptive attitudes regarding body weight and shape [9,10].

The sample comprised 479 Portuguese women, aged between 18 and 60 (M = 25.66; SD = 8.50), who completed validated self-report measures. The relationships between the study variables were assessed by Pearson product-moment correlations, and the moderator effect was tested through path analysis.
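
A hedged sketch of a moderation test of this kind: the path-analytic moderation reported here can be approximated by an OLS regression with an interaction term between mean-centred predictors. Variable names and data are illustrative, not the study's.

```python
# Moderation: does the recalled-messages score change the strength of the
# shame -> inflexible eating rules association? The shame_c:messages_c
# coefficient is the moderation test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 479
shame = rng.normal(0, 1, n)
messages = rng.normal(0, 1, n)
# outcome built with a small interaction effect, for illustration only
rules = 0.4 * shame + 0.2 * messages + 0.25 * shame * messages + rng.normal(0, 1, n)

df = pd.DataFrame({"shame": shame, "messages": messages, "rules": rules})
df["shame_c"] = df["shame"] - df["shame"].mean()          # centring aids interpretation
df["messages_c"] = df["messages"] - df["messages"].mean()

model = smf.ols("rules ~ shame_c * messages_c", data=df).fit()
print(model.summary().tables[1])
```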

Results revealed that caregiver restrictive/critical messages had a significant moderator effect on the relationships of external shame with inflexible eating rules and with concerns and maladaptive attitudes regarding body weight and shape. These findings suggest that caregiver restrictive/critical eating messages exacerbate the impact of shame on these psychopathological outcomes, with the tested model accounting for 17% and 29% of the variance of inflexible eating rules and of body weight and shape concerns, respectively. In addition, caregiver pressure-to-eat messages were not correlated with any of the variables examined. A graphical representation of the moderation analyses showed that, for the same levels of external shame, women who recall more caregiver restrictive/critical eating messages tend to adopt more inflexible eating rules and present greater concerns and maladaptive attitudes regarding body weight and shape.

These findings offer important clinical and research implications, highlighting the importance of developing effective parental intervention approaches as protection against maladaptive eating regulation strategies.

1. Abramovitz BA, Birch LL. Five-year-old girls’ ideas about dieting are predicted by their mothers’ dieting. J Am Diet Assoc. 2000; 100: 1157-1163. doi: 10.1016/S0002-8223(00)00339-4.

2. Birch LL, Fisher JO. Mother’s child-feeding practices influence daughters eating and weight. Am J Clin Nutr. 2000; 71: 1054-1061.

3. Kroon Van Diest A, Tylka T. The Caregiver Eating Messages Scale: Development and psychometric investigation. Body Image. 2010; 7:317-326. doi: 10.1016/j.bodyim.2010.06.002.

4. Gilbert P. What is shame? Some core issues and controversies. In: Gilbert P, Andrews B, editors. Shame: Interpersonal behavior, psychopathology and culture. New York: Oxford University Press; 1998. pp. 3- 38.

5. Goss K, Gilbert P. Eating disorders, shame and pride: A cognitive behavioural functional analysis. In: Gilbert P, Miles J, editors. Body shame: Conceptualization, research & treatment. Hove, UK: Brunner Routledge; 2002. pp. 219–255.

6. Hayaki J, Friedman M, Brownell K. Shame and severity of bulimic symptoms. Eat Behav. 2002; 3:73-83. doi:10.1016/S1471-0153(01)00046-0.

7. Duarte C, Ferreira C, Pinto-Gouveia J, Trindade I, Martinho A. What makes dietary restraint problematic? Development and validation of the Inflexible Eating Questionnaire. Appetite. 2017; 114:146-154. doi: 10.1016/j.appet.2017.03.034.

8. Matos M, Pinto-Gouveia J, Gilbert P, Duarte C, Figueiredo C. The Other As Shamer Scale – 2: Development and validation of a short version of a measure of external shame. Personal Individ Differ. 2015; 74:6-11. doi: 10.1016/j.paid.2014.09.037.

9. Fairburn CG, Beglin SJ. Assessment of eating disorders: interview or self-report questionnaire? Int J Eat Disord. 1994; 16(4):363–370. doi:10.1002/1098-108X(199412).

10. Machado PP, Martins C, Vaz AR, Conceição E, Bastos AP, Gonçalves S. Eating Disorder Examination Questionnaire: psychometric properties and norms for the Portuguese population. Eur Eat Disord Rev. 2014; 22(6):448–453. doi:10.1002/erv.2318.

Caregiver eating messages, External shame, Inflexible eating rules, Disordered eating, Women.

O28 How does shame mediate the link between a secure attachment and negative body attitudes in men?

Shame is a painful, self-conscious and universal emotion [1] regarded as a central feature of the development and maintenance of body image difficulties [2]. Additionally, the association between attachment style and body concerns among women is well documented [3]; in particular, a secure attachment may promote a more favourable body image [4]. However, few studies have focused on the mechanisms that may explain body image difficulties in men.

The present study tested a model hypothesizing that the impact of a secure attachment on negative male body attitudes, namely attitudes towards muscularity and body fat [5,6], is carried by general feelings of shame [7], while controlling for the effect of body mass index.

The sample comprised 133 men, aged between 18 and 60 years old (M = 28.83; SD = 10.24), who completed validated self-report measures. The relationships between the study variables were assessed by Pearson product-moment correlations, and the mediation effect was tested through path analysis.
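
A hedged sketch of a simple mediation test of this kind: attachment -> shame -> body attitudes, with the indirect effect estimated as the product of the a and b paths and bootstrapped for a confidence interval. Data are simulated for illustration; the study itself used path analysis on real measures (and controlled for BMI).

```python
# Indirect effect a*b with a percentile bootstrap CI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 133
attachment = rng.normal(0, 1, n)
shame = -0.5 * attachment + rng.normal(0, 1, n)                   # a path (negative)
attitudes = 0.6 * shame - 0.1 * attachment + rng.normal(0, 1, n)  # b and c' paths

def indirect(att, sh, out):
    a = sm.OLS(sh, sm.add_constant(att)).fit().params[1]
    b = sm.OLS(out, sm.add_constant(np.column_stack([sh, att]))).fit().params[1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample participants with replacement
    boot.append(indirect(attachment[idx], shame[idx], attitudes[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect(attachment, shame, attitudes):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```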

The tested path model explained 22% and 49% of the variance of negative male attitudes towards muscularity and towards body fat, respectively. Results demonstrated that a secure attachment had a significant direct effect on attitudes towards body fat, and an indirect effect, through external shame, on attitudes towards muscularity. These findings suggest that men who are secure in attachment tend to experience fewer general feelings of shame and, consequently, present fewer negative body attitudes, namely regarding their muscularity and body fat.

These data support the relevance of addressing shame experiences when working with men with body image-related difficulties, especially in the context of early adverse attachment experiences.

1. Gilbert P. What is shame? Some core issues and controversies. In: Gilbert P, Andrews B, editors. Shame: Interpersonal behavior, psychopathology and culture. New York: Oxford University Press; 1998. pp. 3-38.

2. Goss K, Gilbert P. Eating disorders, shame and pride: A cognitive behavioural functional analysis. In Gilbert P, Miles J, editors. Body shame: Conceptualization, research & treatment. Hove, UK: Brunner Routledge; 2002. pp. 219–255.

3. Sharpe TM, Killen JD, Bryson SW, Shisslak CM, Estes LS, Gray N, et al. Attachment style and weight concerns in preadolescent and adolescent girls. Int J Eat Disord. 1998; 23(1):39-44.

4. Cash T. Cognitive-behavioral perspectives on body image. In: Cash T, Pruzinsky T, editors. Body image: A handbook of theory, research, and clinical practice. New York: The Guilford Press; 2002. pp. 38-46.

5. Tylka TL, Bergeron D, Schwartz JP. Development and psychometric evaluation of the Male Body Attitudes Scale (MBAS). Body Image. 2005; 2(2):161-175. doi: 10.1016/j.bodyim.2005.03.001.

6. Ferreira C, Oliveira S, Marta-SimĂľes J. Validation and psychometric properties of Portuguese Version of Male Body Attitudes Scale-Revised (MBAS-R). Manuscript in preparation, 2017.

7. Matos M, Pinto-Gouveia J, Gilbert P, Duarte C, Figueiredo C. The Other As Shamer Scale – 2: Development and validation of a short version of a measure of external shame. Personal Individ Differ. 2015; 74:6–11. doi:10.1016/j.paid.2014.09.037.

Secure attachment, External shame, Negative body attitudes, Men.

O29 Potential contamination of tourniquets used in peripheral venipuncture: preliminary results of a scoping review

Anabela S Oliveira 1, Pedro Parreira 1, Nádia Osório 2, Paulo Costa 1, Vânia Oliveira 1, Fernando Gama 3, João Graveto 1, 1 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 2 Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 3 Coimbra Hospital and University Centre, 3000-075 Coimbra, Portugal, Correspondence: João Graveto ([email protected]).

Peripheral venipuncture is one of the most frequent invasive clinical procedures performed in healthcare settings [1-2]. In order to stop blood flow and promote vascular distension, the use of a tourniquet five to ten centimetres above the desired puncture site is recommended [3]. The irregular management of these medical devices, without compliance with guidelines, constitutes a risk of microorganism dissemination [4-5].

To map the available evidence on the microbiological contamination of tourniquets used in peripheral venipuncture, identifying recurrent practices in their manipulation.

Scoping review based on the principles advocated by the Joanna Briggs Institute [6]. The analysis of the relevance of the articles and the extraction and synthesis of data were performed by two independent reviewers. The search strategy included all articles published until November 2017, written in Portuguese, Spanish, French or English.

The search yielded an initial total of 2,052 articles. Using EndNote software, 998 duplicates were removed. The remaining 1,054 articles were screened by title and abstract; of these, 33 articles were included for full-text analysis by two independent reviewers. During this process, the reference lists of all included articles were screened, which resulted in the inclusion of 3 new articles. Ten studies were excluded due to the absence of microbiological data and 6 due to lack of full-text access and no reply from the authors. Overall, a total of 1,337 tourniquets belonging to nurses, nursing assistants, doctors, phlebotomists and laboratory workers were analysed for microorganism contamination. A small number of studies verified that the same tourniquets were used continuously by professionals for between 3 days and 104 weeks. Preliminary results evidenced contamination rates varying between 9% and 100%, comprising diverse microorganisms such as Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa, Enterococcus and Acinetobacter baumannii. Several of the included studies described conflicting practices during tourniquet manipulation by health professionals, especially in domains such as hand hygiene before and after tourniquet use, glove usage during venipuncture, tourniquet cleaning and disinfection, sharing tourniquets with other professionals, and storage conditions. The most cited reason for tourniquet replacement in clinical settings was loss by health professionals.

As a contribution to clinical practice, it is expected that this mapping of the available scientific evidence regarding the potential contamination of these devices will support the analysis of current practices in this field and promote the implementation of quality assurance systems in health institutions.

This protocol is part of the project “Transfer of technological innovations to nursing practice: a contribution to the prevention of infections”, funded by the European Regional Development Fund through the Operational Program Competitiveness and Internationalization of PORTUGAL 2020.

1. Marsh N, Webster J, Mihala G, Rickard C. Devices and dressings to secure peripheral venous catheters: A Cochrane systematic review and meta-analysis. International Journal of Nursing Studies. 2017;67:12-19.

2. Oliveira AS. Intervenção nas práticas dos enfermeiros na prevenção de flebites em pessoas portadoras de cateteres venosos periféricos: um estudo de investigação-ação [PhD thesis]. Universidade de Lisboa; 2014.

3. Veigar B, Henriques E, Barata F, Santos F, Santos I, Martins M et al. Manual de Normas de Enfermagem: Procedimentos Técnicos. 2nd ed. Administração Central do Sistema de Saúde, IP; 2011.

4. World Health Organization. Decontamination and reprocessing of medical devices for healthcare facilities. Geneva, Switzerland: WHO Document Production Services; 2016.

5. Costa P. Gestão de material clínico de bolso por enfermeiros: fatores determinantes e avaliação microbiológica [Masters dissertation]. Nursing School of Coimbra; 2017.

6. Peters M, Godfrey C, McInerney P, Baldini Soares C, Khalil H, Parker D. Chapter 11: Scoping Reviews. In: Aromataris E, Munn Z, editors. Joanna Briggs Institute Reviewer's Manual [Internet]. The Joanna Briggs Institute; 2017 [cited 14 December 2017]. Available from: https://reviewersmanual.joannabriggs.org/.

Tourniquets, Contamination, Peripheral venipuncture.

O30 Effects of a community-based food education program on nutrition-related knowledge in middle-aged and older patients with type 2 diabetes: an RCT

Carlos Vasconcelos 1,2, António Almeida 1, Maria Cabral 3, Elisabete Ramos 3,4, Romeu Mendes 1,3,5, 1 University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal; 2 Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal; 3 Instituto de Saúde Pública, Universidade do Porto, 4050-600 Porto, Portugal; 4 Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal; 5 Public Health Unit, ACES Douro I – Marão e Douro Norte, 5000-524 Vila Real, Portugal, Correspondence: Romeu Mendes ([email protected]).

Diabetes imposes an unacceptably high human, social and economic cost, especially on aging populations. Nutrition-related knowledge is of crucial importance for making healthier food choices, contributing to type 2 diabetes (T2D) control and the prevention of related comorbidities.

To analyse the effects of a food education program (FEP) on the nutrition-related knowledge (NRK) in middle-aged and older patients with T2D.

Forty-two individuals between 50 and 80 years old with T2D were recruited in primary health care institutions to participate in Diabetes em Movimento®, a community-based exercise program (3 exercise sessions per week, 75 minutes each, for 9 months) developed in Vila Real, Portugal. Participants were randomized into two groups: a control group (CG; N = 19; exercise program only) and an experimental group (EG; N = 23; exercise program plus FEP). The FEP was 16 weeks long, and each week a different nutrition-related theme was addressed, delivered through a theoretical session (15 minutes) and dual-task exercise strategies integrated into Diabetes em Movimento®'s sessions. NRK was evaluated before and after the 9-month intervention using the Portuguese reduced version of the Nutritional Knowledge Questionnaire (0 to 56 points; the higher the score, the better the knowledge).

Thirty-six participants completed the study (CG, N = 16; EG, N = 20). The baseline score was 30.19 ± 6.10 (CG) vs. 29.40 ± 6.16 points (EG). After the intervention, the score was 31.31 ± 7.40 (CG) vs. 35.20 ± 5.68 points (EG). A significant time*group interaction effect was identified (p = 0.001; η2p = 0.290). Considering the FEP's session adherence level (< 50% vs. ≥ 50%), a significant time*group interaction effect was also identified (baseline, 29.78 ± 7.84 [< 50%] vs. 29.09 ± 4.76 [≥ 50%]; after intervention, 32.78 ± 5.93 [< 50%] vs. 37.18 ± 4.85 [≥ 50%]; p = 0.004, η2p = 0.370).
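
A hedged sketch of this kind of time*group analysis: a mixed ANOVA with time (baseline vs. post) as the within-subject factor and group (CG vs. EG) as the between-subject factor. It assumes the pingouin package, and the scores are simulated for illustration, not the trial's data.

```python
# Mixed ANOVA; the 'np2' column is partial eta squared, and the
# Interaction row tests the time*group effect reported above.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
rows = []
for subject in range(36):
    group = "EG" if subject < 20 else "CG"
    base = rng.normal(30, 6)                                       # baseline NRK score
    gain = rng.normal(6, 3) if group == "EG" else rng.normal(1, 3)
    rows.append({"id": subject, "group": group, "time": "baseline", "score": base})
    rows.append({"id": subject, "group": group, "time": "post", "score": base + gain})

df = pd.DataFrame(rows)
aov = pg.mixed_anova(dv="score", within="time", between="group", subject="id", data=df)
print(aov[["Source", "F", "p-unc", "np2"]])
```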

A community-based, easy-to-implement food education program was effective in increasing the NRK of middle-aged and older patients with type 2 diabetes and may contribute to better food choices. The program's adherence levels play a major role in knowledge acquisition.

NCT02631902

Type 2 diabetes, Food education program, Nutrition-related knowledge, Community-based intervention.

O31 Perception of oral antidiabetic agents adverse events and their impact on Health Related Quality of Life in type 2 diabetic patients

Rui S Cruz 1, Luiz M Santiago 2, Carlos F Ribeiro 3, 1 Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 3 Department of Pharmacology and Experimental Therapeutics, Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal, Correspondence: Rui S Cruz ([email protected]).

Currently, drug therapy with oral antidiabetic agents can induce normoglycemia levels able to decrease the risk of complications associated with diabetes mellitus. However, it is also known that the various existing oral antidiabetic agents may trigger a large number of adverse events, either alone or in combination. Some of these tolerability and safety issues are reported by patients and can negatively influence satisfaction with treatment, glycaemic control, and therapeutic adherence and maintenance. The role of patients in monitoring adverse events related to the use of oral antidiabetic drugs is therefore very important in order to optimize treatment and improve the quality of life of patients with type 2 diabetes (DM2).

The aim of this study was to determine the prevalence of adverse events associated with the use of oral antidiabetics and to assess their impact on the health-related quality of life (HRQoL) of diabetic patients followed in primary health care.

A total of 357 DM2 patients were enrolled in this observational, cross-sectional study, recruited in six health care centres/family health units (FHU) of the central region of Portugal. Data collection comprised three questionnaires: a measure of the prevalence of adverse events, the Diabetes Health Profile (DHP-18) and the EQ-5D-3L.

Results showed that the highest prevalence of adverse events was in the dipeptidyl peptidase-4 inhibitors, followed by the metformin+sitagliptin (fixed dose) and metformin+vildagliptin (fixed dose) therapeutic classes. We also found that all correlations between the different variables were statistically significant (p < 0.001).

Thus, we conclude that patients who show a greater number of adverse events tend to have poorer health profiles, worse general health and also lower health-related quality of life.

Diabetes Medication, Therapy, Quality of Life.

O32 Third stage of waterbirth: observational study

Joyce CS Camargo 1,2, Vitor Varela 3, Elisabete Santos 3, Natalucia M Araújo 2, Kelly CMP Venâncio 4, Manuela Néné 5, Maria CLR Grande 6, 1 Abel Salazar Institute of Biomedical Sciences of the University of Porto, 4200-135 Porto, Portugal; 2 School of Arts, Sciences and Humanities, University of São Paulo, 03828-000 São Paulo, Brazil; 3 São Bernardo Hospital, 2910-445 Setúbal, Portugal; 4 College of Nursing, University of São Paulo, 05403-000 São Paulo, Brazil; 5 Escola Superior de Saúde da Cruz Vermelha Portuguesa, 1350-125 Lisboa, Portugal; 6 Faculty of Psychology and Educational Sciences, University of Porto, 4200-135 Porto, Portugal, Correspondence: Joyce CS Camargo ([email protected]).

Placental delivery in waterbirth (WB) usually occurs while maternal wellbeing is monitored through clinical aspects, heart rate and blood pressure, as well as water coloration. Adequate care should be taken in this period to prevent postpartum haemorrhage (PPH), which is the main cause of maternal death in developing countries and accounts for approximately a quarter of all maternal deaths worldwide [1].

To verify the outcome of the third stage of labour in the Waterbirth Project (PWB) at São Bernardo Hospital in Setúbal, Portugal. Study question: What is the maternal outcome of the third stage of childbirth in the PWB?

Observational, cross-sectional, descriptive study based on ethical guidelines (CNPD-9885/2015) and approved by the hospital, where the delivery-room infrastructure, protocol definition, and technical and scientific training of the obstetric team began in 2006. The PWB ran between 2011 and 2014. 153 women with a single pregnancy, gestational age ≥ 37 weeks and low-risk prenatal care participated in the PWB and signed an informed consent form describing the study's benefits and risks, resulting in 90 waterbirths. Data were collected from the specific PWB forms in April 2016 and managed in Excel® and SPSS® version 16.0.

In the PWB, 51.1% of women had placental delivery in the water vs 48.9% out of the water. Active management occurred in 7.7% of cases and physiological management in 86.7%; 92.3% of the women had physiological blood loss and 7.7% had increased bleeding, controlled with uterotonics. These results corroborate the evidence: in a Swiss study [2] of 89 WB vs 279 out-of-water births, 57% had a physiological third stage in the water, with a significant difference (p < 0.01); an English study [3] of 5,192 WB reported a physiological third stage in 86.1%, of which 55.8% occurred in water. Third-stage management [4] is either: 1. Active: administration of a uterotonic (oxytocin [1], first choice) after birth, timely clamping of the umbilical cord and controlled cord traction; or 2. Physiological: spontaneous delivery of the placenta assisted by gravity and/or maternal effort. To evaluate PPH in WB, in addition to the clinical state of the puerperium, the water coloration is compared to wine: 50-100 ml of blood loss corresponds to a pink Chablis, 150-250 ml to a 'bleed' (rosé) and 500-750 ml to a Merlot colour [5]; alternatively, the study-site midwife places a hand horizontally below the surface of the water: if the hand is visible, haemorrhage is ≤ 500 mL; if not visible, haemorrhage is > 500 mL.
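
A hypothetical helper encoding the two bedside heuristics described above for estimating blood loss in water: the wine-colour comparison ranges from [5] and the hand-visibility check. The function and category names are illustrative, not part of the PWB protocol.

```python
# Rough blood-loss estimate from either observation; returns a range label.
def estimate_blood_loss(water_colour=None, hand_visible=None):
    colour_ranges = {
        "chablis": "50-100 mL",   # pink Chablis tint
        "rose": "150-250 mL",     # intermediate 'bleed' (rose) tint
        "merlot": "500-750 mL",   # dark Merlot tint
    }
    if water_colour is not None:
        return colour_ranges.get(water_colour.lower(), "unknown colour")
    if hand_visible is not None:
        # Midwife's hand held horizontally below the water surface.
        return "<= 500 mL" if hand_visible else "> 500 mL"
    return "no observation provided"

print(estimate_blood_loss(water_colour="merlot"))  # 500-750 mL
print(estimate_blood_loss(hand_visible=False))     # > 500 mL
```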

Management of the third stage in WB is safe and, in the experience of the PWB, no adverse events were related to it. More studies are needed to support good clinical practices based on scientific evidence.

1. OMS. Recomendações da OMS para a prevenção e tratamento da hemorragia pós-parto. Organização Mundial da Saúde; 2014. p. 48.

2. Zanetti-Dallenbach RA, Lapaire O, Maertens A, Holzgreve W, Hosli I. Water birth, more than a trendy alternative: a prospective, observational study. Arch Gynecol Obstet. 2006;274(6):355-65.

3. Burns EE, Boulton MG, Cluett E, Cornelius VR, Smith LA. Characteristics, interventions, and outcomes of women who used a birthing pool: A prospective observational study. Birth. 2012;39(3):192-202.

4. ICM, FIGO. Prevention and treatment of post-partum haemorrhage: new advances for low resource settings. International Confederation of Midwives (ICM), International Federation of Gynaecology and Obstetrics (FIGO); 2006.

5. Harper B. Gentle Birth Choices. Revised edition. Inner Traditions/Bear & Company; 2005.

Third stage of childbirth, Placental delivery, Postpartum haemorrhage, Waterbirth, Midwifery.

O33 Aqua apgar in waterbirth: cross-sectional study

Joyce CS Camargo 1,2, Vitor Varela 3, Elisabete Santos 3, Maria AJ Belli 2, Maryam MJ Trintinália 2, Manuela Néné 4, Maria CLR Grande 5, 1 Abel Salazar Institute of Biomedical Sciences of the University of Porto, Portugal; 2 School of Arts, Sciences and Humanities, University of São Paulo, 03828-000 São Paulo, Brazil; 3 São Bernardo Hospital, 2910-445 Setúbal, Portugal; 4 Escola Superior de Saúde da Cruz Vermelha Portuguesa, 1350-125 Lisboa, Portugal; 5 Faculty of Psychology and Educational Sciences, University of Porto, 4200-135 Porto, Portugal.

Waterbirth (WB) is complete underwater fetal expulsion [1,2], and remains much discussed [3]. The Aqua Apgar [4], developed by Cornelia Enning, is an index that evaluates the vitality of the newborn (NB) while still submerged in water during the first minute of life.

To know the neonatal outcome of the Waterbirth Project (PWB) at São Bernardo Hospital in Setúbal, Portugal. Study question: What is the neonatal outcome of newborns born in the PWB?

Cross-sectional, observational, descriptive study based on ethical guidelines (CNPD-9885/2015) and approved by the hospital, where the delivery-room infrastructure, protocol definition, and technical and scientific training of the obstetric team began in 2006. The PWB ran between 2011 and 2014. 153 women with a single pregnancy, gestational age ≥ 37 weeks and low-risk prenatal care participated in the PWB and signed an informed consent form describing the study's benefits and risks, resulting in 90 waterbirths. Data were collected from the specific PWB form in April 2016 and managed in Excel® and SPSS® version 16.0.

The Aqua Apgar at the 1st minute and the Apgar at the 5th minute were above 7 in all cases, with an average of 9.4 at the 1st minute and 9.9 at the 5th minute. A cross-sectional study in Sydney [6] observed lower Apgar scores, which may be due to disregarding that water-born NBs manifest their vitality by moving their legs and arms, opening and closing their eyes and mouth, and swallowing [4]. A North American cohort study [5] corroborates our finding that NBs of aquatic birth were less likely to have a low Apgar score at the 5th minute. The use of the Aqua Apgar in our study allowed a coherent assessment of NBs kept submerged in water until the first minute of life, with a smooth transition to extrauterine life, no negative repercussions on heart rate, and no complications or neonatal hospitalization.

This study provides evidence that may support clinical decisions regarding delivery in water. Further studies on the Aqua Apgar should be conducted to support evidence-based practices.

1. Nutter E, Meyer S, Shaw-Battista J, Marowitz A. Waterbirth: an integrative analysis of peer-reviewed literature. J Midwifery Womens Health. 2014;59(3):286-319.

2. Cluett ER, Burns E. Immersion in water in labour and birth. Cochrane Database Syst Rev 2009;(2):CD000111.

3. ACOG, American College of Obstetricians & Gynecologists. Immersion in water during labor and delivery (Committee Opinion No. 594). 2014. Retrieved from http://www.acog.org/Resources_And_Publications/Committee_Opinions/Committee_on_Obstetric_Practice/Immersion_in_Water_During_Labor_and_Delivery

4. Garland D. Revisiting Waterbirth: an attitude to care. Palgrave Macmillan; 2011. ISBN-10: 0230273572 / ISBN-13: 9780230273573.

5. Bovbjerg ML, Cheyney M, Everson C. Maternal and newborn outcomes following waterbirth: the Midwives Alliance of North America Statistics Project, 2004 to 2009 cohort. J Midwifery Womens Health. 2016;61(1):11-20. doi: 10.1111/jmwh.12394.

6. Dahlen HG, Dowling H, Tracy M, Schmied V, Tracy S. Maternal and perinatal outcomes amongst low risk women giving birth in water compared to six birth positions on land. A descriptive cross sectional study in a birth centre over 12 years. Midwifery. 2013;29(7):759-64.

Aqua Apgar, Waterbirth, Midwifery, Apgar, Childbirth.

O34 Portuguese centenarians from Oporto and Beira Interior: distinctive health profiles?

Daniela Brandão 1,2, Oscar Ribeiro 1,3, Rosa M Afonso 1,4, Constança Paúl 1,5, 1 Center for Health Technology and Services Research, 4200-450 Porto, Portugal; 2 Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal; 3 University of Aveiro, 3810-193 Aveiro, Portugal; 4 University of Beira Interior, 6201-001 Covilhã, Portugal; 5 Institute of Biomedical Sciences Abel Salazar, University of Porto, 4050-313 Porto, Portugal, Correspondence: Daniela Brandão ([email protected]).

In Portugal, the number of centenarians almost tripled over the last decade, from 589 in 2001 to 1,526 in 2011 [1], and recent projections point to 3,393 centenarians in 2013 [2]. Reaching the age of 100, though an important landmark, does not necessarily indicate successful aging, as it is often accompanied by severe health and functional constraints. Understanding the health trajectories of these long-lived individuals and studying the prevalence of the diseases that are the most common causes of death is important for adequately addressing their current caregiving needs.

The aim of this study is to present an overview of the sociodemographic and health-related characteristics of two distinct samples of Portuguese centenarians (predominantly rural vs. predominantly urban) and to identify potential dissimilarities.

A sample of 241 centenarians was considered (140 from the PT100 Oporto Centenarian Study and 101 from the PT100 Beira Interior Centenarian Study). Sociodemographic information, nature and number of diseases, functionality and physical health variables were collected.

In both samples, most centenarians were female (89.3% in Oporto, 86.1% in Beira Interior), widowed (76.4% in Oporto, 91.1% in Beira Interior) and living in the community (57.9% in Oporto, 49.0% in Beira Interior). Higher levels of dependency in basic activities of daily living (BADL) and instrumental activities of daily living (IADL) were found in the Oporto sample, as well as a higher percentage of bedridden centenarians (61.0% in Oporto vs. 38.1% in Beira Interior). Sensorial impairments and incontinence were the most frequent conditions reported in both samples; however, lower percentages of age-related illnesses were found in the Beira Interior sample. Considering the three most lethal diseases among the elderly population (heart disease, non-skin cancer and stroke), 60.0% of centenarians in Oporto escaped these conditions, whereas in Beira Interior this percentage increased to 85.4%.

This study provides a general overview of the health profile of Portuguese centenarians in two types of communities: one rural and with low population density, and another in an urban context. Our findings reveal important differences between centenarians from the two samples, which reinforce the heterogeneity of this population and the importance of environmental factors in how such an advanced age is achieved. The findings highlight the need for potentially distinctive health promotion initiatives in these two settings.

This work was supported by the Portuguese Foundation for Science and Technology (FCT) [PhD Grant for the first author - SFRH/BD/101595/2014]. The PT100 Oporto Centenarian Study was supported by the Portuguese Foundation for Science and Technology (FCT; Grant Pest – C/SAU/UI0688/2011 and C/SAU/UI0688/2014).

1. National Statistical Institute of Portugal (INE). Censos - Resultados definitivos. Região Norte – 2011. Lisboa: Instituto Nacional de Estatística; 2012.

2. National Statistical Institute of Portugal (INE). Projeções de População Residente 2015–2080. Lisboa: Instituto Nacional de Estatística; 2017.

Centenarians, Health, Functionality, Diseases, Portugal, Morbidity.

O35 Implementation of an educational program to promote functionality in medical wards: quasi-experimental study

João Tavares 1,2, Joana Grácio 3, Lisa Nunes 3, 1 Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 2 Coimbra Education School, Polytechnic Institute of Coimbra, 3030-329 Coimbra, Portugal; 3 Coimbra Hospital and University Centre, 3000-075 Coimbra, Portugal, Correspondence: João Tavares ([email protected]).

Functional decline, i.e., diminished performance in at least one activity of daily living, affects 30 to 60% of hospitalized older adults (OA) [1]. Quality nursing care is essential to prevent functional decline. A “new”, theoretically based philosophy of care has been proposed: Function Focused Care (FFC), which is geared toward the optimization of function and physical activity during all personal and care-related activities that occur throughout the hospital stay [2]. FFC has demonstrated better outcomes at discharge and in post-acute periods.

To evaluate the effect of an educational program for nurses in promoting the FFC among hospitalized OA.

This is a prospective quasi-experimental study developed in four internal medicine units, randomly assigned, two units to the case (intervention) group and two to the control group. Participants were 117 OA and 94 registered nurses (RN). The intervention consisted of the development and implementation of a 10-hour educational program on FFC for RNs, plus a 5-month maintenance program; further details can be found in Tavares et al [3]. The implementation of FFC activities by RNs was assessed with the FFC Behaviour Checklist, completed by the researchers through non-participant observation [4]. The patient measure was functional decline (FD), assessed by the Katz Index: the difference between baseline and discharge (t0), between discharge and 3-month follow-up (t1), and between baseline and follow-up (t2). For comparison of the case and control groups, an independent t-test was calculated.

The patients' sociodemographic and clinical characteristics showed no statistical differences between groups. The mean provision of FFC was 0.46 ± 0.22, indicating that RNs promoted only 46% of the total possible FFC activities. Statistically significant differences were found between the case and control groups (t(91) = -2.85; p = 0.01), with means of 0.52 ± 0.24 and 0.39 ± 0.19, respectively. No statistical difference was found between the promotion of FFC and functional decline at t0 (U = 30.5, p = 0.15), t1 (t(38.82) = 6.293; p < 0.15) or t2 (t(83) = 2.49, p = 0.44).

The promotion of functionality was very low, which could explain the lack of impact on FD prevention. However, in the case group, more FFC activities were developed. These results suggest a positive impact of the educational program on OA care. FFC can be seen as a challenge and an opportunity for change, innovation and creativity, in order to improve the effectiveness, efficiency and quality of care of hospitalized OA.

1. Hoogerduijn JG, Schuurmans MJ, Duijnstee MSH, De Rooij SE, Grypdonck MFH. A systematic review of predictors and screening instruments to identify older hospitalized patients at risk for functional decline. J Clin Nurs. 2007;16(1):46-57.

2. Burket TL, Hippensteel D, Penrod J, Resnick B. Pilot testing of the function focused care intervention on an acute care trauma unit. Geriatr Nurs. 2013;34(3):241-246.

3. Grácio J, Tavares JP de A, Nunes L, Silva R. Programa educacional para enfermeiros: eficácia de duas estratégias formativas. In: XI Congresso Internacional Galego-Português de Psicopedagogia, 2017; Braga: Universidade do Minho, Instituto de Educação, Centro de Investigação em Educação; 2017. p. 540-541.

4. Tavares JP de A, Grácio J, Nunes L. Functional Focused Care: content validity of the Functional Focused Care Behavior Checklist. Eur Geriatr Med. 2016;7(supplement 1):S1-S282.

Function focused care, Older adults, Hospitalization, Functionality, Educational program.

O36 Self-reported data and its relation to the standard and validated measures to predict falls

Anabela C Martins, Catarina Silva, Juliana Moreira, Nuno Tavares, Physiotherapy Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal, Correspondence: Anabela C Martins ([email protected]).

According to the National Institute for Health and Care Excellence quality standards, the assessment of fall risk and the prevention of falls should be multifactorial and include self-reported items such as fall history, fear of falling (FoF), self-perception of functional ability and environmental hazards, together with gait pattern, balance, mobility and muscle strength [1]. Concerning self-reported data, some studies have described subjectivity and difficulty in extracting reliable information with such methods. History and number of previous falls are often used as the gold standard in fall risk assessment studies [2]; however, these questions are a source of misjudgement, in part because of the difficulty an older person has in remembering exactly how many times he or she has fallen over a past period of time.

The study aimed to compare self-reported questions with standard, validated measures for screening fall risk, in order to verify the reliability of the self-reported data.

506 community-dwelling adults aged 50+ years (mean age 69.56 ± 10.29 years; 71.7% female) were surveyed by self-reported questionnaire regarding demographics, history of falls, FoF, sedentary lifestyle and use of the upper extremities to stand up from a chair. Gait, balance and muscle strength were analysed with standard, validated measures for screening fall risk: the 10-metre walking speed test [3], the Timed Up & Go test [4] and the 30-second sit-to-stand test [4], respectively. Independent samples t-tests were performed to compare groups.
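
A minimal sketch of the group comparison used here, assuming per-participant test scores and self-reported group labels; the values are simulated for illustration, not the study's data.

```python
# Independent samples t-test comparing a screening measure (e.g. gait
# speed in m/s) between fallers and non-fallers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
gait_speed_fallers = rng.normal(0.9, 0.25, 168)       # illustrative
gait_speed_non_fallers = rng.normal(1.1, 0.25, 338)

t, p = stats.ttest_ind(gait_speed_fallers, gait_speed_non_fallers)
print(f"t = {t:.2f}, p = {p:.4f}")  # fallers expected to walk more slowly
```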

33.2% of the sample reported at least one fall in the last year (fallers), 50% reported FoF, 46.4% a sedentary lifestyle, and 31.8% needed upper-extremity assistance to stand from a chair. Fallers demonstrated lower gait velocity (p < 0.001), lower-extremity strength (p < 0.001) and balance (p = 0.034) than non-fallers; those who reported a sedentary lifestyle also showed lower gait velocity (p < 0.001), lower-extremity strength (p = 0.001) and balance (p < 0.001) than the non-sedentary. Likewise, those who reported FoF showed lower gait velocity (p < 0.001), lower-extremity strength (p < 0.001) and balance (p < 0.001) than those without FoF. Finally, those who used the upper extremities to stand up from a chair showed lower gait velocity (p < 0.001), lower-extremity strength (p < 0.001) and balance (p < 0.001) than those who did not.

The findings suggest that self-reported data like history of falls, sedentary lifestyle, FoF and use of upper extremities to stand up from a chair, obtained by simple questions, have emerged as reliable information on risk factors for falling and can be used to complete the fall risk screening.

Authors would like to thank all participants and centres, clinics and other entities hosting the screenings. Financial support from project FallSensing: Technological solution for fall risk screening and falls prevention (POCI-01-0247-FEDER-003464), co-funded by Portugal 2020, framed under the COMPETE 2020 (Operational Programme Competitiveness and Internationalization) and European Regional Development Fund (ERDF) from European Union (EU).

1. NICE, National Institute for Health and Care Excellence. Falls in older people: assessing risk and prevention. Clinical Guideline CG161; 2013. Available at: nice.org.uk/guidance/cg161

2. Garcia AG, Dias JMDD, Silva SLA, Dias RC. Prospective monitoring and self-report of previous falls among older women at high risk of falls and fractures: a study of comparison and agreement. Braz J Phys Ther. 2015; 19(3).

3. Fritz S, Lusardi M. White paper: "Walking speed: the sixth vital sign". J Geriatr Phys Ther. 2009; 32(2): 2-5.

4. Stevens JA. The STEADI tool kit: a fall prevention resource for health care providers. IHS Prim Care Provid. 2016, 39: 162-6.

Self-reported data, Fall Risk Assessment, Community-dwelling adults.

O37 Life after falling: which factors better explain participation in community dwelling adults?

Juliana Moreira, Catarina Silva, Anabela C. Martins. Correspondence: Juliana Moreira ([email protected]).

Participation is defined by the World Health Organization (WHO) as a person's involvement in a life situation [1]. Few studies have explored the association between participation restriction and being older, exhibiting more depressive mood, poor mobility and a lack of balance confidence [2,3].

The objective of this study was to identify which factors, namely age, functional capacity and self-efficacy for exercise, are most strongly associated with participation.

A sample of 168 community-dwelling adults (age ≥50 years), mean age 70.45 ± 10.40 years (78.6% female), with a history of at least one fall in the previous year, participated in the study. Measures included demographic variables; functional capacity, assessed by six functional tests (grip strength, Timed Up and Go (TUG), 30-second sit-to-stand, step test, 4-Stage Balance test "modified" and 10-meter walking speed); and two questionnaires (Self-Efficacy for Exercise scale and the Activities and Participation Profile related to Mobility, PAPM). Descriptive and correlational statistics were performed to analyse the data.

Fifty-nine percent of participants presented restrictions in participation (34.8% mild, 17.4% moderate and 6.8% severe). Participation showed a strong correlation with the 10-meter walking speed test (r = -0.572) and the TUG (r = 0.620), at a significance level of p < 0.001. A moderate correlation was found between participation and the 30-second sit-to-stand (r = -0.478), step test (r = -0.436), grip strength (r = -0.397) and 4-Stage Balance test "modified" (r = -0.334), as well as self-efficacy for exercise (r = -0.401) and age (r = 0.330), all at a significance level of p < 0.001.
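
As a minimal sketch of how such correlation coefficients are obtained (with invented paired scores, not the study data; the scoring direction of each scale is an assumption):

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores (NOT the study data). Here higher PAPM
# scores are assumed to indicate more participation restriction, and
# TUG is measured in seconds (longer time = worse mobility).
papm = np.array([0.5, 1.2, 2.0, 0.8, 2.5, 1.0, 3.0, 1.8])
tug = np.array([8.1, 10.4, 13.2, 9.0, 14.8, 9.9, 16.5, 12.0])

# Pearson correlation coefficient and its two-sided p-value.
r, p = stats.pearsonr(papm, tug)
print(f"r = {r:.3f}, p = {p:.4f}")
```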

This study suggests that participation of individuals with a history of falls is associated with functional capacity, self-efficacy for exercise and age. Previous studies have shown comparable findings [4,5,6]; however, given the strong association between participation and the 10-meter walking speed and TUG tests, it is essential to include these instruments in a comprehensive evaluation of individuals who have fallen in the past year in order to predict participation restrictions. These tests assess performance in a few minutes and gather information about balance and mobility impairments which, combined with a quick assessment of self-efficacy for exercise [7], helps outline the quality of life of persons with a history of falls.

1. WHO, World Health Organization. International Classification of Functioning, Disability, and Health. Geneva: Classification, Assessment, Surveys and Terminology Team, 2001

2. Liu J. The severity and associated factors of participation restriction among community dwelling frail older people: an application of the International Classification of Functioning, Disability and Health (WHO-ICF). BMC Geriatrics, 2017, 17:43.

3. Desrosiers J, Robichaud L, Demers L, Gélinas I, Noreau L, Durand D. Comparison and correlates of participation in older adults without disabilities. Archives of Gerontology and Geriatrics, 2009, 49: 397-403.

4. Anaby D, Miller WC, Eng JJ, Jarus T, Noreau L, Group PR. Can personal and environmental factors explain participation of older adults? Disability and Rehabilitation, 2009;31(15):1275–82.

5. Rubio E, Lázaro A, Sánchez-Sánchez A. Social participation and independence in activities of daily living: a cross sectional study. BMC Geriatrics, 2009, 9:26.

6. Tomioka K, Kurumatani N, Hosoi H. Social Participation and the Prevention of Decline in Effectance among Community-Dwelling Elderly: A Population-Based Cohort Study. PLoS ONE, 2015, 10(9).

7. Martins AC, Silva C, Moreira J, Rocha C, Gonçalves A. Escala de Autoeficácia para o Exercício: validação para a população portuguesa. Conversas de Psicologia e do Envelhecimento Ativo, 2017, 126-141.

Participation, Community-dwelling adults, Falls, Functional capacity, Self-efficacy for exercise.

O38 History of fall and social participation profile among community dwelling older adults: is there any relation with frailty phenotype?

Mónica Calha, Anabela C. Martins. Correspondence: Mónica Calha ([email protected]).

Population ageing is a worldwide phenomenon. The number of frail older people is increasing rapidly, with a substantial impact on economic, social and health systems. Cardiovascular Health Study data [1] estimated that, in the population aged 65 years or more, 6.3% of older adults have the frailty phenotype. According to Fried et al., frailty is a vulnerable condition characterized by the decline of biological reserves [2,3]. This happens due to the deregulation of multiple physiological systems, which puts individuals at risk by reducing the organism's resistance to stressors, with a subsequent loss of functional homeostasis. One of the most significant aspects described in the literature is that frailty is an important risk factor for falls: it is estimated that one in every three adults over 65 years falls each year. The frailty syndrome also compromises the social participation of older adults.

To determine whether adults aged 65 years or over with the frailty phenotype have a higher number of falls in the 12 months prior to the study and worse social participation than those without this phenotype.

A sample of 122 community-dwelling adults (age ≥65 years), mean age 72.22 ± 6.44 years (63.9% female), with a history of at least one fall in the previous year, participated in this cross-sectional study. Data were collected with a demographic, clinical and fall history questionnaire, functional tests and the Activities and Participation Profile related to Mobility (PAPM).

We verified statistically significant differences in the history of falls between non-frail (n = 24; mean number of falls = 1.92) and frail/pre-frail (n = 31; mean number of falls = 3.06) individuals (p = 0.036), as well as in the social participation scores of both groups, with a worse profile among the frail/pre-frail (0.821) compared with the non-frail (0.276) (p < 0.001).

Adults aged 65 years or over who present the frailty or pre-frailty phenotype have a higher rate of falls in the previous 12 months and more restrictions in social participation than non-frail adults. Physiotherapists can use this knowledge to understand the needs of this population and to plan interventions focused on fall prevention and on strategies to promote participation as promising outcomes.

1. Etman A, Burdorf A, Van der Cammen TJM, Mackenbach JP, Van Lenthe FJ. Socio-demographic determinants of worsening in frailty among community-dwelling older people in 11 European countries. Journal of Epidemiology and Community Health. 2012; 66(12):1116-1121.

2. Eyigor S, Kutsal YG, Duran E, et al. Frailty prevalence and related factors in the older adult - FrailTURK Project. American Aging Association. 2015; 37(3):1-13.

3. Tarazona-Santabalbina FJ, Gómez-Cabrera MC, Pérez-Ros P, et al. A multicomponent exercise intervention that reverses frailty and improves cognition, emotion, and social networking in the community-dwelling frail elderly: a randomized clinical trial. Journal of the American Medical Directors Association. 2016; 17(5):426-433.

Community dwelling adults, Frailty phenotype, Risk of falls, Social participation.

O39 Sexual assistance through the eyes of sex workers: one path to improve sexual lives of people with disabilities

Ana R. Pinho 1, Fernando A. Pocahy 2, Conceição Nogueira 1. 1 Center for Psychology, Faculty of Psychology and Education Sciences, University of Porto, 4200-135 Porto, Portugal; 2 Universidade do Estado do Rio de Janeiro, 20550-900 Rio de Janeiro, Brazil. Correspondence: Ana R. Pinho ([email protected]).

Historically, people with disabilities have been seen as asexual, and their sexual rights have often been neglected. Nowadays, some progress has been made, but they still face multiple stereotypes and barriers that limit their social and sexual lives. Sexual assistance is a form of sexual expression in which trained individuals provide sexual services to clients with disabilities, improving their well-being in relation to sexuality. In Portugal, however, the only way to access commercial sex is through sex workers, who have no training in attending clients with disabilities.

To understand whether sex workers see training as useful for improving the psychological and sexual health both of clients with disabilities and of the workers themselves.

An exploratory qualitative study was conducted, in which interviews with 13 sex workers were analysed using the thematic analysis method proposed by Braun and Clarke [1].

Four themes emerged from the analysis of the interviews. The sex workers theme focuses on their life experiences and motivations for attending clients with disabilities. The clients theme characterizes the people with disabilities who seek commercial sex. The search for sex work theme deepens knowledge about how these clients get in touch with sex workers. Finally, the attendance theme explains the dynamics of the relationships established and the many obstacles overcome so that clients can express their sexuality through commercial sex.

The main conclusions provide evidence of the use of commercial sex by people with disabilities, who seek sexual and emotional satisfaction in this service. Certain specificities of the relationship tend to be experienced with feelings of embarrassment on the part of the professionals. Based on the experiences and obstacles sex workers reported when working with people with disabilities, measures were identified to improve the psychological and sexual health of those involved, highlighting the need for training to serve this group of clients, as well as the need for the legalization of sex work.

1. Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Research in Psychology. 2006; 3: 77-101.

Sexual health, Sexual Assistance, Sex Work, Clients with Disabilities.

O40 Results of an intervention program for men who batter women: perceptions of accompanied men

Anne CLG Silva, Elza B. Coelho, Department of Public Health, Federal University of Santa Catarina, 88040-900 Florianópolis, Santa Catarina, Brazil. Correspondence: Anne CLG Silva ([email protected]).

In intimate partner violence, men are the main perpetrators, and it is essential to include them in interventions to decrease violence, because they can take responsibility for the violence and seek new forms of expression. However, interventions with men attract criticism: they use resources that could be targeted at victims; they impose re-education rather than punitive measures; and some critics consider that men do not change their behaviour [1]. Nevertheless, evaluation is one of the major shortcomings of batterer intervention programs, since the effects of men's participation in them have received little analysis [2].

This research aims to analyse the results of a batterer intervention program from the perspective of the men accompanied by the program.

This is a case study conducted in a batterer intervention program with 86 men, using the Centers for Disease Control and Prevention Follow-Up Questionnaire, adapted for use in Brazil. Data were analysed using content analysis techniques. The project was approved by the Human Research Ethics Committee of the Hospital Infantil Joana de Gusmão. Subjects gave their agreement through informed consent.

When asked about changes after 3 months of follow-up in the program, some men reported not having noticed any, which indicates that the program is not effective for all participants and points to the importance of longer follow-ups. However, most men cited changes in the way they act and in how they perceive the division of tasks between men and women; participation in the program can thus be the starting point for rethinking and building new ways of expressing masculinity. The changes cited went beyond the marital relationship, encompassing the relationship with children, the abandonment of addictions and the desire to return to school.

According to the data, attention to perpetrators of violence has a positive influence not only on behaviour towards the partner, but also on the relationship with children and on the abandonment of addictions. Although longer follow-ups - including the couple - are needed, the batterer intervention program may be a tool to decrease violence against women.

1. Antezana AP. Intervenção com homens que praticam violência contra seus cônjuges: reformulações teórico-conceituais para uma proposta de intervenção construtivista-narrativista com perspectiva de gênero. Nova Perspectiva Sistêmica, 2012; 42:9-27.

2. Toneli MJF, Lago MCS, Beiras A, Climaco DA, organizadores. Atendimento a homens autores de violência contra as mulheres: experiências latino-americanas. Florianópolis: UFSC/CFH/NUPPE; 2010.

Violence against women, Batterer intervention, Men, Program evaluation.

O41 Effectiveness of a reminiscence program on cognitive frailty, quality of life and depressive symptomatology in the elderly attending day-care centres

Isabel Gil 1, Paulo Costa 2, Elzbieta Bobrowicz-Campos 2, Rosa Silva 3, Maria L. Almeida 1, João Apóstolo 4. 1 Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 2 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 3 Universidade Católica Portuguesa, Institute of Health Sciences, 4200-374 Porto, Portugal; 4 Health Sciences Research Unit: Nursing, Portugal Center for Evidence-Based Practice: a JBI Centre of Excellence, 3046-851 Coimbra, Portugal. Correspondence: Isabel Gil ([email protected]).

Reminiscence is a therapeutic intervention based on the account of personal experiences that allows access to significant life events. Evidence suggests that this intervention is particularly beneficial for the elderly with neurocognitive disorders [1], especially with regard to psychosocial variables. In Portugal, this intervention is underused, and there is a need to study its applicability and efficacy.

To evaluate the effect of a reminiscence-based program (RBP) [2] on cognitive frailty, quality of life and depressive symptomatology in elderly people attending day-care centres, and to evaluate professionals' satisfaction with the program and identify obstacles to its implementation.

A quasi-experimental one-group study was carried out in four day-care centres in the central region of Portugal. The initial sample included 69 older adults aged ≥65 years. Of those, 28 (mean age 79.33 ± 7.35 years; mean education 3.29 ± 1.86 years) participated in the 7-week RBP, held twice a week. Outcomes of interest were cognitive frailty indicators, measured with the Montreal Cognitive Assessment (MoCA); quality of life, measured with the short version of the World Health Organization Quality of Life scale module for older adults (WHOQOL-OLD-8); and depressive symptomatology, measured with the 10-item Geriatric Depression Scale (GDS-10). In addition, the eight professionals conducting the study were asked to identify obstacles to the successful implementation of the program and to evaluate its structure, themes, contents and the involvement of the elderly in each session.

The RBP was shown to have positive effects on the MoCA and WHOQOL-OLD-8 scores (p < 0.05). An improvement in the GDS-10 score was observed, but it was statistically non-significant. The structure of the program sessions was considered mostly clear and perceptible (94%), and the themes and contents mostly pleasant and appropriate (94%). Positive feedback was obtained regarding the program's capacity to involve the elderly in the proposed activities (87.5%). However, in the professionals' opinion, there is a need for broader training of the teams implementing the RBP, better coordination with the institutions regarding the space used and the activity schedule, and better coordination with the elderly to guarantee their commitment to the program.

Reminiscence was shown to be effective in improving cognition and quality of life, and potentially effective in decreasing depressive symptomatology. It therefore has therapeutic potential, contributing to the improvement of the care provided. Also worth mentioning is the good acceptance of the program, which nevertheless requires qualified professional teams for its implementation.

This study was developed within the context of the project “664367/FOCUS” (funded under the European Union’s Health Programme (2014-2020)) and project ECOG (funded by the Nursing School of Coimbra).

1. Thorgrimsen L, Schweitzer P, Orrell M. Evaluating reminiscence for people with dementia: a pilot study. The Arts in Psychotherapy. 2002;29(2):93-97.

2. Gil I, Costa P, Bobrowicz-Campos E, Cardoso D, Almeida M, ApĂłstolo J. Reminiscence therapy: development of a program for institutionalized older people with cognitive impairment. Revista de Enfermagem ReferĂŞncia. 2017;4(15):121-132.

Reminiscence, Elderly, Cognition, Quality of life, Depressive symptomatology.

O42 Development and validation of a reminiscence group therapy program for older adults with cognitive decline in institutional settings

Isabel Gil 1, Paulo Costa 2, Elzbieta Bobrowicz-Campos 2, Rosa Silva 3, Daniela Cardoso 4, Maria Almeida 1, João Apóstolo 4.

Research has evidenced the positive impact of non-pharmacological therapies aimed at elderly people with cognitive decline in institutional settings. Reminiscence Therapy (RT) emerges in this category as an enabling strategy, which favours moments of happiness, dignity and life purpose [1]. Nonetheless, studies centred on RT are limited in Portugal, with a clear absence of structured intervention programs, hence the need to develop and validate well-defined and replicable RT programs [2].

We intend to construct and validate an RT program directed at elderly people with cognitive decline, to be implemented in institutional settings by healthcare professionals.

The Medical Research Council guidelines for the development of complex interventions were followed [3]. The program was conceptualized in four distinct phases: Phase I (Preliminary), the initial conceptualization of the program design and supporting materials; Phase II (Modelling), consisting of interviews and focus groups with healthcare specialists; Phase III (Field Test), aimed at evaluating each program session; and Phase IV (Consensus Conference), to synthesize the contributions and analyse the challenges that emerged in the preceding phases.

Based on the contributions of experts, healthcare professionals and the institutionalized elderly, an RT program divided into two strands was formed. The main strand includes 14 sessions, held twice a week; the maintenance strand includes seven weekly sessions. Each thematic session is related to the participants' life course, with a maximum duration of 60 minutes. The four-phase conceptualization process resulted in the creation of a digital platform with audio-visual contents to aid professionals during each session; the inclusion of an introductory section that contextualizes the therapeutic potential of RT; the introduction of complementary activities that can additionally be developed in the institutional settings; the reinforcement of multisensory stimulation throughout the program; and the introduction of a final moment of relaxation through abdominal breathing. The terminology used and the visual presentation of the program were reformulated in order to improve the user experience. The elderly and healthcare professionals involved in this process considered the resulting program pleasant and interesting, praising its structure, thematic contents and proposed activities.

The involvement of experts and potential users enabled the program to mirror the needs of the elderly with cognitive decline in an institutional setting. The RT program, structured and validated in the course of this study, demonstrated characteristics adjusted to the target population and setting. However, the effectiveness of the program should be tested in a future pilot study.

This study was developed within the context of the project ECOG, funded by the Nursing School of Coimbra.

1. Subramaniam P, Woods B. The impact of individual reminiscence therapy for people with dementia: systematic review. Expert Review of Neurotherapeutics. 2012;12(5):545-555.

2. Berg A, Sadowski K, Beyrodt M, Hanns S, Zimmermann M, Langer G et al. Snoezelen, structured reminiscence therapy and 10-minutes activation in long term care residents with dementia (WISDE): study protocol of a cluster randomized controlled trial. BMC Geriatrics. 2010;10(1).

3. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Cognitive dysfunction, Aged, Program development, Reminiscence therapy.

O43 Short-term efficacy of a nursing psychotherapeutic intervention for anxiety on adult psychiatric outpatients: a randomised controlled trial

Francisco Sampaio 1,2,3, Odete Araújo 3,4, Carlos Sequeira 2,4, Teresa L. Canut 5, Teresa Martins 2,4. 1 Psychiatry Department, Hospital of Braga, 4710-243 Braga, Portugal; 2 Nursing School of Porto, 4200-072 Porto, Portugal; 3 Center for Health Technology and Services Research, 4200-450 Porto, Portugal; 4 School of Nursing, University of Minho, 4710-057 Braga, Portugal; 5 Department of Public Health, Mental Health and Perinatal Nursing, School of Nursing, Barcelona University, 08907 Barcelona, Spain. Correspondence: Francisco Sampaio ([email protected]).

Several efficacious treatments for anxiety are available, including different forms of psychotherapy and pharmacotherapy [1]. However, the literature offers more findings from studies on the efficacy of psychotherapies/therapies provided by nurses [2,3] than from studies on the efficacy of nursing psychotherapeutic interventions (interventions classified, for instance, in the Nursing Interventions Classification) [4]. Moreover, no studies were found in the literature on the efficacy of psychotherapeutic interventions for anxiety as a symptom.

To evaluate the short-term efficacy of a nursing psychotherapeutic intervention in Portuguese adult psychiatric outpatients with the nursing diagnosis "anxiety".

A single-blind randomised controlled trial was conducted at the psychiatry ward outpatient service of a hospital in the north of Portugal. Participants were psychiatric outpatients, aged 18-64, with the nursing diagnosis "anxiety", randomly allocated to an intervention group (n = 29) or a treatment-as-usual control group (n = 31). The intervention consisted of psychotherapeutic interventions for the nursing diagnosis "anxiety", as integrated in the Nursing Interventions Classification. One mental health nurse provided the individual-based intervention over a 5-week period (one 45-60 minute session per week). The treatment-as-usual control group received only pharmacotherapy (if applicable). The primary outcomes, anxiety level and anxiety self-control, were assessed with the outcomes "Anxiety level" and "Anxiety self-control", respectively, from the Nursing Outcomes Classification (Portuguese version) [5]. Assessments took place at baseline and at post-test (6 weeks later).

Patients from both groups presented improvements in anxiety levels between the pre-test and post-test assessments; however, analysis of means showed that patients in the intervention group presented significantly better results than those in the control group. Furthermore, only patients in the intervention group presented significant improvements in anxiety self-control. The psychotherapeutic intervention presented a very large effect size on anxiety level and a huge effect size on anxiety self-control. Allocation to the intervention group predicted 22.8% and 40% of the variance in the outcomes related to anxiety level and anxiety self-control, respectively.
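
By way of illustration, a between-group effect size such as Cohen's d, one conventional way of quantifying "very large" effects, can be computed as in the following sketch; the post-test scores are invented, not the trial data.

```python
import numpy as np

# Hypothetical post-test anxiety-level scores on a 1-5 scale
# (higher = better), invented for illustration; NOT the trial data.
intervention = np.array([4.2, 4.5, 3.9, 4.8, 4.1, 4.4])
control = np.array([3.1, 3.4, 2.9, 3.6, 3.2, 3.0])

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# By one common benchmark, d above roughly 1.2 is read as 'very large'.
print(f"d = {cohens_d(intervention, control):.2f}")
```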

This study demonstrated that the nursing psychotherapeutic intervention model was efficacious in decreasing anxiety level and improving anxiety self-control in a group of Portuguese adult psychiatric outpatients with pathological anxiety, immediately after the intervention. The results of the multiple linear regression and the very large effect size identified suggest that a significant part of the improvements can be directly attributed to the intervention.

Trial Registration Number

NCT02930473

1. Cuijpers P, Sijbrandij M, Koole SL, Andersson G, Beekman AT, Reynolds CF. The efficacy of psychotherapy and pharmacotherapy in treating depressive and anxiety disorders: a meta-analysis of direct comparisons. World Psychiatry. 2013, 12: 137-148.

2. Asl NH, Barahmand U. Effectiveness of mindfulness-based cognitive therapy for comorbid depression in drug-dependent males. Arch Psychiatr Nurs. 2014, 28: 314-318.

3. Hyun M, Chung HC, De Gagne JC, Kang HS. The effects of cognitive-behavioral therapy on depression, anger, and self-control for Korean soldiers. J Psychosoc Nurs Ment Health Serv. 2014, 52: 22-28.

4. Bulechek GM, Butcher HK, Dochterman JM, Wagner C. Nursing Interventions Classification (NIC). 6th ed. St. Louis: Elsevier; 2012.

5. Moorhead S, Johnson M, Maas ML, Swanson E. Nursing Outcomes Classification (NOC). 5th ed. St. Louis: Elsevier; 2013.

Anxiety, Clinical nursing research, Nursing, Psychiatric nursing, Psychotherapy, Brief.

O44 “Art therapy” in acute psychiatry: a Portuguese case study

Clara Campos 1, Aida Bessa 1, Goreti Neves 1, Isabel Marques 2, Carlos Laranjeira 3. 1 Centro Hospitalar e Universitário de Coimbra, 3000-075 Coimbra, Portugal; 2 Escola Superior de Enfermagem de Coimbra, 3046-851 Coimbra, Portugal; 3 Hospital Distrital da Figueira da Foz, 3094-001 Figueira da Foz, Portugal. Correspondence: Clara Campos ([email protected]).

In recent years, researchers have increasingly valued interventions that use art therapy with individuals with mental illness. This interest rests on the assumption that biological treatment programs (including psychopharmaceuticals) should become more inclusive and incorporate psychosocial approaches based on the recovery model. However, there is as yet no consensus on which techniques and interventions are most effective, nor on their systematization.

a) To evaluate the effectiveness of a 3-session "art therapy" program for individuals with mental illness in changing emotional indicators, namely depression, anxiety, stress and psychological well-being; b) to analyse the meanings each person attributes to their creative self-expression.

We chose a pre-experimental, mixed-methods (quantitative and qualitative) study with a pre- and post-test design and no control group. Twelve male subjects admitted to an acute psychiatry unit, mostly diagnosed with schizophrenia and mood disorders, participated in the study. The instruments used to collect information were the Depression, Anxiety and Stress Scale (DASS-21), the Subjective Well-Being Scale (EBEP, 18 items) and a semi-structured interview.

Comparison of the pre- and post-test evaluations suggests an improvement in the dimensions of anxiety, stress, self-acceptance, life goals and overall psychological well-being. The categories resulting from the thematic analysis of the interviews (hope for the future, learning to manage difficulties and dealing with difficult emotions) revealed the usefulness of the program in the participants' recovery process.

The inclusion of this type of psychosocial intervention in specialized Mental Health Nursing practice helps minimize the impact of the disease within an organizational culture that should increasingly be oriented towards recovery.

Trial registration

NCT03575442

Recovery, Art therapy, Mental Health Nursing.

O45 Nursing care at the postpartum home visit: the couple perspective

Bárbara Pinto 1, Marília Rua 2, Elsa Melo 2. 1 Unidade de Cuidados de Saúde Primários Estarreja I, Agrupamento de Centros de Saúde Baixo Vouga, 3860-335 Beduído, Portugal; 2 Escola Superior de Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal. Correspondence: Bárbara Pinto ([email protected]).

The birth of a child marks a new stage in the family life cycle and implies a process of restructuring and adaptation to physical, psychological, family and social readjustments [1]. This transition entails a change in the roles of all family members and the construction of a new personal, conjugal and family identity [2]. Institutions and health professionals are expected to provide interventions that help families overcome these challenges successfully. At this stage, the home visit stands out as an important nursing intervention: when carried out by the family nurse, it promotes individual and family empowerment and autonomy in healthy parenting.

To understand couples' perceptions of nursing practices in the context of the postpartum home visit, as a contribution to the transition to parenthood.

The research followed a phenomenological, qualitative approach and included eleven couples experiencing parenthood for the first time, enrolled in the Family Health Unit of Barrinha, between October 2016 and January 2017. Semi-structured interviews were conducted in order to capture the narratives of the couples' experiences and to allow a deeper understanding of them. The information was analysed using content analysis, supported by the webQDA software.

This study revealed that the birth of the first child is an event of individual and family development and growth, which implies adaptation to a set of changes and a redefinition of roles, built on a day-to-day basis and in close cooperation between the family and the nursing team. The approach in the home nursing visit was directed at the well-being of the newborn and the mother, appearing to the family as a resource, without exploring interaction and reciprocity within the family. We highlight three dimensions: the postpartum home visit, which describes the participants' experiences of the care provided during this visit; family nursing, which traces the way they understand the work of the family nurse in this transition; and, lastly, postpartum parenting, which reports the mothers' perception of this stage of the life cycle.

The home visit and the family nurse's working philosophy contributed to a positive adaptation to parenthood, brought care closer to the family and added value for improving the quality of health care; nevertheless, it has not yet been fully adopted as a reality in the context of care.

1. Walsh F. Processos normativos da família: diversidade e complexidade. 4 ed. Porto Alegre: Artmed; 2016.

2. Martins C, Abreu W, Figueiredo MC. Transição para a parentalidade: A Grounded Theory na construção de uma teoria explicativa de Enfermagem. Investigação qualitativa em saúde. 2017;2(2017):40-49.

Family Nurse, Family, Puerperium, Home visit.

O46 Skills of occupational therapy students required for an effective relationship

María Y. González-Alonso 1, Valeriana G. Blanco 1, Reninka De Koker 2, Luc Vercruysee 2. 1 Department of Health Sciences, University of Burgos, 09001 Burgos, Spain; 2 Department of Occupational Therapy, University Odisee, 1000 Brussel, Belgium. Correspondence: María Y. González-Alonso ([email protected]).

The acquisition of skills throughout their training facilitates occupational therapists' professional practice and satisfaction. To give direct attention to a situation of disability or risk, the professional must apply the best evidence-based strategy; and to establish a productive relationship with the client, the professional needs to learn to use interpersonal skills.

The objective of the study was to analyse how occupational therapy students' perceptions of their personal traits and challenges change over the course of their studies.

This is a descriptive, cross-sectional study of a purposive sample of 183 occupational therapy students, carried out during the 2016-2017 academic year. An ad hoc questionnaire was prepared, collecting personal data and perceptions of the 29 skills proposed by Taylor [1]; students were asked to rate both their traits and their challenges.

Of the 183 students, 122 were in the first year and 61 in the final year; 47.5% were from Belgium and 52.5% from Spain. The sample profile was 85.8% women; 60.1% lived with their family and 85.8% had not done work placements outside their country. Regarding the skills that defined them, respondents indicated friendly, respectful and loyal, with an average of 22.4 skills. Regarding the abilities they felt they still had to achieve, they identified patient, firm and assertive, with an average of 9.9. First-year students evaluated themselves more positively than final-year students across the different variables. Significant differences related to the year of study were only observed for two traits: empathetic and collaborative.

Occupational therapy students, in both their first and final years, consider that they have a large number of relational skills which enable them to respond appropriately to the events that occur in therapy. Empathy is the only trait showing differences depending on the independent variables studied.

1. Taylor RR. The Intentional Relationship: Occupational Therapy and Use of Self. Philadelphia: F.A. Davis; 2008.

Occupational Therapy, Traits, Challenges, Interpersonal Relationships, Attitudes.

O47 Use of performance-enhancing substances in Portuguese gym/fitness users: an exploratory study

Ana S. Tavares, Elisabete Carolino, Escola Superior de Tecnologia da Saúde de Lisboa, Instituto Politécnico de Lisboa, 1990-096 Lisboa, Portugal. Correspondence: Ana S. Tavares ([email protected]).

The use of performance-enhancing substances (PES) by competitive or recreational sports practitioners is a pertinent and current topic, particularly in the field of public health. Gym users come from diverse socio-demographic backgrounds, and these substances are consumed not only to improve physical performance, but also to obtain a more muscular physique, especially among men, or a leaner one, especially among women whose goal is faster weight loss [1]. In Portugal there are practically no studies on the use of PES outside competitive sport, apart from a study conducted in 2012 by the European Health & Fitness Association [2].

To investigate the prevalence and profile of PES users amongst a sample of Portuguese gym/fitness users.

A cross-sectional, quantitative, exploratory study was conducted with a convenience sample of 453 Portuguese gym/fitness users, recruited directly on social networks (Facebook) and by institutional email (via gyms). Data were collected via a structured online questionnaire. Statistical analysis was performed using SPSS 22.

Among the 453 gym/fitness users (61.3% female; 38.7% male) who participated in the survey, 50 (11.1%) reported PES use (5.4% of females; 19.5% of males). The mean age of PES users was 34.96 years (SD = 10.00). Most were married, unemployed and had a low level of education (up to 9 years of schooling: 41.7%). PES users reported more years of training (4 years) than non-users. The main sports modalities of the respondents were cardio fitness (57.0%), bodybuilding (56.5%), stretching (27.8%) and localized training (27.2%). PES use was suggested mostly by friends (51.9%), peers (30.8%) and the internet (30.8%). The most commonly consumed PES were diuretics (46.0%) and anabolic steroids (44.0%). Thirty percent of PES users reported side effects, the most common being acne (53.3%) and agitation and tremors (40.0%). The main reason for using PES was to improve physical condition (54.0%). Of the non-users, 5.3% expressed an interest in using PES in the future.

This exploratory survey revealed PES use amongst Portuguese gym/fitness users and the growing importance of investigating the psychosocial factors that may influence PES use in this specific population. Exploring these factors may improve the effectiveness of practical interventions and motivational strategies to reduce PES use among gym/fitness users.

1. European Health and Fitness Association. Fitness against doping: Relatório intercalar - principais resultados. Bruxelles; 2011.

2. European Health and Fitness Association. Executive summary of the final report for the Copenhagen Fitness Anti-Doping Conference. Bruxelles; 2012.

Performance enhancing substances, Prevalence, Gym/fitness users.

O48 Identification of frailty condition of elderly people in the community

Inês Machado 1, Pedro Sá-Couto 2, João Tavares 3,4. 1 Department of Medical Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 2 Center for Research and Development in Mathematics and Applications, Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal; 3 Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 4 Coimbra Education School, Polytechnic Institute of Coimbra, 3030-329 Coimbra, Portugal. Correspondence: Inês Machado ([email protected]).

Frailty is a geriatric syndrome with multiple causes and contributors, characterized by diminished strength, endurance and reduced physiologic function, which increases an individual's vulnerability to developing greater dependency and/or death [1]. The nature, definition, characteristics and prevalence of frailty are still under discussion in current research. Identifying frail older adults (OA) has recently been recognized as an important priority, especially among community-dwelling OA.

To determine the prevalence of frail OA in a primary care (PC) setting and to assess the concurrent validity of the Portuguese version of Prisma7 (P7) against two other published and validated instruments: the Frailty Phenotype (FF) and the Groningen Frailty Indicator (IFG).

This study was conducted in one PC unit in the north region of Portugal with a convenience sample of 136 OA (≥65 years). The questionnaire included: 1) sociodemographic, family and health variables; and 2) the frailty instruments P7, FF and IFG. OA were considered frail with: ≥3 positive questions out of 7 for the P7; ≥3 factors out of 5 for the FF; and ≥4 dimensions out of 8 for the IFG. Further details about these scales can be found in Machado. For concurrent validity, methods based on correlation (Spearman rank test) and agreement (Cohen's kappa, sensitivity and specificity values) were used. For the comparison of the two groups (frail or non-frail), an independent t-test was calculated. Finally, a binary logistic regression model was used to identify predictors of frailty.
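
For illustration, the agreement statistics named above (Cohen's kappa plus sensitivity and specificity against a reference instrument) could be computed as in this sketch; the binary frailty classifications are invented, and scikit-learn is an assumed tool, not necessarily the study's software.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical binary frailty classifications (1 = frail) for the same
# older adults under two instruments; these are NOT the study data.
p7 = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
ff = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])

kappa = cohen_kappa_score(p7, ff)                  # chance-corrected agreement
tn, fp, fn, tp = confusion_matrix(ff, p7).ravel()  # FF taken as the reference
sensitivity = tp / (tp + fn)   # frail cases that the P7 also flags
specificity = tn / (tn + fp)   # non-frail cases that the P7 also clears
agreement = (tp + tn) / len(ff)  # raw percentage agreement

print(f"kappa={kappa:.2f} sens={sensitivity:.2f} "
      f"spec={specificity:.2f} agree={agreement:.0%}")
```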

According to the P7, IFG and FF, the prevalence of frail OA was 7.4%, 19.9% and 26.5%, respectively. The percentage agreement between the instruments was moderate, ranging from 68% to 77%, showing that the P7 is only partially concordant with the other instruments. The P7 showed high specificity values but low sensitivity values. Frail OA were characterized (p < 0.05) as being older and having worse health perception, lower physical capacity, slower walking velocity, higher IFG scores and decreased hand grip strength. In the multivariate model, older age (OR = 1.111) and better physical capacity (OR = 0.675) were significant predictors of frailty (p < 0.01).

A sample of more robust people and a "synthetic" application of the P7 (without explaining the questions) may have influenced the prevalence results presented. More studies are needed to further evaluate the psychometric properties of the various tools tested. The P7 should be used with caution in identifying frailty in PC; we therefore suggest incorporating another measure of frailty assessment.

This work was supported in part by the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia), through CIDMA - Center for Research and Development in Mathematics and Applications, within project UID/MAT/04106/2013.

1. Morley JE, Vellas B, van Kan GA, et al. Frailty Consensus: A Call to Action. Journal of the American Medical Directors Association. 2013;14(6):392-397.

Frailty, Elderly, Instrument, Prisma7.

O49 Use of software in learning difficulties of reading: comparative analysis between digital environment and hybrid environment

Ana Sucena 1,2,3,4, Ana F. Silva 1,2. 1 Instituto Politécnico do Porto, 4200-465 Porto, Portugal; 2 Centro de Investigação e Intervenção na Leitura, Instituto Politécnico do Porto, 4200-465 Porto, Portugal; 3 Centro de Investigação em Estudos da Criança, Instituto de Educação, Universidade do Minho, 4710-057 Braga, Portugal; 4 Centro de Investigação em Reabilitação, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-465 Porto, Portugal. Correspondence: Ana Sucena ([email protected]).

Difficulties in learning letter-sound relations are seen as a risk factor for future difficulties in learning to read [1]. Ideally, children at risk of failing to learn reading and writing should be identified in the last year of pre-school or early in the first school year, so that intentional programs promoting basic reading skills can be implemented [2,3,4]. The most promising reading support programs combine explicit phonological awareness training with highly structured reading instruction [5,6].

This study evaluated the impact of two early intervention programs for reading learning difficulties: a program delivered exclusively in a virtual environment and a hybrid program comprising sessions in both virtual and real environments.

Participants were 57 children attending the first year of schooling, native speakers of European Portuguese, identified as at risk of reading learning difficulties. The children were divided into three groups: (a) virtual-environment intervention, training with the Graphogame software; (b) hybrid intervention, training with the Graphogame software plus face-to-face sessions on pre-reading and reading skills led by a technician from the CiiL team (Center for Research and Intervention in Reading); and (c) no intervention beyond that provided by the regular education system. The intervention programs were delivered in a school context, with the virtual component (Graphogame) run daily in sessions of 10 to 15 minutes. The real-environment intervention was carried out once a week, in activities of 30 to 40 minutes, using playful materials created specifically for the present study. In both types of session, the groups consisted of two to five children. Participants were assessed on letter-sound relations, phonemic awareness, word reading and pseudoword reading.

Both intervention environments produced significantly more positive effects than the control condition. The Portuguese Basis Graphogame software thus proved an effective tool, with an even more positive effect when used in parallel with face-to-face reading promotion sessions.

Early intervention in reading difficulties should promote the explicit training of phonemic awareness and letter-sound relations so that the decoding process can develop. Although the virtual environment - in this case the Portuguese Basis Graphogame software - is a highly effective tool, it should ideally be combined with a real-environment intervention to ensure that the child effectively masters letter-sound relations and trains the decoding process intensively.

1. Lyytinen H. State-of-Science Review: SR-D12 New Technologies and Interventions for Learning Difficulties: Dyslexia in Finnish as a Case Study. Foresight Mental Capital and Wellbeing Project: The Government Office for Science. London, UK; 2008.

2. Hatcher P, Hulme C, Snowling M. Explicit phoneme training combined with phonic reading instruction helps young children at risk of reading failure. Journal of Child Psychology and Psychiatry. 2004, 45: 338-358.

3. Wimmer H, Mayringer H. Dysfluent reading in the absence of spelling difficulties: A specific disability in regular orthographies. Journal of Educational Psychology. 2002, 94: 272-277.

4. Saine N, Lerkkane M, Ahonen T, Tolvanen A, Lyytinen H. Computer-Assisted Remedial Reading Intervention for School Beginners at Risk for Reading Disability. Child Development. 2011, 82: 1013-1028.

5. Hatcher P, Hulme C, Ellis A. Ameliorating early reading failure by integrating the teaching of reading and phonological skills: The phonological linkage hypothesis. Child Development. 1994, 65: 41-57.

6. Hatcher P, Hulme C, Miles J, Carroll J, Hatcher J, Gibbs S, Smith G, Bowyer-crane C, Snowling M. Efficacy of small group reading intervention for beginning readers with reading-delay: a randomised controlled trial. Journal of Child Psychology and Psychiatry. 2006, 47: 820-827.

Graphogame, Reading acquisition, Reading intervention.

O50 Consumption patterns of non-steroidal anti-inflammatory drugs and attitudes towards the medicine residues in north and central regions of Portugal

Andreia Carreira 1, Catarina Valente 1, Joana Tomé 1, Tânia Henriques 1, Fátima Roque 1,2, Márcio Rodrigues 1,2, Maximiano P. Ribeiro 1,2, Paula Coutinho 1,2, Sandra Ventura 1,2, Sara Flores 1,2, Cecília Fonseca 1,2, André RTS Araujo 1,2,3. 1 School of Health Sciences, Polytechnic Institute of Guarda, 6300-749 Guarda, Portugal; 2 Research Unit for Inland Development, Polytechnic Institute of Guarda, 6300-559 Guarda, Portugal; 3 LAQV/REQUIMTE, Department of Chemical Sciences, Faculty of Pharmacy, University of Porto, 4050-313 Porto, Portugal. Correspondence: André RTS Araujo ([email protected]).

Non-steroidal anti-inflammatory drugs (NSAIDs) are one of the most commonly used medications in the world because of their analgesic, antipyretic and anti-inflammatory properties [1–3]. However, their use is associated with the occurrence of serious adverse drug events, particularly gastrointestinal, cardiovascular and renal complications [3].

To assess the pattern of NSAID consumption by adult residents of the north and central regions of Portugal, and to evaluate their behaviour concerning the residues remaining after the use of medicine packages.

A questionnaire survey was administered to a sample of 400 pharmacy customers in the districts of Aveiro, Leiria, Porto and Viseu between December 2015 and February 2016, with questions regarding NSAID consumption and attitudes towards medicine residues.

In our study, the prevalence rate of NSAID use in the last 6 months was 74.3% (95% CI 70.0-78.6), showing a high level of consumption of this pharmacotherapeutic group. The most commonly used NSAID was ibuprofen (76.4%), followed by diclofenac (36.0%) and nimesulide (8.4%). The most reported therapeutic indications were headaches (36.4%), followed by back pain (33.3%), fever (24.6%) and flu (20.5%). Surprisingly, adverse drug events were reported by only 6.7% of respondents, the most common being diarrhoea (4.0%). These results could be explained by the fact that NSAID use is episodic and limited to short periods, and the respondents probably did not attribute adverse effects to these medicines. Regarding the destination of medicine packages no longer used, 58.0% of respondents claimed to return them to a pharmacy, 17.3% threw them away with common waste, 24.0% kept them at home, 0.5% disposed of them in the sewer and 0.2% donated them to charities.
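
As a side note, a prevalence estimate with a normal-approximation (Wald) 95% confidence interval of the kind reported above can be reproduced as follows; the counts are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

# Hypothetical counts: 297 NSAID users out of 400 respondents (~74.3%),
# chosen only to illustrate the arithmetic; NOT the study data.
users, n = 297, 400

p = users / n                          # point estimate of prevalence
se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
z = 1.96                               # two-sided 95% normal quantile
low, high = p - z * se, p + z * se     # Wald confidence interval

print(f"prevalence = {p:.1%} (95% CI {low:.1%} to {high:.1%})")
```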

These findings make the trivialization of NSAID consumption evident, making it imperative to monitor their use and to educate users on rational use. On the other hand, it is important to keep encouraging and educating the population to adopt adequate attitudes towards the recycling of medicine residues.

1. Cryer B, Barnett MA, Wagner J, Wilcox CM. Overuse and misperceptions of nonsteroidal anti-inflammatory drugs in the United States. Am J Med Sci. 2016, 352(5):472–80.

2. Green M, Norman KE. Knowledge and use of, and attitudes toward, non-steroidal anti-inflammatory drugs (NSAIDs) in practice: A survey of Ontario physiotherapists. Physiother Canada. 2016, 68(3):230-41.

3. Koffeman AR, Valkhoff VE, Celik S, 't Jong GW, Sturkenboom MCJM, Bindels PJE, et al. High-risk use of over-the-counter non-steroidal anti-inflammatory drugs: a population-based cross-sectional study. Br J Gen Pract. 2014, 64(621):e191-8.

Nonsteroidal anti-inflammatory drugs, Consumption patterns, Attitudes, Medicine residues.

O51 Allergic rhinitis characterization in community pharmacy customers of Guarda city

Hélio Guedes 1, Agostinho Cruz 2, Cecília Fonseca 1,3, André RTS Araujo 1,3,4. 1 School of Health Sciences, Polytechnic Institute of Guarda, 6300-749 Guarda, Portugal; 2 School of Health Sciences, Polytechnic Institute of Porto, 4200-072 Porto, Portugal; 3 Research Unit for Inland Development, Polytechnic Institute of Guarda, 6300-559 Guarda, Portugal; 4 LAQV/REQUIMTE, Department of Chemical Sciences, Faculty of Pharmacy, University of Porto, 4050-313 Porto, Portugal.

Allergic rhinitis (AR) is a hypersensitivity reaction caused when inhaled particles contact the nasal mucosa and induce an immunoglobulin E-mediated inflammatory response resulting in sneezing, nasal itching, rhinorrhoea, nasal obstruction, or a combination of those symptoms [1]. The prevalence of AR in adults is commonly cited as 10% to 30% [1]. It is increasingly recognized that the symptoms of AR often adversely impact the quality of life of affected individuals and impose a significant health and socio-economic burden on the individual and society [2].

The aims of the present research were to estimate the prevalence of AR, determine the predominant symptoms, assess the impact on quality of life (QoL), and characterize the control strategies and treatment of AR among pharmacy customers in the city of Guarda.

An observational, cross-sectional, analytical study was conducted, with a questionnaire survey developed and used as the data collection instrument. This included the Control of Allergic Rhinitis and Asthma Test (CARAT) and the World Health Organization quality of life scale (WHOQOL-Bref). Data collection took place in community pharmacies in the city of Guarda between May and December 2014.

In the sample of 804 respondents, there was a predominance of females (66.3%) and the average age was 48.3 ± 16.5 years. The prevalence rate of AR was 13.1% (95% CI 10.8-15.4). About 40% of the respondents with AR had no medical diagnosis. There were no gender differences in quality of life (p = 0.929) or in the control of AR symptoms (p = 0.168). On the other hand, a high level of education (higher education) seemed to contribute to better quality of life (p = 0.001) and better symptom control (p = 0.019). Better control of AR symptoms was also associated with better quality of life (Pearson's r = 0.292, p = 0.003).

The prevalence rate was estimated at between 10.8% and 15.4%, based on both medical diagnoses and the symptomatic diagnosis made through the data collection instrument. The results indicated that although the respondents did not have their AR properly controlled and suffered from associated comorbidities, they reported reasonable quality of life indexes.

1. Mims JW. Epidemiology of allergic rhinitis. Int Forum Allergy Rhinol. 2014,4(S2):S18–20.

2. Maspero J, Lee BW, Katelaris CH, Potter PC, Cingi C, Lopatin A, et al. Quality of life and control of allergic rhinitis in patients from regions beyond western Europe and the United States. Clin Exp Allergy 2012, 42(12):1684–96.

Allergic rhinitis, Community pharmacy customers, Quality of life.

O52 Weight transfer during walking and functional recovery post-stroke in the first 6 months of recovery – an exploratory study

Marlene Rosa 1,2 ([email protected]). 1 School of Health Science, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal.

One of the most controversial abnormal patterns during walking in patients with stroke occurs during weight transfer (WT) onto the paretic lower limb; however, little is known about the knee patterns developed during stroke recovery.

To explore the importance of the knee kinematic pattern in the weight transfer (WT) walking period for functional recovery in the first 6 months post-stroke.

Inpatients with a first ischemic stroke (<3 months), able to walk, were evaluated (T0) and re-evaluated 6 months post-stroke (T1). Patients were video-recorded in the sagittal plane while walking at their self-selected speed, and the video was used to classify the knee pattern during WT. Walking speed, self-perceived balance, knee muscle strength and sensory-motor function of the hemiparetic lower limb were also assessed. Participants were stratified according to knee pattern recovery, and comparisons between and within groups were conducted.

Thirty-two patients (70.28 ± 10.19 years; 25.54 ± 3.26 kg/m²) were included. Different groups were identified according to the knee pattern: (1) normal at T0 and T1 (N = 10); (2) normal pattern only at T1 (N = 7); (3) acquisition of/change in the knee pattern deviation (N = 7); (4) maintenance of the knee pattern deviation (N = 8). Modifications in the normal knee pattern might develop in order to reach acceptable levels of functional performance (p > 0.05, Groups 1/3). Speed and balance recovery was restricted when an abnormal knee pattern in WT was observed (Groups 3 and 4), and was worst when this pattern persisted (Group 4).

Correcting the knee pattern in WT might benefit stroke recovery. A better understanding of the causes of knee pattern deviations in WT will help establish stroke treatment priorities.

NCT02746835

Weight transfer, Gait, Stroke, Knee patterns.

O53 Reference values of cardiorespiratory fitness field tests for the healthy elderly Portuguese

Patrícia Rebelo 1,2, Ana Oliveira 1,2,3, Alda Marques 1,2. 1 Respiratory Research and Rehabilitation Laboratory, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 2 Institute for Research in Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal; 3 Faculty of Sports, University of Porto, 4200-450 Porto, Portugal. Correspondence: Patrícia Rebelo ([email protected]).

Cardiorespiratory fitness (CRF) is recognized as an independent predictor of all-cause morbidity and mortality and is closely related to people's functional capacity [1]. Recently, CRF has been described as a clinical vital sign, which highlights its role in health promotion and disease prevention [2]. The 6-min walk test (6MWT), the incremental shuttle walk test (ISWT), the unsupported upper limb exercise test (UULEX) and the 1-min sit-to-stand test (1'STS) are tests used worldwide to assess CRF. Reference values for these tests are, however, lacking for the Portuguese elderly population, which hinders interpretability and limits confidence in clinical decision-making in the field of CRF.

To contribute to establishing reference values for the 6MWT, ISWT, UULEX and 1'STS in the healthy Portuguese elderly population.

A cross-sectional study was conducted with healthy elderly volunteers [3] recruited from the Centre region of Portugal. Each participant performed two repetitions of the 6MWT, ISWT, UULEX and 1’STS; the best repetition was considered for analysis. Descriptive statistics were used to determine reference values by age decade (61-70; 71-80; 81-90) and gender. Two-way ANOVA was used to investigate the effects of age, gender and their interaction (a sketch of this analysis is given below). Values are presented as mean ± standard deviation or median [95% confidence interval].
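
For illustration only, the following minimal sketch shows how such a two-way ANOVA (age decade × gender, with interaction) can be run; the data, column names and effect sizes are invented, not the study’s.

```python
# Illustrative two-way ANOVA on a field-test outcome; all values are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "age_group": rng.choice(["61-70", "71-80", "81-90"], size=n),
    "gender": rng.choice(["male", "female"], size=n),
})
# Simulate a 6MWT-like distance that declines with age decade (hypothetical effect).
base = df["age_group"].map({"61-70": 500.0, "71-80": 420.0, "81-90": 280.0})
df["distance_m"] = base + np.where(df["gender"] == "male", 30.0, 0.0) + rng.normal(0, 60, n)

# Main effects of age decade and gender plus their interaction.
model = smf.ols("distance_m ~ C(age_group) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II ANOVA table with F tests and p-values
```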

262 healthy people were enrolled (61.5% female; 75.0±0.5 yrs); 125 completed the 6MWT (66.4% female; 75.1±0.7 yrs), 83 the ISWT (57.3% female; 76.1±1.0 yrs), 210 the UULEX (63.3% female; 75.7±0.6 yrs) and 50 the 1’STS (54% female; 72.1±1.0 yrs). Values decreased significantly across the decades and differed significantly between males and females (p < 0.05) across all tests. The following values were found for: I) 6MWT (61-70y: males 519.7[484.7-554.7]m vs. females 488.3[458.8-517.7]m; 71-80y: 461.0[389.5-532.5]m vs. 377.1[316.4-437.8]m; 81-90y: 294.0[226.5-361.6]m vs. 254.1[211.6-296.6]m); II) ISWT (61-70y: males 515.0[304.2-725.8]m vs. females 353.3[212.5-494.2]m; 71-80y: 428.8[299.9-557.6]m vs. 234.6[151.6-317.6]m; 81-90y: 131.8[45.3-218.4]m vs. 161.0[113.2-208.8]m); III) UULEX (61-70y: males 9.6[8.5-10.7]min vs. females 8.3[7.4-9.2]min; 71-80y: 8.5[7.1-9.9]min vs. 6.6[5.4-7.8]min; 81-90y: 5.8[4.4-7.1]min vs. 4.7[3.9-5.9]min); and IV) 1’STS (61-70y: males 40.0[33.2-46.8]rep/min vs. females 37.13[33.65-40.61]rep/min; 71-80y: 30.7[26.4-34.9]rep/min vs. 33.3[26.1-40.4]rep/min; 81-90y: 29.0[9.1-67.1]rep/min vs. 22.3[16.2-28.3]rep/min). Significant interactions between age and gender were only observed in the ISWT.

The population studied presented worse results in the 6MWT, similar results in the ISWT and better results in the 1’STS compared with international studies [4-6]. No studies were found for the UULEX. These differences highlight the importance of using population-specific reference values in CRF assessment. Further studies with larger and representative samples are needed to confirm these results.

1. Harber MP, Kaminsky LA, Arena R, Blair SN, Franklin BA, Myers J, et al. Impact of cardiorespiratory fitness on all-cause and disease-specific mortality: Advances since 2009. Prog Cardiovasc Dis. 2017; 60(1):11-20.

2. Ross R, Blair SN, Arena R, Church TS, Després J-P, Franklin BA, et al. Importance of assessing cardiorespiratory fitness in clinical practice: a case for fitness as a clinical vital sign: a scientific statement from the American Heart Association. Circulation. 2016;134(24)

3. World Health Organization. World report on ageing and health. Geneva: World Health Organization; 2015.

4. Casanova C, Celli B, Barria P, Casas A, Cote C, De Torres J, et al. The 6-min walk distance in healthy subjects: reference standards from seven countries. Eur Respir J. 2011;37(1):150-6.

5. Dourado VZ, Vidotto MC, Guerra RLF. Reference equations for the performance of healthy adults on field walking tests. J Bras Pneumol. 2011;37(5):607-14.

6. Strassmann A, Steurer-Stey C, Dalla Lana K, Zoller M, Turk AJ, Suter P, et al. Population-based reference values for the 1-min sit-to-stand test. Int J Public Health. 2013;58(6):949-53.

Cardiorespiratory Fitness, Cardiorespiratory field tests, Reference values, Elderly population.

O54 Prevalence and factors associated with frailty in the elderly attended in ambulatory care

Clóris RB Grden 1, Luciane PA Cabral 1, Carla RB Rodrigues 2, Péricles M Reche 1, Pollyanna KO Borges 1, Everson A Krum 2, 1 Departamento de Enfermagem e Saúde Pública, Universidade Estadual de Ponta Grossa, 4748 Ponta Grossa, Paraná; 2 Hospital Universitário Regional dos Campos Gerais, Universidade Estadual de Ponta Grossa, 84031-510 Ponta Grossa, Paraná, correspondence: Clóris RB Grden ([email protected]).

The aging process, understood as dynamic and progressive, contributes to the reduction of physical reserves and a higher prevalence of pathological processes, predisposing the elderly to frailty [1]. Canadian researchers define frailty as a multifactor syndrome involving biological, physical, cognitive, and social factors [2], which contribute significantly to disability and hospitalization [3]. Considered a modern geriatric syndrome, it is related to physiological changes, diseases, polypharmacy, malnutrition, social isolation and unfavourable economic situation [4, 5].

The objective of this study was to identify the prevalence of and factors associated with frailty in the elderly attended in outpatient care. A cross-sectional study was carried out with 374 elderly individuals in outpatient care between October 2015 and March 2016. Data were collected with the Edmonton Frail Scale [2]. Data were analysed with Stata version 12 and described by frequencies, means and standard deviations (SD). Prevalence ratios (PR) were calculated to investigate associations between independent variables and frailty, and adjusted prevalence ratios were obtained by multiple Poisson regression (a sketch of this estimation is given below). The analysis started with a saturated model, and variables that were not statistically relevant were removed, since their exclusion did not modify the results of the independent variables that remained in the model. Statistical significance was set at p < 0.05. The study complied with national and international standards of research ethics involving human subjects and was approved by the institution’s Research Ethics Committee in Human Beings under registration CAAE: 34905214.0.0000.0105.
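
As an illustration of the estimation described above, the sketch below fits a modified Poisson regression (Poisson family with robust standard errors) to a binary frailty indicator; the variables and data are hypothetical stand-ins, not the study’s records.

```python
# Minimal sketch of prevalence ratio estimation via Poisson regression with
# robust (sandwich) standard errors; all variables and data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 374
df = pd.DataFrame({
    "frail": rng.integers(0, 2, n),            # 1 = frail on the Edmonton Frail Scale
    "female": rng.integers(0, 2, n),
    "falls": rng.integers(0, 2, n),
    "hospitalization": rng.integers(0, 2, n),
})

# Modified Poisson regression: Poisson family on a binary outcome, with HC0
# robust covariance so the standard errors remain valid.
fit = smf.glm("frail ~ female + falls + hospitalization",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
pr = np.exp(fit.params)                        # exponentiated coefficients = prevalence ratios
ci = np.exp(fit.conf_int())                    # 95% confidence intervals on the PR scale
print(pd.concat([pr.rename("PR"), ci], axis=1))
```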

The results showed a predominance of women (67.4%), married (54.4%), with a low educational level (55.1%), who lived with relatives (46.3%). The mean age of participants was 67.9 years. Regarding the clinical variables, 97% of the elderly reported having some type of disease, 92.3% used medication, 56.9% had no urine loss, 4.5% used walking sticks, 65.8% denied falls and 69.8% denied hospitalization. Regarding the frailty syndrome, the mean score was 5.9 points, with 40.1% of the elderly classified as frail and 59.9% as non-frail. After multiple regression analysis, the variables that remained associated with frailty were gender (p = 0.002), low education (p = 0.01), falls (p = 0.005), urinary incontinence (p < 0.001), medications (p = 0.02) and hospitalization (p = 0.001).

The study identified important factors associated with frailty in the elderly attending the outpatient clinic. Such results may support the development of gerontological care plans aimed at preventing functional decline and negative outcomes of the syndrome.

1. Maciel GMC, Santos RS, Santos TM, Menezes RMP, Vitor AF, Lira ALBC. Avaliação da fragilidade no idoso pelo enfermeiro: revisão integrativa. R. Enferm. Cent. O. Min. 2016, 6(3):2430-2438.

2. Rolfson D, Majumdar S, Tsuyuki R, Tahir A , Rockwood K. Validity and reliability of the Edmonton Frail Scale. Age Ageing. 2006, 35(5):526-9.

3. Vermeiren S, Vella-Azzopardi R, Beckwée D, Habbig AK, Scafoglieri A, Jansen B, et al. Frailty and the prediction of negative health outcomes: a meta-analysis. J Am Med Dir Assoc. 2016; 17(12): 1163.e1–1163.e17.

4. Morley JE, Vellas B, Kan GAV, Anker SD, Bauer JM, Bernabei R, et al. Frailty consensus: a call to action. JAMDA. 2013, 14(6):392-7.

5. Dent E, Kowal P, Hoogendijk EO. Frailty measurement in research and clinical practice: a review. Eur J Intern Med. 2016, 31:3-10.

Aged, Frail Elderly, Prevalence, Geriatric Nursing.

O55 Trend of mortality from acute myocardial infarction in the state of Santa Catarina, Brazil, from 1996 to 2014

Pedro CM Morais 1, Aline Pinho 2, Daniel M Medeiros 3, Giovanna G Vietta 3, Pedro F Simão 3, Bárbara O Gama 3, Fabiana O Gama 3, Paulo F Freitas 3, Márcia R Kretzer 3, 1 Secretaria Municipal de Saúde de Palhoça, 88132-149 Palhoça, Santa Catarina, Brasil; 2 Hospital Universitário Polydoro Ernani de São Thiago, Universidade Federal de Santa Catarina, 88036-800 Florianópolis, Santa Catarina, Brasil; 3 Universidade do Sul de Santa Catarina, Campus Pedra Branca, 88137-270 Palhoça, Santa Catarina, Brasil, correspondence: Pedro CM Morais ([email protected]).

Acute myocardial infarction (AMI) is a public health problem in the world and in Brazil, owing to its high morbidity and mortality rates. Despite major advances in treatment, AMI accounts for 30% of deaths in Brazil.

To analyse the mortality trend due to acute myocardial infarction in the State of Santa Catarina, from 1996 to 2014.

Ecological time-series study, based on the Mortality Information System database made available by the Department of Informatics of SUS (DATASUS). Deaths from AMI (ICD-10 code I21) among the state’s resident population were selected, according to gender and age group, and simple linear regression was performed (a sketch of this trend analysis is given below). The Research Ethics Committee of the Southern University of Santa Catarina approved this study.
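
The trend analysis reduces to regressing the annual rate on calendar year; the minimal sketch below illustrates this with invented rates (the β values reported in the results are per-year slopes from fits of this kind).

```python
# Illustrative trend fit: annual mortality rate regressed on calendar year.
# The rates are invented for demonstration; they are not the DATASUS data.
import numpy as np
from scipy import stats

years = np.arange(1996, 2015)  # 1996..2014 inclusive
rates = 40.0 - 0.06 * (years - 1996) + np.random.default_rng(2).normal(0, 1.5, years.size)

res = stats.linregress(years, rates)
print(f"beta = {res.slope:.3f} per year, p = {res.pvalue:.3f}")
# A negative slope with p < 0.05 indicates a decreasing trend; a non-significant
# slope (p >= 0.05) is read as a stationary trend.
```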

There were 40,204 deaths from AMI between 1996 and 2014 in Santa Catarina, with small oscillations in mortality rates over the period: 40.33/100,000 inhabitants in 1996 and 36.58/100,000 in 2014 (β = -0.062; p = 0.546). Rates were higher in males, but stationary, at 49.41/100,000 inhabitants in 1996 and 45.62/100,000 in 2014 (β = -0.008; p = 0.949). Females also presented a steady trend, with 31.19/100,000 inhabitants in 1996 and 27.49/100,000 in 2014 (β = -0.113; p = 0.224). Both male and female age groups showed a decreasing and significant trend after 30 years of age. Males aged 60 and older presented high but declining mortality rates. Notably, males aged 70 to 79 years presented a decrease of 13.936/100,000 per year, from 626.51/100,000 inhabitants in 1996 to 379.21/100,000 in 2014 (p < 0.001). Among males aged 80 years or more, the rate was 935.88/100,000 inhabitants in 1996 and 669.99/100,000 in 2014 (β = -12.267; p = 0.004). Females aged 70 and older presented high mortality rates. In the female age group of 70 to 79 years, rates decreased from 399.60/100,000 inhabitants in 1996 to 193.78/100,000 in 2014 (β = -12.115; p < 0.001). In females aged 80 years and over, rates decreased from 811.78/100,000 inhabitants in 1996 to 497.81/100,000 in 2014 (β = -16.081; p < 0.001).

The trend of AMI mortality in Santa Catarina is stationary for both genders but decreases significantly in the age groups over 30 years, with the greatest reductions over 70 years.

Acute Myocardial Infarction, Mortality rate, Ecological study.

O56 Family nurse intervention in the mental adjustment of patients with arterial hypertension

Ana Alves 1, João Simões 1, Alexandre Rodrigues 1, Pedro Couto 2, 1 Escola Superior de Saúde da Universidade de Aveiro, 3810-193 Aveiro, Portugal; 2 Departamento de Matemática, Universidade de Aveiro, 3810-193 Aveiro, Portugal, correspondence: Ana Alves ([email protected]).

The increase in life expectancy and the rise of chronic conditions represent new challenges for the family nurse practitioner. Cardiovascular diseases are the main cause of death in the population, although the numbers have been declining in recent years [1]. These diseases have an important economic impact, resulting both from the incapacity they cause and from treatment-related costs. Arterial hypertension has been gaining relevance because of its prevalence and because population-based epidemiological studies reveal poor levels of control. In this context, a study was conducted to analyse the mental adjustment of patients with arterial hypertension and the impact of the family nurse practitioner during appointments for hypertension monitoring.

The objective was to evaluate the impact of the family nurse practitioner on the mental adjustment of patients suffering from arterial hypertension, registered in the “HTA” program of the Personalised Healthcare Unit of the Healthcare Centre of Sever do Vouga.

A quantitative study was conducted, corresponding to one cycle of the action-research method: an initial assessment was carried out, followed by the implementation of the intervention and then by a new evaluation. The Mental Adjustment to Disease Scale (MADS) was used as the evaluation instrument for mental adjustment, along with a sociodemographic and clinical characterization questionnaire addressed to the participants. Ethical principles were followed during the entire course of the investigation.

The participants in this study had an average age of 70.8 years, were mostly female, and had been diagnosed with hypertension for an average of 8.4 years. To evaluate the internal consistency of the MADS, Cronbach’s alpha was calculated at moments 1 and 2 (a sketch of this computation is given below), with acceptable results except for the fighting-spirit subscale. At moment 1 of data collection, all participants were classified as “fitted” on the fighting-spirit subscale; on the remaining subscales, participants were classified as either “fitted” or “not fitted”. The results obtained at moment 2 revealed the impact of the intervention, since participants initially classified as “not fitted” shifted to “fitted”.
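
For illustration, Cronbach’s alpha can be computed directly from its definition, as in the sketch below; the item matrix is simulated and does not reproduce the MADS data.

```python
# Minimal sketch of Cronbach's alpha, written out from its definition.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(9)
latent = rng.normal(0, 1, (40, 1))
items = latent + rng.normal(0, 0.8, (40, 6))     # 6 correlated items -> reasonable alpha
print(f"alpha = {cronbach_alpha(items):.2f}")
```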

Taking stock of the internship, it can be stated that the expected competences and objectives were achieved. Ultimately, we can conclude that the intervention developed for mental adjustment obtained the expected results, since the participants classified as “not fitted” at the 1st moment were classified as “fitted” at the 2nd assessment moment.

1. Trindade, I. D. (2016). Análise Pragmática da Comorbilidade Associada a Doentes com Hipertensão Arterial em Cuidados de Saúde Primários. Revista Portuguesa de Hipertensão e Risco Cardiovascular, 51, 40.

Mental adjustment, Family nursing, Arterial hypertension.

O57 Family conferences – the two-year experience of a palliative care support team (PCST) in a tertiary hospital

Júlia Alves, Joana Mirra, Rita F Soares, Margarida Santos, Isabel Barbedo, Sara Silva, Elga Freire, Equipa Intra-Hospitalar de Suporte em Cuidados Paliativos, Departamento de Medicina Interna, Centro Hospitalar do Porto, 4099-001 Porto, Portugal, correspondence: Júlia Alves ([email protected]).

The family conference (FC) is an important tool that provides an opportunity to evaluate family dynamics, provide anticipatory care and support feelings related to the loss of a loved one [1]. FCs facilitate communication between healthcare providers, the patient and the family, and allow different options to be discussed and consensus to be summarized, with the ultimate goals of problem solving, decision making and instituting a plan [2].

To describe the experience of FCs held by a PCST from 1 January 2015 to 31 December 2016.

Raw data from FC registers were retrieved and a descriptive analysis was performed.

We consulted 809 patients, and 431 FCs were held (81% scheduled). In 56 FCs the patient was present; when the patient was absent, this was mostly due to the clinical condition, and in a minority of cases the patient chose not to be present. Family members attending FCs were offspring in 67%, spouses in 23%, other relatives in 38% and parents in 4% of cases. All meetings occurred in the presence of one physician and one nurse from the PCST. FCs were held because of patient discharge (88%), worsening of the clinical condition (24%), family needs (59%), discussion of therapeutic goals (38%) and conspiracy of silence (1%). As a consultant team, the PCST is concerned with post-discharge care and evaluates the needs of patients at home with the help of an ambulatory healthcare team or primary care team; FCs therefore aim to prepare families for patient discharge. During FCs the main subjects of discussion were post-discharge healthcare referral (94%), objectives of healthcare (84%), clinical information about diagnosis and prognosis (42%), symptom control (47%), management of expectations concerning the illness (73%), nutrition (12%) and family needs (psychological support in 17% and nursing instructions in 10%).

The number of FCs is increasing, and more are being requested by the referring healthcare teams. To facilitate the registration of FCs, a document was elaborated, and software support will soon be developed. This registry will allow easier analysis and strategic planning to improve interventions and the quality of healthcare provided.

1. Neto I. A conferência familiar como instrumento de apoio à família em cuidados paliativos. Revista Portuguesa de Clínica Geral. 2003;19:68-74.

2. Barbosa A, et al. Manual de Cuidados Paliativos. 2ª ed. Lisboa: Núcleo de Cuidados Paliativos, Centro de Bioética, Faculdade de Medicina da Universidade de Lisboa; 2010.

Palliative care, Family, Health.

O58 Diabetes mellitus and polypharmacy in elderly population: what is the reality?

Claudia Oliveira 1, Helena José 2,3, Alexandre C Caldas 1, 1 Institute of Health Sciences, Universidade Católica Portuguesa, 1649-023 Lisbon, Portugal; 2 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 3 Centro de Formação de Saúde Multiperfil, Luanda, Angola, correspondence: Claudia Oliveira ([email protected]).

Diabetes mellitus is a prevalent disease among the elderly and is listed as one of the leading causes of admissions and readmissions [1]. Older people with diabetes represent a challenge in terms of effective coordination and management in multiple areas, and need to adhere to a medication regimen that is sometimes complex. Polypharmacy is a reality that leads to unnecessary disease progression and complications, reduces functional abilities and quality of life, increases hospitalizations and health costs, and can even lead to death [2]. Managing this phenomenon is extremely hard and requires awareness.

To identify the clinical profile of older people with Diabetes mellitus in two Family Health Units (FHUs Farol and Al-Gharb) in Faro.

An observational, descriptive study was performed with people aged 65 years or above, living in the community and registered at the Health Centre of Faro (FHUs Farol and Al-Gharb). Three hundred and ninety-five patients were interviewed about their medication regimen. For data collection, a sociodemographic questionnaire, the Medication Regimen Complexity Index (MRCI) and biochemical parameters (glycated haemoglobin (HbA1c) and capillary glycaemia) were used.

The sample was composed of people aged 65 years and over [75.59 (±6.75)], with a maximum of 93 years (52.9% women, 47.1% men). For the MRCI, an average of 15.63 (±6.84) was found, with a minimum of 5 and a maximum of 32. We verified a strong and statistically significant positive correlation (r = 0.897; p < 0.001) between the MRCI and the number of drugs prescribed (a sketch of this computation is given below); the study also showed that the number of drugs prescribed increases with advancing age. For HbA1c, an average of 7.09 (±1.14) was obtained, with a minimum of 5.3 and a maximum of 12.4: 57.47% of the patients had an HbA1c value lower than 7%, 22.78% had values between 7% and 7.9%, and 19.75% had values higher than 7.9%. For capillary glycaemia, we obtained a mean of 181.13 (±66.54), with a minimum of 83 and a maximum of 500.
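
A minimal sketch of such a correlation computation follows; the arrays are synthetic stand-ins for the MRCI scores and drug counts, not the study’s data.

```python
# Illustrative Pearson correlation between regimen complexity and drug count.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_drugs = rng.integers(1, 15, 100)                 # number of prescribed drugs
mrci = 3 + 2.0 * n_drugs + rng.normal(0, 2, 100)   # complexity grows with drug count

r, p = stats.pearsonr(n_drugs, mrci)
print(f"r = {r:.3f}, p = {p:.3g}")                 # a strong positive correlation
```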

Medication non-adherence and polypharmacy are real problems with a negative, potentially fatal impact, and high numbers of prescribed medications are nowadays increasingly common. Unfortunately, the elevated HbA1c and capillary glycaemia values demonstrate that disease management is not effective, so it is urgent to implement programs that help older people self-manage their chronic condition.

1. Kirkman MS, Briscoe VJ, Clark N et al. Diabetes in Older Adults. Diabetes Care. 2012; 35(12): 2650-64.

2. Masnoon N, Shakib S, Kalisch-Ellett L, Caughey G. What is polypharmacy? A systematic review of definitions. BMC Geriatr. 2017;17:230.

Medication adherence, Patients, Aged, Polypharmacy, Diabetes mellitus.

O59 Nurses' perception of the impact of Computerized Information Systems on the global nurses’ workload

Paulino Sousa 1, Marisa Bailas 2, 1 Centro de Investigação em Tecnologias e Serviços de Saúde, 4200-450 Porto, Portugal; 2 Centro Hospitalar de São João, 4200-319 Porto, Portugal, correspondence: Paulino Sousa ([email protected]).

Portugal has a history of almost two decades of computerized information system use in healthcare, particularly in nursing, with the large-scale implementation of Computerized Information Systems (CIS) to support nursing practice. The use of electronic health information to support patient care is undoubtedly responsible for a substantial share of nurses’ overall working time, and we are often confronted with nurses’ opinions that CIS use has a great impact on their overall workload (35 to 50% of the global nurses’ workload).

To identify nurses’ perception of the time spent on the CIS in use in a hospital and its impact on the global nurses’ workload.

A cross-sectional survey was applied to collect data from 148 nurses who use CIS in a hospital (medical and surgical services). This allowed us to estimate the average percentage of time nurses perceive they spend on SClinico® and other information supports, as well as its distribution across a set of nursing activities performed in the system.

The results showed that nurses consider that time spent on information supports represents on average 42.4% of their total working time: 33.5% on the use of SClínico® (mode and median of 30%; SD ±16.25) and 8.9% (mode and median of 5%; SD ±6.67) on nursing records in other non-computerized structures (particularly on paper). These values overlap with those presented in some national and international studies, but are higher than the real time shown in the studies of Silva (2001) and Sousa and colleagues (2015). Nurses who underwent training on the Nursing Information System in use (SClínico®) and on ICNP® differed in their perception of time spent on the CIS.

A permanent issue in the debate on the use of CIS is the time spent on it, in particular in the processes of data access, care planning and record keeping. Nurses perceive the time spent on CIS as an essential part of nursing practice, but one with a high impact on their workload. However, several national and international studies point, in certain contexts, to a lower “real time” share of the global nurses’ workload spent on computerized information systems.

Computerized Information Systems, Electronic health information, Nursing workload, Time spent.

O60 Pain in people 75 and older: association with activity patterns

Maria C Rocha, José G Sousa, Madalena G Silva, Physiotherapy Department, School of Health, Polytechnic Institute of Setúbal, 2910-761 Setúbal, Portugal, correspondence: Madalena G Silva ([email protected]).

The prevalence of pain amongst the elderly population (19.5% [1] to 52.8% [2]) may not be disregarded and varies depending on the age range and context. Regardless, pain in this population group has been associated with reduced functional capacity, changes in gait and sleep patterns, depression and reduced social participation [3]. Exercise has often been recommended as an intervention to manage pain in the elderly [4]; however, long-term adherence to exercise programs is limited [5]. Characterizing pain and exploring its associations with light-intensity physical activity may provide a basis for discussing alternative clinical interventions for the management of pain in this population group.

To characterize the presence, location and duration of pain in very old adults and investigate its association with light intensity physical activity.

A cross-sectional study was implemented with 65 participants aged above 75 years, without cognitive impairment (average age 79.48 ± 4.98). Presence, location and duration of pain were assessed with a sociodemographic and clinical characterization questionnaire. Light-intensity activity was characterized with an activity diary. Given the non-normal distribution, dichotomous nominal variables were analysed with the point-biserial correlation, and Spearman’s rho was used for the remaining variables (a sketch of both is given below).
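
For illustration, the sketch below shows both correlation computations on synthetic data; the variable names and values are hypothetical, not the study’s records.

```python
# Illustrative point-biserial correlation (binary pain indicator vs. continuous
# activity time) and Spearman's rho (activity time vs. an ordinal duration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pain_present = rng.integers(0, 2, 65)            # 1 = reports pain (dichotomous)
light_activity_min = rng.normal(287, 60, 65)     # ~4h47min/day, in minutes
pain_duration_months = rng.integers(1, 120, 65)  # hypothetical ordinal variable

r_pb, p_pb = stats.pointbiserialr(pain_present, light_activity_min)
rho, p_rho = stats.spearmanr(light_activity_min, pain_duration_months)
print(f"point-biserial r = {r_pb:.3f} (p = {p_pb:.3f}); "
      f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```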

Most of our sample (61.5%) was female, with a low educational level (64.6%). Eighty-three percent (n = 54) reported experiencing pain and, of these, 45 (83.3%) had had pain for more than one year. Pain was mainly localized in the knees (n = 30) and in the lower back (n = 27). Our sample spent an average of 5 h 46 min per day in sedentary behaviour (< 1.5 metabolic equivalents, METs) and 4 h 47 min in light-intensity physical activity (> 1.5 and < 3 METs). Light-intensity physical activity showed no significant association with either the presence (p = 0.622) or the duration of pain (p = 0.525).

We conclude that our sample had a very high prevalence of pain lasting more than one year, and that this was not associated with the time spent in light-intensity physical activity. Further studies are required to provide a better understanding of the association between specific types and locations of pain and light-intensity physical activity before the latter can be promoted as a clinical intervention strategy.

1. Satghare P, Chong SA, Vaingankar J, Picco L, Abdin E, Chua BY, et al. Prevalence and correlates of pain in people aged 60 years and above in Singapore: Results from the wise study. Pain Res Manag. 2016;2016.

2. Pereira LV, Vasconcelos PP de, Souza LAF, Pereira G de A, Nakatani AYK, Bachion MM. Prevalence and intensity of chronic pain and self-perceived health among elderly people: a population-based study. Rev Lat Am Enfermagem. 2014;22(4):662–9.

3. Herr K. Pain assessment strategies in older patients. J Pain. 2011;12(3 SUPPL.):S3–13.

4. Tse MM, Vong SK, Tang SK. Motivational interviewing and exercise programme for community-dwelling older persons with chronic pain: A randomised controlled study. J Clin Nurs. 2013;22(13–14):1843–56.

5. Shubert TE, Goto LS, Smith ML, Jiang L, Rudman H, Ory MG. The Otago Exercise Program: Innovative Delivery Models to Maximize Sustained Outcomes for High Risk, Homebound Older Adults. Front Public Heal. 2017;5.

Pain, Activity, Older adults.

O61 Impact of the people with intellectual disability and proxies’ characteristics on quality of life assessment

Cristina Simões 1,2, Sofia Santos 2,3, 1 Economics and Social Sciences Department, Portuguese Catholic University, 3504-505 Viseu, Portugal; 2 Study Center for Special Education, Faculdade de Motricidade Humana, University of Lisbon, 1499-002 Cruz Quebrada, Portugal; 3 Unidade de Investigação e Desenvolvimento em Educação e Formação, Instituto de Educação, University of Lisbon, 1649-013 Lisbon, Portugal, correspondence: Cristina Simões ([email protected]).

The quality of life (QoL) assessment should include self-report measures in the field of intellectual disability (ID), which provide useful information for personalized support plans and give those with ID the opportunity to express their own perspectives regarding themselves and their individual contexts of life. Nevertheless, the communication and understanding limitations of people with ID can be a barrier to obtaining self-report perceptions. The inclusion of a proxy who knows the individual with ID well has been used to overcome the difficulties of the subjective assessments.

This study aims to explore the factors that could potentially explain the disagreements in QoL assessment between people with ID and their proxies.

Data were collected from 207 participants: 69 people with ID, 69 practitioners and 69 family members. QoL was assessed with the Portuguese version of the Personal Outcomes Scale. Paired-sample t tests were performed to examine differences between mean scores (a sketch is given below), and multiple regressions were calculated to analyse the determinants that could explain the directional mean differences between people with ID, support staff and family members.
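
A minimal sketch of the paired-samples comparison follows; each person with ID is paired with a proxy rating of the same scale, and the scores are simulated, not Personal Outcomes Scale data.

```python
# Illustrative paired-samples t test on simulated self- and proxy-report scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
self_report = rng.normal(95, 10, 69)                 # self-report per participant
proxy_report = self_report - rng.normal(3, 6, 69)    # proxies rate slightly lower here

t, p = stats.ttest_rel(self_report, proxy_report)
print(f"t = {t:.2f}, p = {p:.4f}")                   # tests whether the mean difference is zero
```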

The personal and environmental characteristics of people with ID (gender, diagnosis, living circumstances and type of transportation) and the characteristics of practitioners (age, education level, relationship, and health status of the person with ID) moderately explained the disagreements between these participants. The education level of support staff and the health status of the person with ID largely explained the discrepancies between people with ID and key workers. Furthermore, four characteristics were major predictors of disagreement between people with ID and family members: the age of the person with ID, the type of transportation, the self-reported health status of the person with ID, and the health status of the person with ID as assessed by the family members. Finally, robust factors seemed to explain the discrepancies between practitioners and family members: living circumstances, self-reported health status of the person with ID, education level of the key worker and education level of family members.

Among other factors, health status was a major predictor of the different perceptions in QoL assessment. The findings showed that it was possible to predict differences among the three groups of respondents: strictly speaking, the personal and environmental characteristics of people with ID and their proxies predicted the disagreement among the participants.

Quality of life, Intellectual disability, Self-report, Proxies, Predictors.

O62 Factors that influence the decision on how and when to use a “health kiosk”

João Rodrigues 1,4, Paulino Sousa 2, Pedro Brandão 3,4, 1 Administração Regional de Saúde do Norte, 4000-099 Porto, Portugal; 2 Centro de Investigação em Tecnologias e Serviços de Saúde, 4200-450 Porto, Portugal; 3 Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal; 4 Instituto de Telecomunicações, 1049-001 Lisboa, Portugal, correspondence: João Rodrigues ([email protected]).

Health kiosks have been recognized as an effective way to develop citizens’ knowledge and capabilities, which can improve the promotion of healthy behaviour. However, several problems have hindered the large-scale implementation and use of health kiosks, one of the most prominent being the limited acceptance of the technology by citizens.

To identify factors that influence the decision on how and when to use a health kiosk.

The kiosk studied is an innovative project that allows citizens to monitor anthropometric data (weight) and vital signs (heart rate, oximetry and blood pressure) on a routine basis or prior to a medical appointment. This was an exploratory, descriptive and correlational cross-sectional study with a mixed (quantitative and qualitative) approach. The data collection instruments were based on the Technology Acceptance Model (TAM). The analysis of the factors influencing the decision on how and when to use the health kiosk was supported by the constructs perceived utility, perceived ease of use, perceived credibility and perceived knowledge.

Ninety-two citizens agreed to participate in the study, but 34 refused to use the kiosk, justifying their refusal: they considered “not having enough time to use the kiosk” (most of whom verbalized that “they were afraid of losing their medical appointment if they did not hear the call”), “not feeling able to use it”, “not being able to use computers” or “not associating any utility with its use”. The kiosk was used by 58 people who had come to the Health Centre with different objectives: nursing appointment (41.4%), medical consultation (36.2%), administrative contact (13.8%) and accompanying other healthcare users (5.6%). Participants were mostly female (70.7%), with an average age of 51.3 years (median 51.4, SD ±17.6). They reported using technological devices: 94.8% used mobile phones (62.1% smartphones) and 60.3% used computers. Only one participant had prior experience of using a health kiosk. Users appreciated the utility (94.1%), ease of use (85.7%) and credibility (94.6%) of the kiosk. Perceived knowledge was considered by 80.4% of participants as very good (5.4%) or good (75.0%).

The TAM was crucial to understanding the strength that some of its dimensions may have as factors influencing the decision on how and when to use the health kiosk. Most of the citizens who used the health kiosk found it useful, easy to use, credible and secure.

This article is a result of the project NanoSTIMA Macro-to-Nano Human Sensing: Towards Integrated Multimodal Health Monitoring and Analytics, Norte-01-0145-FEDER-000016, supported by the Norte Portugal Regional Operational Programme (NORTE 2020), through Portugal 2020 and the European Regional Development Fund.

Health kiosk, Technology Acceptance Model, Monitoring.

O63 Delirium care: a survey into nursing perceptions and knowledge

Marta Bento 1, Rita Marques 2, 1 Universidade Católica Portuguesa, 1649-023 Lisbon, Portugal; 2 Hospital de Santa Maria, Centro Hospitalar Lisboa Norte, 1649-035 Lisbon, Portugal, correspondence: Marta Bento ([email protected]).

Delirium is a reversible cognitive disturbance of sudden onset, developing in a matter of hours or days and characterized by a fluctuating course of disturbed attention, memory and perception [1]. Although common, this syndrome is often under-diagnosed, and nursing staff are in the best position to recognize, prevent and monitor delirium symptoms. The current approach to delirium care seems to be insufficient, and nurses need more support and guidance to provide high-quality care [2]. The education of nurses in all care settings can provide the foundation to address this massive international challenge.

The aim of the study was to assess nurses’ knowledge about, and perception of, delirious adult/elderly patients.

In this exploratory study, we applied a questionnaire with closed questions to a sample of 49 nurses working at the emergency room of a central hospital in Lisbon during December 2017. To safeguard ethical issues, we requested approval and obtained informed consent from all participants, with the anonymity and confidentiality of the data ensured.

The data revealed a high level of knowledge of the definition of delirium (93.8%) and of the application of the Confusion Assessment Method (86.4%), although this instrument is not applied routinely in the unit. The analysis also revealed a very high level of knowledge about the characteristics of delirious patients, and 100% of the nursing staff recognized that these patients are not always aggressive. Furthermore, dehydration and poor nutrition were identified as risk factors for delirium (95.8% and 91.8%, respectively). By contrast, 63.3% (n = 31) of the respondents did not recognize that a patient with impaired vision is at increased risk of delirium, and 22.4% (n = 11) did not recognize that the risk of delirium increases with age. Equally important, 28.6% (n = 14) of the respondents did not know that patients with delirium present higher mortality rates.

Although the literature assumes that in some hospital settings nurses have insufficient knowledge of delirium-related information, the results of this study show an overall positive mean score, indicating a high level of knowledge of delirium and its risk factors. Nurses have a key role in accurately recognizing and caring for delirious patients, given the poor outcomes of untreated delirium.

1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, DSM-5. Fifth edition. Artmed; 2014. 976 p.

2. Zamoscik K, Godbold R, Freeman P. Intensive care nurses’ experiences and perceptions of delirium and delirium care. Intensive Crit Care Nurs. 2017;40:94-100.

Delirium, Nursing, Knowledge, Risk factors.

O64 Nurses' satisfaction with the use of Health Information System in Funchal hospitals

Plácida Silva 1, Paulino Sousa 2, Élvio Jesus 1, 1 Hospital Dr. Nélio Mendonça, 9004-514 Funchal, Portugal; 2 Centro de Investigação em Tecnologias e Serviços de Saúde, 4200-450 Porto, Portugal, correspondence: Plácida Silva ([email protected]).

The evaluation of a Health Information System (HIS) is a fundamental activity to determine the success of the system and guarantee the continuity of its use, which is why it is important to know its true impact on use and on user satisfaction. In recent years, different studies on nurses’ satisfaction with HIS use have been carried out in Portugal. However, none of them addresses user satisfaction with the HIS structure that supports nursing practice in the Autonomous Region of Madeira (ARM).

To identify dimensions and level of nurses' satisfaction with the HIS in use.

A cross-sectional, exploratory and descriptive study was carried out in the ARM, in inpatient units of the Funchal hospitals. Data collection was supported by the “User Satisfaction Questionnaire for Nursing Information Systems”, based on the DeLone & McLean Model of Information System Success (2003). This instrument uses a 5-point Likert-type scale with a semantic differential anchored at “1 - unsatisfied” and “5 - very satisfied”, in increasing order of satisfaction and with no neutral midpoint.

The response rate of the study population was 50.5%, corresponding to a sample of 283 nurses. Exploratory factor analysis reduced the items to 5 factors, similar to previous studies (a sketch of this step is given below), resulting in the following dimensions: 1) information sharing; 2) structure and content of the information needed for decision-making; 3) support structures and HIS contributions; 4) security, data protection and technical support; and 5) graphical data presentation. The dimension “satisfaction with access to the information needed for decision-making”, with an average value of 3.13 (median 3, SD ±0.70), is the area with the highest level of satisfaction; the dimension “support structures and HIS contributions” has the lowest average value, 2.81 (median 2.8, SD ±0.64). The overall “nurses’ satisfaction with the NIS in use” was 2.96 (±0.57), with a median of 3 on the Likert scale. This overall result indicates a good level of nurses’ satisfaction with the HIS they use.
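
For illustration, the sketch below runs an exploratory factor analysis with five factors on simulated Likert responses; it is a stand-in under assumed data and item counts, not the study’s analysis pipeline.

```python
# Illustrative exploratory factor analysis on simulated 5-point Likert items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(6)
n_nurses, n_items = 283, 25                    # hypothetical questionnaire size
items = rng.integers(1, 6, size=(n_nurses, n_items)).astype(float)  # Likert 1..5

fa = FactorAnalysis(n_components=5, rotation="varimax")
scores = fa.fit_transform(items)               # factor scores per respondent
loadings = fa.components_.T                    # item-by-factor loading matrix
print(loadings.shape)                          # (25, 5): inspect which items load on which dimension
```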

This study allowed us to identify dimensions that fit the DeLone & McLean Model of Information System Success. At the same time, it allowed us to identify factors that determine nurses’ level of satisfaction with the HIS they use, and to inform the assessment of its “use” and “intention to use”.

Health Information Systems, Nurses, Satisfaction, Evaluation.

O65 How prevalent are psychoactive substances among health students?

Sandra Ventura 1, André RTS Araujo 1, João Leitão 1, Odília D Cavaco 1, Rui Correia 2, Maria J. Nunes 2, 1 Escola Superior de Saúde, Instituto Politécnico da Guarda, 6300-749 Guarda, Portugal; 2 Centro de Respostas Integradas da Guarda, Administração Regional de Saúde do Centro, 6300-725 Guarda, Portugal, correspondence: Sandra Ventura ([email protected]).

The consumption of psychoactive substances is widespread among adolescents and young adults and constitutes a public health problem with significant consequences for individuals and societies throughout the world. The main consequences depend on the consumption pattern of the substance used and may result from its immediate or cumulative toxic effects, from intoxication or psychoactive effects, or from dependence syndrome. Alcohol and tobacco consumption, in particular, are the second and third leading risk factors for morbidity and mortality in Europe. Harmful use of alcohol, in acute and chronic conditions, can have serious developmental and social consequences, including violence, neglect and accidents, as well as health problems. Tobacco use also has negative consequences for health and social life.

In this context, the objective of this study was to characterize the consumption of psychoactive substances by students of the Superior Health School of the Polytechnic Institute of Guarda and to reflect on prevention strategies to be implemented to dissuade and reduce consumption among students.

A questionnaire was administered to the students, and 261 answers were collected from a total of 175 students of the Nursing course and 88 of the Pharmacy course.

The results indicate that the most consumed substances were, in descending order: alcohol, tobacco, psychoactive drugs and illicit substances. Regarding alcohol experimentation, 76.6% of nursing students and 85.2% of pharmacy students had already consumed alcohol; these figures are higher than the national lifetime prevalence in 2015 (71%). Current alcohol consumption by nursing students (74.3%) and pharmacy students (89.8%) was also higher than the consumption of Portuguese young people between 13 and 18 years old (62%). Tobacco experimentation by nursing and pharmacy students was 51.6% and 56.8%, respectively, both higher than the national figure of 40%; current tobacco consumption was 39.3% among nursing students and 45.5% among pharmacy students, also higher than the 30% reported nationally. Illicit substances were the least consumed, with a lifetime prevalence of 13.3% and 9.0% for nursing and pharmacy students, respectively, lower than the national figure (19%). Cannabis was the illicit substance most consumed by both nursing and pharmacy students.

These results indicate that there is work to be done in the prevention and dissuasion of psychoactive substance consumption in our student community.

Psychoactive substances, Consumption, Prevalence, Prevention, Dissuasion.

O66 Diabetes Mellitus as a key indirect causal factor for pressure ulcer development

Pedro Sardo 1,2, Jenifer Guedes 2, José Alvarelhão 1, Elsa Melo 1, 1 School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 2 Centro Hospitalar do Baixo Vouga, 3810-501 Aveiro, Portugal, correspondence: Pedro Sardo ([email protected]).

Ensuring patient safety in healthcare is a challenge [1-6]. With the growing incidence of Diabetes Mellitus, healthcare professionals and planners are encouraged to pay further attention to the major complications of this disorder [3]. According to the EPUAP and EWMA [3], poor circulation and infection are among the most common complications that affect diabetic patients. A recent pressure ulcer conceptual framework [7, 8] identified Diabetes Mellitus as a key indirect causal factor (and poor perfusion as a direct causal factor) for pressure ulcer development, and encourages clinical studies that explore the correlations between these specific risk factors and pressure ulcer development.

To identify the influence of Diabetes Mellitus on pressure ulcer development in adult patients admitted to medical and surgical wards in 3 Portuguese hospitals.

Cross-sectional survey conducted on June 16th, 2015 with 236 adult patients admitted to medical and surgical wards in 3 Portuguese hospitals. The study was performed after Hospital Council Board and Ethics Committee approval (Reference Number 049688). Data were analysed using SPSS v25.0. Descriptive statistics were calculated for sample characterization. Pressure ulcer risk, prevalence and incidence were calculated according to the EPUAP statement [9]. Odds ratios (OR) were calculated by univariate logistic regression.

This study included a sample of 236 participants with a median age of 76 years (Q1 = 62 years; Q3 = 83 years). The majority of the participants were male (56.8%), admitted through the emergency service (80.9%) and stayed in medical units (60.2%). On the day of the survey, 121 (51.3%) participants were classified as at “high risk of pressure ulcer development” (Braden Scale score ≤ 16); 45 (19.1%) participants had at least one pressure ulcer documented; 7 (3.0%) participants had developed a new pressure ulcer since admission; and 67 (28.4%) participants had Diabetes Mellitus. In a univariate logistic regression model (sketched below), the odds of developing a pressure ulcer during the inpatient stay were significantly higher for participants with Diabetes Mellitus, with OR = 6.73 (95% CI: 1.27-35.61, Nagelkerke R² = 0.103), compared with the other participants.
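
A minimal sketch of such a univariate logistic regression follows; the data are simulated with roughly the sample’s proportions and are not the study’s records.

```python
# Illustrative univariate logistic regression: pressure ulcer ~ diabetes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 236
df = pd.DataFrame({"diabetes": rng.binomial(1, 0.28, n)})
# Higher event probability among diabetic participants (synthetic effect).
p_event = np.where(df["diabetes"] == 1, 0.08, 0.013)
df["new_pu"] = rng.binomial(1, p_event)

fit = smf.logit("new_pu ~ diabetes", data=df).fit(disp=0)
odds_ratio = np.exp(fit.params["diabetes"])              # exponentiated coefficient = OR
ci_low, ci_high = np.exp(fit.conf_int().loc["diabetes"]) # Wald 95% CI on the OR scale
print(f"OR = {odds_ratio:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")
```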

This study supports the pressure ulcer conceptual framework proposed by Coleman, Nelson [7] and Coleman, Nixon [8], showing that Diabetes Mellitus is a key (indirect) causal factor for pressure ulcer development in inpatient settings. However, further studies are needed to understand the influence of Diabetes Mellitus on (poor) skin and tissue perfusion and, consequently, on pressure ulcer development.

Thanks are due to “Centro Hospitalar Baixo Vouga, EPE” (Portugal), particularly to the Nursing Council Board, head nurses and the nurses who recorded the data in the medical and surgical services of Águeda Hospital, Aveiro Hospital and Estarreja Hospital.

1. EPUAP, EWMA. Patient safety across Europe: the perspective of pressure ulcers.2017.

2. EPUAP, EWMA. The time to invest in patient safety and pressure ulcer prevention is now! 2017.

3. EPUAP, EWMA. Diabetic Control & Pressure Ulcers: fighting fatal complications and improving quality of life. 2017.

4. Sardo P, Simões C, Alvarelhão J, Costa C, Simões CJ, Figueira J, et al. Pressure ulcer risk assessment: retrospective analysis of Braden Scale scores in Portuguese hospitalised adult patients. Journal of Clinical Nursing. 2015;24(21-22):3165-76.

5. Sardo P, Simões C, Alvarelhão J, Costa C, Simões CJ, Figueira J, et al. Analyses of pressure ulcer point prevalence at the first skin assessment in a Portuguese hospital. Journal of Tissue Viability. 2016;25(2):75-82.

6. Sardo P, Simões C, Alvarelhão J, Simões JL, Machado P, Amado F, et al. Analyses of pressure ulcer incidence in inpatient setting in a Portuguese hospital. Journal of Tissue Viability. 2016;25(4):209-15.

7. Coleman S, Nelson EA, Keen J, Wilson L, McGinnis E, Dealey C, et al. Developing a pressure ulcer risk factor minimum data set and risk assessment framework. J Adv Nurs. 2014;70(10):2339-52.

8. Coleman S, Nixon J, Keen J, Wilson L, McGinnis E, Dealey C, et al. A new pressure ulcer conceptual framework. J Adv Nurs. 2014;70(10):2222-34.

Diabetes Mellitus, Nursing Assessment, Portugal, Pressure Ulcer, Risk Assessment.

O67 Frailty syndrome in the elderly hospitalized in a teaching hospital

Luciane Cabral, Clóris Regina, Bruno A Condas, Péricles Reche, Danielle Bordim, Jacy Sousa, Departamento de Enfermagem e Saúde Pública, Universidade Estadual de Ponta Grossa, 4748 Ponta Grossa, Paraná, Brasil, correspondence: Luciane Cabral ([email protected]).

In Brazil and worldwide, the growth of the elderly population is an indisputable reality, so it is necessary to identify the factors that favour illness in this age group, with emphasis on frailty. Frailty can be defined as a syndrome with innumerable causes, characterized by a set of clinical manifestations such as decreased strength, endurance and physiological function, which make the individual more vulnerable to dependence and/or death [1].

In view of the above, the present study aimed to evaluate the frailty syndrome in the elderly hospitalized in a teaching hospital.

A cross-sectional study was carried out with a convenience sample of 107 elderly patients admitted to the emergency room and to the medical, surgical and neurology wards of a teaching hospital in the Campos Gerais region, from October 2016 to April 2017. Data collection included the application of the Mini Mental State Examination [2] for cognitive screening and the Edmonton Frail Scale [3], culturally adapted to Brazilian Portuguese [4]. Data were analysed using Stata® 12. Associations were verified through simple linear regression (Fisher’s F and Student’s t tests), with a significance level of p = 0.05. The project was approved by the Ethics Committee of the State University of Ponta Grossa (CAAE nº 34905214.0.0000.0105).

The results showed a predominance of females (58.9%) and of participants who were married (61.0%), had low schooling (71.0%), lived with a spouse (n = 42, 39.3%) and considered their income satisfactory (50.5%). The mean age of participants was 70.3 years. Regarding clinical variables, 99.1% had a disease, 36.5% used medication and 50.5% reported hospitalization in the last 12 months. The frailty evaluation identified 19.6% of the elderly as non-frail, 24.3% as apparently vulnerable, 26.2% with mild frailty, 15.9% with moderate frailty and 14.0% with severe frailty. Medication use (p = 0.001), solitude (p = 0.001), urine loss (p = 0.001) and hospitalization in the last 12 months (p = 0.001) were associated with the frailty syndrome.

The importance of early detection of the syndrome is emphasized, through the use of an instrument that is valid, reliable and easy for the health team to apply, such as the Edmonton Frail Scale. The results presented can support the planning of health care that considers the characteristics and demands of hospitalized elderly patients, thus contributing to improving the quality of care provided.

1. Morley JE, Vellas B, Kan GAV, Anker SD, Bauer JM, Bernabei R, et al. Frailty consensus: a call to action. JAMDA. 2013, 14(6):392-7.

2. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975, 12(3):189-98.

3. Rolfson D, Majumdar S, Tsuyuki R, Tahir A , Rockwood K. Validity and reliability of the Edmonton Frail Scale. Age Ageing. 2006, 35(5):526-9.

4. Fabrício-Wehbe SCC, Cruz IR, Haas VJ, Diniz MA, Dantas RAS, Rodrigues RAP. Reproducibility of the Brazilian version of the Edmonton Frail Scale for elderly living in the community. Rev Latino-Am Enfermagem. 2013; 21(6):1330-6.

Aged, Frail Elderly, Geriatric Nursing.

O68 Braden Scale accuracy tests

Pressure ulcer management is a challenge [1-6]. Portuguese guidelines [7] encourage regular pressure ulcer risk assessments through the application of the Braden Scale and the categorisation of patients into two levels of risk (defined by a cut-off point of 16). However, the development of pressure ulcers is complex and multifactorial [8], and whenever the Braden Scale score falls below 18, each patient’s functional deficits and/or risk factors should be individually addressed [9].

To analyse Braden Scale accuracy tests in adult patients admitted to general wards in a Portuguese hospital during one year, using different cut-off points.

The study was designed as a retrospective cohort analysis of the electronic health record database of 6,552 adult patients admitted without any pressure ulcer to medical and surgical wards of a Portuguese hospital during 2012. All data were extracted after Hospital Council Board and Ethics Committee approval (Reference Number 049688). Braden Scale accuracy tests (BSAT) such as sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and the area under the curve (AUC) were assessed [10] (see the sketch after the results).

The study included 6,552 participants with a mean age of 64 years and 6 months. The majority of participants were male (52.6%), admitted through the emergency service (69.1%) and stayed in surgical units (64.5%). During the length of stay, 153 (2.3%) participants developed at least one pressure ulcer. For the cut-off point of 15, the BSAT showed: sensitivity of 50% (95% CI: 42%-58%); specificity of 82% (95% CI: 81%-83%); PPV of 6% (95% CI: 5%-8%); NPV of 99% (95% CI: 98%-99%); and AUC of 66% (95% CI: 61%-71%). For the cut-off point of 16: sensitivity of 63% (95% CI: 55%-71%); specificity of 74% (95% CI: 73%-75%); PPV of 5% (95% CI: 4%-7%); NPV of 99% (95% CI: 98%-99%); and AUC of 69% (95% CI: 64%-73%). For the cut-off point of 17: sensitivity of 78% (95% CI: 71%-84%); specificity of 63% (95% CI: 61%-64%); PPV of 5% (95% CI: 4%-6%); NPV of 99% (95% CI: 99%-99%); and AUC of 71% (95% CI: 67%-74%).
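
For illustration, the sketch below shows how these accuracy measures follow from the 2×2 table at each cut-off, and how the AUC treats the (inverted) score as a classifier; scores and outcomes are simulated, not the cohort’s data.

```python
# Illustrative accuracy tests for a risk score dichotomised at several cut-offs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 6552
braden = rng.integers(6, 24, n)                      # synthetic Braden scores (6-23)
p_risk = 1 / (1 + np.exp(0.45 * (braden - 13)))      # lower score -> higher risk (assumed)
ulcer = rng.random(n) < 0.15 * p_risk                # low overall incidence, as in the cohort

for cut in (15, 16, 17):
    pos = braden <= cut                              # "test positive" = score at or below cut-off
    tp = np.sum(pos & ulcer); fp = np.sum(pos & ~ulcer)
    fn = np.sum(~pos & ulcer); tn = np.sum(~pos & ~ulcer)
    print(f"cut-off {cut}: Se={tp/(tp+fn):.2f} Sp={tn/(tn+fp):.2f} "
          f"PPV={tp/(tp+fp):.2f} NPV={tn/(tn+fn):.2f}")

# AUC uses the score itself (negated so that higher = riskier) as the classifier output.
print(f"AUC = {roc_auc_score(ulcer, -braden):.2f}")
```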

Although our BSATs follow the trend of results found in a recent systematic review [11], they show some of the limitations of categorising patients into two levels of risk according to a specific cut-off value. Like Braden [9], we believe that this assessment tool should be supplemented with clinical judgment in order to identify patients’ specific risk factors, which should be individually addressed with accurate preventive interventions. Furthermore, in order to develop evidence-based practice, we should create a minimum data set for pressure ulcer prevention, assessment and documentation [12-14] based on patients’ characteristics, international guidelines [15] and conceptual frameworks [12-14].

1. EPUAP, EWMA. Patient safety across Europe: the perspective of pressure ulcers. 2017.

7. DGS. Escala de Braden: Versão Adulto e Pediåtrica (Braden Q). Lisboa: Direção-Geral da Saúde; 2011.

8. Cox J. Predictors of pressure ulcers in adult critical care patients. Am J Crit Care. 2011;20(5):364-75.

9. Braden BJ. The Braden Scale for Predicting Pressure Sore Risk: reflections after 25 years. Adv Skin Wound Care. 2012;25(2):61.

10. Lalkhen AG, McCluskey A. Clinical tests: sensitivity and specificity. Continuing Education in Anaesthesia, Critical Care & Pain. 2008;8(6):221-3.

11. Park SH, Choi YK, Kang CB. Predictive validity of the Braden Scale for pressure ulcer risk in hospitalized patients. J Tissue Viability. 2015;24(3):102-13.

12. Coleman S, Nelson EA, Keen J, Wilson L, McGinnis E, Dealey C, et al. Developing a pressure ulcer risk factor minimum data set and risk assessment framework. J Adv Nurs. 2014;70(10):2339-52.

13. Coleman S, Nixon J, Keen J, Wilson L, McGinnis E, Dealey C, et al. A new pressure ulcer conceptual framework. J Adv Nurs. 2014;70(10):2222-34.

14. Coleman S, Nelson EA, Vowden P, Vowden K, Adderley U, Sunderland L, et al. Development of a generic wound care assessment minimum data set. J Tissue Viability. 2017;26(4):226-40.

15. NPUAP, EPUAP, PPPIA. Prevention and Treatment of Pressure Ulcers: Quick Reference Guide. Perth, Australia: Cambridge Media; 2014.

Nursing Assessment, Portugal, Pressure Ulcer, Risk Assessment, Sensitivity and Specificity.

O69 Health literacy: the importance of experimental activities in the 1st cycle of basic education: report of an educational intervention on hand hygiene

Maria C Lamas 1,2,3, Carla Lago 4, 1 Escola Superior Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 2 Centro de Investigação em Saúde e Ambiente, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 3 Centro de Investigação em Tecnologias e Serviços de Saúde, 4200-450 Porto, Portugal; 4 Escola EB1/JI Pícua, Agrupamento de Escolas de Águas Santas, 4425-143 Águas Santas, Maia, Portugal, correspondence: Maria C Lamas ([email protected]).

Primary school students usually have very little previous knowledge about a number of educational issues, so it is important to create moments in which students can say whatever they know about a subject, so that an additional scientific explanation can be built on it. The programme of the 1st cycle of basic education aims to develop an attitude of permanent research and experimentation and, in the section “The health of your body”, the production of knowledge and the application of norms of body hygiene [1]. However, the contents of the textbooks for these levels of education do not justify the need for children to adopt these hygiene habits, which should be acquired as early as possible so that they become a systematic routine throughout life. Addressing them explicitly also makes it possible to eradicate some of the alternative conceptions that 1st-cycle students hold on these issues [2], such as a morphological view of microorganisms far removed from reality, picturing them as similar to animals [3,4,5]. There is evidence that children are able to learn about microorganisms at this age [3,4,5], and it is desirable that this occurs as early as possible, avoiding late conceptual changes that are difficult to reconstruct in their entirety [4]. For some authors [6,7], children should realize that the knowledge learned in the classroom can be applied in their daily lives.

In this context, and with the purpose of promoting scientific and critical literacy, we developed an activity about hand hygiene, because handwashing should be learned as a properly reasoned behaviour.

The activities were carried out by all 26 students of class A, 2nd grade, of the School EB1/JI Pícua. The students' ages ranged from 7 to 8 years, with 54% (14) boys and 46% (12) girls. The session started with the question "Handwashing: Why, When, How?". According to the conceptions expressed by the students, the appropriate theoretical contents were presented in a gradual and interactive way. This was followed by the experimental procedure, with permanent monitoring and support, based on the following steps: role-playing the stages of proper handwashing; applied activity; listing expected results; observation of cultures and microscopic observation of microorganisms; and recording and reflecting on the results achieved.

All groups showed the expected results, i.e., higher microbial growth in the quadrants corresponding to unwashed hands.

Given the results and the theoretical framework, the students learned proper concepts on the subject, which allowed them a better understanding of the world around them.

1. DGE - Direção Geral de Educação (2004). Organização Curricular e Programas. 1º Ciclo do Ensino Básico. Lisboa, Ministério da Educação e Ciência, 4ª edição.

2. Mafra, P., Lima, N., Carvalho, G. (2015). Microbiologia no 1º Ciclo do Ensino Básico: Uma proposta de atividade experimental sobre a higiene das mãos. Livro de atas do XI Seminário Internacional de Educação Física, Lazer e Saúde.

3. Byrne J, Sharp J. Children’s ideas about micro-organisms. School science review. 2006;88(322):71-79.

4. Byrne J. Models of Micro-Organisms: Children’s knowledge and understanding of micro-organisms from 7 to 14 years old. International Journal of Science Education. 2011;33(14):1927-1961.

5. Mafra, P. (2012). Os Microrganismos no 1.º e 2.º Ciclos do Ensino Básico: Abordagem Curricular, Conceções Alternativas e Propostas de Atividades Experimentais. Tese de Doutoramento. Braga: Universidade do Minho, Portugal.

6. Pro, A. (2012). Los ciudadanos necesitan conocimientos de ciencias para dar respuestas a los problemas de su contexto. In Pedrinaci, E. (coord.), Caamaño, A., Cañal, P., Pro, A. 11 ideas clave. El desarrollo de la competencia científica. Barcelona: Editorial Graó.

7. Lupión, T. e Prieto, T. (2014). La contaminación atmosférica: un contexto para el desarrollo de competencias en el aula de secundaria. Enseñanza de las Ciencias, 32(1), 1-18.

Hygiene, Handwashing, 1st cycle, Monitored support, Microorganisms.

O70 Stand by me! Assessing the risk of falls in community-dwelling older adults

Luís PT Lemos 1, João Pinheiro 2, Edite Teixeira-Lemos 3,4, Jorge Oliveira 3,4, Ana P Melo 5, Anabela C Martins 6, 1 Centro Hospitalar Tondela-Viseu, 3509-504 Viseu, Portugal; 2 Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 3 Escola Superior Agrária de Viseu, Polytechnic Institute of Viseu, 3500-606 Viseu, Portugal; 4 Centre for the Study of Education, Technologies and Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal; 5 Laboratory Medicine Unit and Department of Quality and Risk Management, Hospital Distrital da Figueira da Foz, 3094-001 Figueira da Foz, Portugal; 6 Physiotherapy Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal. Correspondence: Edite Teixeira-Lemos ([email protected])

About a third of community-dwelling adults aged 65 and older fall each year. Accidental falls are a cause of fractures, traumatic brain injury and even death. They can also lead to restrictions in participation, eventually resulting in loss of independence in normal self-care activities. Falls in older adults are multifactorial and can be caused by medical conditions, cognitive impairment, medications and home hazards. Therefore, a single identifiable factor may account for only a small portion of the fall risk in the community-dwelling elderly population, stressing the need for a multifactorial evaluation in this population.

To identify the risk of accidental falls in an independent elderly population using functional tests that can be routinely applied in clinical practice, and to evaluate the influence of medication on the risk of falling.

The sample consisted of 108 individuals who attended a health care facility between October 2016 and January 2017. Inclusion criteria: age 65-85, Functional Independence Measure (FIM) ≥ 120 and Timed Up and Go (TUG) ≤ 12 s. Individuals with serious cognitive or motor impairment were excluded. A form was filled in with sociodemographic data, daily medication and history of falls. Handgrip strength was measured. Fear of falling was assessed using the Activities-specific Balance Confidence (ABC) scale. Participation was evaluated using the Activities and Participation Profile related to Mobility (APPM). Written informed consent was obtained from all participants.

The average age was 72.28 ± 6.02 years. The majority of subjects were female (54.6%). Fallers were older and had lower ABC scores and handgrip strength. ABC showed strong negative associations with APPM. All of the functional parameters were affected by age, with older individuals performing worse than younger participants. Polypharmacy was identified in 41.7% and increased the risk of falls (OR = 3.597; 95% CI 1.174-11.024; p = 0.025). Individuals taking antidepressants showed an increased risk of falls (OR = 9.467; 95% CI 2.337-38.495; p = 0.002). Anti-arrhythmic drugs (p = 0.002), benzodiazepines (p = 0.015) and other CNS-acting medication (p = 0.039) negatively influenced ABC scores. APPM scores were higher in subjects who reported taking CNS-acting medication (p = 0.012) and anti-arrhythmic medication (p = 0.035).
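As a hedged illustration of how odds ratios like those above are obtained, the following minimal Python sketch computes an OR and its Wald 95% confidence interval from a 2x2 exposure-outcome table; the counts are invented for illustration and are not the study data.

```python
# Minimal sketch: odds ratio with Wald 95% CI from a 2x2 table.
# Counts below are hypothetical, for illustration only.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed fallers, b = exposed non-fallers,
    c = unexposed fallers, d = unexposed non-fallers."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical example: polypharmacy (exposure) vs falls (outcome).
or_, lo, hi = odds_ratio_ci(15, 30, 10, 53)
print(f"OR = {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```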

Individuals with low balance confidence showed higher restrictions in participation related to mobility. All of the functional parameters evaluated in this study were affected by age. These results stress that a comprehensive and multifactorial evaluation of risk factors for falls in older people and the adoption of interventions tailored to this age group, which could include a reassessment of their usual medication, are necessary in order to reduce fall risk and fall-related injury.

Accidental falls, Risk factors, Elderly, Community-dwelling, Polypharmacy.

O71 Trust requirements for the uptake of ambient assisted living digital advisory services

Soraia Teles 1,2, Ana Ferreira 2, Pedro Vieira-Marques 2, Diotima Bertel 3, Constança Paúl 1,2, Andrea C Kofler 4, 1 Institute of Biomedical Sciences Abel Salazar, Department of Behavioral Sciences, University of Porto, 4050-313 Porto, Portugal; 2 Center for Health Technology and Services Research, 4200-450 Porto, Portugal; 3 SYNYO GmbH, 1060 Vienna, Austria; 4 Zurich University of Applied Sciences, Reidbach, 8820 Wädenswil, Zurich, Switzerland. Correspondence: Soraia Teles ([email protected])

For the last 10 years, Ambient Assisted Living (AAL) solutions have been gaining an important place in policies addressing the economic and social challenges resulting from population ageing [1]. The AAL concept corresponds to a new paradigm building on ubiquitous computing devices and new interaction forms to improve older adults' health, autonomy and security [2]. In spite of the promising contributions of AAL solutions to ageing in place, low adoption by end users has been reported [3-5]. This is thought to result from the intersection of technology features, user characteristics and attitudes [6]. Research has suggested that among the attitudinal factors preventing adoption of these solutions is a lack of trust, substantiated, among other factors, by users' concerns about data security and privacy [5,7-10]. Digital advisory services for AAL solutions have to foster users' trust not only in the advisory service per se, but also in AAL products and services and in web communication within a community.

To analyse stakeholders’ attitudes and requirements towards AAL digital advisory services, applying the findings to develop a pan-European advisory and decision-support platform for AAL solutions (ActiveAdvice).

A qualitative approach was used. Thirty-eight semi-structured interviews with AAL stakeholders – older adults and informal caregivers, businesses and government representatives – were conducted in six European countries (Austria, Switzerland, Belgium, the Netherlands, Portugal and the UK). The data were analysed using the matrix method [11].

For the uptake of AAL digital advisory services, the level of users' trust in the system seems to be critical. Features emerging as crucial to fostering trust in digital advisory services were threefold: the presence of security and privacy cues; personalization-related cues; and community features, including the availability of client-to-client interactions and feedback given by reliable peers or experts. Older adults expressed interest in becoming active in a digital community if provided with an environment perceived as secure and, simultaneously, easy to use.

Building trust in AAL digital advisory services depends on multiple and complex user requirements. Security issues have been shown to be of utmost relevance, due to the nature of the information exchanged, i.e. personal, health-related and sensitive data, and to generational preferences, with privacy and security cues having primacy for 'Baby Boomers', as supported by previous research [12]. These findings stress the need for a paradigm shift towards user-centred and user-empowering models and mechanisms for securing interaction with systems (e.g. authentication mechanisms, access control models and visualization techniques, such as the SoTRAACE model) [13].

The authors would like to acknowledge the co-financing by the European Commission AAL Joint Programme and the related national agencies in Austria, Belgium, the Netherlands, Portugal, Switzerland and the United Kingdom.

1. AAL Programme. Strategy 2014-2020 for the Active and Assisted Living Programme. 2014; Retrieved from: http://www.aal-europe.eu/wp-content/uploads/2015/11/20151001-AAL Strategy_Final.pdf.

2. Bechtold U, Sotoudeh M. Assistive technologies: Their development from a technology assessment perspective. Gerontechnology. 2013;11(4):521-533.

3. Doyle J, Bailey C, Scanaill CN, van den Berg F. Lessons learned in deploying independent living technologies to older adults’ homes. Univ Access Inf Soc. 2013;13:191.

4. Michel JP, Franco A. Geriatricians and Technology. J Am Med Dir Assoc. 2014;15(12):860-2.

5. Peek ST, Wouters EJ, van Hoof J, Luijkx KG, Boeije HR, Vrijhoef HJ. Factors influencing acceptance of technology for aging in place: A systematic review. Int J Med Inform. 2014;83(4):235-248.

6. Nedopil C, Schauber C, Glende I. AAL stakeholders and their requirements. 2013; Report by the Ambient and Assisted Living Association.

7. Damodaran L, Olphert W. User Responses to Assisted Living Technologies (ALTs) — A Review of the Literature. Journal of Integrated Care. 2010;18(2):25-32.

8. Nordgren A. Personal health monitoring: ethical considerations for stakeholders. Journal of Information, Communication and Ethics in Society. 2013;11(3):156-173.

9. Olphert W, Damodaran L, Balatsoukas P, Parkinson C. Process requirements for building sustainable digital assistive technology for older people. Journal of Assistive Technologies. 2009;3(3):4-13.

10. Wright D. Structuring stakeholder e-inclusion needs. Journal of Information, Communication and Ethics in Society. 2010;8(2):178-205.

11. Nadin S, Cassell C. Using Data Matrices. In: Cassell C, Symon G, editors. Essential Guide to Qualitative Methods in Organizational Research. London: SAGE Publications Ltd; 2011. p. 271–287.

12. Obal M, Kunz W. Trust development in e-services: a cohort analysis of Millennials and Baby Boomers. J. Serv. Manag. 2013;24(1):45–63.

13. Moura P, Fazendeiro P, Marques P, Ferreira A. SoTRAACE — Socio-technical risk-adaptable access control model. 2017 International Carnahan Conference on Security Technology (ICCST) [Internet]. IEEE; 2017 Oct; Available from: https://doi.org/10.1109/ccst.2017.8167835

Ambient Assisted Living (AAL), Digital Advisory Services, Trust in Online Services, Data Security.

O72 Self-confidence for emergency intervention and nurses’ perceptions of importance of the Intra-Hospital Emergency Team

Marisa J Cardo 1, Pedro Sousa 2,3, 1 Centro Hospitalar de Leiria, 2410-197 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal. Correspondence: Marisa J Cardo ([email protected])

The Intra-Hospital Emergency Team (IHET) emerged to respond to situations of clinical deterioration in hospitalized patients, with nurses being the fundamental link in its activation. During a situation of clinical deterioration requiring IHET intervention, the actions of nurses in inpatient services depend on many factors. To facilitate the activation and effectiveness of this team, nurses must have the self-confidence to activate the IHET. Nurses believe that the IHET's intervention is important in the pursuit of safe care and that the most important ingredient for the effective use of this team is the nurse.

This study aims to evaluate nurses' level of self-confidence for emergency intervention and to identify nurses' perceptions of the importance of the IHET.

This correlational study included 129 nurses from the Centro Hospitalar de Leiria, who answered a questionnaire about the perceived importance of the IHET and the self-confidence scale for emergency situations validated for the Portuguese population by Martins et al. (2014), which consists of 12 items with Likert-type responses. Pearson correlation and Student's t-test were used for data analysis.
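Purely as an illustration of the two tests named above, the short Python sketch below runs a Pearson correlation and an independent-samples Student's t-test on synthetic data; the variable names and values are hypothetical and do not reproduce the study dataset.

```python
# Minimal sketch of the two tests named above, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experience = rng.uniform(1, 35, 129)                       # years of experience
perception = 3 + 0.02 * experience + rng.normal(0, 0.4, 129)
trained = rng.integers(0, 2, 129).astype(bool)             # emergency training yes/no

r, p_r = stats.pearsonr(experience, perception)            # Pearson correlation
t, p_t = stats.ttest_ind(perception[trained], perception[~trained])  # Student's t-test
print(f"r = {r:.2f} (p = {p_r:.3f}); t = {t:.2f} (p = {p_t:.3f})")
```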

In this study, 84% of the nurses were female, with a mean age of 39.70 ± 9.02 years and an average professional experience of 16.97 ± 8.95 years; the majority had training in the emergency area (94%). The mean self-confidence level of the nurses was 3.263 ± 0.571, out of a maximum of 5 points. Regarding the nurses' perception of the importance of the IHET, a positive tendency was observed (results ranged from 3.426 ± 0.570 to 4.775 ± 0.419). Partial relationships were found between professional experience (r = 0.25; p = 0.004), training (t = 6.143; p ≤ 0.0001) and the level of self-confidence (r = -0.205; p = 0.020), on the one hand, and the level of perceived importance, on the other.

Regarding self-confidence for emergency intervention, the nurses in this study demonstrated confidence, albeit modest. Likewise, they presented a tendency towards positive agreement regarding the importance of the IHET. Of note, the greater the nurses' experience, the greater the importance attributed to the IHET. Finally, it was verified that the higher a nurse's self-confidence index, the lower the feeling of insecurity in an emergency situation.

Hospital Rapid Response Team, Nursing, Emergency situation, Perception of importance, Self-confidence in emergency.

O73 Relationship between cognitive impairment and nutritional assessment on functional status in institutionalized Portuguese older adults

Catarina Caçador 1, Edite Teixeira-Lemos 2,3, Jorge Oliveira 2,3, Fernando Ramos 1, Manuel T Veríssimo 4,5, Maria C Castilho 6, 1 Faculty of Pharmacy, University of Coimbra, 3000-548 Coimbra, Portugal; 2 Agrarian School, Polytechnic Institute of Viseu, 3500-606 Viseu, Portugal; 3 Centre for the Study of Education, Technologies and Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal; 4 Hospitais da Universidade de Coimbra, Centro Hospitalar e Universitário de Coimbra, 3000-075 Coimbra, Portugal; 5 Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 6 Laboratory of Bromatology, Pharmacognosy and Analytical Science, Faculty of Pharmacy, University of Coimbra, 3000-548 Coimbra, Portugal. Correspondence: Catarina Caçador ([email protected])

The elderly are particularly vulnerable to nutritional deficits. Malnutrition in elderly patients is frequently underdiagnosed [1] and has a large number of negative consequences for health and quality of life [2]. Even in industrialized countries, undernutrition is becoming an alarming phenomenon, especially among institutionalized elderly subjects. Few studies have focused on the relationship between nutritional assessment and the severity of cognitive impairment, comorbidity and functional status in institutionalized older adults.

In the present study we evaluated the relationship between functional disability, cognitive impairment and nutritional status.

This was an observational study with data collected from residents living in institutions in the district of Viseu (centre of Portugal). Inclusion criteria were: subjects aged 65 or older, living in institutions, who voluntarily accepted to participate in the study. All of the 216 subjects studied underwent multidimensional geriatric assessment. A form was filled in with sociodemographic data; nutritional state was assessed with the Mini Nutritional Assessment (MNA), cognitive performance was evaluated with the Mini-Mental State Examination (MMSE), and functional state was assessed with the Barthel Index (BI). Statistical evaluations (p < 0.05) were based on chi-square tests between BI, MMSE, Body Mass Index (BMI) and MNA scores.
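As an illustration of the chi-square tests of independence used here, the sketch below applies scipy's implementation to a hypothetical dependence-by-nutritional-status contingency table; the counts are invented and are not the study data.

```python
# Minimal sketch: chi-square test of independence on a hypothetical
# dependence (BI) x nutritional status (MNA) contingency table.
from scipy.stats import chi2_contingency

#                 MNA: well-nourished, at risk of malnutrition
table = [[40, 12],   # BI: independent
         [95, 55],   # BI: slight dependence
         [ 5,  9]]   # BI: moderate dependence

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```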

Cognitive impairment on the MMSE was displayed by 39.4% of patients. Slight disability occurred in 69.4% of the residents, 24.1% were independent in activities of daily living and only 6.5% of the seniors had moderate dependence. Cognitive impairment increased proportionally (p ≤ 0.001) with increasing dependence. According to the MNA, 27.8% of the elderly were at risk of malnutrition and 71.3% showed no nutritional problems. Statistical analysis showed that dependence increased the risk of malnutrition.

A close relationship between malnutrition and functional dependence was found: the MNA and BI scores are positively associated. BI scores can help to determine who may be at risk of poor nutrition.

1. Gariballa SE. Nutritional support in elderly patients. J Nutr Health Aging. 2000; 4: 25-7.

2. Pérez-Llamas F. Risk of desnutrition in the Spanish population. Evaluation of the current situation and need for a nutritional intervention. Med Clin (Barc). 2012;139:163-4.

Elderly, Cognitive impairment, Functional status, Nutritional assessment, Malnutrition.

O74 The evaluation of nursing care provided by Integrated Continuing Care Teams

Carlos Vilela 1,2, Paulino Sousa 2, Filipe Pereira 2, 1 Institute of Health Sciences, Portuguese Catholic University, 1649-023 Lisboa, Portugal; 2 Nursing School of Porto, 4200-072 Porto, Portugal. Correspondence: Carlos Vilela ([email protected])

The evaluation of nursing care is imperative for the continuous development of quality improvement and for the cyclical redefinition of action plans that correspond to the real needs of the population. In the specific case of the Integrated Continuing Care Teams (ICCT), part of the National Network of Continued Integrated Care of Portugal, the use of a panel of health-gain indicators "sensitive to nursing care" is justified, given the nature of the services provided in these units, as it helps to measure the clinical results obtained.

1) To identify the main nursing care needs of ICCT clients; and 2) to identify health gains sensitive to nursing care related to those needs.

Based on the model for defining indicators "sensitive to nursing care", we developed a quantitative exploratory, descriptive and correlational study, analysing the nursing documentation available in the information systems in use, in a convenience sample of 217 cases attended in four ICCTs of the northern region of Portugal from October 2012 to May 2013.

From the analysis of 9,258 documented nursing diagnoses, it was possible to generate eight "types" of health-gain indicators: five related to the dependent person and three referring to the family caregiver (FC). These revealed where the care provided in these ICCTs is concentrated: gains in autonomy/independence in the universal self-care requirements and gains in FC knowledge were the most representative areas of the care provided. Next came the domains of gains in the evolution of nursing diagnoses of the dependent person ("others") and gains in the dependent person's knowledge. Approximately 10% of the computed results fell in areas focused on gains in FC performance (11.20%), gains in the performance of dependent persons (10.43%) and gains in the prevention of pressure ulcers (9.32%). At around 3.30% came the indicators related to gains in the evolution of nursing diagnoses centred on the caregiver role.

Reflecting on these results allows a more effective representation of nursing care in the evaluation of quality and, certainly, a better redefinition of the panel of indicators at the national level, with "sensitivity" to the nursing work carried out by ICCTs.

Healthcare Quality Indicators, Quality of Health Care, Home Care Services, Nursing Care, Nursing Care Management.

O75 Psychosocial impact of the powered wheelchair on the social participation of its users

Inês Domingues 1, João Pinheiro 1, Anabela Martins 2, Patrícia Francisco 2, 1 Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 2 Physiotherapy Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal. Correspondence: Inês Domingues ([email protected])

There is a growing prevalence of disability worldwide, which indicates an increasing number of persons who might benefit from assistive technologies. Several studies showed positive effects of the use of assistive technologies on activity and participation of adults with mobility impairments [1, 2], as well as on psychosocial factors [3, 4].

The purpose of this study is to assess the psychosocial impact of the powered wheelchair, evaluating its repercussions on the social participation of its users.

Design - Observational, descriptive, cross-sectional study. Setting - All data were collected from May to October 2017. Participants - 30 powered wheelchair users with a mean age of 40.63 years (60% male) and diverse medical conditions (SCI, TBI, CP, among others). Main outcome measures - Interviews were conducted by an independent researcher using the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST), the Psychosocial Impact of Assistive Devices Scale (PIADS) and the Activities and Participation Profile Related to Mobility (PAPM), in addition to demographic, clinical and wheelchair-related questions.

Participants were quite satisfied with both the assistive technologies and the related services, with the lowest QUEST scores belonging to those who had been using their wheelchairs for a longer period of time. PAPM scores revealed significant restrictions in participation (6.7% of participants with mild restrictions, 56.7% with moderate restrictions and 36.7% with severe restrictions), with a worse participation profile also among users who had had their wheelchairs for a longer period. The most satisfied users were the ones with better performance in terms of social participation. PIADS scores showed a positive impact of the powered wheelchairs in all subscales, with the following average scores: total 1.37, competence 1.39, adaptability 1.32 and self-esteem 1.38. The psychosocial impact, in terms of adaptability, was higher among users who transitioned from a manual wheelchair to a powered wheelchair compared to those who already had a powered wheelchair previously (1.85 vs 1.10; p = 0.02).

There was an overall positive psychosocial impact of powered wheelchairs and, therefore, an increase in the quality of life of the users. Adaptability to the device seems to be the factor contributing most to social participation.

1. Salminen AL, Brandt A, Samuelsson K, Toytari O, Malmivaara A. Mobility devices to promote activity and participation: a systematic review. J Rehabil Med. 2009;41(9):697-706.

2. Lofqvist C, Pettersson C, Iwarsson S, Brandt A. Mobility and mobility-related participation outcomes of powered wheelchair and scooter interventions after 4-months and 1-year use. Disabil Rehabil Assist Technol. 2012;7(3):211-8.

3. Martins A, Pinheiro J, Farias B, Jutai J. Psychosocial Impact of Assistive Technologies for Mobility and Their Implications for Active Ageing. Technologies. 2016;4(3):28.

4. Buning ME, Angelo JA, Schmeler MR. Occupational performance and the transition to powered mobility: a pilot study. Am J Occup Ther. 2001;55(3):339-44.

Assistive technologies, Powered wheelchair, Psychosocial impact, Social participation.

O76 Work and breastfeeding: mom’s double duty

Rita MF Leal 1, Amâncio AS Carvalho 2, Marília S Rua 1, 1 School of Health, University of Aveiro, 3810-193 Aveiro, Portugal; 2 School of Health, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal. Correspondence: Rita MF Leal ([email protected])

Work is known to be an obstacle to breastfeeding (BF) continuation [1–5].

To analyse if there is a relationship between the employment status of the mother, the age of the child when she returns to work, the level of difficulty in reconciling work with BF and BF duration.

An observational, descriptive-correlational and cross-sectional study was conducted. The population comprised mothers who had a biological child in 2012 or 2013 in the Centre region of Portugal. A non-probabilistic sample (n = 427) was collected using an online questionnaire distributed by snowball sampling from November 2015 to September 2016. Data were analysed using SPSS software.

Most women had difficulties in reconciling work with BF (72.5%). Of these, 26.6% said that reconciling work and BF was "very difficult" or "difficult", while 73.4% said they had "some difficulty" or "very little difficulty". Job status (employed versus unemployed) did not present a statistically significant relationship with BF duration. As to the age of the child when the mother returned to work, we verified a statistically significant relationship (p = 0.001) with BF duration. The level of difficulty the mother experienced in reconciling her job with BF also presented a statistically significant relationship (p = 0.002) with BF duration.

The majority of mothers reported difficulties in reconciling work with BF. Women who returned to work before their child was 6 months old had a shorter BF duration. This reinforces the need to explore families' expectations in a timely manner and to plan strategies to promote effective management of work and BF. In this context, we believe there is a need to establish policies in our culture and legal system that favour both BF and work. Health professionals should act as mediators in the definition of family-oriented health policies. In this case, the extension of parental leave for mothers up to 6 months postpartum, the existence of a place at work where breastfeeding mothers can express and store their breastmilk, or the existence of day-care centres in the workplace, in accordance with the Global Strategy for Infant and Young Child Feeding [6,7], may prolong BF duration and ease mothers' difficulty in managing BF when returning to work.

1. Lynch S. Breastfeeding and the workplace. Community Pract. 2016;89(6):29–31.

2. Smith JP, McIntyre E, Craig L, Javanparast S, Strazdins L, Mortensen K. Workplace support, breastfeeding and health. Fam Matters. 2013;93:58–73.

3. Sriraman NK, Kellams A. Breastfeeding: What are the Barriers? Why Women Struggle to Achieve Their Goals. J Womens Health. 2016;0:1–9.

4. UNICEF. From the First Hour of Life: Making the case for improved infant and young child feeding everywhere. Part I: Focus on Breastfeeding [Internet]. New York; 2016. Available from: http://www.unicef.pt/docs/pdf_publicacoes/FromTheFirstHourOfLife-Part1.pdf

5. Rivera-Pasquel M, Escobar-Zaragoza L, González de Cosío T. Breastfeeding and Maternal Employment: Results from Three National Nutritional Surveys in Mexico. Matern Child Health J. 2015;19(5):1162–72.

6. World Health Organization, United Nations Children's Fund. Global Strategy for Infant and Young Child Feeding. World Health Organ [Internet]. 2003 [cited 2016 Dec 21];1–30. Available from: http://www.paho.org/english/ad/fch/ca/GSIYCF_infantfeeding_eng.pdf

7. IBFAN Portugal Rede Internacional Pró-Alimentação Infantil. Relatório de Portugal da Iniciativa Mundial Sobre Tendências do Aleitamento Materno (WBTi): Situação da Estratégia Global para a Alimentação de Lactentes e Criança [Internet]. 2015 [cited 2016 Aug 29]. Available from: http://www.worldbreastfeedingtrends.org/GenerateReports/report/WBTi-Portugal-2015.pdf

Breastfeeding, Employment, Job, Parental leave.

O77 Health information shared in blogs by breast cancer survivors living in Portugal

Francisca MMC Pinto 1, Paulino AF Sousa 2,3, Maria RSP Esteves 4,5, 1 Universidade Católica Portuguesa, 4169-005 Porto, Portugal; 2 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal; 3 Centro de Investigação em Tecnologias e Serviços de Saúde, 4200-450 Porto, Portugal; 4 Cooperativa de Ensino Superior Politécnico e Universitário, 4560-462 Penafiel, Portugal; 5 Instituto de Investigação e Formação Avançada em Ciências e Tecnologias Saúde, 4560-462 Penafiel, Portugal. Correspondence: Francisca MMC Pinto ([email protected])

Breast cancer survivors (BCS), like other cancer patients, are users of digital social networks; many keep personal blogs where they share their life and disease experiences after being diagnosed with cancer [1,2]. Research on blogging activity among BCS is scarce and suggests that it is a multifaceted activity with several purposes: self-management of emotions, problem-solving, and sharing information [3]. But the question remains: what health information is shared by BCS in the blogosphere?

The present work reports the results of a study that explored the personal blogs of BCS living in Portugal, with a focus on the written health information that was posted and/or commented on.

A qualitative study design with thematic content analysis was used. Blogs were selected by a snowball strategy in 3 phases: I) phase 1: the first 20 Google search results for the Portuguese keywords "blogues cancro da mama"; II) phases 2 and 3: links to other blogs present on each blog selected in phases 1 and 2, respectively. Blogs included for analysis met all inclusion criteria: I) personal blog of a woman self-identified as having a breast cancer diagnosis; II) blogger profile allows confirmation of residence in Portugal; III) blog is in the public domain; IV) blog presents data related to the post-primary-treatment phase. Data collection took place between March and November 2017. The scope of analysis started with the first post after completion of primary treatment for breast cancer and ended with the blog's last post at the time we finished reading it.

Thirty-eight blogs were included for analysis. Results refer to health information shared by BCS in posts and comments between 2007 and 2017. Most of the information shared concerned uncertainties regarding: I) nutrition and physical exercise recommendations; II) management of long-term side effects of cancer treatment and comorbidities; III) management of recurrence risk and psychological wellbeing; IV) treatment plans and health surveillance during survivorship; V) management of body image changes; VI) benefits and risks of non-conventional therapies; VII) news about cancer research. Less frequently shared information concerned: I) return to work and social protection; II) general healthcare recommendations; III) community support for cancer patients; IV) genetics and heredity; V) sexuality; VI) infertility/fertility after breast cancer.

BCS use personal blogs to share difficulties and uncertainties and to search for informational support to manage their health condition. Having access to useful information and education may help BCS manage uncertainty in illness and improve health literacy and self-efficacy in managing their health condition. This study alerts health professionals to pay attention to BCS' informational and emotional needs.

1. Kim S, Chung D. Characteristics of cancer blog users. J Med Libr Assoc. 2007;95(4):445-50.

2. Damásio C, Nunes LM, Sobral JM. A Análise de Redes Sociais no estudo do processo da construção da ajuda mútua da pessoa com doença oncológica com blogue. REDES - Revista hispana para el análisis de redes sociales. 2014;25(1):153-89.

3. Koskan A, Klasko L, Davis SN, Gwede CK, Wells KJ, Kumar A, et al. Use and Taxonomy of Social Media in Cancer-Related Research: A Systematic Review. American Journal of Public Health. 2014;104(7):e20-e37.

Breast cancer survivors, Blogs, Health information needs, Content analysis.

O78 Visual images spectrum android classification for diabetic foot ulcers

Ricardo Vardasca 1, Rita Frade 2,3, Rui Carvalho 4, Joaquim Mendes 2,3, 1 Faculdade de Engenharia, Universidade do Porto, 4200-465 Porto, Portugal; 2 Porto Biomechanics Laboratory, Faculdade de Engenharia, Universidade do Porto, 4200-465 Porto, Portugal; 3 Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial - Laboratório Associado de Energia, Transportes e Aeronáutica, 4200-465 Porto, Portugal; 4 Clínica Multidisciplinar do Pé Diabético, Centro Hospitalar do Porto, 4099-001 Porto, Portugal. Correspondence: Ricardo Vardasca ([email protected])

According to the Portuguese Society of Diabetology, about 415 million people (8.8% of the worldwide population) were diagnosed with Diabetes Mellitus (DM) in 2015. In Portugal, the condition affects 1 million people (13.3% of the total population), with an estimated annual national cost of 1.7 billion euros. One in four DM patients develops a Diabetic Foot Ulcer (DFU) in their lifetime, some ending in amputation and, consequently, death [1]. Furthermore, the Directorate-General of Health estimated that in 2016 the prevalence of DFU in the Portuguese population was 11.5% [2]. Over the years, DFU assessment tools have been created, such as: I) scales, which depend on visual examination and are highly subjective; II) invasive methods that use manual procedures to depict the shape, area, depth and volume of wounds, which are time-consuming, susceptible to human error and can lead to wound contamination [3]; and III) non-invasive methods, such as optical techniques that provide three-dimensional information about the lesion, which are expensive, time-consuming and require user training [4]. Therefore, more objective measures are required.

This research study aims to create an objective and simple methodology based on a mobile application incorporating an algorithm that characterises DFUs, providing information about their area and tissue colour composition.

An Android mobile application was developed, tested and evaluated on 200 diabetic foot ulcers, after the procedure was explained to the patients and informed consent was signed. The study was approved by the ethics committee of Centro Hospitalar do Porto.

Results & Conclusions

The use of this new Android mobile app showed a high correlation with the traditional clinical assessment (r² = 0.97), reducing subjectivity, avoiding the risk of wound contamination and lowering costs compared with conventional solutions.
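The abstract does not disclose the app's algorithm, so purely as an illustration of the kind of area and tissue colour-composition analysis described, the Python/OpenCV sketch below segments a wound photograph into red (granulation), yellow (slough) and black (necrosis) tissue classes by HSV thresholding and reports each class's share of the segmented area; the thresholds and file name are hypothetical, not those used by the app.

```python
# Illustrative sketch only: HSV colour segmentation of a wound photo into
# red/yellow/black tissue classes; thresholds and file name are hypothetical.
import cv2

img = cv2.imread("wound.jpg")                      # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

masks = {
    # red hue wraps around 0/180 in OpenCV, so combine two ranges
    "red (granulation)": cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
                         | cv2.inRange(hsv, (170, 80, 80), (180, 255, 255)),
    "yellow (slough)":   cv2.inRange(hsv, (20, 80, 80), (35, 255, 255)),
    "black (necrosis)":  cv2.inRange(hsv, (0, 0, 0), (180, 255, 60)),
}

wound_px = sum(cv2.countNonZero(m) for m in masks.values())
for name, m in masks.items():
    frac = cv2.countNonZero(m) / wound_px if wound_px else 0.0
    print(f"{name}: {100 * frac:.1f}% of segmented wound area")
```

A real implementation would also need wound-boundary detection and a scale reference to convert pixel counts into physical area.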

The authors gratefully acknowledge the funding of project NORTE-01-0145-FEDER-000022 - SciTech - Science and Technology for Competitive and Sustainable Industries, co-financed by Programa Operacional Regional do Norte (NORTE2020), through Fundo Europeu de Desenvolvimento Regional (FEDER), and of project LAETA - UID/EMS/50022/2013.

1. Sociedade Portuguesa de Diabetologia. Relatório Anual do Observatório Nacional da Diabetes. Diabetes: Factos e Números; 2016.

2. Direcção Geral de Saúde. Relatório do Programa Nacional para a Diabetes; 2017.

3. Plassmann P. Measuring wounds. Journal of Wound Care. 1995;4(6):269-272.

4. Wang L, Pedersen PC, Strong DM, Tulu B, Agu E, Ignotz R, He Q. An Automatic Assessment System of Diabetic Foot Ulcers Based on Wound Area Determination, Color Segmentation, and Healing Score Evaluation. Journal of Diabetes Science and Technology. 2016;10(2):421-428.

Android, Classification, Diabetic foot ulcers, Imaging, Wound assessment.

O79 Sleep disorders in elderly: is it a problem?

Vítor Moreira 1, Ângela Mota 1, Bárbara Santos 1, Olívia R Pereira 1,2, Xavier Costa 1,3, 1 Departamento de Tecnologias de Diagnóstico e Terapêutica, Escola Superior de Saúde, Instituto Politécnico de Bragança, 5300-121 Bragança, Portugal; 2 Centro de Investigação de Montanha, Instituto Politécnico de Bragança, 5300-253 Bragança, Portugal; 3 Centro Hospitalar de Trás-os-Montes e Alto Douro, Unidade Hospitalar de Chaves, 5400-279 Chaves, Portugal. Correspondence: Olívia R Pereira ([email protected])

Sleep disorders are among the most relevant clinical complaints in adults, with prevalence increasing throughout life and reaching the elderly population on a large scale.

The present study aimed to characterize sleep disorders in the elderly and their pharmacological therapy.

A cross-sectional study was performed through the application of a questionnaire to 381 elderly people in pharmacies in the cities of Braga, Bragança and Porto. Descriptive statistics were used, as well as univariate and multivariate statistical analyses, with a significance level of 5%.

Most of the elders were female (60.1%), aged between 65 and 74 years (49.6%) and living in rural areas (73.4%). Only 36.5% of the elderly practised physical exercise, and a substantial proportion drank coffee and tea (68.8% and 73.2%, respectively). Concerning sleep characteristics, the elders went to bed between 6 p.m. and 2 a.m., with about half of the participants (52.8%) going to bed between 10 p.m. and midnight. More than one third (38.1%) had difficulty falling asleep, especially the elderly from the region of Bragança. During sleep, a large proportion of the elderly reported sleep interruptions (78.2%), usually lasting 15-30 minutes, and 26.5% reported waking up twice during the night. Sleep onset latency, an important factor, was statistically related to gender (p = 0.003) and to taking sleep medication (p < 0.001); the same two factors were statistically related to waking up during the night (p = 0.046 and p = 0.003, respectively). Of the surveyed elderly, 40.7% had been diagnosed with sleep disorders, mainly insomnia (19.7%), followed by restless legs syndrome (3.4%), excessive drowsiness (2.9%) and sleep apnoea, sleepwalking and narcolepsy (about 1% each). It is important to note that, among the elderly who reported suffering from sleep disorders, only 40.7% had consulted a physician. Of those who did, 21.3% were advised to change their lifestyle habits, such as avoiding heavy meals before bedtime, establishing a sleep routine, lying down only when sleepy and practising physical activity. Concerning pharmacological therapy, 41.7% took medication for sleep disorders: 9.0% without consulting a doctor and 32.5% after consulting a doctor. Among these, the most used drugs were benzodiazepines such as alprazolam (12.5%), diazepam (8.6%), lorazepam (4.5%) and brotizolam (3.7%).

Sleep disorders are frequent in the elderly population. It is necessary to raise awareness in this population group, which tends to attribute sleep problems to age.

Sleep Disorders, Elderly, Pharmacological Therapy, Sleep and aging.

O80 Physical and mental health in community-dwelling elderly: functional assessment and implications to multidisciplinary clinical practice

Rogério Rodrigues 1, Zaida Azeredo 2, Sandrina Crespo 1, Cristiana Ribeiro 1, Isabel Mendes 1, 1 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 2 Research in Education and Community Intervention, Piaget Institute, 1950-157 Lisbon, Portugal. Correspondence: Rogério Rodrigues ([email protected])

Disability in old age may pose barriers to the achievement of goals and to the ability to carry on roles that are important to a person. Knowledge about functional disability in the physical and mental domains of old people is crucial for the planning of interventions by health professionals.

To evaluate functional physical and mental abilities of community-dwelling elderly for planning health care and the implementation of services.

Quantitative descriptive-correlational study within the project "The oldest old: Coimbra's ageing study", PTDC/CS-SOC/114895/2009 [1]. The sample comprised 202 elderly people from a population of 808 (three age groups: 65-74, 75-84 and ≥85 years old), obtained by randomized probabilistic sampling from the files of users of a health centre, after ethics committee approval. The QAFMI (Portuguese version of the Older Americans Resources and Services questionnaire) was used as the instrument and method of data collection, to evaluate functional status in terms of physical and mental health. Data analysis: a) descriptive analyses of the most common pathologies, their limitations and medication consumption; b) functional evaluation using the score given by the computer software based on the QAFMI model.

As main results, the pathologies interfering most with physical activities were chronic bronchitis, skin disease, arthritis or rheumatism, effects of stroke and circulation troubles. Regarding medication consumption, the most cited pathologies (hypertension and cardiac problems) showed a high percentage of consumption, while others (arthritis or rheumatism) had lower prescription rates. No statistically significant difference between genders was found for physical health. There were differences between age groups, with lower scores for the oldest. Regarding mental functional abilities, there was a statistically significant difference across age groups, with increased impairment in the oldest. For the whole sample, gender differences existed, with women scoring worse.

Women and the oldest participants generally presented lower functional physical abilities. In the mental health domain, the QAFMI classification captures cognitive decline and perceived memory loss. As in other studies, gender differences were found, with worse scores for women.

1. Rodrigues R. Os muito idosos: estudo do envelhecimento em Coimbra – Perfis funcionais e intervenção [eBook]. Coimbra: Unidade de Investigação em Ciências da Saúde: Enfermagem, Escola Superior de Enfermagem de Coimbra; 2014.

Elderly, Community-dwelling, Functional assessment, Physical health, Mental health.

O81 Adherence to therapy in elderly

Ana Rodrigues, Clara Rocha, Jorge Balteiro, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal. Correspondence: Jorge Balteiro ([email protected])

In recent decades, the elderly population has grown significantly, leading to an increase in the number of chronic diseases and, consequently, to an increased need for polymedication for disease control. Polymedication means the use of multiple medications, which can cause adverse reactions and drug interactions whose likelihood increases with the number of medications administered. In elderly people with a high number of pathologies, associated or not with age, complex therapies are instituted, which may lead to non-adherence to therapy. This situation can compromise the aim of treatment, worsen the disease, add errors to diagnosis and the treatment itself, or even lead to therapy failure.

The objective of the present study is to assess adherence to therapy in elderly people institutionalized during the day and to investigate the main factors that influence it.

The study was conducted through the collection and processing of questionnaires consisting of 3 parts: demographic characterization (e.g., age, gender, marital status); therapeutic characterization (number of daily medications and treatment regimen); and evaluation of adherence to therapy using an adaptation of the scale of measurement of adherence to treatment (MAT). The study sample was made up of 51 elderly people institutionalized during the day.

It was observed that 98% of the seniors adhered to the instituted therapy: 37.3% showed an adherence level of 5, approximately 49% showed an adherence level of 6, and only 14.9% scored below these levels. Of the factors studied as potentially influencing adherence, only forgetfulness was found to be a conditioning factor associated with the recommended therapy (p = 0.047), affecting adherence levels.

The results obtained suggest that the high adherence levels may be associated with the fact that the elderly were institutionalized during the day, with support available. Another possible explanation is that they live with family and are thus also accompanied during the night.

Adherence to therapy, Elderly, Polymedication, MAT.

O82 Ionizing radiation effects in a bladder and an esophageal cancer cell lines

Neuza Oliveira 1, Mafalda Vitorino 1, Ricardo Santo 2, Paulo Teixeira 1,3,4, Salomé Pires 3,5, Ana M Abrantes 3,5, Ana C Gonçalves 5,6, Clara Rocha 7,8, Paulo C Simões 9, Fernando Mendes 1,3,5, Maria F Botelho 3,5, 1 Department Biomedical Laboratory Sciences, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Faculty of Sciences and Technology, University of Coimbra, 3030-790 Coimbra, Portugal; 3 Biophysics Institute-CNC.IBILI, Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal; 4 Serviço de Anatomia Patológica, Centro Hospitalar Universitário de Coimbra, 3000-075 Coimbra, Portugal; 5 Center of Investigation in Environment, Genetics and Oncobiology, Faculty of Medicine, University of Coimbra, 3001-301 Coimbra, Portugal; 6 Laboratory of Oncobiology and Hematology, Faculty of Medicine, University of Coimbra, 3001-301 Coimbra, Portugal; 7 Department Complementary Sciences, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 8 Institute for Systems Engineering and Computers at Coimbra, 3030-290 Coimbra, Portugal; 9 Radiation Oncology Department, Hospital and University Center of Coimbra, 3000-075 Coimbra, Portugal. Correspondence: Fernando Mendes ([email protected])

Oesophageal cancer (EC) and bladder cancer (BC) share the same embryonic origin (the endoderm) and, according to the latest figures, are the eighth and ninth most frequently diagnosed cancers worldwide, respectively. Radiotherapy (RT) is currently used in the treatment of both types of cancer [1-4].

To assess the effects of ionizing radiation (IR) on EC (OE19) and BC (HT1376) cell lines, namely viability and cell proliferation, the type of cell death and the cell cycle, as well as to establish the survival factor and fit a cell survival model after exposure to different radiation doses in order to calculate the half-lethal dose (DL50).

Cell lines were cultured and exposed to single-shot doses of X-rays from 0.5 Gy to 12.0 Gy, except control cells (0.0 Gy). Cell viability and proliferation were assessed by the trypan blue assay. The proliferation index was determined by immunocytochemistry through Ki-67 expression. Flow cytometry was used to assess cell death type and cell cycle, and the main morphological features of cell death were evaluated by May-Grünwald Giemsa staining. Clonogenic assays enabled the assessment of differences in reproductive viability (the capacity of cells to produce progeny) [5-11].

Our results showed that IR induces cytotoxic and antiproliferative effects in OE19 and HT1376 cells, in a dose-dependent and a dose- and time-dependent manner, respectively. The main types of cell death observed were apoptosis and necrosis. We also observed cell cycle arrest at the G2/M phase and a decrease in Ki-67 expression in both cell lines studied. Cell survival curves were fitted with the linear-quadratic model for both cell lines. The DL50 was 2.47 Gy for the OE19 cell line and 3.10 Gy for the HT1376 cell line, accompanied by a decrease in the survival factor for both lines.
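For clarity, the linear-quadratic model used to fit the survival curves, and the resulting half-lethal dose, can be written as follows (α and β denote the fitted model parameters; their fitted values for our cell lines are not restated here):

$$ S(D) = e^{-(\alpha D + \beta D^{2})}, \qquad S(D_{50}) = \tfrac{1}{2} \;\Longrightarrow\; \alpha D_{50} + \beta D_{50}^{2} = \ln 2 \;\Longrightarrow\; D_{50} = \frac{-\alpha + \sqrt{\alpha^{2} + 4\beta\ln 2}}{2\beta}. $$

Under this model, the reported DL50 values (2.47 Gy for OE19, 3.10 Gy for HT1376) are the doses at which the fitted surviving fraction falls to one half.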

Unrepairable direct effects on the DNA molecule activate multiple intracellular mechanisms of radiosensitivity, such as cell death, namely by apoptosis and necrosis, and cell cycle arrest at the G2/M phase. Increasing the radiation dose shifts cell death from apoptosis towards necrosis. According to our results, the OE19 cell line is more radiosensitive than the HT1376 cell line. This study demonstrates that the molecular mechanisms underlying RT are important for oesophageal adenocarcinoma and bladder cancer therapeutic approaches.

The authors would like to thank the Institute for Biomedical Imaging and Life Sciences, a research institution of the Faculty of Medicine, University of Coimbra, and the Radiotherapy Service of the Centro Hospitalar e Universitário de Coimbra.

1. Torre LA, Bray F, Siegel RL, Ferlay J, Lortet-Tieulent J, Jemal A. Global Cancer Statistics, 2012. CA Cancer J Clin. 2015;65(2):87–108.

2. Antoni S, Ferlay J, Soerjomataram I, Znaor A, Jemal A, Bray F. Bladder Cancer Incidence and Mortality: A Global Overview and Recent Trends. Eur Urol. 2016;1–13.

3. Nassim R, Mansure JJ, Chevalier S, Cury F, Kassouf W. Combining mTOR Inhibition with Radiation Improves Antitumor Activity in Bladder Cancer Cells In Vitro and In Vivo: A Novel Strategy for Treatment. PLoS One. 2013;8(6).

4. Rubenstein JH, Shaheen NJ. Epidemiology, diagnosis, and management of esophageal adenocarcinoma. Gastroenterology. 2015;149(2):302–317.e1.

5. Mendes F, Sales T, Domingues C, Schugk S, Abrantes AM, Gonçalves AC, et al. Effects of X-radiation on lung cancer cells: the interplay between oxidative stress and P53 levels. Med Oncol. 2015;32(12):266.

6. Mendes F, Domingues C, Schugk S, Abrantes AM, Gonçalves AC, Casalta-Lopes J, et al. Single Shot Irradiation and Molecular Effects on a Diffuse Large B Cell Lymphoma Cell Line. J Cancer Res Treat. 2016;4(1):9–16.

7. Santo RP. Resposta Celular à Radioterapia – Estudo in vitro em linhas celulares do carcinoma da próstata. Resposta Celular à Radiação Ionizante. Faculdade de Medicina da Universidade de Coimbra; 2017.

8. Tran WT, Iradji S, Sofroni E, Giles A, Eddy D, Czarnota GJ. Microbubble and ultrasound radioenhancement of bladder cancer. Br J Cancer. 2012;107(3):469–76.

9. Barbosa TV, Rosas MP, Costa AC, Rapoport A. Valor prognóstico do Ki-67 no carcinoma indiferenciado de grandes células de glândula salivar maior: estudo de 11 casos. Rev Bras Otorrinolaringol [Internet]. 2003;69(5):629–34. Available from: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0034-72992003000500007&lng=en&nrm=iso&tlng=pt

10. Ding Z, Yang H-W, Xia T-S, Wang B, Ding Q. Integrative genomic analyses of the RNA-binding protein, RNPC1, and its potential role in cancer prediction. Int J Mol Med. 2015;36(2):473–84.

11. Franken NAP, Rodermond HM, Stap J, Haveman J, van Bree C. Clonogenic assay of cells in vitro. Nat Protoc. 2006;1(5):2315-9.

Esophageal cancer, Urinary Bladder Neoplasms, Radiotherapy, Ki-67 antigen.

O83 Sociodemographic characteristics and breastfeeding duration

Sociodemographic characteristics have been related to breastfeeding (BF) duration, and research has shown interest in these factors, which may even be predictive [1].

To analyse the relationship between sociodemographic characteristics (maternal age, level of education, marital status, number of older children, child's gender, child's year of birth) and BF duration.

An observational, descriptive-correlational and cross-sectional study was conducted. The population comprised mothers who had a biological child in 2012 or 2013 in the Centre region of Portugal. A non-probabilistic sample (n = 427) was collected using an online questionnaire distributed by snowball sampling. Data were analysed using SPSS software.

Maternal age, marital status, child's gender and child's year of birth did not present a statistically significant relationship with BF duration. On the other hand, mothers' educational level (p = 0.001) and the number of older children (p = 0.018) presented statistically significant relationships with BF duration.

Our study did not establish a correlation between maternal age and BF duration, which contradicts other findings [1-3] and can be explained by the nationwide postponement of pregnancy and childbirth [4]. In relation to mothers' educational level, we verified a statistically significant relationship with BF duration, with mothers holding a higher level of education breastfeeding longer [1,5-7]. As for marital status, we did not verify a statistically significant relationship with BF duration, which is in line with a recent Portuguese study [5] but differs from international studies [2,3] reporting that married mothers breastfeed longer. This difference can be explained by cultural differences and by the paradigmatic change in women's role, the concept of marriage and the diversity of forms of family life (a great increase in unmarried couples). We also verified an increasing tendency in BF duration with an increasing number of older children [6,7], but BF duration was not lowest in primiparous women, which contradicts other findings [6,7]. This may be clarified, once again, by the sociodemographic changes of the last decade in Portugal, which include population ageing, an increasing tendency for single-child families [4] and greater investment in parenthood and child well-being. Health care professionals should consider that sociodemographic characteristics are in constant change, and so is their relation to health. These findings help health professionals to identify who may be most vulnerable to early weaning and allow them to explore expectations and develop a care plan accordingly.

1. Scott JA, Binns CW. Factors associated with the initiation and duration of breastfeeding: a review of the literature. Breastfeed Rev. 1999;7(1):5–16.

2. Callen J, Pinelli J. Incidence and duration of breastfeeding for term infants in Canada, United States, Europe, and Australia: a literature review. Birth. 2004;31(4):285–92.

3. Ogbuanu C, Glover S, Probst J, Hussey J, Liu J. Balancing Work and Family: Effect of Employment Characteristics on Breastfeeding. J Hum Lact. 2011;27(3):225–38.

4. Delgado A, Wall K. Famílias nos Censos 2011: Diversidade e Mudança [Internet]. 1a edição. Instituto Nacional de Estatística, editor. Lisboa: Imprensa de Ciências Sociais; 2014 [cited 2016 Dec 13]. 239 p. Available from: http://repositorio.ul.pt/bitstream/10451/23625/1/ICS_SAtalaia_VCunha_KWall_SMarinho_VRamos_Como_ASITEN.pdf

5. Dias A, Monteiro T, Oliveira D, Guedes A, Godinho C, Alexandrino AM. Aleitamento materno no primeiro ano de vida : prevalência, fatores protetores e de abandono. Acta Pediátrica Port. 2013;44(6):6–11.

6. Caldeira T, Moreira P, Pinto E. Aleitamento materno: estudo dos factores relacionados com o seu abandono. Rev Port Clin Geral. 2007;23:685–99.

7. Lopes B, Marques P. Prevalência do aleitamento materno no distrito de Viana do Castelo nos primeiros seis meses de vida. Rev Port Clínica Geral. 2004;20:539–44.

Age, Breastfeeding, Education, Population characteristics, Population dynamics.

O84 Population analysis of cleft lip and/or palate patients treated in the Postgraduate Orthodontic Department of the Faculty of Medicine of the University of Coimbra

Ana Roseiro, Inês Francisco, Luisa Malo, Francisco Vale, Universidade de Coimbra, 3004-531 Coimbra, Portugal. Correspondence: Ana Roseiro ([email protected])

Cleft lip and palate is one of the most common dentofacial congenital anomalies, affecting on average 1 in 700 newborns. Although the aetiology of this condition is not fully understood, it seems to be related to both genetic and environmental factors. This type of malformation may occur in isolation or in association with a syndrome. Compared with the general population, cleft lip and palate patients present a larger number of dental anomalies of tooth number, size and shape.

To analyse a number of anatomical and sociodemographic characteristics in a population of cleft lip and/or palate patients.

The study included 60 patients referred to the Postgraduate Orthodontic Department of the Faculty of Medicine of Coimbra by the Children’s Hospital during 2015. All patient data were obtained through a meticulous and thorough orthodontic examination (medical history, cast models, intra- and extraoral photographs, and radiographic exams).

Of the 60 patients included in the study, 65% were male. The most frequent age was 11 years (range 5-22 years). The most common anomaly was unilateral cleft lip and palate (63%). Maxillary endognathy was present in 75% of the cases. At least one dental agenesis was found in 74% of the patients, the upper lateral incisor being the most commonly affected tooth.

Cleft lip and palate is more frequent in male individuals and seems to be associated with conditions like maxillary endognathy and dental agenesis, with orthodontic treatment being required in these patients.

Cleft lip and palate, Dental agenesis, Maxillary endognathy.

O85 Drug use in pregnant women of Mirandela, Macedo de Cavaleiros and Bragança

Ana Branco 1, Ana Coutinho 1, Ana Machado 1, Bárbara Alves 1, Miguel Nascimento 1,2, Olívia R Pereira 1,3, 1 Departamento de Tecnologias de Diagnóstico e Terapêutica, Escola Superior de Saúde, Instituto Politécnico de Bragança, 5300-121 Bragança, Portugal; 2 Serviços Farmacêuticos, Unidade Local de Saúde do Nordeste, 5301-852 Bragança, Portugal; 3 Centro de Investigação de Montanha, Instituto Politécnico de Bragança, 5300-253 Bragança, Portugal.

The use of drugs during the gestational period is a subject of great concern, since exposure to medicines may result in toxicity with possible irreversible lesions to the foetus. In fact, drug use in pregnancy has been restricted since the thalidomide disaster. In 1979, the U.S. Food and Drug Administration (FDA) adopted a classification of drugs according to the risk associated with their use during pregnancy, grouping them into 5 classes (A, B, C, D and X).

The aim of the present study was to characterize the use of drug therapy in pregnant women of Mirandela, Macedo de Cavaleiros and Bragança regions.

A cross-sectional study was performed through application of a questionnaire to 134 pregnant women in the Northeast (Mirandela, Macedo de Cavaleiros and Bragança regions) during consultation in a health centre. Descriptive statistics were used, as well as univariate and multivariate statistical analysis, with a significance level of 5%.

The sample comprised a total of 134 pregnant women from the Northeast area, mostly aged between 21 and 30 years or between 31 and 40 years (56.7% and 35.8%, respectively), holding secondary or higher education (48.5% and 42.5%, respectively) and employed (67.2%). About half of the women (47.8%) were in the 3rd trimester of pregnancy. Of the pregnant women, 78.4% (n = 105) had used drugs during pregnancy, 64.4% after medical prescription, and 71.6% had acquired the medication at the pharmacy. In detail, the medication most used was folic acid (64.2%, 86 of the pregnant women), which belongs to class A; followed by paracetamol from class B (35.1%, n = 47), and iodine (17.2%, n = 23) and iron (14.9%, n = 20), both belonging to class A. Less reported drugs included metoclopramide (6.0%) and vitamin D3 (6.0%), from class C and class D, respectively. It is important to note that 12.7% of the women had a chronic disease and 2.2% had an acute disease during pregnancy. The most reported diseases were asthma and diabetes.

In the present study, the use of drugs in pregnancy was independent of education level, chronic or acute disease, locality, marital status, employment status, gestational period and health centre. The drugs most used by pregnant women belong to class A (18.5%), class B (25.9%) and class C (33.3%), and the least used belong to classes D and X. Supplements such as folic acid, iodine and iron, and the analgesic paracetamol, were the most reported.

Pregnancy, Drug therapy, Risk, Disease.

O86 Knowledge representation about self-management of medication regime in Portuguese Nursing Information Systems

Inês Cruz, Fernanda Bastos, Filipe Pereira, Nursing School of Porto, 4200-072 Porto, Portugal, Correspondence: Inês Cruz ([email protected]).

The use of a standardized language in nursing supports nursing science and contributes to the management of the discipline's own knowledge [1,2]. Nurses can control, practice, and teach only what they can name. The documentation of care in Nursing Information Systems in Portugal is based on the International Classification for Nursing Practice (ICNP®) [3].

The purpose of this study is to describe and specify nursing diagnoses centred on the clients’ knowledge for self-managing the medication regime in chronic diseases.

Exploratory study. All nursing documentation, concerning all health centres and public hospitals, customized in the Portuguese nursing information system SAPE® (2012) and in SClinico® (2016), was subject to content analysis. Content analysis of the nursing documentation was based on ICNP® terminology. After the content analysis was conducted, the material was validated by a group of 14 nursing experts.

A set of nursing diagnoses related to the person's knowledge of medication regime management was specified. Knowledge refers to the development of the client's informational content about how to manage his or her medication regime. These diagnoses focus on the potential to improve knowledge about: self-management of the medication regime; the medication regime itself; response to medication and side effects of medication; health services; complications of compromised self-management and their prevention; and the use of devices to facilitate drug intake.

The specified diagnoses reflect nursing care needs related to medication self-management that nurses document in the Portuguese nursing information systems. These results contribute to the formalization of nursing science’s knowledge in the field of self-care of people living with chronic diseases.

1. Peace J, Brennan P. Formalizing Nursing Knowledge: from theories and models to ontologies. Connecting Health and Humans. 2009; 347-351.

2. Pereira F, Silva A. Information technologies and nursing practice: the Portuguese case. In: Weaver C, Delaney P, Weber, et al., editors. Nursing and informatics for the 21st century: an international look at practice, education and EHR trends. AMIA; 2010. p. 435-441.

3. Cruz I, Bastos F, Pereira F, Silva A, Sousa P. Analysis of the nursing documentation in use in Portugal – building a clinical data model of nursing centered on the management of treatment regimen. Nursing Informatics. 2016;225:407-411.

Self-management, Knowledge, Nursing Diagnoses, Nursing Information Systems.

O87 Illness perceptions, beliefs about medicines and medication adherence in hypertension

Teresa Guimarães, André Coelho, Anabela Graça, Ana R Fonseca, Ana M Silva, Correspondence: Teresa Guimarães ([email protected]).

Hypertension constitutes the most prevalent modifiable cardiovascular risk factor and a major risk factor for cognitive decline and dementia. Antihypertensive medication is essential to minimize the consequences of the disease, stressing the need for a high adherence to treatment to achieve hypertension control. Illness perceptions and beliefs about medication have been identified as important determinants of treatment adherence.

To identify patients’ perceptions on hypertension and beliefs about antihypertensive medication and assess associations between these beliefs and medication adherence.

Sixty-three hypertensive patients, 69.8% female, aged 54-95 years (M = 69.02; SD = 10.07), 96.8% of whom had been diagnosed for more than one year and had antihypertensive medication prescribed, completed the Revised Illness Perception Questionnaire (IPQ-R), the Beliefs about Medicines Questionnaire (BMQ-Specific) and a medication adherence measure (Medida de Adesão aos Tratamentos – MAT).

Most of the patients perceived hypertension as a chronic (100%), cyclical (96.8%) condition, which can be controlled by medication (96.8%) and behaviour (90.5%); they presented strong beliefs in the necessity of medication (96.8%), but also strong concerns about the consequences of taking it (87.3%). Patients reported a high level of adherence to medication (M = 5.41; SD = 0.55 on the MAT, 7 being the highest possible score) and a low frequency of non-adherent behaviours. Significant positive correlations were found between the necessity scale (BMQ) and hypertension timeline (acute/chronic) (rs(63) = 0.34; p < 0.01), treatment control (rs(63) = 0.43; p < 0.01) and emotional representations (rs(63) = 0.50; p < 0.01), and between the concerns scale (BMQ) and hypertension consequences (rs(63) = 0.26; p < 0.05), timeline (cyclical) (rs(63) = 0.46; p < 0.01) and emotional representations (rs(63) = 0.32; p < 0.05). Significant negative correlations were found between the concerns scale and personal (rs(63) = -0.35; p < 0.01) and treatment (rs(63) = -0.28; p < 0.05) control. We also found significant negative correlations between adherence (MAT) and hypertension timeline (cyclical) (rs(63) = -0.27; p < 0.05), consequences (rs(63) = -0.50; p < 0.01) and emotional representations (rs(62) = -0.37; p < 0.01).
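
As a minimal illustration of how such nonparametric correlations can be computed (the scores below are synthetic stand-ins, not the study's data; variable names are assumptions for illustration), Spearman's rank correlation is available in scipy:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 63  # sample size reported above

# Synthetic stand-ins for a BMQ necessity score and an IPQ-R
# treatment-control score; illustrative data only.
necessity = rng.integers(5, 26, size=n)
treatment_control = rng.integers(5, 26, size=n)

rho, p = spearmanr(necessity, treatment_control)
print(f"rs({n}) = {rho:.2f}, p = {p:.3f}")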

Our findings suggest that illness perceptions play a key role in the way patients cope with their illness, both through the development of patients’ beliefs concerning the necessity of medication and concerns about taking it, and by directly influencing adherence to treatment. In our study, non-adherence was essentially unintentional (patients forget or are careless about treatment), which explains the lack of association between adherence and beliefs about medication, although we found that, for the majority of subjects, concerns about taking medicines were outweighed by a belief in the necessity of the prescribed medication.

Hypertension, Illness perceptions, Beliefs about medicines, Medication adherence.

O88 Self-determined motivation and life satisfaction of the elderly in supervised physical activity practice

Marco Batista 1, João Martinho 1, Jorge Santos 1, Helena Mesquita 1, Pedro Duarte-Mendes 1, Rui Paulo 2, 1 Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal; 2 Sport, Health and Exercise Research Unit, Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal, Correspondence: Marco Batista ([email protected]).

Self-determination theory suggests that humans have several basic psychological needs that are innate, universal and essential to health and well-being, namely autonomy, competence and relatedness. The wellness construct, measured by satisfaction with life, is understood as a judgment process in which individuals estimate the overall quality of their lives based on their own criteria.

The main objective of this study was to identify the motivations, basic psychological needs and satisfaction with life of Portuguese elderly people practising supervised physical activity, and to analyse relations and comparisons across levels of practice, sex and institutional context.

A cross-sectional study was carried out with 62 elderly volunteers of both sexes (15 males and 47 females), institutionalized and non-institutionalized, belonging to the Municipality of Castelo Branco, with a mean age of 79.61 ± 9.34 years. The instruments used were the Behavioural Regulation in Exercise Questionnaire, the Basic Psychological Needs in Exercise Scale and the Satisfaction with Life Scale.

The results show that the motivation sustaining the elderly’s continued practice of supervised physical activity is centred on autonomous motivation. Except for amotivation, where women showed higher levels than men (that is, they may be more exposed to an absence of motivational orientation), there were no differences between males and females in the remaining motivational variables, basic psychological needs or life satisfaction. The results also showed that, in the supervised elderly, the satisfaction of basic psychological needs leads to autonomously motivated behaviours, promoting high levels of satisfaction with life.

We can conclude that autonomous motivation and the perceived satisfaction of basic psychological needs emerge as factors of great importance, because they appear to be a catalyst for this population to remain active and, in a way, to “commit” to this lifestyle.

Self-determination theory, Satisfaction with life, Exercise, Well-being, Elderly.

O89 Application of the transcontextual model of motivation in the prediction of healthy lifestyles of active adults

Marco Batista 1, Marta Leytón 2, Susana Lobato 3, Maria Aspano 3, Ruth Jimenez-Castuera 3, 1 Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal; 2 Universidad Pablo de Olavide, 41013 Sevilla, Spain; 3 Universidad de Extremadura, 06071 Badajoz, Spain.

The Transcontextual Model offers an original contribution to knowledge about human behaviour, integrating self-determination theory with the hierarchical model of motivation and the theory of planned behaviour in order to predict behaviour. The strength of this model lies in the integration of these different motivational theories, such that each complements and helps predict the motivational processes that the others cannot explain on their own.

The present study was designed with the objective of testing an extension of the Transcontextual Model of motivation in predicting healthy lifestyles of active adults.

The study sample consisted of 560 Portuguese active adults of both genders, aged between 30 and 64 years (M = 44.86; SD = 7.14). The instruments used were the Behavioural Regulation in Exercise Questionnaire, the Basic Psychological Needs in Exercise Scale, the Planned Behaviour Questionnaire and the Healthy Lifestyle Questionnaire. A structural equation model was elaborated with which the predictive relations between the analysed variables were examined. The indices obtained in the measurement model were: χ2 = 527.193, p < .001; χ2/df = 3.46; CFI = .94; IFI = .94; TLI = .93; GFI = .93; RMSEA = .60; SRMR = .43. The model goodness-of-fit test showed the following adjustment indices: χ2 = 728.052, p < .001; χ2/df = 4.79; CFI = .91; IFI = .91; TLI = .90; GFI = .91; RMSEA = .075; SRMR = .070.
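
For readers unfamiliar with these indices, the relative chi-square and RMSEA follow standard formulas; the sketch below shows the common computation in Python. The df value is an illustrative assumption, back-calculated from the reported χ2/df ratio rather than taken from the abstract, and software packages use slightly different RMSEA variants, so values may not exactly reproduce those reported.

import math

def fit_indices(chi2: float, df: int, n: int) -> dict:
    # Relative chi-square and RMSEA under the common formula
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return {"chi2/df": round(chi2 / df, 3), "RMSEA": round(rmsea, 3)}

# Structural model values reported above; df = 152 is back-calculated
# from chi2/df = 4.79 and is only an assumption for illustration.
print(fit_indices(chi2=728.052, df=152, n=560))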

The structural equation model showed that the perception of social relatedness positively and significantly predicts autonomous motivation. In turn, autonomous motivation positively and significantly predicts perceived control, which in turn positively and significantly predicts intentions. Eating habits and rest habits were positively and significantly predicted by intentions, while tobacco consumption was negatively and significantly predicted by intentions.

The conclusions of this study emphasize the importance of fostering social relations, since this favours autonomous motivation, promoting greater behavioural control over practitioners’ intentions and thus generating healthier eating habits, better rest habits and lower tobacco consumption.

Theory of planned behavior, Self-determination theory, Structural equations models, Exercise, Lifestyles.

O90 Symptoms management and adherence to antiretroviral medication - a nursing intervention

Eunice Henriques, Maria FM Gaspar, Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal, Correspondence: Eunice Henriques ([email protected]).

The increase in new diagnoses of HIV infection in young males, especially those who have sex with men, and the high percentage of late diagnoses, particularly in middle-aged heterosexuals, continue to be major concerns. As AIDS is now considered a chronic disease, its effective and sustainable treatment relies on self-management of symptoms as well as on promoting adherence to therapy. This will reduce costs and promote well-being in the person's life.

To develop a nursing intervention program to enhance effectiveness in managing symptoms and, consequently, adherence to antiretroviral therapy in the person with HIV/AIDS. Specific objectives: to validate and adapt the following instruments: the Revised Sign and Symptom Checklist for Persons with HIV Disease; the Self-Care Symptom Management for People Living with HIV/AIDS; the AACTG (Adult AIDS Clinical Trials Group) instruments; and the HIV/AIDS Stigma Instrument - PLWA (HASI-P)©; I) to assess the frequency and intensity of the most common signs and symptoms associated with HIV infection among participants, as well as the strategies used; II) to evaluate adherence to antiretroviral therapy (ART); and III) to evaluate self-perception of stigma.

This is a quasi-experimental study with pre- and post-intervention evaluation. Participants were selected at the HCC Day Hospital in a multi-stage process; they had to be HIV-infected, more than 18 years old, and on ART for at least 6 months. We carried out the sociodemographic characterization and the validation of the instruments. The intervention consisted in the application of a strategy manual for self-management of the most frequent symptoms.

The first study sample consisted of 374 individuals, 74.1% of whom were male. Ages ranged from 20 to 78 years, with a mean of 47.34 years. Of the 64 listed symptoms, the number reported ranged from 0 to 53 per participant, with a mean of 18.96 symptoms per person. The most frequent symptoms were anxiety, fatigue, fear and worries, and depression. Most participants do not use strategies to manage these symptoms, or the strategies used are not effective. Of the total number of respondents, 30% never stopped taking the medication, while the same proportion failed to take the therapy in the last 3 months, the main reason being simple forgetfulness.

Most participants do not adequately manage their symptoms due to lack of knowledge of the appropriate strategies, indicating a devaluation based on the belief of inevitability (e.g., pain). Most say they adhere to ART (70%), taking more than 95% of the doses, which is not always consistent with viral load.

Nursing intervention, Symptoms management, Adherence to antiretroviral medication.

O91 Aortic valve prosthesis: cardiovascular complications due to surgical implantation

Virginia Fonseca 1, Ana Rita Bento 1, Joana Esteves 1, Adelaide Almeida 1,2, João Lobato 1, 1 Escola Superior de Tecnologia da Saúde de Lisboa, 1990-094 Lisboa, Portugal; 2 Hospital da Luz, 1500-650 Lisboa, Portugal, Correspondence: Virginia Fonseca ([email protected]).

Aortic stenosis represents the third most common cause of cardiovascular disease, with indication for surgical valve replacement with a biological or mechanical prosthesis in most symptomatic patients. Biological prostheses present a higher risk of reoperation due to valvular degeneration; however, they do not require prolonged anticoagulant therapy. Despite their long durability, mechanical prostheses require chronic anticoagulant therapy. Surgical valve replacement is not free of complications, which can be grouped into three main categories: prosthesis complications, non-prosthesis-related cardiac complications and non-cardiac complications.

To characterize the cardiovascular complications resulting from the surgical implantation of biological or mechanical aortic valve prostheses.

Thirty-two patients were evaluated at four follow-up moments. Cardiovascular complications resulting from the implantation of a valve prosthesis (biological or mechanical) were analysed, as well as the pre- and post-surgical mean gradient values, symptoms and cardiovascular risk factors. All the variables under study were characterized by descriptive statistics, except the mean gradient variable, for which the Friedman test was used.
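
A hedged sketch of this repeated-measures comparison is shown below, assuming five related measurements per patient (the pre-surgical gradient plus the four follow-up moments, which matches the df = 4 reported in the results); the numbers are synthetic, not the study's data:

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
# Synthetic mean-gradient readings (mmHg) for 32 patients at five time
# points (pre-surgery plus four follow-ups); illustrative data only.
gradients = [rng.normal(loc, 3.0, size=32) for loc in (40, 14, 12, 11, 11)]

stat, p = friedmanchisquare(*gradients)  # k = 5 related samples, df = k - 1 = 4
print(f"chi2_F(4) = {stat:.3f}, p = {p:.3f}")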

The most frequent complications detected in individuals who received a biological aortic prosthesis at the first (18.75%), second (21.88%), third (21.88%) and fourth (9.38%) follow-up moments were arrhythmias and electrical conduction disturbances. There was a higher prevalence of atrioventricular block and left bundle branch block at the first and second follow-up moments, which reverted in the last two. The complications with the highest prevalence among individuals with a mechanical prosthesis at the first (9.38%) and second (3.13%) follow-up moments were arrhythmias. At the third and fourth follow-up moments, the main complication was paravalvular leaks (9.38%). Statistically significant differences were detected in the mean gradient throughout the follow-up (χ²F(4) = 12.122, p = 0.016).

The most frequent complications in individuals with an aortic prosthesis were arrhythmias and paravalvular leaks. Structural deterioration of the biological valve prosthesis is the most commonly described complication, which may result in insufficiency of the valvular prosthesis and paravalvular leaks. Electrical conduction disturbances after aortic valve replacement may also occur through surgical manipulation of tissue near the conduction system. These disorders may be transient; however, certain patients require pacemaker implantation in the post-surgical period.

Prosthetic valves, Aortic prosthesis complications, Biological prosthesis, Mechanical prosthesis.

O92 A safe staff nursing model: relationship between structure, process and result variables

Maria J Freitas 1, Pedro Parreira 2, 1 Escola Superior de Enfermagem São Francisco das Misericórdias, 1169-023 Lisboa, Portugal; 2 Escola Superior de Enfermagem de Coimbra, 3046-851 Coimbra, Portugal, Correspondence: Maria J Freitas ([email protected]).

The need to match nursing resources to the real needs of patients, while maintaining a balance between quantity and skills without neglecting quality and safety, has been a concern for managers. The absence of a consensual methodology to support the operationalization of safe nurse staffing was the starting point of this investigation.

To develop an explanatory Safe Staff Nursing (SSN) model and analyse the relationships between Structure, Process and Results variables.

Cross-sectional and correlational study. Data were collected through questionnaires applied to three samples: nurses (629), chief nurses (43) and patients (1,290), from 43 units of 8 Portuguese hospitals. A patient form was applied to assess satisfaction with nursing care. The data collection instrument for nurses and chief nurses consisted of three parts: the first characterizing personal and professional variables; the second characterizing the healthcare organization and the service in which the respondent works; and the third collecting information on overall satisfaction at work (Evaluation Scale of Overall Satisfaction at Work [1]), intention to leave the job (Intent of Abandonment of Employment Scale [2]), quality of nursing care and risk/occurrence of adverse events (Adverse Events Associated with Nursing Practices Scale [3]). The psychometric assessment of the measuring instruments, performed through exploratory and confirmatory factor analysis, demonstrated adequate validity and reliability. For model validation, we used structural equation modelling.

The relational structure of the model is statistically significant (χ2(421) = 2209.095; p < 0.001; χ2/df = 5.247; GFI = 0.833; CFI = 0.815; RMSEA = 0.082), being adequate to explain the impact of Structure variables on the Results of SSN, and of Process variables on Results. The Structure variables “Availability of nurses with the right mix of skills”, “Availability of nurses in adequate amount” and “Safe environment” explain 2% of the variance of “Provision of quality nursing care” (Process variable), 15% of the variance of “Patient satisfaction”, 94% of the variance of “Risk and occurrence of adverse events on patients” (Results-Patients), 25% of the variance of “Results-Nurses” and 100% of the variance of “Results-Organization”.

The Safe Staff Nursing Model clearly identifies the influence of SSN on the results obtained for patients, nurses and organizations. It warns of the need to give more attention to SSN issues, in particular to the constitution of balanced teams based on a mix that considers the number and competencies of nurses against workload and patients’ nursing care needs, as a strategy for maximizing resources and promoting the sustainability of organizations.

1. Silva CF, Azevedo MH, Dias MR. Estudo Padronizado do Trabalho por Turnos: Versão Experimental. Bateria de escalas. Serviço de Psicologia Médica, Faculdade de Medicina da Universidade de Coimbra. Coimbra; 1994.

2. Meyer JP, Allen NJ, Smith CA. Commitment to organizations and occupations: Extension and test of a three-component conceptualization. J Appl Psychol 1993;78(4):538-51.

3. Castilho A, Parreira PM. Design and assessment of the psychometric properties of an adverse event perception scale regarding nursing practices. Revista Investigação em Enfermagem. 2012;1(2):61–75.

Health care allocation resources, Nursing human resources, Safe nursing supplies, Safe Staff Nursing.

O93 Engineered healthy food with nutritional and therapeutic advantages

Geoffrey Mitchell 1, Artur Mateus 1, Maria Gil 2, Susana Mendes 2, 1 Centre for Rapid and Sustainable Product Development, Polytechnic Institute of Leiria, 2430-028 Leiria, Portugal; 2 Marine and Environmental Sciences Centre, Polytechnic Institute of Leiria, 2520-641 Peniche, Portugal, Correspondence: Geoffrey Mitchell ([email protected]).

Direct Digital Manufacturing is an emerging set of technologies able to produce complex objects without the need for moulds or specific tooling. The preparation of food for human consumption is a centuries-old craft, which often results in food with high sugar and salt levels, leading to poor health for many people and failing to contribute to the recovery of seriously ill hospitalized patients. One of the key challenges is to tailor food to the individual, matching their dietary requirements and metabolic characteristics. We believe that direct digital manufacturing is able to address these issues. To date, the use of 3D printing for food has been restricted to the production of aesthetic shapes for chocolate and pasta products. We see the potential differently: food can be tailored to the individual, incorporating drug-based therapies where required, whilst remaining visually pleasing. In addition, it will be possible to guarantee all the organoleptic characteristics with which the consumer is familiar (such as smell, sound and texture). The mouth is the gateway for food, and its acceptance requires specific taste triggers. We consider that, by exploiting Direct Digital Manufacturing, it will be possible to optimize such taste triggers whilst retaining the nutritional balance and potential. Some consumers have challenges in chewing and swallowing, and engineered food has the capability to address this, especially when coupled with a model of the processes of the mouth and the oesophagus, parameterized to the individual. Once the food reaches the digestive system, all of the foregoing topics are largely irrelevant, other than to consider where the nutritional or therapeutic agents are extracted. The oral intake of insulin is the best-known challenge, but the delivery of chemotherapeutics is another area of difficulty. It is necessary to shield the toxic therapeutic from the taste sensors and the digestive system so that it reaches the critical area of the digestive system intact. We are developing a comprehensive food design and manufacturing system that will allow each of these challenges to be met. We expect the first use of such a system will be to expedite the recovery of seriously ill patients in hospitals. As enhanced testing procedures become more widely available through technological developments, wider use in the home is expected.

Engineered Food, Nutritional, Therapeutic.

O94 The influence of an eight-month multicomponent training program on elderly gait and bone mineral mass

António M Monteiro 1,2, Filipe Rodrigues 2,3, Pedro Forte 2,3, Joana Carvalho 4, 1 Department of Sports Sciences, Polytechnic Institute of Bragança, 5300-253 Bragança, Portugal; 2 Research Center in Sports Sciences, Health and Human Development, University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal; 3 Department of Sports Sciences, University of Beira Interior, 6201-001 Covilhã, Portugal; 4 Research Centre in Physical Activity, Health and Leisure, Faculty of Sport, University of Porto, 4200-450 Porto, Portugal, Correspondence: Filipe Rodrigues ([email protected]).

Aging induces neuromuscular changes: muscle mass, strength, muscular resistance and power, motor coordination, and reaction and movement speed may all be reduced [1]. These changes result in slower movements and functional limitations in gait and weight-transfer activities [1]. Moreover, the decrease in functional fitness due to aging increases the risk of falls and bone fractures [2], reducing the elderly’s quality of life.

Thus, the aim of this study was to assess the influence of an eight-month multicomponent training program on elderly gait and bone mineral mass (BMM).

Forty-nine elderly people were recruited for this research, with a mean age of 64.39 (± 6.33) years: 11 males aged 67.45 (± 4.93) years and 38 females aged 63.50 (± 7.47) years. The subjects were community-dwelling persons of Bragança. All procedures carried out in this research were in accordance with the Declaration of Helsinki. The multicomponent training program followed the recommendations of Carvalho et al. [1]. Each session lasted between 50 and 60 minutes. The sessions were divided into five parts: 1) general warm-up; 2) walking with aerobic exercises; 3) 1 to 3 sets of muscular resistance exercises with 12 to 15 repetitions; 4) static and dynamic balance training; 5) an active recovery period with stretching and breathing exercises. Gait was evaluated with the Berg Balance Scale (BBS) and BMM with bioimpedance (Tanita BC-545). The Wilcoxon-Mann-Whitney test was used to assess the differences in BBS and BMM between the start and the end of the 8-month training program. The tests were performed with a significance level of 5%.
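
A minimal sketch of the pre/post comparison, assuming paired scores per participant and using scipy's Wilcoxon signed-rank test (one common reading of the test named above; the numbers are synthetic, not the study's data):

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n = 49
# Synthetic BBS scores before and after the 8-month program.
bbs_pre = rng.normal(47.3, 4.0, size=n)
bbs_post = bbs_pre + rng.normal(3.0, 2.0, size=n)

stat, p = wilcoxon(bbs_pre, bbs_post)  # paired, nonparametric
print(f"W = {stat:.1f}, p = {p:.4f}")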

The BBS values before and after the multicomponent training program were 47.33 and 50.33, respectively. For BMM, the pre and post values were 2.36 kg and 2.39 kg. Despite the difference in BMM means, it was not significant between the two moments (F = 1.253; p = 0.706). The same did not occur for the BBS values (F = 1.967; p < 0.001), where gait scores increased significantly at the second moment.

Although the multicomponent training program did not increase BMM in the elderly subjects, gait scores increased significantly. Thus, it is possible to conclude that the training program significantly improved the elderly participants’ gait and quality of life.

1. Carvalho J, Marques E, Soares JM, Mota J. Isokinetic strength benefits after 24 weeks of multicomponent exercise training and a combined exercise training in older adults. Aging Clin Exp Res. 2010;22(1):63-69.

2. Miyamoto ST, Lombardi Júnior I, Berg KO, Ramos LR, Natour J. Brazilian version of the Berg balance scale. Brazilian Journal of Medical and Biological Research. 2004;37(9):1411-1421.

Elderly, Bone, Gait, Multicomponent, Training.

O95 Perception of virginity among Portuguese and Cape Verdean university students – a cross-border study

Sónia Ramalho 1,2, Carolina Henriques 1,2, Elisa Caceiro 1, Maria L Santos 1, 1 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal, Correspondence: Sónia Ramalho ([email protected]).

Virginity can be defined as the attribute of a person who has never engaged in any type of sexual intercourse. Being aware of the sexual behaviour and virginity of young people is fundamental for nurses to construct health education intervention programs in this specific area.

To know the perception of Portuguese and Cape Verdean university students about virginity.

A descriptive, cross-sectional study using a questionnaire consisting of sociodemographic data and the perception scale on the loss of virginity by Gouveia, Leal, Maroco and Cardoso (2010) [1]. A sample composed of 108 young people from the Republic of Cape Verde and 141 young Portuguese participated in the study. All formal and ethical procedures were taken into account.

Young Portuguese university students presented a mean age of 20 years, and 73% reported having started their sexual life at 17.00 years old, on average. The majority (66.7%) started their sexual activity with a boyfriend or girlfriend, using protection/contraception (70.9%). Young university students from Cape Verde had a mean age of 21.26 years; 69.4% reported having started their sexual life, on average, at 17.37 years. The majority (63.0%) started their sexual activity with a boyfriend or girlfriend, using protection/contraception (62.0%). Portuguese young people showed high levels of agreement with the ideal associated with the genital vision of the loss of virginity (Md = 18.95; Xmax = 25.00; Xmin = 11.00), while Cape Verdean students had lower levels of agreement (Md = 12.34; Xmax = 24.00; Xmin = 5.00), with 41.7% disagreeing that “a lesbian woman, who has never had sex with a man, is a virgin” and 38.0% disagreeing with the statement that “men who only practice oral sex, or anal sex or other forms of sex, do not lose their virginity”.

The study shows that there is still considerable lack of knowledge among young people about the conceptualization of virginity, and a very genitalized view of it among the Portuguese young people, with lower agreement with this view among the young Cape Verdeans.

1. Gouveia P, Leal I, Maroco J, Cardoso J. Escala de percepção sobre a perda da virgindade. In: Leal I, Maroco J, editors. Avaliação em sexualidade e parentalidade. Porto: LivPsic; 2010. p. 73-82.

Young, Sexuality, Virginity, Portugal, Cape Verde.

O96 Influence of a specific exercise program on the balance of institutionalized elderly people

Cátia Guimarães 1, Margarida Ferreira 1, Paula C Santos 3, Mariana Saavedra 2, 1 Institute of Research and Advanced Training in Health, Sciences and Technologies, Cooperativa de Ensino Superior Politécnico e Universitário, 4585-116 Gandra, Portugal; 2 Hospital da Senhora da Oliveira, 4835-044 Guimarães, Portugal; 3 Department of Physical Therapy, School of Health, Polytechnic Institute of Porto, 4400-330 Vila Nova de Gaia, Portugal, Correspondence: Margarida Ferreira ([email protected]).

To determine the effectiveness of a specific exercise program on balance and functional capacity for the daily activities of institutionalized elderly people.

A randomized controlled trial. A total of 21 elderly people were selected from the Santa Casa da Misericórdia de Santo Tirso and randomly distributed into experimental (n = 11) and control (n = 10) groups. The experimental group performed a specific program of exercises (resistance training, balance, coordination and flexibility) during 4 weeks, while the control group was not subjected to any intervention. The primary outcome was balance, as measured with the Performance Oriented Mobility Assessment scale (POMA); the secondary outcome was functional capacity, measured by the Timed Up & Go (TUG) test. Evaluations were carried out at the beginning and end of the exercise program for both groups. The data were analysed with the Statistical Package for the Social Sciences, version 22.0; for all test procedures, a probability of p < 0.05 was considered statistically significant. Statistical analyses of POMA and TUG were performed using independent and paired t-tests. The association between POMA and TUG scores after the intervention was analysed via the Pearson correlation.
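
A minimal sketch of this analysis plan with scipy (the scores and group sizes below are synthetic, illustrative assumptions, not the trial's data):

import numpy as np
from scipy.stats import ttest_ind, ttest_rel, pearsonr

rng = np.random.default_rng(3)
# Synthetic POMA balance scores and TUG times (seconds).
poma_pre = rng.normal(20, 3, size=11)
poma_post = rng.normal(23, 3, size=11)
control_post = rng.normal(20, 3, size=10)
tug_post = rng.normal(18, 4, size=11)

print(ttest_rel(poma_pre, poma_post))      # intragroup pre/post comparison
print(ttest_ind(poma_post, control_post))  # between-group comparison
print(pearsonr(poma_post, tug_post))       # POMA-TUG association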

In the pre-intervention assessment, the groups were homogeneous (p > 0.05). After the intervention, there were no statistically significant differences between groups in the total balance score or the dynamic balance subscale, except for the static balance subscale (p = 0.048). In the functional capacity test, the experimental group significantly reduced the functional activity time within the group (p < 0.001), but there were no significant differences between groups (p = 0.633). After the intervention, the experimental group showed a significant, strong negative association between POMA and TUG scores (p = 0.001).

The results of this study demonstrated that this specific exercise program was not effective in terms of the total balance and functional ability of institutionalized elderly.

NCT03521752

Balance, Institutionalized elderly people, Therapeutic exercise, Functional capacity.

O97 Assessment of pain and effectiveness of analgesia in patients undergoing haemodialysis

Luís Sousa 1,2, Cristina Marques-Vieira 3, Sandy Severino 2,4, Cristiana Firmino 2, Ana V Antunes 2, Helena José 5, 1 Hospital Curry Cabral, Centro Hospitalar Lisboa Central, 1069-166 Lisboa, Portugal; 2 Escola Superior de Saúde Atlântica, 2730-036 Barcarena, Portugal; 3 Escola de Enfermagem de Lisboa, Instituto de Ciências da Saúde, Universidade Católica Portuguesa, 1649-023 Lisboa, Portugal; 4 Agrupamento de Centros de Saúde Loures-Odivelas, Administração Regional de Saúde de Lisboa e Vale do Tejo, 2685-101 Sacavém, Portugal; 5 Instituto Superior de Saúde Multiperfil, Clínica Multiperfil, Luanda, Angola, Correspondence: Luís Sousa ([email protected]).

Pain is the most common symptom in patients undergoing haemodialysis, due to comorbidity, although it is frequently underdiagnosed [1,2]. Pain in these patients is not assessed in its entirety, and the limitations it imposes on their quality of life are not considered [3]. The Brief Pain Inventory short form (SF-BPI) is the most widely used instrument and has the largest number of foreign-language translations [4].

To evaluate the prevalence of chronic pain and intradialytic pain in patients undergoing haemodialysis, as well as the effectiveness of analgesic therapy.

Cross-sectional, descriptive and observational study. A random sample consisted of 172 patients undergoing haemodialysis in two clinics in the region of Lisbon, Portugal. The Brief Pain Inventory, which analyses the influence of pain on a patient’s life, was applied only to evaluate chronic pain [5]. The Visual Analogue Scale was used to assess intradialytic pain. Tests were administered during dialysis sessions from May to June 2015. Categorical variables were expressed as percentages, and continuous variables were expressed as means with standard deviations or as medians. This study was approved by the Ethics Committee of Diaverum (N 1/2015).

The sample consisted mostly of men (61.6%) of Portuguese nationality (80.7%); the mean age was 60 (± 14.4) years, and patients had been under haemodialysis treatment for 72.6 (± 54.4) months. Chronic pain occurred in 54.1% of patients and intradialytic pain in 75%. The causes of pain were musculoskeletal (69.3%), associated with vascular access (19.3%) and other causes (11.4%). Chronic pain was most commonly located in the legs (43.2%), followed by the back (21.6%), vascular access (19.3%), head (8%), arms (4.5%), abdomen (2.3%) and, lastly, chest (1.1%). The percentage of patients who took analgesics for chronic pain was much higher (62.0%); of these analgesics, 87.8% were non-opiates, 10.2% weak opiates and 2% strong opiates. The other therapeutic interventions referred to were rest (24.1%), massage and relaxation (6.3%), cryotherapy (1.3%) and exercise (1.3%), while 5.1% reported doing nothing. The treatment of chronic pain was effective: in 62.6% of the patients, the relief felt was over 50%.

Pain of musculoskeletal origin is a frequent symptom in our sample. The pharmacological management of chronic pain is the most applied intervention.

1. Pelayo Alonso R, Martínez Álvarez P, Cobo Sánchez JL, Gándara Revuelta M, Ibarguren Rodríguez E. Evaluación del dolor y adecuación de la analgesia en pacientes en tratamiento con hemodiálisis. Enferm Nefrol. 2015;18(4):253-259.

2. Calls J, Rodríguez CM, Hernández SD, Gutiérrez NM, Juan AF, Tura D, Torrijos J. An evaluation of pain in haemodialysis patients using different validated measurement scales. Nefrologia. 2009;29(3):236-243.

3. Ahís Tomás P, Peris Ambou I, Pérez Baylach CM, Castelló Benavent J. Evaluación del dolor en la punción de una fístula arteriovenosa para hemodiálisis comparando pomada anestésica frente a frío local. Enferm Nefrol. 2014;17(1):11-15.

4. Upadhyay C, Cameron K, Murphy L, Battistella M. Measuring pain in patients undergoing hemodialysis: a review of pain assessment tools. Clin Kidney J. 2014;7(4):367-372.

5. Sousa LM, Marques-Vieira CM, Severino SS, Pozo-Rosado JL, José HM. Validación del Brief Pain Inventory en personas con enfermedad renal crónica. Aquichan. 2016;17(1):42-52.

Renal Insufficiency, Chronic, Renal Dialysis, Quality of life, Pain.

O98 Prevalence of musculoskeletal symptoms in nursing students

Cristiana Firmino 1,2, Luís Sousa 2,3, Joana M Marques 2,5, Fátima Frade 2, Ana V Antunes 2, Fátima M Marques 4, Celeste Simões 6, 1 Hospital CUF Infante Santo, 1350-070 Lisboa, Portugal; 2 Escola Superior de Saúde Atlântica, 2730-036 Barcarena, Portugal; 3 Hospital Curry Cabral, Centro Hospitalar Lisboa Central, 1069-166 Lisboa, Portugal; 4 Escola de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal; 5 Centro de Medicina de Reabilitação de Alcoitão, 2645-109 Alcabideche, Portugal; 6 Faculdade de Motricidade Humana / Instituto de Saúde Ambiental, 1495-687 Cruz Quebrada, Portugal, Correspondence: Cristiana Firmino ([email protected]).

Musculoskeletal symptoms are among the most common conditions in society and are indicated as one of the main causes of disability across an individual's life cycle [1,2]. Students are exposed to factors that can trigger these musculoskeletal symptoms [3], both during class periods and during clinical teaching. The prevalence of musculoskeletal pain is higher in the cervical region among 1st- and 2nd-year nursing students, and in the lower back among 3rd- and 4th-year nursing students [4].

To determine the prevalence of musculoskeletal symptoms in nursing students.

Cross-sectional and descriptive study. One hundred and fifty-five (155) nursing students from two nursing schools in Lisbon participated in this study. The data collection instrument consisted of sociodemographic and health behaviour variables and the Nordic Musculoskeletal Questionnaire (NMQ). The NMQ consists of 27 binary-choice questions (yes or no) [5]. The variables were expressed as percentages. This study was approved by the Ethics Committees of the two nursing schools.

Of the sample, 83.23% were female, 88.38% were single and 32.26% were working students. Most were non-smokers (81.94%), and 87.1% did not usually consume alcoholic drinks; 65.81% used a backpack and 23.23% carried objects on their way to school. Regarding time spent on the computer and electronic devices, 49.03% spent between 2 and 4 hours and 42.58% spent more than 4 hours; 71% spent more than 4 hours seated during classes. A large majority (85.8%) had no training in the prevention of musculoskeletal injuries. The prevalence of musculoskeletal symptoms (aches, pain, discomfort and numbness) by location was as follows: 66.23% in the neck; 52.29% shoulders; 7.24% elbows; 39.47% wrists/hands; 20.53% upper back; 69.33% lower back; 15.33% hips/thighs; 32% knees; and 22.82% ankles/feet.

The most frequent locations of aches, pain, discomfort and numbness were the neck, shoulders and lower back. The main causes related to musculoskeletal injuries were the transportation of weights, the use of computers and electronic devices, and being seated for long periods of time. The implementation of prevention strategies is recommended in order to reduce the occurrence of musculoskeletal injuries.

1. Abledu JK, Offei EB. Musculoskeletal disorders among first-year Ghanaian students in a nursing college. Afr Health Sci. 2015;15(2):444-449.

2. Alhariri S, Ahmed AS, Kalas A, Chaudhry H, Tukur KM, Sendhil V, Muttappallymyalil J. Self-reported musculoskeletal disorders and their associated factors among university students in Ajman, UAE. Southern Med J. 2016;5(S2):S61-70.

3. Martins AC, Felli VE. Sintomas músculo-esqueléticos em graduandos de enfermagem. Enferm Foco. 2013;4(1):58-62.

4. Nunes H, Cruz A, Queirós P. Dor músculo esquelética a nível da coluna vertebral em estudantes de enfermagem: Prevalência e fatores de risco. Rev Inv Enferm. 2016;II(14):28-37.

5. Mesquita CC, Ribeiro JC, Moreira P. Portuguese version of the standardized Nordic musculoskeletal questionnaire: cross cultural and reliability. J Public Health. 2010;18(5):461-466.

Nursing Students, Musculoskeletal Pain, Prevalence, Cross-Sectional Studies.

O99 Eating habits: determinants of Portuguese adolescents’ choices

Susana Cardoso 1, Carla Nunes 1, Osvaldo Santos 2, Isabel Loureiro 1, 1 Escola Nacional de Saúde Pública, Universidade Nova de Lisboa, 1600-560 Lisboa, Portugal; 2 Instituto de Saúde Ambiental, Faculdade de Medicina, Universidade de Lisboa, 1649-028 Lisboa, Portugal, Correspondence: Susana Cardoso ([email protected]).

Proper eating habits are crucial to a healthy life. It is important to understand the determinants of eating choices made at adolescence because this stage of life is paramount for the formation of lifelong enduring habits.

To identify determinants of eating choices based on adolescents' perceptions, and to characterize them, in particular according to the level of relevance attributed by adolescents.

A cross-sectional study was carried out, based on a sample of 358 adolescents (14-18 years old) from two schools of Coimbra. First, a quantitative study was carried out using the following scales: EHA (eating habits scale), TAA-25 (eating attitudes test) and GSQ (general self-efficacy scale). In a second step, a qualitative study was carried out with subgroups selected from the results of the first phase. These subgroups presented opposite patterns of habits (group A: better eating habits, EHA ≥ 160; group B: worse habits, EHA ≤ 125), and we adopted a grounded theory approach with semi-structured individual interviews.

Gender emerged as a determinant of the pattern of eating choices, with girls showing more adequate eating habits (t = 3.84; p < .0001; adjusted r² = .037, p < .0001). The perception of general self-efficacy assumes greater relevance for boys, functioning as a protective factor that reduces unhealthy options. Multinomial regression models showed that gender and general self-efficacy have a strong influence on eating habits, with beauty ideals shaping this effect. Resisting adversity has an important influence on choices, being associated with self-regulation. Situations at risk of developing an eating behaviour disturbance appeared mainly among adolescents with better habits (rs = .203; p < .001) and were more frequent in girls (t = 3.54; p < .0001; OR = 4.04). Through content analysis, it was possible to identify the determinant factors perceived by adolescents in both groups. Those mentioned most often (in decreasing order) were family influence, taste preferences, knowledge of healthy eating rules and availability, followed by determinants such as self-control capacity, feeling well or bad, peer influence, feeling hungry or full, being engaged in a task or not, impulsiveness, time available and mood/stress.
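
A minimal sketch of a multinomial regression of this kind, using statsmodels (the predictors, outcome coding and data are illustrative assumptions, not the study's variables):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 358  # sample size reported above

# Synthetic stand-ins: gender (0/1), a general self-efficacy score, and
# an eating-habits outcome coded into three levels (0 = worst, 2 = best).
gender = rng.integers(0, 2, size=n)
self_efficacy = rng.normal(30, 5, size=n)
habits = rng.integers(0, 3, size=n)

X = sm.add_constant(np.column_stack([gender, self_efficacy]))
results = sm.MNLogit(habits, X).fit(disp=False)  # multinomial logit
print(results.summary())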

The differences found between sexes can justify differentiated interventions. Our results also suggest the relevance of working on self-image. The family must be considered an integral part of interventions in health education. Political measures taken by schools and government agents can also play a very important role in making healthy choices easier.

Adolescents, Eating habits, Determinants.

O100 The influence of regular sports practice on motor skills and students’ physical fitness

Júlio Martins 1,2, João Cardoso 1, José Reis 1, Samuel Honório 3, 1 University of Beira Interior, 6201-001 Covilhã, Portugal; 2 Research Center in Sports Sciences, Health Sciences and Human Development, 6201-001 Covilhã, Portugal; 3 School of Education, Polytechnic Institute of Castelo Branco, 6000-266 Castelo Branco, Portugal, Correspondence: Samuel Honório ([email protected]).

Sports practice develops several motor skills that help practitioners not only in the game but also in their physical fitness. Practitioners develop effective motor responses and quick solutions to daily situations.

To determine whether the regular practice of sports influences students' motor skills, physical fitness and body composition, and to identify the correlation between students' performance in specific football motor skills tests, physical fitness and body composition.

The sample consisted of 160 male 12-year-old students living in Madeira who practice football regularly, divided into two groups of 80. One group had extracurricular physical activity beyond curricular physical activity (Federated Sports), while the other group comprised students who only practice physical activity as a curricular activity (School Sport). Body composition (BMI and %BF) was evaluated with Fitnessgram. Data on motor abilities were analysed with the Predictive Analytics Software (PASW).

After analysing the results, we found that students with extracurricular physical activities had better results in motor skills than students with only curricular physical activities. This result is perhaps the least surprising of all the assessments, given these students' longer physical commitment compared with students who only exercise in the context of curricular activities. Regarding physical fitness, the extracurricular group had more students in the HFZ (healthy fitness zone) than the curricular group. For arm extension and flexion, however, 26% of the students with only curricular physical activity were above the HFZ, achieving better performance than the students with extracurricular physical activity. Across the skills measured, students with extracurricular physical activities achieved a relatively higher average than students with only curricular physical activities. Regarding body composition, after determining the BMI and %BF of each group, students with extracurricular physical activities also achieved better results than students with only curricular physical activities.

The regular practice of sports combining curricular and extracurricular physical activities seems to contribute to an improvement of students' physical fitness, motor skills and body composition, keeping them in a healthy fitness zone and thus helping to prevent premature cardiovascular diseases, among others.

Motor Skills, Physical Fitness, Body Composition, Federated Sports, School Sports.

O101 Promotion of language skills in children aged 5-6 years without language disorders

Tiago Rodrigues, Catarina Mangas, School of Education and Social Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal, Correspondence: Tiago Rodrigues ([email protected]).

Language is a human faculty that enables communication with various interlocutors. It appears early and develops exponentially in the first years of life. For this, it is important that the surrounding environment is stimulating and allows the exchange of experiences with other speakers. The pre-school age is a milestone in development, not only for stimulation but also for identifying deviant behaviours. In order to avoid future complications, professionals who deal with these children should act to prevent language disorders. The speech and language therapist, as a health professional trained to deal with the areas of communication and language, has a prominent role in this preventive action. In Portugal, there are few references concerning the prevention of language disorders. As such, it is necessary to promote a definite change in the present scenario, namely with innovative and stimulating materials to develop language.

Accordingly, this is an exploratory-descriptive study within a qualitative-quantitative paradigm, whose general objective was “to create and implement a Language Skills Promotion Program for 5-6 year-old children without any language disorders, in order to analyse its potential influence on their language skills”.

To this end, a 10-session program was built, evaluated and validated by a panel of experts. Before the beginning of the program, all the sampled children were evaluated with a language test for pre-school age. A sample of 12 children, divided equally into two groups, was selected, and the program was applied by a speech and language therapist to only one of these groups. At the same time, the childhood educator's opinion was collected in order to understand the influence of the program on the children's language skills.

The final results of the study show that the children who participated in the program improved their language skills, which was not the case for the children who did not take part in it. These results suggest that investment in prevention actions, through the promotion of language skills, enhances children's oral and written language skills, especially in terms of literacy, something that is indispensable for their educational success.

Finally, the study also emphasizes the importance of primary prevention actions, such as the development and application of stimulation programs or the organization of informative actions, with a view to promoting the health and well-being of society in general.

Child language, Child-rearing, Early intervention, Prevention, Speech therapy.

O102 The impact of a training program on the performance of nurses working at a chemotherapy ward

Joana M Silva, Isabel M Moreira, Anabela Salgueiro-Oliveira, Nursing School of Coimbra, 3046-851 Coimbra, Portugal, Correspondence: Joana M Silva ([email protected]).

Oral mucositis (OM) is the major complication reported by patients undergoing chemotherapy and/or radiotherapy, with a strong impact on their quality of life by compromising physical and psychological functions [1]. OM affects 40-76% of patients undergoing chemotherapy and up to 90% of patients undergoing radiotherapy [3,4]. Inadequate oral hygiene is a patient-related risk factor on which health professionals, namely nurses [6], can intervene [5,7]. Oral examination allows the different stages of OM to be diagnosed and an individualized care plan to be established.

To assess the impact of a training program on the performance of nurses working at a chemotherapy ward regarding OM risk assessment and prevention in cancer patients.

This action-research study aimed to identify nurses' interventions in patients with or at risk for OM. Data were collected from the nursing records of 110 patients between October and November 2016 in order to analyse the nursing documentation pattern based on an evidence-based grid. Data were analysed using descriptive statistics. The discussion with the team nurses about the results obtained in the document analysis was used to design a three-session training program. The next step was to reanalyse the nursing documentation pattern with the purpose of identifying positive changes in the aspects under study. The study was approved by the Ethics Committee.

The analysis of the documentation pattern showed that 31.8% of the patients were not asked about oral hygiene practices, although 25.7% of the sampled patients were in the 1st cycle of chemotherapy. A total of 21.9% of patients were not observed during oral hygiene care. Only 14.5% of patients were given instructions about the treatment of side effects, and only 12.5% of them were given instructions about oral hygiene care. Only 2.7% of the patients had their oral cavity/mucous membranes examined, and all of them were diagnosed with OM. The implementation of the training program led to the introduction of standardized records for oral cavity surveillance. Nurses showed high adherence levels to this practice and considered it very relevant in clinical practice.

The research results show that nurses do not perform a systematic diagnostic evaluation of patients’ oral cavity and that few patients receive instruction on oral hygiene care, which does not contribute to patient empowerment in this area. The implementation of the training program showed that nurses recognize the need for and are committed to changing practices in this area.

1. Yarbro CH, Wujcik D, Gobel BH. Cancer Nursing: Principles and Practice. 8th edition. Destin, Florida: Jones & Bartlett Publishers; 2016.

2. Eilers J, Harris D, Henry K, Johnson L. Evidence-based interventions for cancer treatment-related mucositis: Putting evidence into practice. Clin J Oncol Nurs. 2016;18:80-96.

3. Peterson D, Boers-Doets C, Bensadoun R, Herrstedt J. Management of oral and gastrointestinal mucosal injury: ESMO Clinical Practice Guidelines for diagnosis, treatment, and follow-up. Annals of Oncology. 2015;26(Suppl 5):139-151.

4. Araújo S, Luz M, Silva G, Andrade E, Nunes L, Moura R. O paciente oncológico com mucosite oral: desafios para o cuidado de enfermagem. Rev. Latino-Am. Enfermagem. 2015;23(2):267-274.

5. Gondin F, Gomes I, Firmino F. Prevenção e tratamento da mucosite oral. Rev. enferm. UERJ. 2010;18(1):67-74.

6. Eilers J, Million R. Clinical update: Prevention and management of oral mucositis in patients with cancer. Semin Oncol Nurs. 2011;27(4):e1-16.

7. Teixeira S. Mucosite oral em cuidados paliativos (Dissertação de mestrado em oncologia – Especialização em enfermagem oncológica). Universidade do Porto, Instituto de Ciências Biomédicas Abel Salazar, Porto, Portugal. 2010.

8. Kuhne GW, Quigley BA. Understanding and Using Action Research in Practice Settings. In: Quigley BA, Kuhne GW, editors. Creating Practical Knowledge Through Action Research: Posing Problems, Solving Problems, and Improving Daily Practice. San Francisco: Jossey-Bass Publishers. 1997. p. 23-40.

Oral mucositis, Nursing care, Oncology.

O103 Styles of conflict management and patient safety

Anabela Almeida 1, Sara Cabanas 2, Miguel C. Branco 1, 1 Universidade da Beira Interior, 6200-001 Covilhã, Portugal; 2 Centro Hospitalar Cova da Beira, 6200-251 Covilhã, Portugal. Correspondence: Anabela Almeida ([email protected]).

Healthcare is a demanding setting of constant change and successive adaptation, in which differences among those involved, and conflicts between professionals, emerge very easily. Ineffectively managed conflicts in health organizations reduce quality, compromise safety, and increase the costs of health care delivery, undermining the goals of effectiveness and efficiency.

Thus, the general objective of this study is to investigate the relationship between the conflict management styles used and the level of patient safety climate among the clinical services professionals at Hospital Pêro da Covilhã of CHCB.

This is a quantitative, descriptive, correlational and cross-sectional study. The sample is non-probabilistic, consisting of 137 health professionals working at CHCB. The Rahim Organizational Conflict Inventory-II (ROCI-II) and the Safety Attitudes Questionnaire (SAQ) were used to evaluate conflict management styles and health professionals' perceptions of attitudes related to patient safety.

The results show that, in all relations with the opponent, professionals opt preferentially for collaboration, with competition being the least common style of conflict management. There were no differences in the conflict management styles used according to the opponent. Participants presented positive attitudes towards patient safety: perceived safety was lowest for the perception-of-management dimension and highest for the job satisfaction and stress recognition dimensions. A relationship between conflict management styles and the safety climate level was confirmed. There is an association between literacy and conflict management styles, and between years of service and conflict management styles. All the ordinal independent variables are associated with perceptions of the safety climate.

Gender, marital status, integration period, function and years of service influence the conflict management styles used, while age, gender, choice of service, integration period, function, area of service and years of service influence professionals' perceptions of attitudes related to patient safety.

Conflict, Conflict management, Patient safety, Safety climate.

O104 Influence of a rehabilitation nursing care program on quality of life of the patients undergoing cardiac surgery

José Moreira, Jorge Bravo, Department of Sports and Health, School of Science and Technology, University of Évora, 7000 Évora, Portugal. Correspondence: José Moreira ([email protected]).

Cardiac rehabilitation (CR) is fundamental in the treatment of patients undergoing cardiac surgery (CS) regarding the educational, physical exercise and quality of life dimensions. Considering the competences of Specialist Nurses in Rehabilitation Nursing (SNRN) and the current prevalence of risk factors associated with cardiovascular disease, it is essential to implement programs in this area.

To assess the impact of SNRN interventions on a CR program during hospitalization (phase I) and 1 month after CS (phase II).

Participants (n = 11) who underwent CS, of both sexes, between 25 and 64 years of age (61.09 ± 7.09 years), met the American Heart Association and American Association of Cardiovascular and Pulmonary Rehabilitation criteria for low or moderate risk (class B for participation and exercise supervision), showed an absence of signs/symptoms after CS, and had a left ventricular ejection fraction greater than 40%. Supervised interventions were performed during hospitalization, pre- and post-cardiac surgery, and 1 month after hospital discharge. In phase II, a physical exercise program was carried out according to the norms of the American College of Sports Medicine, comprising 3 sessions of physical exercise per week lasting between 30 and 60 minutes and including warm-up, aerobic exercise and recovery/stretching. Hemodynamic data (blood pressure, heart rate, peripheral oxygen saturation, pain) and the Borg scale were recorded in the initial, intermediate and final periods of each session. Aerobic capacity was evaluated through the 6-minute walk test, and health-related quality of life using the Short Form Health Survey 36 (SF-36V2) questionnaire.

Statistically significant improvements were observed in the time/walk relationship, namely an increase in functional capacity (p = 0.05) and in quality of life (in various domains). During hospitalization, the subjective perception of effort decreased from session to session in 81.82% of the participants. An independent-samples t-test revealed that differences in resting heart rate (phase I) were not significant; however, the difference in walked distances was significant at the 95% confidence level.
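
As a hedged illustration of the independent-samples comparison mentioned above, the following minimal R sketch runs a t-test on 6-minute walk distances; the data frame, group labels and values are hypothetical, not the study's data.

```r
# Minimal sketch (hypothetical data): independent-samples t-test on
# 6-minute walk distances, as in the comparison described above.
set.seed(1)
dat <- data.frame(
  group    = rep(c("phase_I", "phase_II"), each = 11),
  distance = c(rnorm(11, mean = 420, sd = 60),   # hypothetical 6MWT metres
               rnorm(11, mean = 480, sd = 60))
)

# Welch two-sample t-test at the 95% confidence level
res <- t.test(distance ~ group, data = dat, conf.level = 0.95)
print(res)
```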

Rehabilitation nursing care is essential to improve the quality of life of patients undergoing CS in a phase I and II rehabilitation program. The benefits of CR programs are evident when initiated early after CS, reinforcing the need to increase their implementation in the rehabilitation of cardiovascular disease. Despite the reduced sample size, the results represent a basis for future studies with a larger number of participants and a longer intervention period after CS.

Trial registration: NCT03517605

Cardiac Rehabilitation, Quality of Life, Rehabilitation Nursing.

O105 Study of knee arthroplasty in the elderly population with agricultural activity

Carla Costa 1, Jorge Nunes 2,3, Ana P. Martins 4, 1 Faculdade de Ciências da Saúde, Universidade da Beira Interior, 6200-506 Covilhã, Portugal; 2 Universidade da Beira Interior, 6200-001 Covilhã, Portugal; 3 Centro Hospitalar Cova da Beira, 6200-251 Covilhã, Portugal; 4 Centro de Matemática e Aplicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal. Correspondence: Carla Costa ([email protected]).

Arthrosis is a major cause of pain, disability and loss of quality of life [1]. It frequently affects the knees of elderly people, overweight people and women, and is influenced by joint overload such as that occurring in agricultural work [2-9]. The majority of individuals who undergo Total Knee Arthroplasty (TKA) report a significant decrease in knee pain and an increase in knee function [1]. In this context, there are no studies on whether elderly patients recover well enough to return to agricultural activity after TKA.

To determine whether elderly people between 65 and 80 years old, patients of Pêro da Covilhã Hospital, with agricultural activity before surgery and submitted for the first time to TKA with a medial approach and posterior cruciate ligament sacrifice, can return to agricultural activity, and how long this takes; otherwise, to identify the reasons for stopping. Secondarily, to analyse whether Body Mass Index (BMI), gender, occupation and other factors influence this return.

This is an observational retrospective study of 38 patients between 65 and 80 years old submitted to TKA. Data were collected from clinical records and from the patient self-reported Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and analysed in SPSS and R software (statistical significance at p < 0.05).
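
A minimal sketch, assuming hypothetical data, of how such a between-group comparison can be run in R (one of the software packages named above): a Mann-Whitney (Wilcoxon rank-sum) test comparing age at surgery between returners and non-returners.

```r
# Minimal sketch (hypothetical data): comparing age at surgery between
# patients who did and did not return to agricultural activity.
set.seed(2)
tka <- data.frame(
  returned = rep(c("yes", "no"), times = c(32, 6)),
  age      = c(rnorm(32, 71, 4), rnorm(6, 76, 4))  # hypothetical ages
)

# Mann-Whitney (Wilcoxon rank-sum) test for two independent groups
wilcox.test(age ~ returned, data = tka)
```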

Of the 38 patients, 76.3% were female. Average age was 72.21 ± 4.50 years at the time of TKA and 75.13 ± 5.01 years at the time of the questionnaire. At both time points, the majority of the individuals were overweight or obese. A total of 84.2% returned to agricultural activity (81.2% partially and 18.8% fully), on average 6.34 ± 4.90 months after TKA. The median age at surgery of the seniors who did not return to agricultural activity was higher than that of the seniors who returned (p = 0.025). Higher stiffness scores and lower total WOMAC scores were seen in the individuals who returned four or more months after TKA (p = 0.0125 and p = 0.026, respectively).

The majority of individuals between 65 and 80 years old with agricultural activity before surgery and submitted to TKA with a medial approach and posterior cruciate ligament sacrifice can return to agricultural activity, on average within about 6 months, although most do not return fully. The most cited reason for not returning fully was the consequences of surgery. The median age at the time of TKA of the seniors who did not return was higher than that of the seniors who returned. A worse stiffness score and a better total score were seen in the seniors who took longer to return to agricultural activity.

1. American Academy of Orthopaedic Surgeons. Artroplastia total de joelho (Total Knee Replacement). OrthoInfo. 2017. p. 1–10.

2. Direção-Geral da Saúde. Programa Nacional Contra as Doenças Reumáticas. Lisboa; 2005.

3. Silverwood V, Blagojevic-Bucknall M, Jinks C, Jordan JL, Protheroe J, Jordan KP. Current evidence on risk factors for knee osteoarthritis in older adults: A systematic review and meta-analysis. Osteoarthr Cartil. 2015;23(4):507–515.

4. Alfieri FM, Silva NCOVE, Battistella LR. Study of the relation between body weight and functional limitations and pain in patients with knee osteoarthritis. Einstein (São Paulo). 2017;15(3):307–12.

5. Toivanen AT, Heliövaara M, Impivaara O, Arokoski JPA, Knekt P, Lauren H, et al. Obesity, physically demanding work and traumatic knee injury are major risk factors for knee osteoarthritis-a population-based study with a follow-up of 22 years. Rheumatology. 2010;49(2):308–14.

6. Srikanth VK, Fryer JL, Zhai G, Winzenberg TM, Hosmer D, Jones G. A meta-analysis of sex differences in prevalence, incidence and severity of osteoarthritis. Osteoarthr Cartil. 2005;13(9):769–81.

7. Pua YH, Seah FJ, Seet FJ, Tan JW, Liaw JS, Chong HC. Sex Differences and Impact of Body Mass Index on the Time Course of Knee Range of Motion, Knee Strength, and Gait Speed After Total Knee Arthroplasty. Arthritis Care Res. 2015;67(10):1397–405.

8. Liljensøe A, Lauersen JO, Søballe K, Mechlenburg I. Overweight preoperatively impairs clinical outcome after knee arthroplasty: a cohort study of 197 patients 3–5 years after surgery. Acta Orthop. 2013;84(4):392–397.

9. Kennedy JW, Johnston L, Cochrane L, Boscainos PJ. Total knee arthroplasty in the elderly: Does age affect pain, function or complications? Clin Orthop Relat Res. 2013;471(6):1964–1969.

Knee, Arthrosis, Elderly, Agriculture, Arthroplasty.

O106 Teachers Acceptance and Action Questionnaire Portuguese version (TAAQ-PT): factor structure and psychometric characteristics

Ana Galhardo 1,2, Bruna Carvalho 1, Ilda Massano-Cardoso 1, Marina Cunha 1,2, 1 Instituto Superior Miguel Torga, 3000-132 Coimbra, Portugal; 2 Cognitive and Behavioural Center for Research and Intervention, University of Coimbra, 3001-802 Coimbra, Portugal; 3 Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal. Correspondence: Ana Galhardo ([email protected]).

Teaching has the potential to provide high satisfaction levels, but it is described as a demanding profession with multiple sources of stress. Teachers' psychological well-being is essential not only for themselves but also for their students. Experiential avoidance of private events (e.g., thoughts, feelings, body sensations) has been pointed out as a key construct linked to psychopathological symptoms. The Teachers Acceptance and Action Questionnaire (TAAQ) is a teacher-specific measure developed to target experiential avoidance related to the teaching activity.

The current study sought to develop the Portuguese version of the Teachers Acceptance and Action Questionnaire (TAAQ-PT) and to explore its factor structure and psychometric properties in a sample of Portuguese teachers teaching in the 1st, 2nd and 3rd basic cycles and secondary education.

A sample of 304 teachers, 256 women (84.2%) and 48 men (15.8%), was recruited through teachers' professional associations. Participants completed online a sociodemographic and professional questionnaire and a set of self-report instruments: the TAAQ-PT, the Depression Anxiety and Stress Scale 21 (DASS-21), the Utrecht Work Engagement Scale (UWES), and the Five Facet Mindfulness Questionnaire (FFMQ). A confirmatory factor analysis of the TAAQ-PT was conducted, and reliability and validity were estimated.

The TAAQ-PT revealed a single-factor structure. Correlated measurement errors were specified for items 5 and 7, 3 and 10, and 8 and 9 due to similar phrasing. The one-factor model, which specified method effects between those items, fits the data well: χ2/df = 1.55, CFI = .99, GFI = .97, RMSEA = .043, MECVI = .321. The TAAQ-PT presented a Cronbach's alpha of .91. Additionally, composite reliability (CR) was calculated, and a value of .95 was found. The TAAQ-PT presented significant negative correlations with mindfulness facets (r = -.60; p < .01) and work engagement (r = -.62; p < .01), and positive correlations with the negative emotional symptoms of depression (r = .69; p < .01), anxiety (r = .63; p < .01) and stress (r = .70; p < .01).
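
As a hedged sketch of this kind of analysis, the following R code fits a one-factor CFA with correlated measurement errors using the lavaan package; the item names (i1–i10) and simulated responses are assumptions for illustration, not the TAAQ-PT data.

```r
# Minimal sketch: one-factor CFA with correlated errors (lavaan),
# on simulated items i1..i10 (hypothetical names and data).
library(lavaan)

set.seed(3)
n <- 300
f <- rnorm(n)  # latent factor scores for the simulation
taaq <- as.data.frame(sapply(1:10, function(i) 0.7 * f + rnorm(n, sd = 0.7)))
names(taaq) <- paste0("i", 1:10)

model <- '
  ea =~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9 + i10
  # correlated measurement errors for similarly phrased items
  i5 ~~ i7
  i3 ~~ i10
  i8 ~~ i9
'
fit <- cfa(model, data = taaq)
fitMeasures(fit, c("chisq", "df", "cfi", "gfi", "rmsea"))
```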

Similar to the original version, confirmatory factor analysis revealed that the single-component model fits the data well. It showed good internal consistency, and correlations with other mental health measures suggested good convergent and discriminant validity. The TAAQ-PT was found to be a valid and reliable measure of experiential avoidance in teachers to be used in clinical and research contexts.

Experiential avoidance, Teachers, Confirmatory factor analysis, Psychometric properties.

O107 Youth sports injuries according to health-related quality of life and parental instruction

Lara C. Silva 1,2, Júlia Teles 2,3, Isabel Fragoso 1,2, 1 Laboratory of Physiology and Biochemistry of Exercise, Faculty of Human Kinetics, University of Lisbon, 1499-002 Dafundo, Portugal; 2 Interdisciplinary Center for the Study of Human Performance, Faculty of Human Kinetics, University of Lisbon, 1499-002 Dafundo, Portugal; 3 Mathematics Unit, Faculty of Human Kinetics, University of Lisbon, 1499-002 Dafundo, Portugal. Correspondence: Lara C. Silva ([email protected]).

Participation in physical activity involves a risk of injury that has a considerable public health impact [1]. Sports injuries are the major cause of morbidity among children and adolescents in developed countries [2], accounting for half of all injuries in school-age children. The relationship between sports injuries, health-related quality of life (HRQoL) and parental instruction is still not clear.

To determine biosocial predictors of sports injuries in Portuguese youth.

Information about HRQoL, parental instruction and sports injuries was assessed via three questionnaires: KIDSCREEN-52 [3,4], RAPIL II [5,6] and LESADO [1,7,8], respectively. They were completed by 651 subjects aged 10 to 18 years, attending four Portuguese community schools. Univariate analyses were used to verify significant differences between groups. Logistic, linear and multinomial regression analyses were used to determine significant biosocial predictors of injury, injury rate, injury type and injured body area.
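
A minimal sketch, under assumed variable names and simulated data, of the logistic-regression step described above: modelling injury occurrence from KIDSCREEN-type dimensions and parental education in R.

```r
# Minimal sketch (hypothetical variables): logistic regression of injury
# occurrence on HRQoL dimensions and parental education.
set.seed(4)
n <- 651
dat <- data.frame(
  injured          = rbinom(n, 1, 0.4),
  school_env       = rnorm(n, 50, 10),   # hypothetical KIDSCREEN-style scores
  moods_emotions   = rnorm(n, 50, 10),
  self_perception  = rnorm(n, 50, 10),
  parent_education = factor(sample(c("basic", "secondary", "higher"), n, replace = TRUE))
)

fit <- glm(injured ~ school_env + moods_emotions + self_perception + parent_education,
           data = dat, family = binomial)
summary(fit)   # Wald tests for each biosocial predictor
exp(coef(fit)) # odds ratios
```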

Injury rate was higher in boys with lower scores in the school environment dimension of KIDSCREEN-52 (p = .022); in girls, it was higher in those with lower scores in the moods and emotions dimension (p < .001) and higher scores in the self-perception dimension (p < .001). Also in girls, upper limb injuries were associated with higher scores in the moods and emotions dimension, and spine and torso injuries with lower scores (p = .037). Lower limb injuries were associated with lower parental education, while upper limb (p = .046) and spine and torso (p = .034) injuries were associated with higher parental education.

Surprisingly, given the large number of injuries resulting from participation in sports and the associated high costs of health care, very few investigations have examined biosocial variables and their relation to sports injuries. Injuries in Portuguese youth were linked to three dimensions of KIDSCREEN-52 (moods and emotions, self-perception and school environment) and to parents' education level. Sports injuries usually result from the combination of several risk factors interacting at a given time [9]. Understanding the role of social and environmental factors related to sports injuries is needed, as they can be a part of this complex equation.

We would like to express our immeasurable gratitude to Ana Lúcia Silva and João Albuquerque for helping in data collection, and to Carlos Barrigas for evaluating all x-rays. We also thank Escola Básica 2,3 Professor Delfim Santos, Agrupamento de Escolas de Portela e Moscavide and Escola Secundária Quinta do Marquês for making both their infrastructures and students available for the study, and all participants for their time and effort. Lara Costa e Silva, Ana Lúcia Silva and João Albuquerque were supported by scholarships from the Portuguese Foundation for Science and Technology (SFRH/BD/77408/2011, SFRH/BD/91029/2012 and PTDC/DES/113156/2009, respectively) and by the Interdisciplinary Center for the Study of Human Performance (CIPER).

1. Costa e Silva L, Fragoso I, Teles J. Prevalence and injury profile in Portuguese children and adolescents according to their level of sports participation. J Sports Med Phys Fitness. 2018 Mar;58(3):271-279.

2. Williams JM, Currie CE, Wright P, Elton RA, Beattie TF. Socioeconomic status and adolescent injuries. Soc Sci Med. 1997;44(12):1881–1891.

3. The Kidscreen Group. Description of the KIDSCREEN instruments. KIDSCREEN-52, KIDSCREEN-27 & KIDSCREEN-10 index. Health Related Quality of Life Questionnaires for Children and Adolescents. 2004. Report No.: EC Grant Number: QLG-CT-2000-00751.

4. Janssens L, Gorter JW, Ketelaar M, Kramer WLM, Holtslag HR. Health-related quality-of-life measures for long-term follow-up in children after major trauma. Qual Life Res. 2008;17(5):701–13.

5. Varela-Silva M, Fragoso I, Vieira F. Growth and nutritional status of Portuguese children from Lisbon, and their parents. Notes on time trends between 1971 and 2001. Ann Hum Biol. 2010;37:702–716.

6. Fragoso I, Vieira F, Barrigas C, Baptista F, Teixeira P, Santa-Clara H, et al. Influence of Maturation on Morphology, Food Ingestion and Motor Performance Variability of Lisbon Children Aged Between 7 to 8 Years. In: Olds T, Marfell- Jones M, editors. Kinanthropometry X Proceedings of the 10th Conference of the International Society for the Advancement of Kinanthropometry (ISAK). London: Routledge; 2007. p. 9–24.

7. Costa e Silva L, Fragoso MI, Teles J. Physical Activity–Related Injury Profile in Children and Adolescents According to Their Age, Maturation, and Level of Sports Participation. Sports Health. 2017;9(2):118–125.

8. Pires D, Oliveira R. Lesões no sistema musculo-esquelético em tenistas portugueses. Rev Port Fisioter no Desporto. 2010;4(2):15–22.

9. Powell J, Barber-Foss K. Injury patterns in selected high school sports: A review of the 1995-97 seasons. J Athl Train. 1999;34(3):277–84.

Sports Injuries, Children and Adolescents, Health Related Quality of Life, Parental Instruction.

O108 The influence of moderate- to vigorous-intensity activity on the physical fitness of non-institutionalised elderly people

Fernanda Silva 1, João Petrica 1,2, João Serrano 1,2, Rui Paulo 1,3, André Ramalho 1,3, José P. Ferreira 4, Pedro Duarte-Mendes 1,3, 1 Department of Sports and Well-Being, Polytechnic Institute of Castelo Branco, 6000-266 Castelo Branco, Portugal; 2 Centro de Estudos em Educação, Tecnologias e Saúde, Instituto Politécnico de Viseu, 3504-510 Viseu, Portugal; 3 Research on Education and Community Intervention, 4411-801 Arcozelo – Vila Nova de Gaia, Portugal; 4 Research Unit for Sport and Physical Activity, University of Coimbra, 3040-248 Coimbra, Portugal. Correspondence: Fernanda Silva ([email protected]).

As a result of the ageing process, there is evidence of a decline in physical fitness (strength, endurance, agility and flexibility) associated with lower performance in the activities of daily living [1]. Physical activity therefore plays a key role in maintaining the health and physical fitness of the elderly [2]. The recommendations on physical activity for health suggest that the elderly should perform at least 30 minutes of moderate- to vigorous-intensity activity per day [3,4].

The aim of this paper is to accurately quantify physical activity time in the elderly and to verify the existence of differences regarding physical fitness levels between two groups of people: those who complied and those who did not comply with the Global Recommendations on Physical Activity for Health [4].

This cross-sectional study sample includes 36 elderly individuals (72.28 ± 6.58 years old), both male and female, divided into two groups: the group that fulfilled the recommendations (N = 16; 53.76 ± 24.39 minutes) and the group that did not fulfil the recommendations (N = 20; 15.95 ± 7.79 minutes). Physical activity was assessed with the ActiGraph® GT1M accelerometer over 3 consecutive days, with at least 600 minutes of daily recording. The Functional Fitness Test battery (Rikli and Jones) was used to assess the physical and functional autonomy of the elderly [5]. Descriptive and inferential statistics were used to analyse the data: the Shapiro-Wilk test was applied to assess normality, and the Mann-Whitney test and the t-test were used for independent samples.
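
The following R sketch illustrates the analysis pipeline described above, with hypothetical data: a Shapiro-Wilk normality check deciding between the t-test and the Mann-Whitney test for two independent groups.

```r
# Minimal sketch (hypothetical data): normality check, then t-test or
# Mann-Whitney test for the two independent groups.
set.seed(5)
fit_scores <- data.frame(
  group       = rep(c("met", "not_met"), times = c(16, 20)),
  chair_stand = c(rnorm(16, 14, 3), rnorm(20, 12, 3))  # hypothetical reps
)

# Shapiro-Wilk p-value per group
p_norm <- with(fit_scores,
               tapply(chair_stand, group, function(x) shapiro.test(x)$p.value))

if (all(p_norm > 0.05)) {
  print(t.test(chair_stand ~ group, data = fit_scores))
} else {
  print(wilcox.test(chair_stand ~ group, data = fit_scores))
}
```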

On average, participants spent more time in sedentary activities than in physical activity. The group that fulfilled the physical activity recommendations achieved better results on almost all physical fitness tests: 30-s chair stand (repetitions), arm curl (repetitions), 6-minute walk test (m) and 8-foot up-and-go (s). However, no significant differences were found between the groups.

The results therefore suggest that only 44.4% of the evaluated participants complied with the Global Recommendations on Physical Activity for Health. Evidence also suggests that the adherence to these guidelines might have a positive influence on the physical fitness of the elderly, particularly muscular strength, endurance and agility, but not flexibility.

This work was supported by the Portuguese Foundation for Science and Technology (FCT; Grant Pest – OE/CED/UI4016/2016).

1. Tuna HD, Edeer AO, Malkoc M, Aksakoglu G. Effect of age and physical activity level on functional fitness in older adults. Eur Rev Aging Phys Act. 2009;6:99–106.

2. Nawrocka A, Mynarski W, Cholew J. Adherence to physical activity guidelines and functional fitness of elderly women, using objective measurement. Ann Agr Env Med. 2017;24:632-635.

3. WHO. Global Recommendations on Physical Activity for Health. Switzerland: World Health Organization; 2011. Available from: http://apps.who.int/iris/bitstream/10665/44399/1/9789241599979_eng.pdf

4. Department of Health. Start Active, Stay Active: A report on physical activity for health from the four home countries’ Chief Medical Officers. London: Department of Health; 2011. Available from: https://www.sportengland.org/media/2928/dh_128210.pdf.

5. Rikli R, Jones C. Development and validation of a functional fitness test for community-residing older adults. J Aging Phys Activ. 1999;7:129-161.

Physical fitness, Elder, Physical activity, Recommendation.

O109 Effects of strength and conditioning programs on strength and dynamic balance in older adults

Rogério Salvador 1,2, Luís Coelho 1,2, Rui Matos 1,2, João Cruz 1,2, Ricardo Gonçalves 1,2, Nuno Amaro 1,2, 1 School of Education and Social Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 Life Quality Research Centre, 2001-904 Santarém, Portugal. Correspondence: Rogério Salvador ([email protected]).

To accomplish their daily routines independently, with no need for assistance, older adults require optimal physical fitness. A lack of physical fitness may reduce older individuals' quality of life, leading to dependence on personal daily assistance or even to becoming significantly more prone to fatal falls [1]. Prevention through physical activity programs is used to slow down and delay these aging effects by improving individuals' agility, flexibility and overall body functionality. Most of these programs take place in the water due to age-limiting factors such as high risk of osteoporosis, reduced mobility, higher risk of fracture from falls, arthrosis and spinal disorders, among others.

To assess the effects of two strength and conditioning programs on strength and dynamic balance in older adults.

One hundred older adults (36 males and 64 females) aged 67.3 ± 5.2 years enrolled in the 5-year intervention program and were assessed for lower body strength (LBS) and dynamic balance (DB). Two intervention programs were set up, and subjects joined one of the groups according to their own preference. Program A (n = 52; 24 males and 28 females; age 67.2 ± 5.2 years) consisted of 1 in-water session and 2 dry-land sessions per week. Program B (n = 48; 12 males and 36 females; age 67 ± 5.2 years) consisted of 2 in-water sessions and 1 dry-land session per week. The Wilcoxon test was used for the inferential analysis of repeated measures (pre-post). The significance level was kept at 5%. The effect size for this test was calculated by dividing the z value by the square root of N [2].
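
A minimal R sketch of the effect-size computation described above (r = z/√N), using hypothetical pre-post data; the z value is recovered from the two-sided p-value of the Wilcoxon test.

```r
# Minimal sketch (hypothetical data): Wilcoxon test for paired (pre-post)
# measures, with effect size r = z / sqrt(N) as in Rosenthal [2].
set.seed(6)
pre  <- rnorm(100, 18.3, 3.2)        # hypothetical pre-test repetitions
post <- pre + rnorm(100, 0.5, 1.0)   # hypothetical post-test repetitions

res <- wilcox.test(pre, post, paired = TRUE)
z   <- qnorm(res$p.value / 2)        # z recovered from the two-sided p-value
r   <- z / sqrt(length(pre))         # effect size (negative sign retained)
c(p = res$p.value, z = z, r = r)
```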

Combined data from both programs showed that LBS and DB improved significantly by the end of the intervention: LBS from 18.3±3.2 reps to 18.8±3.1 reps (p=0.003; r=-0.295), DB from 4.2±0.7 s to 4.0±0.7 s (p=0.017; r=-0.245). Program A significantly improved LBS, from 19.1±2.8 reps to 19.9±2.7 reps (p=0.001; r=-0.465), but not DB, from 4.1±0.7 s to 4.0±0.7 s (p=0.083; r=-0.240). In Program B, no differences were found in either LBS – 17.5±3.4 reps to 17.6±3.1 reps (p=0.462; r=-0.106) – or DB – 4.2±0.6 s to 4.1±0.6 s (p=0.083; r=-0.250).

Strength and conditioning programs over a 5-year time span seem to substantially delay the negative effects of aging on LBS/DB in the elderly. No visible decline in the assessed parameters was observed. Our results may suggest different effects of in-water and dry-land programs. However, participants generally responded positively to both intervention programs.

1. World Health Organization. Falls Fact Sheet. Updated August 2017. http://www.who.int/mediacentre/factsheets/fs344/en/

2. Rosenthal R. Parametric measures of effect size. In: Cooper H, Hedges LV, editors. The handbook of research synthesis. New York: Russell Sage Foundation; 1994. p. 231-244.

Elderly, Physical activity, Quality of life, Strength, Balance.

O110 Compassion attributes and actions in adolescents: are they related to affect and peer attachment quality?

Marina Cunha 1,2, Cátia Figueiredo 1, Margarida Couto 1, Ana Galhardo 1,2, 1 Instituto Superior Miguel Torga, 3000-132 Coimbra, Portugal; 2 Cognitive and Behavioural Center for Research and Intervention, Faculty of Psychology and Educational Sciences, University of Coimbra, 3001-802 Coimbra, Portugal. Correspondence: Marina Cunha ([email protected]).

Research has been showing potential benefits of compassion practice in various populations; nonetheless, it is relevant to extend the assessment of compassion attributes and actions to adolescents and to explore their relationship with other psychosocial adjustment constructs.

To explore patterns of association between the various directions of compassion (self-directed, directed to others and received from others) and variables related to affect, social comparison and quality of peer attachment.

A total of 338 adolescents, aged between 12 and 18 years old, completed a set of self-report instruments to assess their compassionate attributes and actions towards themselves and others (EAAC), quality of peer attachment (AQ-C), positive and negative affect (PANAS), and social comparison with peers (SCS-A).

Significant correlations in the expected direction were found between self-compassion, compassion for others and compassion received from others and the study variables (positive and negative affect, social comparison and attachment style). Specifically, positive affect, positive peer comparison and a secure attachment style were positively associated with compassionate attributes and actions. Negative affect, in turn, showed a negative correlation with compassionate actions in the three analysed directions, and with compassionate attributes when considering compassion received from others. The avoidant insecure attachment style revealed a negative association with compassionate attributes and actions in the different directions. Finally, the ambivalent insecure attachment style revealed a significant negative correlation with self-directed compassionate actions and with compassion received from others, regarding both actions and attributes.

These findings suggest the importance of stimulating a compassionate mind in adolescents. In fact, the positive association between compassion and psychological and emotional adjustment variables points to the relevance of developing compassion skills during this developmental stage.

Compassion attributes, Compassion actions, Adolescents, Positive and negative affect, Peer attachment.

O111 Association of palmar grip strength with self-reported symptoms in the arm

Alice Carvalhais 1, Tatiana Babo, Raquel Carvalho 1, Paula Rocha, Gabriela Brochado 1, Sofia Lopes 1,2, 1 Department of Technology Physiotherapy, Cooperativa de Ensino Superior Politécnico e Universitário, Polytechnic Institute of Health, 4585-116 Paredes, Portugal; 2 Department of Physical Therapy, School of Health Technology, Polytechnic Institute of Porto, 4200-465 Porto, Portugal. Correspondence: Gabriela Brochado ([email protected]).

The World Health Organization (WHO) has defined work-related musculoskeletal injuries as multifactorial diseases. These injuries are a major public health and individual health concern and are becoming increasingly frequent in both developed and developing countries. During working hours, workers are often exposed to repetitive movements and to lifting and carrying heavy loads, which increases the demand on upper-limb muscle strength. Palmar grip strength provides an objective index of functional integrity for the evaluation of the upper limbs.

To verify whether palmar grip strength is associated with self-reported arm symptoms in workers of the electrical components industry.

An observational, analytical study was performed on a sample of 167 workers. The Nordic Musculoskeletal Questionnaire was applied, and palmar grip strength was measured using a hydraulic dynamometer. Descriptive statistics were used to analyse the prevalence of self-reported symptoms, and the Mann-Whitney U test, Kruskal-Wallis H test, chi-square test and Fisher's exact test were used to analyse relationships between variables, with a 95% confidence level.
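
A hedged R sketch of the test battery named above, on hypothetical worker data: Mann-Whitney U, Kruskal-Wallis H, chi-square and Fisher's exact test.

```r
# Minimal sketch (hypothetical data) of the tests named above,
# all at the 95% confidence level.
set.seed(7)
workers <- data.frame(
  grip      = rnorm(167, 30, 6),  # hypothetical grip strength (kgf)
  symptoms  = factor(sample(c("yes", "no"), 167, replace = TRUE)),
  age_group = factor(sample(c("<30", "30-45", ">45"), 167, replace = TRUE)),
  gender    = factor(sample(c("female", "male"), 167, replace = TRUE))
)

wilcox.test(grip ~ symptoms, data = workers)         # Mann-Whitney U
kruskal.test(grip ~ age_group, data = workers)       # Kruskal-Wallis H
chisq.test(table(workers$gender, workers$symptoms))  # chi-square
fisher.test(table(workers$gender, workers$symptoms)) # Fisher's exact
```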

Palmar grip strength was related to self-reported symptomatology in the shoulder (p = 0.018) and wrist (p = 0.005) regions of the dominant upper limb in females. It was also found that the risk factors were not associated with palmar grip strength in either gender.

Palmar grip strength is associated with self-reported symptomatology in the shoulder and wrist of the dominant upper limb in female workers.

Dynamometer, Palmar grip strength, Upper limb, Self-reported symptomatology.

O112 Social-skills as facilitators of a healthy lifestyle

Luisa Aires 1,2, Sara Lima 3, Susana Pedras 3, Raquel Esteves 3, Fátima Ribeiro 3, Assunção Nogueira 3, Gustavo Silva 1,4, Teresa Herdeiro 3, Clarisse Magalhães 3, 1 Instituto Universitário da Maia, 4475-690 Maia, Portugal; 2 Centro de Investigação em Atividade Física, Saúde e Lazer, Universidade do Porto, 4099-002 Porto, Portugal; 3 Cooperativa de Ensino Superior Politécnico e Universitário, Polytechnic Institute of Health, 4585-116 Paredes, Portugal; 4 Research Center in Sports Sciences, Health Sciences and Human Development, University of Beira Interior, 6201-001 Covilhã, Portugal. Correspondence: Luisa Aires ([email protected]).

Knowledge of adolescents' behaviours and social skills can contribute to the construction of effective school-based interventions to promote healthy lifestyles.

To identify homogeneous groups (clusters) according to lifestyle and social skills.

This cross-sectional study included 1,008 students from 5 elementary schools of the Tâmega e Sousa region, with a mean age of 13.43 years (SD = 1.1); 50% were girls. A sociodemographic questionnaire, "My Lifestyle", was used, with 28 items composing five subscales: Physical Exercise (PE), Nutrition, Self-Care, Monitored Safety, and Use of Drugs and Similar (UDS) (0.41 < α < 0.85). A "Social Skills Inventory for Teenagers" questionnaire (Social-Skills) was also applied, including the subscales Empathy, Civility, Assertiveness, Self-Control, Affective Approach and Social-Development (0.64 < α < 0.90). Both questionnaires had 5 answer categories, from "almost always" to "almost never" or "rarely". In order to identify homogeneous groups of students according to lifestyle and social skills, a k-means cluster analysis was performed.
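
As a minimal sketch of the clustering step, assuming hypothetical subscale scores, the following R code runs a k-means analysis with three clusters on standardized data.

```r
# Minimal sketch (hypothetical data): k-means with three clusters on
# standardized subscale scores, mirroring the three-cluster solution below.
set.seed(8)
n <- 1008
scores <- data.frame(
  pe            = rnorm(n, 3.9, 0.6),
  nutrition     = rnorm(n, 3.4, 0.6),
  self_care     = rnorm(n, 4.1, 0.5),
  safety        = rnorm(n, 3.6, 0.6),
  uds           = rnorm(n, 4.1, 0.5),
  social_skills = rnorm(n, 3.8, 0.7)
)

km <- kmeans(scale(scores), centers = 3, nstart = 25)
table(km$cluster)  # cluster sizes
km$centers         # standardized cluster profiles
```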

For Lifestyle, the mean scores were: UDS = 4.09, Self-Care = 4.07, PE = 3.86, Monitored Safety = 3.63 and Nutrition = 3.40. For Social-Skills, 50.7% had a highly elaborate repertoire of social skills, 11% had an elaborate repertoire, 20.1% had a good repertoire and 2.7% had a below-average repertoire. A three-cluster solution was adopted. Cluster 1 included students with a poorly elaborated repertoire of social skills but with good lifestyle indicators in all subscales. In cluster 2, students had a good repertoire of social skills, with good lifestyle indicators in all subscales except Nutrition (38.7) and Monitored Safety (46.95), which had poor indicators. Cluster 3 included students with a highly developed repertoire of social skills and the best lifestyle indicators.

Results revealed generally healthy practices; however, students had the lowest scores in Nutrition, especially regarding sugar intake and the absence of a dietary plan. Students in cluster 2 also presented the lowest results in Monitored Safety, especially regarding driving under the influence of alcohol. These students, at risk of developing an unhealthy lifestyle, need special attention. The high profile of social skills, in particular Affective Approach and Assertiveness, should be taken into account as a mechanism for intervention programs. In addition, the relevance given to PE should also be used as a strategy to reinforce healthy eating habits in all students. Conversely, good lifestyle indicators (cluster 1) can act as a matrix to reinforce improvements in social skills.

Adolescents, Lifestyle, Social Skills.

O113 Palliative care: nursing students' conceptions and motivations

Suzana Duarte, Vitor Parola, Adriana Coelho, Escola Superior de Enfermagem de Coimbra, 3046-851 Coimbra, Portugal. Correspondence: Suzana Duarte ([email protected]).

Palliative care (PC) is an inevitability in view of the demographic and epidemiological transition curves of Western society. The inclusion of a PC Curricular Unit (CU) in the Nursing Undergraduate Program (NUP) translates into the acquisition of competencies that allow caring for people and families in need of such care. Despite professional, institutional and family barriers, there is evidence that students apply the principles inherent to PC in clinical practice [1]. During clinical education, students are confronted with persons in need of PC who, however, do not benefit from such care. These experiences can form the basis from which it is possible to build the teaching-learning process of future nurses regarding this theme.

To identify nursing undergraduate students' conceptions of PC and their motivations for attending the optional PC CU.

In the first class, nursing students were asked to anonymously write down what they understood as PC and their motivation for attending this CU. The 210 responses collected over 5 years were subjected to content analysis [2].

The reported PC conceptions were grouped into the categories "Care for people in the final stages of life", "Care to alleviate suffering" and "Comfort care". The reasons for choosing the PC option were grouped into "Difficult and not tackled area", "Area that arouses more interest" and "Previous experiences". The concept of PC thus remains that of care for people in the terminal phase of life and in suffering. Some students reported having experienced situations that called for PC, conditions of therapeutic obstinacy, and end of life in circumstances of intense suffering. Students also mentioned nurses' attempts to provide such care, which is not well supported in hospital wards. Students indicated interventions that are intrinsic to palliative care, such as communication, psychological support, and coping with death and mourning, without any reference to the need for knowledge in other areas, namely pathology, pharmacology, or maintenance and healthcare technologies. The orientation of care towards quality of life, family integration and symptom management was not considered.

There is a need to include a PC CU in each NUP, preferably after a period of clinical education in hospital wards. In this way, it is possible to take into account students' previous experiences, capitalizing on them for the understanding of the fundamental principles of palliative care.

1. Bassah N, Cox K, Seymour J. A qualitative evaluation of the impact of a palliative care course on preregistration nursing students’ practice in Cameroon. BMC Palliat Care. 2016;15(1):37.

2. Bardin L. Análise de Conteúdo. 6th edition. Edições 70; 2013.

Palliative care, Nursing students, Motivations and conceptions.

O114 “As eat” effects of a physical exercise program and nutrition in obese and binge eating adults

Ana Barroco 1, José A. Parraça 1, Nuno Pascoa 1, Daniel Collado-Mateo 2, Jose Adsuar 3, Jorge Bravo 1, 1 Department of Sports and Health, School of Science and Technology, University of Évora, 7000 Évora, Portugal; 2 Instituto de Actividad Física y Salud, Universidad Autónoma de Chile, Providencia, Chile; 3 Universidad de Extremadura, 06006 Badajoz, Spain. Correspondence: José A. Parraça ([email protected]).

Overweight and obesity are defined as abnormal or excessive fat accumulation that presents a health risk. Binge eating is an eating disorder characterized by episodes of abusive food intake in the absence of regular compensatory behaviours such as vomiting or abuse of laxatives. Those who suffer from this disorder often increase their weight and fat mass through excessive calorie intake, thus becoming overweight or obese.

To determine the effects of an exercise and nutrition program on body composition and physical fitness in overweight or obese adults (30-60 years) with binge eating. The program also aimed to promote learning and self-control in the practice of physical activity and in the food choices of this population.

41 patients from USF Planície de Évora participated. They were randomly assigned to the experimental group (N = 23) or the control group (N = 18). The study lasted eight months and consisted of 47 practical sessions of one-hour group exercise, twice a week, one weekly self-help session, and three sessions of nutritional monitoring throughout the program. Practical sessions were structured with specific exercises aimed at improving the different components evaluated, namely physical fitness (strength, cardiovascular endurance and flexibility) and body composition (fat loss). There were significant improvements in body composition, namely in the percentage of fat mass (40.75 (±6.46) to 37.44 (±7.06), p < .001), fat-free mass (59.98 (±6.44) to 62.26 (±7.56), p = .001), trunk fat mass (35.95 (±4.90) to 32.06 (±4.93), p < .001), visceral index (12.00 (±3.42) to 10.88 (±2.97), p < .001) and metabolic age (59.88 (±9.35) to 55.94 (±7.92), p = .024). There were improvements in physical fitness, mainly in trunk flexibility (-0.18 (±9.72) to 8.93 (±10.06), p = .002) and in leg strength (0.10 (±0.03) to 0.13 (±0.02), p = .034) and arm strength (25.48 (±8.91) to 30.81 (±7.68), p < .001). Regarding weight, there was a trend towards significance (92.25 (±12.73) to 88.93 (±13.77), p = .056).

We conclude that the physical exercise and nutrition program allows improvements in physical fitness and body composition in the obese population suffering from binge eating.

Exercise, Nutrition, Obesity, Body composition, Food addiction.

O115 Exploratory analysis of the association between motives for the practice of physical exercise and body composition

Roberta Frontini 1, Maria Monteiro 2, António Brandão 2, Filipe M. Clemente 2,3, 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Sports and Leisure, Polytechnic Institute of Viana do Castelo, 4900-347 Viana do Castelo, Portugal; 3 Instituto de Telecomunicações, University of Beira Interior, 6201-001 Covilhã, Portugal. Correspondence: Roberta Frontini ([email protected]).

Understanding the reasons that lead individuals to start and maintain physical activity is extremely important to help them engage in and adhere to physical exercise. It allows exercise professionals to define the most appropriate actions, implement more suitable strategies and remove possible barriers to exercise. The decrease in body fat mass may be indirectly related to the individual's motivation for the practice of physical exercise: higher levels of motivation to lose weight may be related to higher adherence to, for example, the training plan and, consequently, to greater body fat reduction.

This study aimed to analyse the association between body fat and motives to practice physical exercise.

The sample comprised 85 adults (38 males and 47 females) attending the gym, who completed a sociodemographic form and the Exercise Motivations Inventory-2 (EMI-2). A multiple regression analysis was used to predict percentage body fat (%BF) from the survey categories. Significance was set at p < 0.05. The statistical procedures were carried out in SPSS software (version 23.0, IBM, USA).
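
A minimal sketch of the regression described above, in R rather than SPSS and with simulated data whose coefficients loosely echo the reported values; the variable names are assumptions for illustration.

```r
# Minimal sketch (hypothetical data): multiple regression predicting %BF
# from EMI-2 motive categories (three shown for brevity).
set.seed(9)
n <- 85
gym <- data.frame(
  social_recognition = rnorm(n, 2, 1),  # hypothetical EMI-2 scores
  positive_health    = rnorm(n, 4, 1),
  weight_management  = rnorm(n, 3, 1)
)
# Simulated outcome loosely echoing the reported coefficients
gym$pbf <- 15 + 2.2 * gym$social_recognition + 4.9 * gym$positive_health +
  2.5 * gym$weight_management + rnorm(n, 0, 5)

fit <- lm(pbf ~ social_recognition + positive_health + weight_management,
          data = gym)
summary(fit)  # F statistic, R-squared and unstandardized coefficients (B)
```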

A multiple regression analysis was run to predict the participants' %BF from the social recognition, positive health, weight management, stress management, revitalization, enjoyment, challenge, affiliation, competition, health pressures, health avoidance, appearance, strength and endurance, and nimbleness categories. These variables statistically significantly predicted %BF, F(14,70) = 2.249, p = 0.014, R2 = 0.310. Only three variables (social recognition, positive health and weight management) added statistically significantly to the prediction, p < 0.05. The unstandardized coefficients (B) were 2.178 for social recognition, 4.860 for positive health and 2.490 for weight management.

Social variables (specifically social recognition), positive health and weight management are important for body fat reduction, more so than variables related, for example, to health concerns. It is important, in future studies, to understand what processes influence these relations. The results of our study reinforce the importance of these three variables for the reduction of body fat mass, emphasizing that it may be important to take them into account not only to maintain individuals' adherence but also to promote the practice of physical exercise.

Motivation, Physical exercise, Body fat, Social recognition, Positive health.

O116 Simulation as a pedagogical strategy in nursing teaching

Cláudia Chambel 1, Catarina Carreira 1, Catarina Pinheiro 1, Luís Ramos 1, Catarina Lobão 1,2. Correspondence: Luís Ramos ([email protected]).

Nowadays, the use of laboratories with specific equipment and of classes based on simulated practice is increasingly advocated, especially in undergraduate courses, where practice is a crucial tool for preparing students to act in real-life situations.

Therefore, we intended to understand the perceptions of students and teachers in a nursing degree program regarding the use of simulated practice as a pedagogical strategy.

To achieve this, we developed a research study using a qualitative approach, with semi-structured interviews applied to six students of the nursing undergraduate course and to seven teachers who teach classes using simulated practice at Escola Superior de Saúde de Leiria.

The results show that, for teachers, simulation is a pedagogical strategy for the development of students' competences that translates into the provision of care based on the scientific knowledge, safety and humanism expected from a health professional. From the students' perspective, the results indicate that simulation is undoubtedly an added value: the interviewees were able to discuss the concept of simulated practice at several levels, to highlight both the pertinence and the contributions of simulation and, finally, to mention several constraints and respective solutions.

As Goodstone et al. (2013) [1] state, simulation is a pedagogical strategy that allows the student to acquire skills necessary for clinical practice in a realistic, risk-free environment; that is, students are faced with clinical situations similar to those they would find in a real clinical environment, receiving feedback on their performance. Thus, it is fundamental to have teachers with the training necessary to implement this type of pedagogical strategy, as well as the necessary resources, combined with the will and commitment of the students. This triad is essential for the development of students' competencies as future professionals. In summary, the groups interviewed highlighted the importance of simulation and answered our research questions in complementary ways, as both groups recognized the importance of simulation in the health field.

1. Goodstone L, Goodstone M, Cino K, Glaser C, Kupferman K, Dember-Neal T. Effect of Simulation on the Development of Critical Thinking in Associate Degree Nursing Students. Nurs Educ Perspect. 2013;34(3):159-62.

Simulation, Nursing, Education.

O117 Optimising a dual in-situ hybridization protocol for HER2 status in breast cancer

Paulo Teixeira 1,2,3, Maria F. Silva 1,2, Paula C. Borges 1, José M. Ruivo 2, Diana Martins 4, Fernando Mendes 1,3,5,6, 1 Department of Biomedical Laboratory Sciences, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Pathologic Anatomical Service, Centro Hospitalar e Universitário de Coimbra, 3000-075 Coimbra, Portugal; 3 Biophysics Institute, CNC.IBILI, Faculty of Medicine, University of Coimbra, 3000-354 Coimbra, Portugal; 4 Instituto de Investigação e Inovação em Saúde, University of Porto, 4200-135 Porto, Portugal; 5 Center of Investigation in Environment, Genetics and Oncobiology, Faculty of Medicine, University of Coimbra, 3001-301 Coimbra, Portugal; 6 Coimbra Institute for Clinical and Biomedical Research, University of Coimbra, 3004-504 Coimbra, Portugal. Correspondence: Paulo Teixeira ([email protected]).

Human epidermal growth factor receptor 2 (HER2) is overexpressed in 20 to 30% of breast cancers, as well as in other human cancers [1,2]. The dual in-situ hybridization (DISH) assay is widely used to study HER2 status and provides predictive and therapeutic information in invasive breast cancer, although it is dependent on pre-analytical variables such as ischemic time and fixation, among others [1,3].

The aim is to implement a HER2 DISH assay, contributing to its optimization and to a decrease in variability, by clarifying the pre-analytical and analytical variables with impact on tissue staining and morphology.

Forty-four (44) cases of invasive breast cancer previously scored as HER2 2+ were included in this study. Thin 4 μm paraffin sections were submitted to DISH. Unsuccessful cases were submitted to subsequent DISH protocols to attempt a valid result. Slides were evaluated for staining and morphology integrity by three independent observers proficient in this methodology, in a blinded manner, using a light microscope.

Of the 44 cases, 30 (68.2%) were readily validated, while 14 (31.8%) showed nuclear vacuolization and morphologic disruption, leading to further tests with optimized protocols. These unsuccessful cases showed severe morphology damage and were reprocessed with further optimized protocols.

According to the results obtained, we conclude that the pre-analytical variables with the greatest impact on the standardization of the results were cold ischemia time, unsliced surgical specimens and length of fixation. Analytical variables such as the time and temperature of cellular permeabilization can be adjusted to compensate for inadequate tissue preservation.

1. Meric-Bernstam F, Hung M-C. Advances in Targeting Human Epidermal Growth Factor Receptor-2 Signaling for Cancer Therapy. Clin Cancer Res. 2006;12(21):6326–30.

2. Brenton JD, Carey LA, Ahmed AA, Caldas C. Molecular Classification and Molecular Forecasting of Breast Cancer: Ready for Clinical Application? J Clin Oncol. 2005;23(29):7350–60.

3. Khoury T, Sait S, Hwang H, Chandrasekhar R, Wilding G, Tan D, et al. Delay to formalin fixation effect on breast biomarkers. Mod Pathol. 2009;22(11):1457–67.

Dual in situ hybridization, Pre-analytical variables, Breast cancer, Optimization protocols.

O118 Psychometric properties update of AGITE – a medication self-management and adherence in the elderly questionnaire

Maria Almeida 1, Suzana Duarte 1, Hugo Neves 2,3, 1 Coimbra Nursing School, 3046-851 Coimbra, Portugal; 2 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal. Correspondence: Maria Almeida ([email protected]).

As the human body ages, its function also tends to decline, resulting in a higher risk of developing diseases. This leads to the presence of multiple and complex medication regimens in the lives of many older adults (OA). As the self-management of and adherence to these medications require the development of competences, it is important to have a quick and easy instrument that provides systematized data to health professionals, allowing a better decision-making process and a higher probability of developing interventions with impact on the medication-taking ability of the OA. With this purpose, AGITE was developed following a process of systematic literature review, content analysis and psychometric testing, resulting in a total of nineteen questions with a Likert-scale approach. Previous studies demonstrated the need for further testing of its psychometric properties.

After application of the AGITE to 146 elders in day centres in Central Portugal, exploratory factor analysis (EFA) using the eigenvalue criterion and internal consistency (IC) analysis using Cronbach's alpha were performed.
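
A hedged R sketch of these psychometric steps using the psych package, on simulated item responses (q1–q19 are hypothetical names): sampling adequacy (KMO), scree inspection, varimax-rotated factoring and Cronbach's alpha.

```r
# Minimal sketch (hypothetical data): EFA-related steps with the psych package.
library(psych)

set.seed(10)
n <- 146
agite <- as.data.frame(matrix(sample(1:5, n * 19, replace = TRUE), ncol = 19))
names(agite) <- paste0("q", 1:19)

KMO(agite)                                        # Kaiser-Meyer-Olkin measure
scree(agite)                                      # scree plot for retention
efa <- principal(agite, nfactors = 5, rotate = "varimax")
print(efa$loadings, cutoff = 0.3)                 # varimax-rotated loadings
alpha(agite[, c("q1", "q2", "q3")])               # IC for one (hypothetical) dimension
```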

In the EFA, using varimax rotation and scree plot analysis, an acceptable KMO of 0.653 was obtained, with no items being eliminated through analysis of the anti-image matrix. A total of five dimensions emerged, explaining 53.6% of the variance: "Engagement", "Neglect and External Influences", "Perceived Benefits", "Healthcare Professionals Support", and "Value Assigned to Written Information". Through analysis of the items of each dimension, higher scores in "Engagement" indicate a responsible attitude towards self-management and adherence, while higher scores in "Neglect and External Influences" demonstrate a tendency to cease medication according to individual and non-professional external beliefs. Regarding the dimension "Perceived Benefits", higher scores show how positively the elder perceives the effects of the medication, while higher scores in "Healthcare Professionals Support" relate to the perceived importance of healthcare professionals in the medication-taking ability. Higher scores in "Value Assigned to Written Information" demonstrate a tendency to attribute significance to written data regarding medication. Overall, the questionnaire dimensions demonstrate questionable to acceptable IC (0.6 < α < 0.8).

This new analysis of the psychometric properties reveals the emergence of new dimensions, allowing a wider understanding of the profile of the medication-taking ability of the elder population in Portugal. These new dimensions will provide healthcare professionals with a better analysis of this skill, allowing a more personalized intervention with a higher chance of success.

Polypharmacy, Medication management, Elders.

O119 Emotional intelligence and fear of death in Spanish elders

Pedro Garcia-Ramiro 1, Juan FJ Díaz 2, Maria González-Melero 1, Maria DCP Jiménez 1, Antonio MP Jiménez 1, Francisco JR Peregrina 1, 1 Universidad de Jaén, 23071 Jaén, Spain; 2 Universidad Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria, Spain. Correspondence: Pedro Garcia-Ramiro ([email protected]).

Researchers, stakeholders and policy makers agree on the importance of population ageing in modern societies. Emotional Intelligence (EI) has generated broad interest in the scientific community in Spain [1]. Prestigious social scientists from different lines of research have contributed to assessing important theoretical and empirical topics on this construct [2]. Aging is a process during which important changes occur in different areas of development, and emotional intelligence plays an essential role in it. Throughout the years, the subject of death has been conceived in different ways: people abstain from talking about it, and avoidance behaviour can be observed, manifesting itself in fear and anxiety [3].

The objective of this study was to examine the relationship between emotional intelligence and fear of death in an older population.

A Spanish sample of 384 older people aged 65 years and older (51.82% women; 71.23 ± 8.34 years of age), without cognitive impairment, was included in this descriptive and correlational study. Data on emotional intelligence and fear of death were obtained through the TMMS-24 and Collett-Lester scales, respectively.

Structural equation modelling indicated that emotional intelligence exerted an influence on fear of death. The emotional perception component was positively correlated with fear of death (r = 0.14; p < 0.05), while emotional understanding and regulation were negatively correlated with it (r = -0.12; p < 0.001). Higher scores for fear of death were associated with female gender and with being single. These aspects underscore the importance of the results of this study.

These findings show that high levels of emotional intelligence were associated with less fear of death. After controlling for sociodemographic variables, the EI dimensions of emotional perception and emotional regulation accounted for part of the variance in several facets of fear of death. These dimensions may play an important role in the fear of death of older people.

1. Wilson CA, Saklofske DH. The relationship between trait emotional intelligence, resiliency, and mental health in older adults: the mediating role of savouring. Aging Ment Health. 2018;22(5):646-654.

2. Lloyd SJ, Malek-Ahmadi M, Barclay K, Fernandez MR, Chartrand MS. Emotional intelligence (EI) as a predictor of depression status in older adults. Arch Gerontol Geriatr. 2012;55(3):570-573.

3. Arca MG. Enfermería en el proceso de humanización de la muerte en los sistemas sanitarios. Enfermería Clínica. 2014;24(5):296-301.

Emotional intelligence, Fear of death, Ageing, Older adults.

O120 Vitamin D in food supplements: are we taking too much?

Isabel M Costa, Alexandra Figueiredo, Deolinda Auxtero; Instituto Universitário Egas Moniz, 2829-511 Caparica, Portugal. Correspondence: Alexandra Figueiredo ([email protected]).

Over the last years, an increase in the intake of vitamin D (VitD) supplements has been observed. Evidence suggests multiple effects of VitD beyond bone homeostasis, and low VitD levels are associated with numerous disorders including diabetes, cancer, cardiovascular disease and Parkinson's disease, among others. Consumers hold the general misperception that “vitamin” denotes something harmless and vital, disregarding potentially harmful effects. Although VitD toxicity is uncommon, the number of case reports attributed to VitD supplementation has risen. Being a fat-soluble vitamin, excessive supplementation may result in body accumulation and toxicity. VitD increases intestinal calcium absorption and plays a central role in calcium homeostasis; thus, most symptoms of toxicity result from hypercalcemia. Adverse effects include gastrointestinal disorders (anorexia, diarrhoea, nausea, vomiting), muscle and joint pain, cardiac complaints, hypertension, central nervous system effects and renal disorders (polyuria, polydipsia).

The aim of this study was to evaluate whether VitD3 (cholecalciferol) daily dose indicated on food supplements (FS) labels coincided with the recommended daily allowance (RDA) for this vitamin defined by the European Union Directive.

Labels of 110 FS sold in Portuguese pharmacies, supermarkets or health shops were examined. Selection criteria were: oral solid pharmaceutical forms for adults containing VitD in their composition, as stated on the label, regardless of the purpose of the FS.

Of the FS examined, 66.4% presented VitD label doses above the RDA, and four of them indicated a daily dose at or above the tolerable upper intake level defined by EFSA (UL = 100 μg/day). In the majority of the FS evaluated, the VitD label dose far exceeded the RDA value, and some exceeded the EFSA UL.

At present, the safety of FS and the authenticity of label information are ensured exclusively by the economic operators who place FS on the market. Since FS are usually taken without any medical supervision or counselling, and considering the potential adverse effects of VitD excess, it is imperative that the daily doses of VitD present in FS be reviewed against RDA values. The authors also suggest that FS should be subject to the same quality control as pharmaceuticals, in the interest of consumers' health.

Vitamin D, Food Supplements, Recommended Daily Allowances, Tolerable Upper Intake Level.

O121 Functional ability and risk of falling - a base for exercise prescription

Sílvia Vaz, Anabela Martins, Carla Guapo, Sara Martins; Physiotherapy Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal. Correspondence: Sílvia Vaz ([email protected]).

Falls are currently considered one of the most common and serious public health problems [1, 2]. Faced with this problem, it becomes necessary to explore which factors can better predict the risk of falls in individuals living in the community [3], so that preventive measures can be considered.

To identify fall risk indicators and relate them to exercise prescription levels; to relate the history of falls, functional capacity (measured through the Timed Up & Go, the 10-meter walking speed test and the Step test) and fall risk factors; and to propose a guide based on those relations to inform exercise prescription.

Descriptive and exploratory study. Two hundred community-dwelling adults aged 55 or older were assessed, comprising two sub-samples, one Portuguese and one Polish. Participants were assessed for socio-demographic data, history of falls, fear of falling, exercise, sedentary lifestyle, hearing problems and/or dizziness, visual problems, alcohol consumption, exercise self-efficacy and confidence in activities of daily living (FES, Portuguese version). Functional capacity was assessed with three gold-standard measures of fall risk: the Timed Up and Go (TUG), the 10-meter walking speed test and the Step Test (15 s). The statistical design included descriptive analyses and inferential analyses (bivariate: t-test for independent samples, one-way ANOVA and Pearson's correlation coefficient).

Fall incidence was 39.5% in the total sample and 45.3% in the Portuguese sample. The TUG, the 10-meter walking speed test and the step test distinguished those with a history of falls from those without, with statistically significant differences (p ≤ 0.05). Taking more than 4 different medications per day, fear of falling, hearing problems and/or dizziness, and needing help to get up from a chair were correlated with the history of falls, the TUG, the walking speed and the step test (p ≤ 0.05). A sedentary lifestyle and the use of assistive devices were associated with worse performance in the functional tests (p < 0.05) in the Portuguese sample. The TUG, the 10-meter walking speed test and the step test were correlated with exercise self-efficacy.

The incidence of falls is higher than the literature has reported and is inversely associated with the functional capacity of community-dwelling adults aged over 55 years. Data from this study provide a valuable basis for exercise prescription, taking into account the levels of risk and the components of exercise prescription.

1. Gschwind Y, Kressig R, Lacroix A, Muehlbauer T, Pfenninger B, Granacher U. A best practice fall prevention exercise program to improve balance, strength / power, and psychosocial health in older adults: study protocol for a randomized controlled trial. BMC Geriatrics. 2013;13(1):105.

2. NICE. Falls in older people overview. NICE Pathways. 2016;1-13.

3. Avin K, Hanke T, Kirk-Sanchez N, McDonough C, Shubert T, Hardage J, Hartley G. Management of falls in community-dwelling older adults: clinical guidance statement from the Academy of Geriatric Physical Therapy of the American Physical Therapy Association. Physical Therapy. 2015;95(6):815-834.

Risk of fall, Functional capacity, Prevention of falls, Exercise prescription, Self-efficacy.

O122 The impact of the FIFA 11+ on physical performance of amateur futsal players: short and long term effects

Mário Lopes 1, Daniela Simões 2, João M Rodrigues 3,4, Rui Costa 1,5, José Oliveira 6, Fernando Ribeiro 1,7; 1 School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 2 Santa Maria Health School, 4049-024 Porto, Portugal; 3 Institute of Electronics and Informatics Engineering of Aveiro, 3829-193 Aveiro, Portugal; 4 Department of Electronics, Telecommunications and Informatics, University of Aveiro, 3810-193 Aveiro, Portugal; 5 Center for Health Technology and Services Research, University of Aveiro, 3810-193 Aveiro, Portugal; 6 Research Centre in Physical Activity, Health and Leisure, Faculty of Sport, University of Porto, 4200-450 Porto, Portugal; 7 Institute of Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal. Correspondence: Mário Lopes ([email protected]).

The effects of the FIFA 11+ on physical performance parameters have yielded controversial results.

The aim of this study was to observe the short and long-term effects of the FIFA 11+ on performance in male amateur futsal players.

Seventy-one (71) male futsal players from six amateur clubs were randomized to an intervention (n = 37, age: 27.0 ± 5.1 years) or a control group (n = 34, age: 26.0 ± 5.1 years). The intervention group undertook the FIFA 11+ injury prevention program for 10 weeks, 2 sessions/week, followed by a 10-week follow-up period, while the control group performed regular futsal warm-ups during training sessions. During the follow-up period both groups performed only regular warm-ups during their training sessions. Physical performance was assessed by measuring agility (T-test), sprint (30-meter sprint), flexibility (sit and reach) and vertical jump performance (squat jump).

Differences between groups were found at baseline for training exposure, body mass index, body weight, flexibility and sprint. After adjustment for these baseline differences, the FIFA 11+ showed no pre-post intervention differences in sit and reach, speed, jump performance or agility, nor any differences at the 10-week follow-up.

The current study showed no short- or long-term performance enhancement in sprint, flexibility, agility or jump performance after the FIFA 11+ in male amateur futsal players.

Prevention program, Warm-up, Injury, Neuromuscular training, Amateur male players.

O123 Effects of aquatic fitness in older women conditioning: an 8-week program

Pedro Morouço 1, Sandra Amado 1,2,3, Susana Franco 4, Fátima Ramalho 4; 1 Centre for Rapid and Sustainable Product Development, Polytechnic Institute of Leiria, 2430-028 Marinha Grande, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 4 Sport Sciences School of Rio Maior, Polytechnic Institute of Santarém, 2040-413 Rio Maior, Portugal. Correspondence: Pedro Morouço ([email protected]).

There is considerable evidence in the literature demonstrating a strong positive association between increased levels of exercise and improved health, specifically in older adults [1]. As such, in recent years a large number of studies have examined the benefits conferred by different types of exercise (e.g. resistance training [2] and aquatic fitness [3]). Beyond these benefits, however, it is crucial that the exercise be motivating and challenging.

This study aimed to examine the effects of 8 weeks of aquatic fitness on the conditioning of older women.

Fourteen women (64.3 ± 7.3 years old) enrolled in twice-weekly 45-minute aquatic fitness sessions for 8 weeks. Before and after the 8 weeks, participants performed the Senior Fitness Test [4], the hand-grip strength test and body measurements. All participants were volunteers, informed consent was obtained and all procedures were in accordance with the Declaration of Helsinki. Sessions were instructed by a CSCS®.

Significant and meaningful improvements were observed in lower body strength (p < 0.001; d = 1.22), lower body flexibility (p < 0.001; d = 3.54), aerobic endurance (p < 0.001; d = 1.35), dynamic balance (p < 0.001; d = 1.53) and hand grip strength (p < 0.001; d = 2.02). Significant but moderate improvements were observed in body mass (p = 0.021; d = 0.72) and hip circumference (p = 0.048; d = 0.59).

Eight weeks of aquatic fitness induced extensive benefits in older women's conditioning, suggesting that this activity can promote an increase in quality of life. The present results corroborate previous studies, demonstrating that aquatic exercise is a reliable approach for improving health in the elderly.

This research was supported by the European Regional Development Fund (FEDER), through COMPETE2020 under the PT2020 program (POCI-01-0145-FEDER-023423), and by the Portuguese Foundation for Science and Technology (UID/Multi/04044/2013).

1. Taylor D. Physical activity is medicine for older adults. Postgrad Med J. 2014;90:26-32.

2. Martins WR, Safons MP, Bottaro M, Blasczyk JC, Diniz LR, Fonseca RMC, et al. Effects of short term elastic resistance training on muscle mass and strength in untrained older adults: a randomized clinical trial. BMC Geriatr. 2015;15(1):99.

3. Bartolomeu RF, Barbosa TM, Morais JE, Lopes VP, Bragada JA, Costa MJ. The aging influence on cardiorespiratory, metabolic, and energy expenditure adaptations in head-out aquatic exercises: Differences between young and elderly women. Women Health. 2017;57(3):377–391.

4. Rikli RE, Jones CJ. Senior fitness test manual. 2nd edition. Human Kinetics; 2013.

Exercise, Health, Aging, Physical Fitness.

O124 Predicting social participation in the community-dwelling older adults

Carla Guapo, Anabela C Martins, Sara Martins, Sílvia Vaz.

Nowadays, active ageing is a complex scientific term, a goal for most people and an undeniable political objective [1]. Social participation helps to develop a feeling of belonging to a community and allows everyone to see each individual's contribution to upholding that community [2-4].

To characterize the profile of community-dwelling adults aged 55 or older regarding social participation, functional capacity (walking speed, grip strength, lower limb strength, static and dynamic balance) and personal factors (age, gender, BMI, confidence/fear of falling and perception of general health); to examine the relationships between social participation and functional capacity, and between participation and personal factors; and to identify, among all the variables, the best predictors of social participation.

Descriptive, exploratory and cross-sectional study. The sample was composed of 150 Portuguese community-dwelling older adults. The statistical design included descriptive analyses (measures of central tendency and dispersion) and inferential analyses (bivariate: t-test for independent samples, one-way ANOVA and Pearson's correlation coefficient; multivariate: multiple linear regression, moderated multiple linear regression and hierarchical multiple linear regression). The level of significance was α = 0.05, with a 95% confidence interval.

The sample was composed mostly of women, with a mean age of approximately 69 years. Statistically significant associations were found between social participation and all study variables: age (r = 0.301, p < 0.001), BMI (r = 0.169, p = 0.039), grip strength (r = -0.318, p < 0.001), Hercules® pressure platform (r = -0.337, p < 0.001), perception of general health (r = 0.468, p < 0.001), Timed Up & Go (r = 0.668, p < 0.001), T10M (r = -0.576, p < 0.001), Step test (r = -0.456, p < 0.001) and Falls Efficacy Scale (r = 0.768, p < 0.001). Regression analysis shows that confidence in performing activities of daily living without fear of falling, health perception and dynamic balance, measured by the Timed Up & Go test, together account for 65.5% of the variance in social participation (R² = 0.655; p < 0.001). A second model showed that a sizeable share of the variance in social participation, 55%, is again explained by dynamic balance and health perception, followed by age (R² = 0.549; p < 0.001).

The Timed Up & Go test and the single question on health perception, “In general, would you say that your health is excellent, very good, good, satisfactory or poor?”, account for a significant percentage of the variance in social participation in elderly individuals. Incorporating these two factors into the physical therapist's clinical practice takes very little time and greatly benefits decision-making and the planning of interventions.

1. Fernandez-Ballesteros R, Zamarron MD, Diez-Nicolas J, Lopez-Bravo MD, Molina MA, Schettini R. Productivity in Old Age. Research on Aging. 2011;33(2):205-226.

2. Holt-Lunstad J, Smith TB, Layton JB. Social relationships and mortality risk: A meta-analytic review. PLoS Medicine. 2010;7(7):e1000316.

3. Korpershoek C, van der Bijl J, Hafsteinsdottir TB. Self-efficacy and its influence on recovery of patients with stroke: A systematic review. Journal of Advanced Nursing. 2011;67(9):1876–1894.

4. Nayak N, Mahajan P. Walking Capacity and Falls-Efficacy Correlates with Participation Restriction in Individuals with Chronic Stroke: A Cross Sectional Study. International Journal of Physiotherapy. 2015;2(1):311.

Active ageing, Elderly, Social participation, Functional capacity, Functioning.

O125 Motor development in children from 11 to 44 months old: influence of the variable “presence of siblings”

Miguel Rebelo 1, João Serrano 1, Daniel Marinho 3,4, Rui Paulo 1,2, Vivian Corte 1, Pedro Duarte-Mendes 1,2; 1 Department of Sports and Well-Being, Polytechnic Institute of Castelo Branco, 6000-266 Castelo Branco, Portugal; 2 Research on Education and Community Intervention, 4411-801 Vila Nova de Gaia, Portugal; 3 Department of Sport Sciences, University of Beira Interior, 6201-001 Covilhã, Portugal; 4 Research Centre in Sports, Health and Human Development, University of Beira Interior, 6201-001 Covilhã, Portugal. Correspondence: Miguel Rebelo ([email protected]).

Motor development presupposes a set of life-long processes of change. These processes occur mostly during the first years of a child's life, with each child having a different developmental rhythm [1]. Motor skills are fundamental to day-to-day life and represent the key to the child's development [2]. As such, it is important to know the different factors that influence the development of motor skills during childhood. According to the literature, the presence of siblings may be an important factor, because this relationship provides a basis for learning and socialization opportunities in various contexts [3].

The main goal of this study was to determine whether there were differences in the development of motor skills (gross and fine), assessed through the PDMS-2 scales, between children with and without siblings.

In this study 91 children of both sexes participated (30.20 ± 10.56 months). Two groups were created: the sibling group, consisting of 48 children (31.06 ± 10.76 months), and the non-sibling group, consisting of 43 children (29.23 ± 10.37 months). Motor skills were assessed using the PDMS-2 test battery scales [4]. The evaluation was performed over 4 months, 3 times a week and individually (approximately 30 minutes per child). For the data analysis we used descriptive and inferential statistics: the Kolmogorov-Smirnov test was applied to test normality, and the Mann-Whitney test was applied to the independent samples.

The sibling group achieved, on average, better results in all motor skills (gross and fine). However, there were statistically significant differences only in fine motor skills (p = 0.016), where the sibling group obtained the best results (average = 52.29) compared to the non-sibling group (average = 38.98).

These results show that the presence of siblings in the family context positively influences motor development, providing cooperative activities through play and challenges that improve cognitive, social, emotional and physical development.

1. Barreiros J, Neto C. O Desenvolvimento Motor e o Género. Lisboa: Faculdade de Motricidade Humana; 2005.

2. Leonard HC, Hill EL. The impact of motor development on typical and atypical social cognition and language: a systematic review. Child and Adolescent Mental Health. 2014;19(3):163-170.

3. Brody GH. Siblings' direct and indirect contributions to child development. Current Directions in Psychological Science. 2004;13(3):124-126.

4. Folio R, Fewell R. Peabody Developmental Motor Scales-2. Austin, TX: Pro-Ed; 2000.

Motor Development, Family Context, PDMS-2.

O126 Childhood obesity in the urban parishes of Coimbra municipality

Margarida Pereira, Cristina Padez, Helena Nogueira; Research Centre for Anthropology and Health, University of Coimbra, 3000-456 Coimbra, Portugal. Correspondence: Margarida Pereira ([email protected]).

Childhood obesity is a major public health concern worldwide, and Portugal has one of the highest rates of childhood obesity among European countries. Childhood obesity is known to be particularly high in urban settings, so a deeper understanding of the impact of such areas on children's weight is needed. Evidence suggests that parents' perception of neighbourhood safety might influence children's weight, since the perception of an unsafe neighbourhood prevents children from playing outside.

The main goal of this work was to examine the impact of parents' perception of neighbourhood safety on children's weight status, according to the location of the neighbourhood (urban centre or urban periphery).

Weight (kg) and height (cm) of 1,493 children from the Coimbra municipality were measured, and BMI (weight/height²) was calculated; IOTF cut-off points were used to classify the children's weight status. Parents provided their parish of residence as well as their weight, height and number of schooling years. They also responded to a questionnaire on their neighbourhood perceptions and their children's physical activity. The sample was analysed separately, i.e., chi-square tests were computed first for children living in parishes of the urban centre and then for children living in parishes of the urban periphery.

The results showed that overweight and obesity among children residing in the urban centre were associated with being a girl, having low socioeconomic status and having obese parents who strongly agree that their neighbourhood is unsafe to walk in during the day. In the chi-square tests, except for the mother's weight status, none of the variables analysed differentiated normal-weight from overweight or obese children living in the urban periphery.

Overall, parents' perceptions of the environment might impact children's weight status. However, even within the same urban area, perceptions of neighbourhood safety vary, and the aspects that influence children's weight status differ according to the parishes they live in, whether urban centre or peripheral parishes. For example, the parents of a significant proportion of overweight or obese children living in the urban centre parishes perceive their neighbourhood as unsafe to walk in during the day, whereas no differences were found between normal-weight and overweight or obese children from the peripheral parishes. This should be taken into consideration when developing healthy urban planning strategies.

Work funded by the Foundation for Science and Technology (PTDC/DTP-SAP/1520/2014 and grant SFRH/BD/133140/2017).

Childhood Obesity, Urban Settings, Neighbourhood, Safety Perceptions.

O127 Cardiovascular causes of disqualification from competitive sports: young vs. veteran athletes

Ana P Silva, Virgínia Fonseca, Carolina Diniz, Daniel Pereira, Rodrigo Sousa, João Lobato; Escola Superior de Tecnologia da Saúde de Lisboa, Instituto Politécnico de Lisboa, 1990-094 Lisboa, Portugal. Correspondence: Ana P Silva ([email protected]).

Cardiovascular disease is the most common cause of disqualification from competitive sports. Pre-participation screening is fundamental to detecting these diseases and is based on clinical history and physical examination, in addition to a 12-lead electrocardiogram; additional tests are requested only for those with an abnormality in the initial evaluation [1-2]. According to previous studies, the most common cardiovascular diseases that disqualify young athletes differ from those associated with veteran athletes: congenital arrhythmias vs. subclinical coronary disease, respectively [3-5].

To analyse and compare, among young and veteran athletes consecutively screened at a sports medicine unit over a decade (2007-2017), the cardiovascular causes of disqualification from competitive sports.

Descriptive-comparative retrospective study. The study population consisted of all case files of athletes disqualified from competitive sports due to cardiovascular disease during the 2007-2017 period. A sample of 58 case files was divided into group A (young athletes, < 35 years, n = 36) and group B (veteran athletes, ≥ 35 years, n = 22). Clinical history, sport disciplines, symptoms and cardiovascular diseases were evaluated. Descriptive statistics and statistical inference (chi-squared test) were applied for the characterization and comparison of the study variables.

Both groups consisted mainly of male athletes (group A 94.4%, group B 100%). The most reported symptom in group A was palpitations (16.7%), whereas in group B it was chest pain (36.4%). There was a significant association between a relevant cardiovascular history and veteran athletes. The most frequent cardiovascular diseases in group A were hypertrophic cardiomyopathy (19.4%), arterial hypertension (11.1%), left ventricular noncompaction (8.3%) and great vessel transposition (8.3%). Arterial hypertension (50%) and coronary disease (45.4%) were the diseases that most frequently disqualified veteran athletes from competitive sports. It is important to emphasize that some veteran athletes presented more than one cardiovascular cause of disqualification simultaneously.

The most frequent cardiovascular diseases in groups A and B matched those found in the literature [3-5]. The prevalence of hypertrophic cardiomyopathy and coronary disease in the respective groups may be associated with a higher awareness of the dangers of these particular diseases in competitive sports. The data in this study confirm the key role of pre-participation screening in identifying cardiovascular diseases that can cause sudden cardiac death during sport.

1. Corrado D, Pelliccia A, Bjørnstad H, Vanhees L, Biffi A, Borjesson M, et al. Cardiovascular pre-participation screening of young competitive athletes for prevention of sudden death: proposal for a common European protocol. European Heart Journal. 2005;26:516-524.

2. Despacho no 25 357/2006. D.R. no 238 de 13.12.2006 - 2a Série, (2006).

3. Abbatemarco J, Bennett C, Bell A, Dunne L, Matsumura M. Application of Pre-participation Cardiovascular Screening Guidelines to Novice Older Runners and Endurance Athletes. SAGE Open Medicine. 2016;4:1–8.

4. Pescatore V, Basso C, Brugin E, Bigon L, Compagno S, Reimers B et al. Cardiovascular causes of disqualification from competitive sports in young athletes and long term follow-up. European Heart Journal. 2013;34(sup 1):1783.

5. Corrado D, Basso C, Schiavon M, Thiene G. Screening for hypertrophic cardiomyopathy in young athletes. The New England Journal of Medicine. 1998;339:364-369.

Cardiovascular diseases, Competitive sports, Pre-participation screening.

O128 Bioethics, health promotion and sustainability: interfaces in higher education

Ivani N Carlotto, Maria AP Dinis; Energy, Environment and Health Research Unit, Energy, Environment and Environmental & Public Health Research Laboratories, Fernando Pessoa University, 4249-004 Porto, Portugal. Correspondence: Ivani N Carlotto ([email protected]).

Universities are essential institutions for health promotion (HP) [1]. As they have their own ethos and distinct cultures, they may act as potential enhancers of the conceptual frameworks of HP and of interdisciplinary values such as equity, social justice and sustainable growth [2]. Bioethics, as a transversal discipline, seeks to ethically analyse and systematize such values, strengthening the synergy between health and sustainability [3]. Bioethics is a reflexive, mutually shared and interdisciplinary tool whose goal is to promote health and sustainability in an integrated and coherent way, shaping life actions in their equitable and inclusive character.

1) Identify how bioethics takes place in daily life and how it is possible to establish links between scientific and ethical knowledge, in order to avoid negative impacts on people's lives; 2) Describe the appropriate bioethical tools (principles) for intervention in the context of higher education (HE), HP and sustainability.

Exploratory-descriptive methodology using a quantitative-qualitative approach [4]. Sample: university teachers from Rio Grande do Sul, Brazil; random sample, probabilistic sampling by convenience, CI = 95%, n = 1400 persons. The research was approved by the Research Ethics Committee of the Hospital de Clínicas of Porto Alegre (HCPA), Brazil, and by the Ethics Committee of the Universidade Fernando Pessoa (UFP), Porto, Portugal, receiving approval number CAAE 55066616.8.0000.5327/Plataforma Brasil/Brazil. The interviews were carried out after obtaining informed consent from the participants, in accordance with Resolution 466/2012 of the Brazilian National Health Council (NHC).

Beyond the principlist formulation - beneficence, non-maleficence, justice and respect for autonomy [5] - certain underlying referentials, such as solidarity, shared commitment and healthy environment/sustainability, were evoked, having a positive impact on HP, individual and collective well-being, quality of life, inclusion and social justice in the university environment.

HE plays a fundamental role in the HP of its faculty teachers. Universities act as places for investigation and learning in a way that invigorates HP activities [6]. Bioethics, as a transdisciplinary activity, seeks to help build qualified actions in health that uphold and promote well-being, cohesion, inclusion, sustainability and social justice, with the conceptual clarity that resides therein [2, 7].

1. Dooris M, Doherty S, Cawood J, Powell S. The Healthy Universities approach: Adding value to the higher education sector. In: Health promotion settings: Principles and practice. London: Sage; 2012. p. 153-169.

2. Dooris M, Doherty S, Orme J. The application of salutogenesis in universities. In: The Handbook of Salutogenesis. England: Springer; 2017.

3. Garrafa V. Da bioética de princípios a uma bioética interventiva. Bioética. 2005;13:125-134.

4. Prodanov CC. Metodologia do trabalho científico: métodos e técnicas da pesquisa e do trabalho acadêmico. Novo Hamburgo: Feevale; 2013.

5. Beauchamp TL, Childress JF. The principles of biomedical ethics. New York: Oxford;1979.

6. Organização PanAmericana de Saúde [http://www.paho.org/]. Regional program on bioethics. [Accessed 02 May 2017]. Available at: http://www.paho.org/hq/index.php?option=com_content&view=article&id=5582%3A2011-regional-program-onbioethics&catid=3347%3Abioethics&Itemid=4124&lang=es

7. Carlotto IN, Dinis MAP. Bioética e promoção da saúde docente na educação superior: uma interface necessária. Revista Saber & Educar. 2017;23:168-179.

Bioethics, Health Promotion, Higher Education, Sustainability.

O129 Pilot program to develop clinical skills in counseling-based motivational interview (CBMI) to prevent obesity in Chile

Ricardo Cerda 1, Daniela Nicoletti 1, Macarena P Lillo 3, Margarita Andrade, Patricia Galvez 1, Lorena Iglesias 1, Denisse Parra 2, Magdalena C Coke, Natalia Gomez 1; 1 Department of Nutrition, Faculty of Medicine, University of Chile, Santiago, Chile; 2 Department of Nursing, Faculty of Medicine, University of Chile, Santiago, Chile; 3 School of Journalism, Faculty of Communication and Letters, University Diego Portales, Santiago, Chile. Correspondence: Ricardo Cerda ([email protected]).

To influence the mediating variables of behavioural change in health and adherence to treatment in an individual context, health professionals and users must develop a helping relationship mediated by effective communication. At the same time, health professionals must trigger processes that allow users to recognize and develop intrinsic motivation towards change. In this sense, a pilot program is proposed for training primary health care (PHC) nutritionists in CBMI, developing knowledge and tools to foster behavioural change in users. The pilot project was developed as part of the Chilean health program “Vida Sana”.

To describe a pilot training program to develop clinical skills in CBMI among PHC nutritionists, aimed at preventing obesity in Chile.

A training program comprising 34 face-to-face hours and 8 hours of workplace accompaniment was built for 13 nutritionists. The program was based on a constructivist approach centred on the development of skills in the following sequence: critical analysis of regular practice, understanding of adherence and behavioural change, communication skills, motivational interviewing skills, and skill integration in simulated and real situations. The program employed psychometric scales for motivation, beliefs and self-efficacy in CBMI, video analysis, observations performed at the Centre for Clinical Skills of the Facultad de Medicina de la Universidad de Chile, and accompaniment at PHC centres.

Participant knowledge increased on average from 5.25 to 20.85 (p = 0.008). The average total score did not vary between the beginning and the end (74 points). Effective beliefs increased from 61.3 to 68.7 (p < 0.05) and self-efficacy from 1617 to 1851 (p < 0.05). Observation and video analysis showed that the nutritionists moved from delivering information to open and strategic inquiry during the course. Accompaniment showed that skills deepened and the level of satisfaction improved with practice.

This is an innovative program that incorporates CBMI and defines a methodology centred on reflection, practice and accompaniment in real and simulated situations. It is still necessary to evaluate the effects on indicators of user behaviour and the impact on adult obesity. This training program represents a tool to promote behavioural change and adherence in PHC in order to prevent obesity.

The project was financed by CONICYT: FONIS SA16I0122.

Behavior Change, Motivational Interview, Professional Education, Obesity, Nutritionists Skills.

O130 Childhood body fat and motor competence in elementary school (5 to 9 years old)

Francisco Campos 1, Ricardo Santos 1, Mariana Temudo 1, Kátia Semedo 1, Diogo Costa 1, Ricardo Melo 1, Fernando Martins 1,2; 1 Coimbra Education School, Polytechnic Institute of Coimbra, 3030-329 Coimbra, Portugal; 2 Instituto de Telecomunicações, University of Beira Interior, 6201-001 Covilhã, Portugal. Correspondence: Francisco Campos ([email protected]).

Obesity rates have increased globally in recent decades, justifying the denomination “public health epidemic”. According to some studies [1], childhood overweight and obesity in Portugal affects about 31.5% of elementary school children, with higher values for girls, except between 7.5 and 9.0 years old. Overweight and obesity are strongly related to childhood motor competence [2]. To assess overweight and obesity, the body fat percentage (BFP) is recommended, among other measures, classified by age and gender using the McCarthy BFP centiles [3].

The main objectives of this investigation were: 1) to characterize overweight/obesity in elementary school children and compare it by gender and age; 2) to correlate overweight/obesity in elementary school children with motor competence.

Data were collected from 604 children aged 5 to 9 years (7.40 ± 1.16 years old; 295 female) from the 10 elementary schools of the “Agrupamento de Escolas de Montemor-o-Velho” (Coimbra, Portugal), using: a) the electrical bio-impedance method (model BC-533®) to assess BFP; and b) a battery of physical tests [shifting platforms and lateral jumps (stability); shuttle run and standing long jump (locomotion); throwing velocity and kicking velocity (manipulation); and handgrip strength] to assess motor competence [3]. Data analysis was conducted using IBM SPSS software (version 24.0) at a statistical significance level of 10%.

In this study, only 58.8% (n = 355) of the elementary school children had normal weight, while 41.2% showed overweight/obesity [overweight: 17.0% (n = 103); obesity: 24.2% (n = 146)]. There were no statistically significant differences between genders (Mann-Whitney; p = 0.519). By age (Kruskal-Wallis), there were statistically significant differences (p = 0.001), especially between the 5-year-olds (Md = 2) [p = 0.016 (7 years old, Md = 1); p = 0.003 (8 years old, Md = 1); p = 0.021 (9 years old, Md = 1)] and the 6-year-olds (Md = 2) [p = 0.005 (7 years old, Md = 1); p = 0.001 (8 years old, Md = 1); p = 0.013 (9 years old, Md = 1)]. For the interpretation of Md, normal weight is coded 1, overweight 2 and obesity 3. The Spearman test (r) revealed statistically significant correlations, two positive [shuttle run (p = 0.001; r = 0.136); handgrip (p = 0.002; r = 0.123)] and three negative [lateral jump (p = 0.001; r = -0.174); standing long jump (p = 0.001; r = -0.249); throwing velocity (p = 0.072; r = -0.073)].

It is important to take into account the current recommendations and concerns of the WHO [4] (healthy eating habits, regular physical activity), improving body composition and motor competence from an early age in childhood, which will probably result in healthier adults and minimize potential public health problems.

This work is funded by FCT/MEC through national funds and, when applicable, co-funded by FEDER - PT2020 partnership agreement under the project UID/EEA/50008/2013 and by QREN, Mais Centro - Programa Operacional Regional do Centro, FEDER (CENTRO-07-CT62-FEDER-005012; ID: 64765).

1. Venâncio P, Aguilar S, Pinto G. Obesidade infantil… um problema cada vez mais atual. Revista Portuguesa de Medicina Geral e Familiar 2012;28:410-416.

2. Luz C, Cordovil R, Almeida G, Rodrigues L. Link between motor competence and Health related fitness in children and adolescents. Sports 2017;5(41):1-8.

3. McCarthy H, Cole T, Fry T, Jebb S, Prentice A. Body fat reference curves for children. International Journal of Obesity 2006;30:598-602.

4. Inchley J, Currie D, Jewell J, Breda J, Barnekow V. Adolescent obesity and related behaviours: trends and inequalities in the WHO. Copenhagen: WHO; 2017.

Body Fat Percentage, Elementary School, Motor Competence, Obesity, Overweight.

O131 Affectivity assessment of working pregnant women regarding the psychological requirements of work

Maria S Medina, Valeriana G Blanco; Universidad de Burgos, 09001 Burgos, Spain. Correspondence: Valeriana G Blanco ([email protected]).

The labour situation of pregnant women has special connotations, both physical and psychological, which may influence work performance and perceived well-being, thereby also interfering in the development of different emotions.

With this in mind, the present work aims to evaluate the relationship between the psychological requirements of pregnant women's work and affectivity.

The study used a convenience sample of 165 pregnant working women living in Burgos (Spain). The study is cross-sectional in nature, and data collection was carried out with the PANAS questionnaire for affectivity rating, the ISTAS for psychological demands and an ad hoc questionnaire to collect identification data. The criterion variables were: psychological work requirements (EP), positive affectivity (AP) and negative affectivity (AN).

The results show that, regarding the psychological requirement variables, pregnant women have levels of exposure that are highly unfavourable to health.

The analysis showed a significant relationship between the EP and AN variables: pregnant women with a favourable exposure level to psychological requirements (EP) showed high positive affectivity (AP) and less negative affectivity, while pregnant women with an unfavourable psychological exposure level (EP) showed more negative affectivity (AN).

Working pregnant woman, Affectivity assessment, Psychological requirements of the work.

O132 The perception of social support and adherence to medication in the person with COPD

Sílvia Vieira 1, Celeste Bastos 1,2, Lígia Lima 1,2; 1 Nursing School of Porto, 4200-072 Porto, Portugal; 2 Center for Health Technology and Services Research, University of Porto, 4200-450 Porto, Portugal. Correspondence: Sílvia Vieira ([email protected]).

COPD (Chronic Obstructive Pulmonary Disease) is a chronic and incapacitating disease, characterized by the presence of persistent respiratory symptoms and a gradual decrease in energy [1-3]. The person with COPD has to cope with a complex therapeutic regimen and with the progressive worsening of the clinical condition [3], which may compromise their capacity for self-care. Therefore, people with COPD need support to manage the disease and the therapeutic regimen [4-6].

The aims were to study the perception of people with COPD about their social support and their level of adherence to medication, and to analyse the association between perceived social support and medication adherence.

This is a quantitative, descriptive and cross-sectional study, with a sample of 45 adults diagnosed with COPD, admitted to a medical ward of a hospital in northern Portugal between February and May 2017. Participants' mean age was 71 years (SD = 11.9); they were mostly male (86.7%) and married (72.7%), and had a low level of education. The measures used were: a sociodemographic and clinical questionnaire, the Social Support Scale (SSS) and the Reported Adherence to Medication Scale (RAMS).

The results showed that the study participants perceived a positive social support (M = 3.5, SD = 0.8). The higher scores were found for the dimension of family and affective support (M = 3.9, SD = 1.0), and the lowest scores were found in the financial support dimension (M = 2.7, SD = 1.0). In relation to the treatment of COPD, most participants reported high adherence levels (M = 12.5, SD = 4.4). A positive association was found between perceived social support and medication adherence (r = 0.46, p = 0.001).

Our results support the importance of social support for adherence to medication in the person with COPD. The study also suggests the existence of a group of patients at higher risk in terms of lack of social support and non-adherence to medication, pointing to the need to develop nursing interventions focused on promoting the self-management of COPD.

1. Global Initiative for Chronic Obstructive Lung Disease. Pocket Guide to COPD Diagnosis, Management, and Prevention. A Guide for Health Care Professionals (2017 Report). Global Initiative for Chronic Lung Disease, Inc.; 2017.

2. Criner G, Bourbeau J, Diekemper R, Ouellette D, Goodridge D, Stickland M, et al. Prevention of acute exacerbations of COPD: American College of Chest Physicians and Canadian Thoracic Society Guideline. Chest 2015;147(4):894-942.

3. Wedzicha J, Miravitlles M, Hurst J, Calverley P, Albert R, Krishnan J, et al. Management of COPD exacerbations: a European Respiratory Society/American Thoracic Society guideline. Eur Respir J. 2017 Mar 15;49(3). pii: 1600791.

4. Korpershoek Y, Bos-Touwen I, de Man-van Ginkel J, Lammers J, Schuurmans M, Trappenburg J. Determinants of activation for self-management in patients with COPD. Int J Chron Obstruct Pulmon Dis. 2016;11:1757-66.

5. Halding A, Grov E. Self-rated health aspects among persons living with chronic obstructive pulmonary disease. Int J Chron Obstruct Pulmon Dis. 2017 Apr 12;12:1163-1172.

6. Fotokian Z, Mohammadi Shahboulaghi F, Fallahi-Khoshknab M, Pourhabib A. The empowerment of elderly patients with chronic obstructive pulmonary disease: Managing life with the disease. Plos One 2017;12(4):e0174028.

COPD, Chronic obstructive pulmonary disease, Medication adherence, Social support.

O133 The organizational commitment of health professionals (doctors, nurses and auxiliaries) in two public hospitals in Cape Verde

Jacqueline Delgado 1, António Nunes 1,2, Amélia Nunes 1; 1 Universidade da Beira Interior, 6201-001 Covilhã, Portugal; 2 Núcleo de Estudos em Ciências Empresariais, 6200-209 Covilhã, Portugal. Correspondence: António Nunes ([email protected]).

Organizational commitment (OC) has its origin in the “side bets” theory, representing the result of an accumulation of bets that can be lost if an activity is interrupted [1]. The term is understood as the maintenance of belonging to the organization, this being something of value in which the individual has invested [2]. That is, as individuals work, they create bonds, commit themselves and keep investing in the organization. The three-dimensional model [3] identifies three dimensions of OC: affective commitment, which consists in the feeling or desire to participate in the organization; continuance commitment, which consists in the need to remain in the organization, given the cost of leaving; and normative commitment, which consists in the worker's sense of obligation to remain in the organization.

The objective of this study is to measure OC levels, in their several dimensions, among health professionals (physicians, nurses and auxiliaries) in two public hospitals in Cape Verde, considering the importance of sociodemographic variables (age, gender, marital status and academic qualifications) and of working-context variables (work income, seniority in the organization, type of contract and hierarchical position) for the OC levels revealed.

The study used a quantitative methodology to evaluate the impact of sociodemographic and professional context variables on OC levels. In order to measure OC, we used the scale of three components: affective, normative and calculative [3], adapted for the Portuguese language in 2008 [4]. The sample consisted of 224 health professionals.

The scale presented good internal consistency (Cronbach's alpha of 0.85). Median OC values correlated positively with age; at the same time, low OC levels were identified at higher education levels and high OC values at lower education levels. Finally, OC levels were also significantly higher for the less qualified professionals: auxiliaries showed the highest levels, while doctors showed the lowest.

The positive and statistically significant relationship between age and OC is emphasized, implying higher OC levels in the older age groups, as identified in previous studies [5-8]. The inverse relation between OC levels and academic qualifications, as identified by other authors [2-3, 5, 8-9], is also of interest, as is the fact that the lowest OC levels appear in the most qualified professions, doctors and nurses, an aspect not addressed in the literature and one that characterizes the health professionals of Cape Verde.

1. Becker HS. Notes on the concept of commitment. Am J Sociol 1960;66(1):32-40.

2. Meyer JP, Allen NJ. Testing “side-bet theory” of organizational commitment: some methodological considerations. J Appl Psychol 1984;69(3):372-378.

3. Meyer JP, Allen NJ. A three-component conceptualization of organizational commitment. Hum R manage R 1991;1(1):61-89.

4. Nascimento JL, Lopes A, Salgueiro MDF. Estudo sobre a validação do “Modelo de Comportamento Organizacional” de Meyer e Allen para o contexto português. Comp. Org. Gestão; 2008.

5. Mathieu JE, Zajac DM. A review and meta-analysis of the antecedents, correlates, and consequences of organizational commitment. Psychol Bull 1990;108(2):171.

6. Addae HM, Praveen KP, Velinor N. Role stressors and organizational commitment: public sector employment in St Lucia. Int J Manpow 2008;29(6):567-582.

7. Allen NJ, Meyer JP. The measurement and antecedents of affective, continuance and normative commitment to the organization. J Occup Organ Psychol 1990;63(1):1-18.

8. Angle HL, Perry JL. An empirical assessment of organizational commitment and organizational effectiveness. Adm Sci Q 1981:1-14.

9. Mowday RT, Steers RM, Porter LW. The measurement of organizational commitment. J Vocat Behav 1979; 14 (2):224-247.

Organizational commitment, Health professionals, Physicians nurses and auxiliaries, Cape Verde (Africa).

O134 Sleep quality and food intake of high school students

Ana SC Carvalho 1, Adília P Fernandes 2, Josiana A Vaz 2, Ana B Gallego 3, Matilde S Veja 3; 1 Unidade Local de Saúde do Nordeste, 5301-852 Bragança, Portugal; 2 Escola Superior de Saúde, Instituto Politécnico de Bragança, 5300-146 Bragança, Portugal; 3 Universidad de León, 24071 León, Spain. Correspondence: Ana SC Carvalho ([email protected]).

Poor sleep quality is associated with increased food intake and poor diet quality [1]. People with lack of sleep show a positive correlation between free time and food intake and also experience hormonal and brain changes that drive the intake of high-calorie food [1-3]. In addition, research has shown that a healthy and balanced diet positively influences the quality of sleep [1].

The present study set out to assess the sleep quality of high school students in the county of Bragança and its association with food intake.

The study used a non-experimental, analytical, cross-sectional methodology of epidemiological character with a quantitative approach. The intention was to study a population of 862 high school students; however, because consent was required from both legal guardians and students, a smaller sample of 345 students was obtained. The data were collected in May 2017 through a questionnaire that included the Pittsburgh Sleep Quality Index (PSQI), validated for the Portuguese population.

The PSQI analysis showed that 39.71% (n = 137) of participants had poor sleep quality (PSQI > 5 points). The association between sleep quality and food intake was assessed, and statistically significant associations were found between sleep quality and the intake of snacks (χ² = 17.144; p < 0.001), sugary products (χ² = 18.603; p < 0.001), fast food (χ² = 12.353; p = 0.002) and ready meals (χ² = 14.852; p < 0.001). The risk of poor sleep quality is higher in young people who frequently eat snacks (OR: 2.811; 99%), sugary products (OR: 1.901; 95%), fast food (OR: 4.000; 99%) and ready meals (OR: 5.621; 95%) compared with young people who rarely eat this sort of food. Sleep quality is also significantly related to the number of meals young people have in a day (χ² = 7.580; p = 0.023): the risk of poor sleep quality is 2.240 times higher in young people who rarely eat 4-6 meals a day.

A correlation between sleep quality and food intake was observed in the sampled students. The risk of poor sleep quality is higher in students who frequently eat a high-calorie diet and in students who rarely have 4-6 meals a day. There are several connections between sleep quality and eating habits; sleep promotion and its connection with a standard diet should be included as an essential part of community empowerment for health-promoting lifestyles [1, 4, 5].

1. McNeil J, Doucet E, Chaput JP. Inadequate Sleep as a Contributor to Obesity and Type 2 Diabetes. Canadian Journal of Diabetes. 2013;37:103-108.

2. Dewald JF, Meijer AM, Oort J, Kerkhof GA, Bogels SM. The influence of sleep quality, sleep duration and sleepiness on school performance in children and adolescents: A meta-analytic review. Sleep Medicine Reviews, 2010;14:179–189.

3. Paiva T. Bom Sono, Boa Vida. Cruz Quebrada: Oficina do Livro; 2008.

4. Lakshman R, Elks CE, Ong KK. Childhood obesity. Circulation 2012;126(14):1770-1779.

5. Direção-Geral da Saúde. Programa Nacional de Saúde Escolar. Lisboa: Ministério da Saúde de Portugal; 2015.

Sleep Quality, Food intake, Balanced diet.

O135 Education matters!!! The link between childhood obesity and parents’ level of education

Ricardo Melo 1, Ana Inácio 1, Mariana Pereira 1, Miguel Santos 1, Simão Sousa 1, Francisco Campos 1, Fernando Martins 1,2; 1 Applied Sport Sciences Research Unit, Coimbra Education School, Polytechnic Institute of Coimbra, 3030-329 Coimbra, Portugal; 2 Instituto de Telecomunicações, University of Beira Interior, 6201-001 Covilhã, Portugal. Correspondence: Ricardo Melo ([email protected]).

Obesity is a public health problem in most developed countries [1,2]. In Portugal the scenario is particularly serious, as it is one of the European countries with the most obese children [3,4], a situation associated with poor eating habits, low levels of physical activity and sedentary lifestyles [2].

The objectives of this investigation are: I) to determine the prevalence of overweight/obesity in elementary school children; II) to compare children’s levels of body mass by age and gender; III) to verify correlations between children’s levels of body mass and family socio-demographic characteristics.

The sample was composed of 294 children aged 5-9 years (M ± SD = 7.35 ± 1.18 years old; 147 female) from the 10 elementary schools of the “Agrupamento de Escolas de Montemor-o-Velho (AEMMV)” (Coimbra, Portugal). Data were collected from September to December 2017. Family socio-demographic data were collected using a survey questionnaire applied to the parents of participating children. Weight was evaluated using a Tanita Body Composition Monitor (model BC-420 SMA) and height was measured with a stadiometer. Body Mass Index (BMI) was calculated as weight/height². The definitions of underweight (level 1), normal weight (level 2), overweight (level 3) and obesity (level 4) were based on the tables in use by the Portuguese Directorate-General for Health [5], which relate BMI to percentile tables. Data analysis was conducted using IBM SPSS (version 24.0, Chicago, USA), with statistical significance set at 10.0%.

The results show that 17.7% of the evaluated children are overweight and 16.3% are obese (34.0% overweight/obese). No statistically significant differences were observed by gender (Mann-Whitney = 10416; p = 0.529) or by age (Kruskal-Wallis test = 4.01; p = 0.405). The Spearman correlation test (r) also showed no statistically significant relations between levels of body mass and parents' age (mother: r = -0.031, p = 0.608; father: r = 0.015, p = 0.797) or household composition (r = -0.040, p = 0.499). However, a negative correlation exists between body mass levels and parents' education (mother: r = -0.136, p = 0.019; father: r = -0.158, p = 0.006), evidencing that the higher the parents' level of education, the lower the prevalence of high levels of body mass (overweight/obesity).

Although policies to tackle obesity are being implemented, the results of this study show a high prevalence of overweight/obese children in the AEMMV. The results also confirm that parents' education is a strong social determinant of health [1]. This study suggests that public authorities need to implement more efficient programs (e.g. nutrition and physical activity) in schools and the community to promote active and healthier lifestyles.

This work is funded by FCT/MEC through national funds and, when applicable, co-funded by FEDER - PT2020 partnership agreement under the project UID/EEA/50008/2013 and by QREN, Mais Centro - Programa Operacional Regional do Centro, FEDER (CENTRO-07-CT62-FEDER-005012; ID: 64765). The authors would also like to thank the Agrupamento de Escolas de Montemor-o-Velho, the Câmara Municipal de Montemor-o-Velho and the Unidade de Cuidados na Comunidade de Montemor-o-Velho.

1. OECD [internet]. Obesity Update 2017. Retrieved from https://www.oecd.org/els/healthsystems/Obesity-Update-2017.pdf

2. WHO [internet]. Adolescent obesity and related behaviours: trends and inequalities in the WHO European Region, 2002–2014. 2017. Retrieved from http://www.euro.who.int/__data/assets/pdf_file/0019/339211/WHO_ObesityReport_2017_v3.pdf

3. Padez C, Fernandes T, Mourão I, Moreira P, Rosado V. Prevalence of overweight and obesity in 7-9-year-old Portuguese children: trends in body mass index from 1970-2002. Am J Human Biology. 2004;16(6):670-678.

4. Venâncio P, Aguilar S, Pinto G. Obesidade infantil… um problema cada vez mais atual. Revista Portuguesa de Medicina Geral e Familiar 2012;28:410-416.

5. Divisão de Saúde Materna, Infantil e dos Adolescentes da Direcção Geral da Saúde. Actualização das Curvas de Crescimento. Circular Normativa Nº: 05/DSMIA; 2016.

Body Mass Index, Education, Health, Obesity, Overweight.

O136 Relationship between the -1562 C/T polymorphism in the MMP-9 gene and multiple sclerosis

Ana Valado 1, Maria J Leitão 2, Lívia Sousa 3, Inês Baldeiras 4; 1 Departamento de Ciências Biomédicas Laboratoriais, Escola Superior de Tecnologia da Saúde de Coimbra, Instituto Politécnico de Coimbra, 3046-854 Coimbra, Portugal; 2 Centro de Neurociências e Biologia Celular, 3004-504 Coimbra, Portugal; 3 Serviço de Neurologia, Centro Hospitalar e Universitário de Coimbra, 3000-075 Coimbra, Portugal; 4 Faculdade de Medicina, Universidade de Coimbra, 3004-504 Coimbra, Portugal. Correspondence: Ana Valado ([email protected]).

Matrix metalloproteinases (MMPs), particularly MMP-9, have shown an association with the influx of inflammatory cells into the CNS, disruption of the blood-brain barrier and demyelination in Multiple Sclerosis (MS). The transcriptional activity of the MMP-9 gene is influenced by the -1562 C/T polymorphism in the promoter region of the gene, and the T allele has been suggested as a genetic risk factor for MS.

To investigate the presence of the -1562 C/T polymorphism in the MMP-9 gene in healthy controls and MS patients, and its association with the clinical course of the disease.

Whole blood DNA was extracted from 169 patients (143 RRMS, 20 SPMS, 6 PPMS) and 186 controls, and the presence of the polymorphism was detected by PCR-RFLP. Quantification of MMP-9 was performed in 96 patients and 63 controls by ELISA. Data from patients were analysed for associations between the polymorphism distribution and clinical factors (gender, age at onset, disease duration, EDSS score and disease course).
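A carrier-frequency comparison of this kind is commonly tested with a 2×2 contingency-table χ²; a minimal sketch using the carrier counts reported in the results below (whether the authors used χ² or another exact test is not stated, so this is an illustrative assumption):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Carriers of the -1562 T allele vs non-carriers, using the counts
    # given in the results: 39/169 patients, 41/186 controls.
    table = np.array([[39, 169 - 39],
                      [41, 186 - 41]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # expected: non-significant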

The -1562 T allele was present in 39 patients and 41 controls, with no significant difference between groups (p = 0.533). However, in MS patients, but not in controls, more women presented with the -1562 T allele than men (p = 0.014). In patients, the distribution of the polymorphism was not significantly associated with age at onset (p = 0.759), disease duration (p = 0.309), progression of the disease (p = 0.121) or disability status (p = 0.180). The levels of MMP-9 in serum were significantly higher in MS patients compared to controls (p = 0.001). There was also an increase in serum MMP-9 values in controls that carried the T allele (p = 0.003), but not in MS patients.

The -1562 C/T polymorphism, at least in our population, does not seem to be a susceptibility risk factor for MS. However, in patients, there seems to be an association between the T allele and female gender.

-1562C/T polymorphism, MMP-9, MS.

O137 Falls prevention in older people living in nursing homes in Northern Portugal

Isabel Lage, Odete Araújo, Manuela Almendra, Fátima Braga, Rui Novais, School of Nursing, University of Minho, 4704-553 Braga, Portugal, correspondence: Odete Araújo ([email protected]).

Falls in older people are the leading cause of injury-related mortality and morbidity. People aged 65 and older have the highest risk of falling, with 30% of people older than 65 falling at least once a year [1]. A fall can have significant adverse outcomes, including injury, hospitalization and admission to long-term care, development of fear of falling, activity restriction and social isolation [2, 3].

The aim of this study was to describe the risk of falling in older people living in nursing homes in northern Portugal.

A descriptive correlational study was conducted. A total of 833 participants (mean age 83 years) were recruited from 14 nursing homes in Northern Portugal. Statistical analysis was performed using the Statistical Package for Social Sciences (SPSS®) version 22.0, with descriptive and inferential analyses at a significance level of 0.05.

The results showed that older men have a lower probability of falling than older women (OR = 0.581). In addition, older people able to walk independently and to talk also have a lower probability of falling (OR = 0.431 and OR = 0.360, respectively). Conversely, older people with walking difficulties or using technical aids have a higher risk of falling (OR = 1.944 and OR = 1.518).
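Odds ratios such as these are typically obtained by exponentiating logistic-regression coefficients; the following is a minimal statsmodels sketch on hypothetical data (the study used SPSS, and its exact model specification is not reported, so the predictors below are assumptions):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 833
    male = rng.integers(0, 2, size=n)                 # hypothetical predictor
    walks_independently = rng.integers(0, 2, size=n)  # hypothetical predictor
    fell = rng.integers(0, 2, size=n)                 # hypothetical outcome

    X = sm.add_constant(np.column_stack([male, walks_independently]))
    model = sm.Logit(fell, X).fit(disp=False)
    odds_ratios = np.exp(model.params[1:])  # one OR per predictor
    print(odds_ratios)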

These findings support the idea that ongoing assessment, rather than admission assessment alone, may be more important for identifying risk factors for falls in older people after institutionalization, in order to prevent falls.

1. NICE. Falls: assessment and prevention of falls in older people. UK: NICE accredited; 2013.

2. Pellicer García B, Juárez Vela R, Delgado Sevilla D, Redondo Castan LC, Martínez Abadía B, Ramón Arbués E. [Prevalence and profile of the elderly home care valid suffering in a private residence falls]. Revista de enfermería. 2013;36(12):8-16.

3. Yingfeng Z. Falls in older people in long-term care. Lancet. 2013;381 (9873):1179.

Falls, Older people, Nursing homes.

O138 “+ COOLuna” – intervention program of physiotherapy in schools at ACeS Baixo Vouga

Vitor Ferreira 1,2, Ana Oliveira 1, Maritza Neto 1, Marta Santo 1, 1 Agrupamento de Centros de Saúde do Baixo Vouga, 3800-159 Aveiro, Portugal; 2 School of Health, University of Aveiro, 3810-193 Aveiro, Portugal, correspondence: Vitor Ferreira ([email protected]).

Musculoskeletal pain in children is one of the most common reasons to seek medical attention. The most common musculoskeletal pain conditions are nonspecific or idiopathic and include regional pain in the spine, with a high prevalence [1]. Multifactorial causes are indicated, such as social, psychological, physiological and environmental factors [2, 3]. Among the environmental factors, the carriage of schoolbags is pointed out as a contributor to the high prevalence of musculoskeletal pain [3-5]. However, some studies report that the weight of schoolbags has little influence on the perception of pain, mainly in the spine [6, 7]. Nevertheless, musculoskeletal pain in childhood can persist throughout adolescence and increases the risk of experiencing chronic pain in adulthood [8-10]. At this stage, adolescents undergo a period of accelerated musculoskeletal growth and development, with spinal structures being sensitive to external aggressions [11].

The aim of this study was to evaluate musculoskeletal pain due to schoolbag carriage in terms of prevalence, intensity and predisposing risk factors in 5th-grade students attending schools within the area covered by the community health centres of the Aveiro region, during the 2016-2017 school year.

A cross-sectional study was designed. The presence, intensity and duration of pain were assessed using a body chart and a numeric rating scale for pain. Predisposing risk factors were assessed by means of an ad hoc questionnaire.

A total of 960 children (male 51.1%; female 48.6%) with a mean age of 10.4 years (± 7.6) were included. The majority had backpacks (96.6%) and 82.4% (n = 775) carried the backpack over 2 shoulders. The mean schoolbag weight (4.9 ± 1.3 kg) represented a mean percentage of body weight (%BW) of 13.0% (± 4.8). Only 29.3% carried schoolbags that were ≤ 10 %BW. The majority (79.9%) carried schoolbags to school for ≤ 15 min. The overall prevalence of musculoskeletal pain was 37.8%, and prevalence in the spine region was low (16.0%). A multiple linear regression model indicated that pain was explained only by the number of hours of physical activity (negative correlation: r = -0.367), accounting for 12.4% of the variance (R² = 0.124; p = 0.001, SEE = 0.143).
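The %BW computation and a regression of the kind reported above can be sketched as follows; this is a minimal illustration on hypothetical arrays (the study's actual model was fitted in SPSS and may have included further covariates):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 960
    body_weight = rng.normal(35, 6, size=n)    # kg, hypothetical
    bag_weight = rng.normal(4.9, 1.3, size=n)  # kg, mean/SD from the abstract

    pct_bw = 100 * bag_weight / body_weight    # schoolbag weight as %BW
    print(f"mean %BW = {pct_bw.mean():.1f}")

    # OLS: pain intensity explained by weekly hours of physical activity.
    activity_h = rng.uniform(0, 10, size=n)
    pain = 5 - 0.3 * activity_h + rng.normal(0, 2, size=n)  # hypothetical
    fit = sm.OLS(pain, sm.add_constant(activity_h)).fit()
    print(f"R^2 = {fit.rsquared:.3f}")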

This study highlights the need to consider the multifactorial nature of musculoskeletal pain in children, and the need to reinforce the protective effect of physical exercise in future prevention programs dedicated to children.

1. Swain MS, Henschke N, Kamper SJ, Gobina I, Ottova-Jordan V, Maher CG. An international survey of pain in adolescents. BMC public health. 2014;14:447.

2. Paananen MV, Taimela SP, Auvinen JP, Tammelin TH, Kantomaa MT, Ebeling HE, et al. Risk factors for persistence of multiple musculoskeletal pains in adolescence: a 2-year follow-up study. European Journal of Pain. 2010;14(10):1026-32.

3. Stinson J, Connelly M, Kamper SJ, Herlin T, Toupin April K. Models of Care for addressing chronic musculoskeletal pain and health in children and adolescents. Best practice & research Clinical rheumatology. 2016;30(3):468-82.

4. Iyer SR. An ergonomic study of chronic musculoskeletal pain in schoolchildren. Indian journal of pediatrics. 2001;68(10):937-41.

5. Noll M, Candotti CT, da Rosa BN, Loss JF. Back pain prevalence and associated factors in children and adolescents: an epidemiological population study. Revista de Saúde Pública. 2016;50:31.

6. Aprile I, Di Stasio E, Vincenzi MT, Arezzo MF, De Santis F, Mosca R, et al. The relationship between back pain and schoolbag use: a cross-sectional study of 5,318 Italian students. The Spine Journal. 2016;16(6):748-55.

7. Dianat I, Sorkhi N, Pourhossein A, Alipour A, Asghari-Jafarabadi M. Neck, shoulder and low back pain in secondary schoolchildren in relation to schoolbag carriage: should the recommended weight limits be gender-specific? Appl Ergon. 2014;45(3):437-42.

8. Hestbaek L, Leboeuf-Yde C, Kyvik KO, Manniche C. The course of low back pain from adolescence to adulthood: eight-year follow-up of 9600 twins. Spine (Phila Pa 1976). 2006;31(4):468-72.

9. Siivola SM, Levoska S, Latvala K, Hoskio E, Vanharanta H, Keinanen-Kiukaanniemi S. Predictive factors for neck and shoulder pain: a longitudinal study in young adults. Spine (Phila Pa 1976). 2004;29(15):1662-9.

10. Hakala P, Rimpela A, Salminen JJ, Virtanen SM, Rimpela M. Back, neck, and shoulder pain in Finnish adolescents: national cross sectional surveys. BMJ. 2002;325(7367):743.

11. Goodburn EA, Ross DA. A picture of health?: a review and annotated bibliography of the health of young people in developing countries. Geneva: World Health Organization; 1995.

Physiotherapy, Schoolbags, Musculoskeletal pain, Children.

O139 Looking over Portuguese school-aged children lifestyles: results from a pilot study

Goreti Marques, Ana R Pinheiro, Fátima Ferreira, Daniela Simões, Sara Pinto, Escola Superior de Saúde de Santa Maria, 4049-024 Porto, Portugal, correspondence: Goreti Marques ([email protected]).

Childhood obesity is considered one of the new epidemics of the 21st century. This study is part of a larger project aimed at improving healthy lifestyles in school-aged children through a transdisciplinary team.

To describe food consumption and sport activities of Portuguese school-aged children.

An exploratory/descriptive pilot study was conducted with third-grade students from two Portuguese primary schools. Data were collected through a self-administered form covering socio-demographic variables, sport activities and anthropometric measures (sex, age, household composition, practice of at least 60 minutes/week of sport activities outside school, weight, height). A booklet was used over five consecutive days to register food consumption. The study was previously approved by an Ethics Committee and by the National Data Protection Commission (NDPC no. 1704/2015). Signed informed consent was obtained from each child’s legal representative. Data were analysed using SPSS® version 24.0.

Preliminary results included 109 school-aged children (mean age = 7.5 years; mean weight = 28.50 kg; mean height = 131.60 cm). Regarding Body Mass Index (BMI), 65.1% of the children had normal weight, 11.9% overweight and 8.3% obesity; underweight emerged in 14.7% of children. The consumption of fruit/vegetables was significantly greater (p < 0.05) in underweight children when compared with normal-weight and overweight/obese children. The average consumption of fat/oil and sugary/salty products seemed smaller in underweight children and greater in overweight/obese children, while the consumption of dairies/meat/fish/eggs, cereals and their derivatives, tubers and water seemed similar; however, statistically significant differences were not found (p > 0.05). Most children (77.1%) performed at least 60 minutes/week of sport activities outside school (66.7% practised only one type, 25.0% practised two, and 8.3% practised three different sports per week). Food consumption was not significantly different between children who practised at least 60 min/week of sports outside school and those who did not.
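A group comparison of consumption across BMI categories like the one above can be run with a Kruskal-Wallis test; a minimal sketch with hypothetical consumption data (the group sizes are approximations derived from the percentages reported, and the distributions are assumptions):

    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(3)
    # Hypothetical daily fruit/vegetable portions per BMI group
    # (group sizes approximated from 14.7%, 65.1% and 20.2% of 109).
    underweight = rng.poisson(4.0, size=16)
    normal = rng.poisson(3.0, size=71)
    over_obese = rng.poisson(2.5, size=22)

    h, p = kruskal(underweight, normal, over_obese)
    print(f"H = {h:.2f}, p = {p:.3f}")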

Though most children have normal weight, the data show important abnormalities in BMI. The consumption of fruit/vegetables appears to be increased in underweight children and decreased in overweight/obese children, which highlights the need for more detailed research. Food consumption does not seem to differ depending on the practice of outside-school sports. Further stages will involve the development of a transdisciplinary healthcare program to improve healthy lifestyles among school-aged children.

This work was funded by project NORTE-01-0145-FEDER-024116.

Childhood obesity, Food consumption, Sport activities, Health promotion.

O140 Dating violence in university context: practices, beliefs and impacts on the health of victims

Sofia Neves 1,2, Ana Sousa 1, Joana Topa 1,2, Janete Borges 1, 1 Instituto Universitário da Maia, 4475-690 Maia, Portugal; 2 Centro Interdisciplinar de Estudos de Género, Instituto Superior de Ciências Sociais e Políticas, Universidade de Lisboa, 1300-663 Lisboa, Portugal, correspondence: Sofia Neves ([email protected]).

Dating violence is an obvious and worrying social and health problem with serious consequences for its victims. It is characterized as a pattern of coercive and abusive tactics employed by one partner in a relationship to gain power and control over the other partner. It can take many forms, including physical violence, coercion, threats, intimidation, isolation, and emotional, sexual or economic abuse, and occurs in the context of intimate heterosexual or homosexual/lesbian relationships. This kind of violence seems to be supported by conservative and traditional gender norms and stereotypes.

The main objective of this study is to characterize university students' beliefs and practices regarding dating violence, identifying the impacts of this type of violence on the psychological, physical, sexual and social health of their victims.

Self-administered questionnaires and a socio-demographic survey were used for data collection: the Gender Belief Inventory (Maia University Institute and Interdisciplinary Centre for Gender Studies, research version, 2017) and the Inventory on Violent Youth Relations (Maia University Institute and Interdisciplinary Centre for Gender Studies, research version, 2017). These were applied to 200 university students (142 females and 55 males), aged 18-44 (M = 20.54; SD = 4.435), who were attending the Maia University Institute. Data analysis was performed using the IBM Statistics Package for the Social Sciences (version 24).

The results showed that 12.8% of students reported having been victims of some act of violence by someone with whom they maintain or maintained an intimate relationship. Men were identified as the main perpetrators, with women having the highest rates of victimization. With regard to the type of violence perpetrated, psychological and social violence appear as the most experienced by students. With regard to social gender beliefs, this study reveals that these students maintain conservative and traditional gender beliefs that continue to perpetuate violence. Regarding the impact of dating violence, respondents were aware of the implications of this violence for the health of the victims.

This study shows that, despite the efforts made in implementing policies and projects to prevent gender violence, these have not been enough to end the practice. A commitment to implementing gender equality programs in school education seems fundamental to preventing this public health problem.

Dating Violence, University Students, Beliefs, Practices, Implications to health.

O141 Defining clinical conditions in long-term healthcare as a first step to implement Time-Driven Activity Based Costing (TDABC)

Ana Sargento 1,2, Ana Querido 3,4, Henrique Carvalho 2, Isa Santos 2, Catarina Reis 2,5, Marisa Maximiano 2,5, Manuela Frederico 6, Sandra Oliveira 7,8, Susana Leal 7,9, 1 Center for Applied Research in Management and Economics, School of Technology and Management, 2411-901 Leiria, Portugal; 2 School of Technology and Management, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 4 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 5 Center for Research in Informatics and Communications, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 6 Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 7 School of Management and Technology, Polytechnic Institute of Santarém, 2001-904 Santarém, Portugal; 8 Center for Health Studies and Research, University of Coimbra, 3004-504 Coimbra, Portugal; 9 Life Quality Research Centre, 2001-904 Santarém, Portugal, correspondence: Ana Sargento ([email protected]).

Increasing healthcare costs are a concern for all developed countries. In Long-Term Healthcare (LTH) this is reinforced by population ageing and the corresponding prevalence of chronic diseases. Thus, it is fundamental to accurately measure costs and outcomes in healthcare, improving the value created for patients, i.e., patient-centred health outcomes per monetary unit of cost [1, 2]. The TDABC methodology applied to healthcare allows identifying the cost of each clinical condition over the full cycle of care, mapping processes, activities, resources and allocated time [3–5]. It has mostly been applied in acute-care settings, partly due to the complexity of defining chronic conditions [6].

This paper focuses on the cost component of a larger ongoing research project (CARE4VALUE), which aims to enhance value creation in LTH providers and is applied to a partner LTH unit. Specifically, the main objective is to define clinical conditions in the context of LTH, as a first step in the implementation of TDABC.

Mixed qualitative and quantitative methods were applied, including: 1) three focus groups conducted with the health team of the LTH unit (physician, nurses, physiotherapist, psychologist, social assistant) to select, discuss and validate the criteria used to define clinical conditions; 2) construction of a composite indicator and testing it on a sample of anonymized clinical data from 21 patients; 3) structured observation of processes throughout the full cycle of care of patients in different conditions. Qualitative data were submitted to content analysis and validated among participants. Quantitative data used in the composite indicator, based on validated scales, were subjected to normalization, aggregation and sensitivity analysis.
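The normalization and aggregation step for a composite indicator of this kind might look like the following minimal sketch; min-max normalization with equal weights is one common choice, but the project's actual weighting scheme and level thresholds are not described, so everything below is an illustrative assumption:

    import numpy as np

    def min_max(x: np.ndarray) -> np.ndarray:
        """Rescale each column of scale scores to [0, 1]."""
        return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

    rng = np.random.default_rng(4)
    # 21 patients x 4 dimensions (physical, social, spiritual, mental):
    # hypothetical raw scores standing in for the validated scales.
    raw = rng.uniform(0, 100, size=(21, 4))
    weights = np.full(4, 0.25)          # equal weights, an assumption

    composite = min_max(raw) @ weights  # aggregated complexity score
    levels = np.digitize(composite, [0.25, 0.5, 0.75]) + 1  # 4 levels
    print(levels)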

One consensual outcome of the focus groups was that, in LTH, the disease or cause of admission is less relevant to costs than the overall complexity of the patient, encompassing physical, social, spiritual and mental dimensions. Accordingly, a multidimensional model classifying patients into four complexity levels was delivered, after being validated and receiving consensus from the LTH team. Additionally, the model will include a logging tool and dashboard to integrate separate patient-centred information and aid patient classification into complexity conditions.

The completion of this step allowed progress in the design and implementation of the cost model, which, in turn, will support value measurement and enhancement in the focal LTH unit. Moreover, all professionals involved stated that their engagement in this phase of the project generated exceptional opportunities for interdisciplinary meetings and debate, contributing to closer ties between different areas of LTH.

1. Porter ME, Kaplan RS. How to pay for health care. Harv Bus Rev. 2016 Jul-Aug;94(7-8):88-98, 100, 134.

2. Schupbach J, Chandra A, Huckman RS. A simple way to measure health care outcomes. Harvard Bus Rev [Internet]. 2016; Available from: https://hbr.org/2016/12/a-simple-way-to-measure-health-care-outcomes

3. Crott R, Lawson G, Nollevaux MC, Castiaux A, Krug B. Comprehensive cost analysis of sentinel node biopsy in solid head and neck tumors using a time-driven activity-based costing approach. Eur Arch Oto-Rhino-Laryngology. 2016;273(9):2621–2628.

4. Alaoui S El, Lindefors N. Combining time-driven activity-based costing with clinical outcome in cost-effectiveness analysis to measure value in treatment of depression. PLoS One. 2016;11(10): e0165389.

5. Keel G, Savage C, Rafiq M, Mazzocato P. Time-driven activity-based costing in health care: A systematic review of the literature. Health Policy (New York). 2017;121(7):755–763.

6. Nolte EE, McKee M. Caring for people with chronic conditions: a health system perspective. Eur Obs Heal Syst Policies Ser. 2008;XXI:259.

Long-term healthcare, Time-Driven Activity Based Costing (TDABC), Clinical conditions, Patient-centered data, Patient complexity.

O142 Effects of aerobic land-based and water-based exercise training programs on clinical and functional parameters in older women

Rafael Oliveira 1,2, Carlos T Santamarinha 3, João Brito 1,2, 1 Research Unit in Quality of Life, Sport Sciences School of Rio Maior, Polytechnic of Santarém, 2040-413 Rio Maior, Portugal; 2 Research Center in Sports Sciences, Health Sciences and Human Development, 6201-001 Covilhã, Portugal; 3 City Hall of Esposende, 4740-223 Esposende, Portugal, correspondence: Rafael Oliveira ([email protected]).

In Portugal, most exercise training programs are offered by municipalities on a seasonal basis, running for 8 to 10 months.

The aim of the study was to assess the clinical and functional effects of different fitness exercise training programs, comprising aerobic group classes with calisthenics exercises and water-based exercise, applied to older women over nine months.

Ninety-six active older women participated in the study. They were divided into four exercise groups: 2x/week land-based group (GA, n=21; age 71.46±9.75 years; body weight 72.44±11.85 kg; height 153.82±5.83 cm); 2x/week water-based group (GB, n=9; age 70.10±9.98 years; body weight 70.48±10.92 kg; height 153.68±5.64 cm); 1x/week land plus 2x/week water-based group (GC, n=7; age 71.35±8.32 years; body weight 73.42±11.20 kg; height 154.39±5.01 cm); and 2x/week land plus 2x/week water-based group (GD, n=39; age 71.46±7.38 years; body weight 71.70±11.66 kg; height 154.23±6.82 cm). Clinical parameters were assessed, namely fasting glucose, triglycerides, resting blood pressure and resting heart rate (FCR), along with functional parameters [1]: resistance of upper and lower limbs, agility and aerobic capacity. The training intensity of the programs was moderate, 10-14 on the Rate of Perceived Exertion scale [2], applied according to [3]. Inferential statistics (t-tests) were used to compare baseline vs post-training values.

After nine months of intervention, the main results were: fasting glycaemia (GA=116.0±12.11 vs 101.50±13.36 mg/dL; GC=120.43±15.34 vs 100.47±11.65 mg/dL; GD=127.29±36.60 vs 111.23±29.18 mg/dL); triglycerides (GA=288.13±136.78 vs 158.13±47.24 mg/dL; GC=295.94±112.92 vs 153.63±101.96 mg/dL; GD=244.79±122.41 vs 144.98±69.27 mg/dL); FCR (GD=71.34±11.26 vs 66.31±8.68 bpm); aerobic capacity (GB=541.88±51.03 vs 605.0±31.12 m; GD=127.29±174.06 vs 111.23±131.65 m); resistance of lower limbs (GA=16.57±5.19 vs 18.90±5.07 repetitions; GB=14.89±5.21 vs 19.11±4.83 repetitions; GD=20.54±5.38 vs 22.87±6.39 repetitions); and agility (GA=7.94±3.52 vs 8.82±2.91 seconds; GB=8.86±4.09 vs 6.53±2.40 seconds; GC=7.90±4.56 vs 6.81±3.78 seconds; GD=6.13±1.66 vs 5.48±1.87 seconds), p < 0.05 for all. Correlations were also observed between aerobic capacity and triglycerides, and between fasting glucose and triglycerides.
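Baseline vs post-training comparisons of this kind are paired t-tests; a minimal SciPy sketch with hypothetical data (the simulated "improvement" is an assumption, shaped only loosely after the GA glycaemia figures above):

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(5)
    # Hypothetical fasting glycaemia (mg/dL) for one group, pre and post.
    pre = rng.normal(116.0, 12.1, size=21)
    post = pre - rng.normal(14.5, 5.0, size=21)  # simulated improvement

    t, p = ttest_rel(pre, post)
    print(f"t = {t:.2f}, p = {p:.4f}")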

The results showed a positive effect of all exercise training programs offered by the municipality of Esposende on clinical and functional parameters in older women. Land-based exercise groups, at least twice a week, seem to lead to better results. The study supports the role of physical exercise in improving hemodynamic, lipid profile and functional parameters, as reported previously by a similar study [4]. In addition, this study also revealed improvements in clinical parameters that had not yet been studied.

1. Rikli R, Jones C. The development and validation of a functional fitness test for community-residing older adults. J Aging Phys Activ. 1999;7:129-161.

2. Borg G. Psychophysical bases of perceived exertion. Med Sci Sports Exerc. 1982;14:377-381.

3. American College of Sports Medicine, ACSM. ACSM’s Guidelines for exercise testing and prescription (9th ed). Philadelphia: Lippincott Williams and Wilkins; 2013.

4. Oliveira R, Santa-Marinha C, Leão R, Monteiro D, Bento T, Rocha RS, Brito JP. Exercise training programs and detraining in older women. J Hum Sport Exerc. 2017;12(1):142-155.

Older women, Water-based exercise, Land-based exercise, Functional capacities, Clinical parameters.

O143 The influence of emotional intelligence in stigmatizing attitudes toward mental illness of undergraduate nursing students

Ana Querido 1,2, Catarina Tomás 1,2, Daniel Carvalho 3, Marina Cordeiro 1,2, João Gomes 3, 1 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Santo André Hospital, Hospital Center of Leiria, 2410-197 Leiria, Portugal, correspondence: Ana Querido ([email protected]).

Health care professionals share the general public’s attitudes towards people with mental illness, with harmful beliefs and subsequent negative attitudes towards these patients being widespread [1]. A significant correlation between emotional intelligence and mental illness stigma has been found.

To analyse the correlation between emotional intelligence and stigmatizing attitudes towards mental illness, and to assess the influence of this intelligence on those stigmatizing attitudes, among undergraduate nursing students.

A cross-sectional correlational study was performed. Data were collected from a non-probabilistic sample of nursing students from a health school in the centre region of Portugal, using a questionnaire with sociodemographic questions, the Wong and Law Emotional Intelligence Scale (1-5) [2] and the Community Attitudes Toward the Mentally Ill Scale (40-200) [3]. Ethical procedures were taken into account during the research, according to the Helsinki Declaration.

Most of the nursing students surveyed (N=335) were female (N=263). The sample had an average age of 21.69 years (SD=4.64), distributed across all levels of the nursing degree. Some (n=25) had already suffered from mental illness, mainly (n=17) mood disorders, and 41.2% reported having regular contact with mental health patients in their lives. These nursing students showed good emotional intelligence (Mean=3.62; SD=0.41), higher in self-emotional appraisal (Mean=3.73; SD=0.58) and others’ emotional appraisal (Mean=3.85; SD=0.49). Stigmatizing attitudes towards mental illness were medium-low (Mean=122.15; SD=5.02), being more intense in authoritarianism (Mean=23.92; SD=4.44) and social restrictiveness (Mean=20.87; SD=4.94). Emotional intelligence is positively correlated with attitudes in community mental health ideology (R=0.133; p=0.015). The others’ emotional appraisal ability is correlated with attitudes regarding benevolence (R=0.276; p=0.000), community mental health ideology (R=0.221; p=0.000), authoritarianism (R=-0.156; p=0.005) and social restrictiveness (R=-0.254; p=0.000). Others’ emotional appraisal can explain some of the attitudes regarding authoritarianism (R²=0.024; F=8.138; p=0.005), benevolence (R²=0.076; F=27.136; p=0.000), social restrictiveness (R²=0.065; F=22.560; p=0.000) and community mental health ideology (R²=0.049; F=16.862; p=0.000). Regulation of emotions also influences benevolence attitudes (R²=0.016; F=16.578; p=0.000).

Undergraduate nursing students have good emotional intelligence and low stigmatizing attitudes towards mental illness. A significant correlation was found between emotional intelligence and stigmatizing attitudes, mostly positive, along with an influence of this type of intelligence on the four areas of these attitudes. These results reinforce the importance of developing emotional intelligence, especially the ability of others’ emotional appraisal, as a way to improve attitudes towards mental illness.

1. Poreddi V, Thimmaiah R, Pashupu D, Ramachandra, Badamath S. Undergraduate Nursing students’ attitudes towards mental illness: Implications for specific academic education. Indian J Psychol Med. 2014;36(4):368-372.

2. Querido A, Tomás C, Carvalho D, Gomes J, Cordeiro M. Measuring emotional intelligence in health care students – revalidation of WLEIS-P. In: Proceedings of the 3rd IPLeiria’s International Health Congress; 2016 May 6-7; Leiria, Portugal. London: BMC Health Services Research; 2016. p. 87.

3. Taylor S, Dear M. Scaling community attitudes toward the mentally ill. Schizophrenia Bulletin. 1981;7(2):225-240.

Mental health, Stigma, Nurse students, Emotional intelligence.

O144 Functional decline in older acute medical in-patients

Cecília Rodrigues 1, Denisa Mendonça 2, Maria M Martins 3, 1 Centro Hospitalar do Porto, 4099-001 Porto, Portugal; 2 Instituto de Ciências Biomédicas Abel Salazar, 4050-313 Porto, Portugal; 3 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal, correspondence: Cecília Rodrigues ([email protected]).

Older patients hospitalised for acute illness are vulnerable to decline in basic self-care. This functional decline determines future health needs and can lead to negative health outcomes.

The aim of this study was to compare basic self-care needs in older acute medical in-patients between 2 weeks before hospitalization and discharge.

Single-centred, observational, and prospective cohort study. Data were collected between May and September 2017 and included 91 patients, aged 65 or older admitted to a medical ward of a 580-bed teaching hospital in Portugal. Performance in basic activities of daily living (BADL) at home (self-reported), at hospital admission (observed) and at discharge (observed) was collected. Functional status of the elderly patients at 2 weeks before hospitalization (baseline), at hospital admission, and at discharge was measured by the Katz Index. Differences in scores for BADL between baseline and admission, between admission and discharge, and between baseline and discharge were used to define pre-admission, in-hospital and overall functional decline.
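The three decline indicators defined above follow directly from differences in Katz scores; a minimal sketch (the scores below are hypothetical, and the Katz Index is taken here as counting independent BADL from 0 to 6):

    import numpy as np

    rng = np.random.default_rng(6)
    n = 91
    # Hypothetical Katz scores (number of independent BADL, 0-6).
    baseline = rng.integers(0, 7, size=n)                       # 2 weeks pre
    admission = np.clip(baseline - rng.integers(0, 4, n), 0, 6)
    discharge = np.clip(admission + rng.integers(-1, 3, n), 0, 6)

    pre_admission_decline = admission < baseline
    in_hospital_decline = discharge < admission
    overall_decline = discharge < baseline

    # Proportions of patients with each type of functional decline.
    print(pre_admission_decline.mean(),
          in_hospital_decline.mean(),
          overall_decline.mean())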

Pre-admission, in-hospital and overall functional decline occurred in 78.0%, 4.4% and 63.7% of the patients, respectively. Patients were independent on average in 3.63, 1.41 and 1.90 BADL 2 weeks before admission, at hospital admission and at discharge, respectively. In-hospital functional improvement occurred in 36.3% of the patients.

Because they observe, support and guide patients, and provide 24-hour patient supervision, nurses play a key role in strategies to prevent functional decline in older patients. Adequate planning of nursing care that includes interventions to promote and maintain mobility, within a logic of self-care, can be a valuable contribution to preventing functional decline and to rebuilding independence in self-care after a dependence-generating event.

Functional decline, Elderly, Hospital outcomes.

O145 Validation of the Weight Focused Feelings Scale in a sample of overweight and obese women participating in a community-based weight management programme

Cristiana Duarte 1, Marcela Matos 2, James Stubbs 1, Corinne Gale 3, Liam Morris 4, Paul Gilbert 3,5, 1 School of Psychology, Faculty of Medicine and Health, University of Leeds, LS2 9JT, Leeds, United Kingdom; 2 Cognitive and Behavioural Centre for Research and Intervention, University of Coimbra, 3001-802 Coimbra, Portugal; 3 College of Life and Natural Sciences, University of Derby, Kedleston Road, DE22 1GB, Derby, United Kingdom; 4 Nutrition and Research Department, Slimming World, Clover Nook Road, DE55 4RF, Derbyshire, United Kingdom; 5 Mental Health Research Unit, Kingsway Hospital, DE22 3LZ, Derby, United Kingdom, correspondence: Cristiana Duarte ([email protected]).

A significant body of literature suggests that negative emotions may undermine self-regulation of eating behaviour during, or subsequent to, weight loss attempts by promoting loss of control over eating [1]. Negative feelings related to body weight and shape seem to play a role in eating problems, with some studies suggesting that binge eating may occur as a means of momentarily avoiding or reducing negative affect related to body weight and shape [2, 3]. Recent evidence suggests that positive emotions may also relate to overeating in healthy and obese adults [4-6]. Nonetheless, research on emotions and eating focuses primarily on negative emotions, and there is a lack of measures relating positive emotions to eating.

The current study aimed to test the factorial structure and psychometric properties of the Weight Focused Feelings Scale (WFFS), an 11-item measure that assesses negative and positive feelings linked to body weight and shape.

A total of 2,236 women attending a community-based weight management programme participated in this study. Mean (SD) participant age was 41.71 (12.34) years and BMI was 31.62 (6.10) kg/m². The data were randomly split into two independent data sets to conduct an Exploratory Factor Analysis (EFA) in 1,088 participants and a Confirmatory Factor Analysis (CFA) in 1,148 participants.

Results of the EFA indicated a two-factor structure: negative feelings (7 items) and positive feelings (3 items). Items presented factorial loadings above .49 on the first factor and above .64 on the second factor. The CFA confirmed the plausibility of this two-factor model (χ²(41) = 283.771; p < .001; CFI = .96; TLI = .95; PCFI = .72; RMSEA = .07 [.06 to .08]). Standardized regression weights ranged from .55 to .68 in the negative affect subscale, and from .75 to .90 in the positive affect subscale. Items' squared multiple correlations ranged from .30 to .81. The subscales presented composite reliability values of .93 and .88, respectively. The two subscales were associated, in the expected direction, with measures of depressive, anxiety and stress symptoms, psychological wellbeing, shame, dietary disinhibition, restraint and susceptibility to hunger, and body mass index (BMI).
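The split-sample design and the composite reliability computation can be sketched as follows; this is a minimal sketch on simulated item responses, using scikit-learn's factor analysis as a stand-in for EFA (a full CFA with fit indices such as CFI or RMSEA would require a dedicated SEM package, and the loadings here are treated as if standardized):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    # Hypothetical responses to the 11 WFFS items (2,236 participants).
    items = rng.normal(size=(2236, 11))

    # Random split: one half for EFA, the other reserved for CFA.
    efa_half, cfa_half = train_test_split(items, test_size=0.5,
                                          random_state=0)

    fa = FactorAnalysis(n_components=2).fit(efa_half)
    loadings = fa.components_.T  # items x factors

    def composite_reliability(lam: np.ndarray) -> float:
        """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances),
        assuming standardized loadings."""
        s = lam.sum()
        return s**2 / (s**2 + np.sum(1 - lam**2))

    # With random data the value is meaningless; shown for structure only.
    print(composite_reliability(loadings[:7, 0]))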

The WFFS is a valid measure to assess body weight and shape-related negative and positive emotions. This measure may be useful for future model testing examining the differential role of negative and positive emotions in eating behaviour and weight management.

1. Singh M. Mood, food and obesity. Frontiers Psychol. 2014;5:925.

2. Duarte C, Pinto-Gouveia J, Ferreira C. Ashamed and fused with body image and eating: Binge eating as an avoidance strategy. Clin Psychol Psychother. 2017;24(1):195-202.

3. Heatherton T, Baumeister R. Binge eating as escape from self-awareness. Psychol Bull 1991;110(1):86-108.

4. Bongers P, Jansen A, Havermans R, Roefs A, Nederkoorn C. Happy eating. The underestimated role of overeating in a positive mood. Appetite 2013;67:74-80.

5. Cardi V, Leppanen J, Treasure J. The effects of negative and positive mood induction on eating behaviour: A meta-analysis of laboratory studies in the healthy population and eating and weight disorders. Neuroscience & Biobehavioral Reviews. 2015;57:299-309.

6. Evers C, Adriaanse M, Ridder DTD, Witt Huberts JC. Good mood food. Positive emotion as a neglected trigger for food intake. Appetite 2013;68:1-7.

Negative and Positive emotions, Body weight and shape, Eating behavior, Factorial Analysis, Psychometric analysis.

O146 Nurses’ perceptions of barriers for implementing EBP in a central hospital in the north of Portugal

Ana IC Teixeira 1,2,3,4, António L Carvalho 3,4, Cristina Barroso 4, 1 Centro Hospitalar São João, 4200-319 Porto, Portugal; 2 Instituto de Ciências Biomédicas Abel Salazar, 4050-313 Porto, Portugal; 3 Centro de Investigação em Tecnologias e Serviços de Saúde, 4200-450 Porto, Portugal; 4 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal, correspondence: Ana IC Teixeira ([email protected]).

Evidence-based practice (EBP) is defined as the integration of the best research evidence with clinical expertise and patient values in clinical decision making [1, 2]. Research confirms positive outcomes when implementing EBP: patient safety, improved clinical outcomes, reduced healthcare costs and decreased variation in patient outcomes [3]. Considered a standard of care, EBP has benefits for nurses, patients, the general population and health care systems, as well as for research and education. Authors describe several types of obstacles to the use of research in practice: characteristics of the adopter, organization, innovation and communication [1]. Individual barriers include lack of knowledge of how to critique research studies, lack of awareness, colleagues not supportive of practice change, and nurses feeling a lack of authority to change practice. Organizational barriers include insufficient time to implement new ideas, lack of access to research, and lack of awareness of available educational tools related to research [3]. The most important factor related to nurses’ EBP is support from their employing organizations to use and conduct research. Programme implementation is most likely to be successful when it matches the values, needs and concerns of practitioners. Concerning the development and implementation of competences for EBP, it is important, firstly, to identify the barriers.

Our objective is to explore and describe nurses’ perceptions of these barriers in our context.

This study is part of a larger one, namely “Clinical Supervision for Safety and Care Quality” (C-S2AFECARE-Q). To answer our research question, a convenience sampling strategy was employed to distribute 500 questionnaires to nurses working in a central hospital in the north of Portugal. Data were collected between April and July 2017. A total of 260 questionnaires were returned, 98 of which answered the following question: “In your opinion, which are the barriers to the implementation of EBP in your context?”. Data analysis was based on the content analysis proposed by Bardin (1977).

Barriers were allocated to categories: organization; leaders and management; professionals; and evidence [4]. Nurses reported perceived barriers at all these levels. At the organizational level, they identified lack of an organizational culture, lack of support from management, and outdated and unquestioned routines. They also reported insufficient support from leaders. At the individual level, they identified negative attitudes, such as lack of motivation, resistance to change, lack of time, and inadequate knowledge and skills.

Our results are consistent with other authors’ findings [5]. Nurses are clearly aware of the barriers in their context.

1. Munten G, van den Bogaard J, Cox K, Garretsen H, Bongers I. Implementation of Evidence-Based Practice in Nursing Using Action Research: A Review. Worldviews Evid Based Nurs. 2010;7(3):135-157.

2. Ubbink D, Guyatt GH, Vermeulen H. Framework of policy recommendations for implementation of evidence-based practice: a systematic scoping review. BMJ Open Journal. 2013;3:e001881.

3. Black AT, Balneaves LG, Garossino C, Puyat JH, Qian H. Promoting Evidence-Based Practice Through a Research Training Program for Point-of-Care Clinicians. J Nurs Adm. 2015;45(1):14-20.

4. Jylhä V, Oikarainen A, Perälä M, Holopainen A. Facilitating evidence-based practice in nursing and midwifery in the WHO European Region. World Health Organization; 2017 [cited 2018 Jan 18]. Available from: http://www.euro.who.int/en/health-topics/Health-systems/nursing-and-midwifery/publications/2017/facilitating-evidence-based-practice-in-nursing-and-midwifery-in-the-who-european-region-2017.

5. Pereira RPG, Cardoso MJ, Martins MA. Atitudes e barreiras à prática de enfermagem baseada na evidência em contexto comunitário. Revista de Enfermagem Referência. 2012;3(7):55-62.

6. Solomons N, Spross JA. Evidence-based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review. Journal of Nursing Management. 2011;19(1):109-120.

Nursing, Evidence-Based Practice, Barriers, Hospital health care.

O147 Patient identification, patient safety and clinical audit

Cecília Rodrigues, Manuel Valente, Centro Hospitalar do Porto, 4099-001 Porto, Portugal.

The correct identification of the person receiving health care in an institution is a basic principle of a patient safety culture and of the quality of care provided. Failures in patient identification processes are the cause of errors in medication, transfusions, complementary diagnostic and therapeutic tests, invasive procedures performed on the wrong person, and other incidents of high severity.

To identify whether all hospitalized patients have identification wristbands and to check whether the name is correct and readable.

In a 580-bed adult teaching hospital, between January and December 2017, on random days of each month, patients’ wristband identification was audited by a team of nurses with experience in clinical audit.

In the 12 months studied, 161 audits were performed, covering 3,539 patients. Of these, 3,406 (96.2%) were correctly identified with wristbands. There were 133 failures: 113 patients had no identification wristband, 19 patients had an identification wristband but the name was unreadable, and 1 patient had a wristband with the wrong name. The rate of correctly identified patients increased progressively over the months: in the first month (January 2017), 89.3% of patients were correctly identified with a wristband; in December 2017, the rate was 95.0%. Partial results of this audit were disclosed in general risk-management meetings with clinical services in March, June and September 2017.
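The monthly rates above are simple proportions of correctly identified patients per month of audit; a minimal sketch of how such an indicator could be tabulated (the records shown are hypothetical):

    from collections import defaultdict

    # Hypothetical audit records: (month, correctly_identified).
    records = [("2017-01", True), ("2017-01", False), ("2017-12", True)]

    audited = defaultdict(int)
    correct = defaultdict(int)
    for month, ok in records:
        audited[month] += 1
        correct[month] += ok

    for month in sorted(audited):
        rate = 100 * correct[month] / audited[month]
        print(f"{month}: {rate:.1f}% correctly identified")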

Given the potential negative implications of the absence of identification of the person undergoing health care, these results indicate that there is a clear opportunity for improvement in patient identification. Clinical audit has proved to be an instrument for improving quality and safety, particularly in improving the identification of patients.

Patient safety, Patient identification, Clinical audit.

O148 Demographic differences in quality of life in the elderly population of Tâmega e Sousa

Sara S Lima 1,2, Raquel Esteves 1,2, Clarisse Magalhães 1,2, Fátima Ribeiro 1,2, Lurdes Teixeira 1,2, Ana Teixeira 1,2, Fernanda Pereira 1,2, 1 Cooperativa de Ensino Superior Politécnico Universitário, 4585-116 Gandra, Paredes, Portugal; 2 Instituto de Investigação e Formação Avançada em Ciências e Tecnologias da Saúde, 4585-116 Gandra, Paredes, Portugal, correspondence: Sara S Lima ([email protected]).

Population aging is associated with an increase in long-evolving pathologies, making it necessary to redirect and reorganize social and health structures in order to meet the specific needs of the elderly population.

To characterize the demographic profile of the elderly population of the region of Tâmega e Sousa regarding quality of life as well as functionality level and social support; and to find differences in quality of life according to gender and age.

This cross-sectional study included 200 participants, 67% women, with a mean age of 72 years (SD=5.3). A sociodemographic questionnaire, the SF-36 to assess quality of life (physical and mental dimensions), the Barthel Index to assess functionality level and the Satisfaction with Social Support Scale were used.

For mental quality of life the mean score was 59.69 (SD=12.78), and 49.80 (SD=16.38) for physical quality of life. Satisfaction with social support was positively associated with both mental (r=.476, p<.001) and physical quality of life (r=.457, p<.001), as well as with functionality level (r=.386, p<.001; r=.458, p<.001, respectively). There were differences in mental (t(195)=-2.998, p=.003) and physical quality of life (t(195)=-3.358, p=.001) according to gender, and according to age, i.e., the youngest participants (< 71 years old) showed higher mental (t(197)=-33.552, p<.001) and physical (t(197)=2.466, p=.015) quality of life.

The results highlight the role of social support and level of functionality in the quality of life of this population, emphasizing the need to foster the integration of this population into social and leisure programs, as well as to promote physical activity in order to improve functionality. Physical activity improves independent living, reduces disability and improves the quality of life of the elderly. Programs should take into account that, contrary to the literature, men presented worse quality of life and, as expected, should be directed at the oldest participants. Therefore, the results show the importance of implementing tailored multidisciplinary programs matched to older people's profiles, allowing the development of an integrated social and health response to the elderly population of this region.

Aging, Elderly people, Quality of Life, Social Support, Functionality, Social and health structures.

O149 Healthy lifestyles & health literacy in the work context - what trend?

Otilia Freitas 1,2, Gregório Freitas 1,2, Clementina Morna 1,2, Isabel Silva 1,2, Gilberta Sousa 1,2, Rita Vasconcelos 3, Luís Saboga-Nunes 4,5, Estudantes Enfermagem 1, 1 Center for Health Technology and Services Research, University of Madeira, 9020-105 Funchal, Madeira, Portugal; 2 Higher School of Health, 9020-105 Funchal, Madeira, Portugal; 3 Faculty of Exact Sciences and Technology, University of Madeira, 9020-105 Funchal, Madeira, Portugal; 4 Institute of Environmental Health, Faculty of Medicine, University of Lisbon, 1649-028 Lisboa, Portugal; 5 Institute of Sociology, University of Freiburg, 79098 Freiburg, Germany, correspondence: Otilia Freitas ([email protected]).

The National Plan for Occupational Health (PNSO) - 2nd Cycle 2013/2017 establishes goals to increase health gains and to ensure that workers’ health is valued by employers, by those responsible for governance, and by society in general. One of its specific objectives is to promote healthy work practices and healthy lifestyles at the workplace, in both private sector companies and the Public Administration [1].

To describe the healthy lifestyles and health literacy of workers of a tertiary-sector company in the Região Autónoma da Madeira, Portugal.

Descriptive exploratory study, with a non-probabilistic convenience sample of 118 workers, with a mean age of 45 years, predominantly male (88.1%). We used a sociodemographic data collection instrument, the FANTASTIC Lifestyle Questionnaire [3] (Cronbach's α = 0.725) and the European Health Literacy Survey questionnaire, Portuguese version [2] (Cronbach's α = 0.97). A favourable opinion was obtained from an ethics committee, and the ethical procedures inherent to this type of study were respected.

It was found that 11.9% of respondents had a Good (73 to 86), 55.9% a Very Good (85 to 102) and 32.2% an Excellent (103 to 120) general lifestyle level. Regarding general health literacy, 56.70% of respondents had limited literacy levels: 7.20% inadequate and 49.50% problematic. The trend towards limited literacy observed in general literacy held across all three domains. In the health care domain, 48.3% of respondents had limited literacy levels (8.47% inadequate and 39.83% problematic). In disease prevention, 44.06% showed limited literacy levels (11.86% inadequate and 32.20% problematic). In health promotion, 47.46% showed limited literacy levels (11.02% inadequate and 36.44% problematic). There is a positive (ρ = 0.277) and statistically significant correlation between the general lifestyle score and the overall health literacy score (p = 0.002).

The results suggest the importance of an intervention integrating activities that promote healthy lifestyles and health literacy, focusing strategically on health education in the work context, in order to raise awareness among the target population.

1. DGS. Plano Nacional de Saúde Ocupacional 2013/2017. https://www.dgs.pt/saudeocupacional/programa-nacional4.aspx.

2. Saboga-Nunes L, Sorensen K, Pelikan J, Cunha M, Rodrigues E, Paixão E. Cross-cultural adaptation and validation to Portuguese of the European Health Literacy Survey (HLS-EU-PT). Atencion Primaria. 2014;46(1):13.

3. Silva AMM, Brito IS, Amado J. Adaptação e validação do questionário “Estilo de Vida Fantástico”: resultados psicométricos preliminares. Referência. 2011;3:650.

Healthy lifestyles, Health literacy, Work context, Nursing.

O150 3rd pressure ulcer prevalence study in community setting - CSAH-USIT

Manuela Dias, Ana Rocha, Unidade de Saúde da Ilha Terceira, Centro de Saúde de Angra do Heroísmo, 9700-121 Açores, Portugal, correspondence: Manuela Dias ([email protected]).

Pressure ulcers are a problem for health care institutions and professionals. In the Azores Islands, there are few prevalence studies in community settings, and the results point to prevalence rates of 18.5% according to the Nursing Scientific Investigation Group (ICE) [1] in 2006, and of 26.49% according to Rodrigues and Soriano [2] in 2010. The National Patient Safety Plan 2015-2020 [3] recommends that all health institutions monitor this problem every six months, so the “Gabinete de Saúde Comunitária” is conducting the 3rd Pressure Ulcer Prevalence Study.

This study aims to examine the prevalence rate of pressure ulcers in patients living at home, to characterize those patients as well as the most severe ulcers.

This was a descriptive cross-sectional study using a quantitative approach with a questionnaire, carried out during the first week of November 2017. The population comprised patients of the Angra Health Centre assisted by nurses in a community setting. Data were analysed using Excel 2007. The study was performed after the Health Centre’s approval. A sample of 26 patients with pressure ulcers participated in the study, 54% female, with a mean age of 79.09 years. Most of the patients (46.15%) belonged to the 80-89 years age group.

The study reported a prevalence rate of 7.58%. There was a ratio of 1.5 pressure ulcers per participant with pressure ulcers, and the sacrum/coccyx was the main critical area. Most of the pressure ulcers recorded were category II (42.31%), with a mean evolution of 4.42 months. According to the Braden Scale, 77% of patients had a high risk of pressure ulcer development; 92.31% of the patients had prevention devices on the bed, but only 31.25% had them on sitting chairs. Comparing the three studies, we noticed that the prevalence rate increased and the mean age decreased, with the largest group aged 80-89 years. There was also an increase in sacrum/coccyx localization in this group.

This study allowed us to analyse, briefly, the pressure ulcer context, in which nurses spend a large amount of time providing care and applying recovery strategies. Interventions targeting these findings should be implemented in community settings in order to improve the quality of patients’ health care.

1. Gomes LM. Prefácio. In: Grupo ICE. Investigação Científica em Enfermagem, Enfermagem e úlceras de pressão: Da reflexão sobre a disciplina às evidências nos cuidados. Angra do Heroísmo: Grupo ICE; 2008. p. 8-13.

2. Rodrigues A, Soriano J. Fatores influenciadores dos cuidados de enfermagem domiciliários na prevenção de úlceras por pressão. Revista de Enfermagem Referência. 2011;III(5):55-63.

3. Diário da República. Plano Nacional para a Segurança dos Doentes 2015-2020. 2.ª série — N.º 28 — 10 de fevereiro de 2015.

Pressure Ulcer, Prevalence, Community settings, Nursing.

O151 Relationship between health literacy, electronic health literacy and knowledge of sexually transmitted diseases in workers of a Portuguese hospital

Alice Gonçalves 1, Anabela Martins 2, Clara Rocha 3, Diana Martins 1,4, Isabel Andrade 3, Margarida Martins 5, Paula Vidas 6, Paulo Polónio 7, Fernando Mendes 1,8,9,10, 1 Biomedical Laboratory Sciences, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Department of Physiotherapy, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 3 Department of Complementary Sciences, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 4 Institute for Research and Innovation in Health Sciences, University of Porto, 4200-135 Porto, Portugal; 5 Emergency Service, Hospital Distrital da Figueira da Foz, 3094-001 Figueira da Foz, Portugal; 6 Cardiology Service, Hospital Distrital da Figueira da Foz, 3094-001 Figueira da Foz, Portugal; 7 Laboratory Medicine Hospital, Hospital Distrital da Figueira da Foz, 3094-001 Figueira da Foz, Portugal; 8 Biophysics Institute, Center for Neuroscience and Cell Biology-Institute for Biomedical Imaging and Life Sciences, Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 9 Center of Investigation in Environment, Genetics and Oncobiology, Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 10 Coimbra Institute for Clinical and Biomedical Research, University of Coimbra, 3094-001 Coimbra, Portugal, correspondence: Alice Gonçalves ([email protected]).

Health literacy (HL) can be defined as the individual capacity to obtain and understand information and to make appropriate decisions about health. An adequate level of HL is fundamental for risk awareness and behaviour change, as well as for disease prevention. Electronic health literacy (eHL) can be useful in promoting health, but it demands the capacity to judge the information obtained and the ability to work with new technologies. Given the high prevalence of sexually transmitted diseases (STDs) in the general population, it is important to evaluate and relate these health concepts.

To evaluate and relate levels of HL and eHL with STD knowledge of professionals of a Portuguese Hospital.

A total of 149 individuals from different professional categories, working at a medium-sized Portuguese hospital, answered an anonymous questionnaire composed of four parts: socio-demographic data, STD knowledge, and the Portuguese versions of both the Newest Vital Sign (to assess functional HL) and eHEALS (to assess eHL).

Altogether, 66.0% of the sample had the possibility of an adequate HL; 59.2% of those individuals were health professionals and 76.3% had high knowledge of STDs. 72.4% were female and 27.5% male, showing a similar possibility of an adequate HL (66.7% and 61.0%, respectively). Among individuals with the possibility of an adequate level of HL, 27.5% were 31-40 years old and 90.6% held a higher education degree, while 7.3% had only completed high school. In general, “I know what health resources are available on the Internet” and “I have the skills I need to evaluate the health resources I find on the Internet” had the highest mean (3.99), and “I feel confident in using information from the Internet to make health decisions” had the lowest (3.08).

Individuals with higher education, especially in health sciences, have a higher probability of an adequate HL. Age may also be an interfering factor, since young people nowadays have access to information earlier in life. Our results also show that 50% of the studied population had the possibility of an adequate level of HL together with high knowledge of STDs, an average result for people working at this hospital. This can be improved through specific educational initiatives, such as lectures or seminars, to increase HL and knowledge of STDs.

1. Sørensen K, Van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, et al. Health literacy and public health: A systematic review and integration of definitions and models. BMC Public Health. 2012;12(1):80.

2. Norman CD, Skinner HA. eHealth Literacy: Essential Skills for Consumer Health in a Networked World. J Med Internet Res. 2006;8(2):e9.

3. Norman CD, Skinner HA. eHEALS: The eHealth Literacy Scale. J Med Internet Res. 2006;8(4):e27.

Health literacy, eHealth literacy, Risk behaviour, Health professionals.

O152 Stigmatizing attitudes toward mental illness in future health professionals

Daniel Carvalho 1, Catarina Tomás 2,3, Ana Querido 2,3, Marina Cordeiro 2,3, João Gomes 1, 1 Santo André Hospital, Hospital Center of Leiria, 2410-197 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal, correspondence: Daniel Carvalho ([email protected]).

Health professionals, such as nurses, still have concerns and negative attitudes towards people with mental illness, despite the development of mental health services [1]. Final-year students were found to be less stigmatizing [2], making it important to pay special attention to the education of these professionals and to on-the-job training [1].

To assess the levels of stigmatizing attitudes towards mental illness developed by health students, and to characterize the determinants of these attitudes.

A quantitative, cross-sectional, correlational study was performed with a non-probabilistic sample of health students from a school in the central region of Portugal. Data were collected with a questionnaire comprising sociodemographic questions, a question about students' perception of their knowledge of mental health (scored 0 to 5) and the Community Attitudes Toward the Mentally Ill scale (scored 40 to 200) [3]. Ethical procedures, in accordance with the Helsinki Declaration, were followed throughout the research.

The students surveyed (N=636) represented five health degrees (Dietetics, Nursing, Physiotherapy, Speech Therapy and Occupational Therapy) and were mostly female (82.1%), with a mean age of 21.35 (SD=4.29; median=20). A low percentage (6.6%) had already suffered from a psychiatric disease, mainly mood disorders (4.1%), and 37.6% had regular contact with a person with mental illness. The sample had medium-low levels of stigmatizing attitudes towards mental illness (mean=122.72; SD=5.19), with negative attitudes most evident in the authoritarianism (mean=24.17; SD=4.34) and social restriction (mean=20.53; SD=4.93) areas. Male students showed more positive attitudes towards authoritarianism (p=0.004) and social restriction (p=0.007), and females towards benevolence (p=0.016) and community mental health ideology (p=0.004). Nursing students had more frequent stigmatizing attitudes, while physiotherapy students showed better attitudes (p=0.021). Students who had experienced mental illness had more stigmatizing attitudes in general (p=0.039) and in all areas (p<0.005) except community mental health ideology. Students who had regular contact with a mental health patient had more stigmatizing attitudes in general (p=0.031) and in authoritarianism (p=0.039).

Medium-low levels of stigmatizing attitudes towards mental illness were found among the health students surveyed. There was also evidence that having a mental disorder or having regular contact with mental health patients can increase health students' stigmatizing attitudes. This highlights the importance of intervening with these students, not only to improve attitudes towards mental health patients but also to decrease the self-stigma that these results also identified.

1. Ihalainen-Tamlander N, Vähäniemi A, LÜyttyniemi E, Suominen T, Välimäki M. Stigmatizing attitudes in nurses towards people with mental illness: a cross-sectional study in primary settings in Finland. Journal of Psychiatric and mental Health Nursing. 2016;23(6-7):427-437.

2. Mas A, Hatim A. Stigma in mental illness: attitudes of medical students towards mental illness. Med J Malaysia. 2002;57(4):433-444.

3. Taylor S, Dear M. Scaling community attitudes toward the mentally ill. Schizophrenia Bulletin. 1981;7(2):225-240.

Mental health, Stigma, Health students, Attitudes.

O153 Psychological symptoms and mental health stigma in teaching professionals

Catarina tomĂĄs 1,2 , ana querido 1,2 , marina cordeiro 1,2 , daniel carvalho 3 , joĂŁo m gomes 3, correspondence: catarina tomĂĄs ([email protected]).

Teaching professionals are recognised as being subject to daily stress, partly due to the dynamic interactions associated with the teaching role, which leads to experiences of psychological and psychosomatic symptoms [1]. Facing mental health problems influences stigma towards mental illness; in this situation, professionals are predisposed to accept preconceptions, leading to internalized stigma [2]. Research has described teachers as having negative attitudes towards mental illness [3]. Considering how little is known about teaching professionals in secondary and higher education [4], this study addresses mental health stigma and psychological symptoms in these settings.

To characterise psychological symptoms and mental health stigma in teaching professionals; to identify differences in stigma between high-school and higher education teachers; and to correlate psychological symptoms and mental health stigma.

Cross-sectional correlational study, with a non-probabilistic sample of 96 Portuguese teaching professionals. Data were collected using an online questionnaire comprising socio-demographic information, the Portuguese version of the Brief Symptom Inventory (BSI) and the Attribution Questionnaire to Measure Mental Illness Stigma (AQ27) [5]. Ethical procedures, in accordance with the Helsinki Declaration, were followed throughout the research.

Teaching professionals were mostly women (70.8%), aged between 30 and 62 years (mean=44.8; SD=7.86); 27.1% had been diagnosed with a mental disease and 41.1% reported contact with mental health patients. Teaching professionals revealed low levels of global psychological symptoms, with the highest scores in Obsession-Compulsion (M=0.88; SD=0.73). A moderate level of stigma was found in the global sample (M=3.63; SD=0.74), with stigmatizing helping attitudes and behaviour scoring highest (M=6.50, SD=1.86). Although no difference was found in total stigma between groups, higher education professors rated responsibility higher than high-school teachers (M=3.06, SD=1.08 vs M=2.55, SD=0.97; p=0.02). Significant positive correlations were found between total stigma and the psychological symptoms of Somatization, Obsession-Compulsion, Interpersonal Sensitivity, Depression, Anxiety and Psychoticism (p < 0.01).

Teaching professionals experience several psychological symptoms at low intensity and reveal medium stigma towards mental illness. High-school teachers revealed higher psychological distress than higher education professors, but no differences were found in stigma. The higher the psychological distress, the higher the stigma towards people with mental illness is expected to be. Intervention addressing psychological distress is therefore needed in order to minimize mental health stigma.

1. Au DWH, Tsang HWH, Lee JLC, Leung CHT, Lo JYT, Ngai SPC, et al. Psychosomatic and physical responses to a multicomponent stress management program among teaching professionals: A randomized study of cognitive behavioral intervention (CB) with complementary and alternative medicine (CAM) approach. Behav Res Ther. 2016;80:10–16.

2. Hamann J, Bühner M, Rüsch N. Self-Stigma and Consumer Participation in Shared Decision Making in Mental Health Services. Psychiatr Serv. 2017;68(8):783–788.

3. Basar MCZ. The beliefs of teachers toward mental illness. Procedia - Soc Behav Sci. 2012;47:1146–1152.

4. Ketchen SL, Gaddis SM, Heinze J, Beck K, Eisenberg D. Variations in Student Mental Health and Treatment Utilization Across US Colleges and Universities. J Am Coll Heal. 2015;63(6):388–396.

5. De Sousa S, Marques A, Curral R, Queirós C. Stigmatizing attitudes in relatives of people with schizophrenia: a study using the Attribution Questionnaire AQ-27. Trends Psychiatry Psychother. 2012;34(4):186-197.

Mental health, Psychological symptoms, Stigma, Teaching professionals.

O154 Exploring quality of life in individuals with cognitive impairment and chronic mental health difficulties in the context of supported employment services

Ana R Jesus, Cristina Silva, João Canossa Dias, Karina Sobral, Mário Matos, Patrícia Sá, Rui Moreira, Sara Coutada, Sara A Oliveira, Telma Antunes; Associação para a Recuperação de Cidadãos Inadaptados da Lousã, 3200-901 Lousã, Portugal; correspondence: Sara A Oliveira ([email protected]).

The complex combination of many broad factors may have adverse effects on health, with an impact on several areas. Quality of life (QoL) is a useful concept for measuring the health state experienced by individuals; perceived physical health, psychological well-being, social relationships, and environmental factors are its main dimensions (WHOQOL Group, 1995). In the field of employment, evidence-based supported employment is one of the most effective approaches. Thus, A.R.C.I.L.'s Centre of Resources for Employment and Open Labour Market Inclusion started to include workers according to their preferences, mainly individuals with cognitive impairment and chronic mental health difficulties. In Portugal, few studies have explored QoL in individuals with cognitive impairment and chronic mental health difficulties, particularly from the individual's own perspective.

This study aims to explore differences in QoL among individuals with cognitive impairment and chronic mental illness attending the supported employment services provided by the Centre of Resources for Employment and Open Labour Market Inclusion. Moreover, outcomes of individuals' self-reported QoL from baseline to 6 months are analysed.

The sample was composed of adults (N = 169; 52.1% women and 47.9% men), aged 18-64 years, with a mean age of 41.34 (SD = 11.33), who attended supported employment services. All participants had a previous diagnosis of cognitive impairment and/or chronic mental illness. QoL was assessed with the WHOQOL-Bref (N=169 at baseline; N=51 at 6-month follow-up).

Results from independent t-tests revealed significant gender and age differences in QoL, with men reporting better physical and psychological QoL than women. Younger participants (≤ 40 years old) also presented better QoL than older participants in all domains except social relations. All differences reflected small to moderate effect sizes. Overall QoL, as well as its physical and environmental dimensions, was significantly and negatively associated with participants' age. Finally, after six months of supported employment services, 29.4% of participants increased their global QoL, 35.3% increased their physical and psychological QoL, 37.3% showed an improvement in their social relationships, and 55% reported feeling better in their environment.

Overall, results suggest that individuals with cognitive impairment and chronic mental illness attending supported employment services perceived a positive QoL at the 6-month follow-up. As the supported employment services are still ongoing, future studies should explore results in a larger sample and measure the impact on people's health and living conditions.

Quality of Life, WHOQOL-Bref, Supported employment services, Cognitive impairment, Chronic mental illness.

O155 Characterization of pediatric medicines use in pre-school and primary school children

Isabel C Pinto 1,2, Luís M Nascimento 2,3, Ana Pereira 4, Ana Izidoro 5, Cátia Patrocínio 6, Daniela Martins 7, Margarida Alves 8; 1 Departamento de Tecnologias de Diagnóstico e Terapêutica, Escola Superior de Saúde, Instituto Politécnico de Bragança, 5300-253 Bragança, Portugal; 2 Centro de Investigação de Montanha, Instituto Politécnico de Bragança, 5300-253 Bragança, Portugal; 3 Unidade Local de Saúde do Nordeste, 5300-253 Bragança, Portugal; 4 Farmácia d'Izeda, 5300-592 Izeda, Bragança, Portugal; 5 Farmácia Rainha, 5140-067 Carrazeda de Ansiães, Portugal; 6 Farmácia Confiança, 5300-178 Bragança, Portugal; 7 Farmácia Holon, 2870-225 Montijo, Portugal; 8 Farmácia da Ponte, 5370-390 Mirandela, Portugal; correspondence: Isabel C Pinto ([email protected]).

Parents and other caregivers often give their children medication without a prescription, a practice that can facilitate drug intoxication. A child is not a small-sized adult, which necessarily has implications for the use of drugs to ensure safety and effectiveness.

To examine the use of paediatric medicines and associated factors in children in pre-school and the 1st cycle of basic education in the city of Bragança, in the northeast of Portugal.

This cross-sectional, descriptive and correlational study was based on a questionnaire applied to 371 parents or guardians of children in pre-school and the 1st cycle of basic education in the city of Bragança, in the academic year 2014/2015. Statistical analysis was performed in SPSS, v. 20.0, using descriptive statistics; correlations were assessed using Spearman and chi-square tests, with a significance level of 5%.

The results revealed that 86% of parents use drugs without a prescription; of these, 49% resort to this practice on the basis of old medical guidance and 28% on the basis of information provided at the pharmacy. Most parents (53%) resort to self-medication to relieve their children's fever, or to treat influenza symptoms (14%). No statistically significant factors related to the use of non-prescription medication in children were found.

Paediatric self-medication is a common practice, mostly based on old medical guidance. No explanatory factors were found for this paediatric self-medication.

Acknowledgements:

The authors thank Fundação para a Ciência e a Tecnologia (FCT, Portugal) and FEDER, under the PT2020 program, for financial support to CIMO (UID/AGR/00690/2013).

Children, Pediatric medicines use, Pediatric self-medication, Pre-school children, Primary school children.

O156 Sleep and perimenopause: contributions to its management

Arminda Pinheiro ([email protected]), Higher School of Education, University of Minho, 4710-228 Braga, Portugal.

There are large geographical differences in the prevalence of menopausal symptoms, and differences in study methodologies have made comparisons difficult. In middle-aged women, sleep disorders are quite prevalent and are sometimes attributed directly to the menopausal transition. Based on the conceptual framework proposed by Meleis and Schumacher, we consider recent changes in sleep pattern to be an indicator of the transition process, assuming that they can interfere with quality of life and that the conditions/factors associated with this change can be identified.

To evaluate sleep disorders and related factors in perimenopausal women.

This was a cross-sectional, correlational study with a non-probabilistic convenience sample, in which 600 Portuguese perimenopausal women (45–55 years) were asked to complete the Menopause Rating Scale (MRS), a scale of attitudes and beliefs about menopause, the Social Support Satisfaction Scale and a self-esteem scale. Semi-structured interviews collected socio-demographic and socio-economic data; lifestyle data; psychological data (global health perception, stressful life events, perception of recent changes in body image, having life projects); and health history data. Physical examination included blood collection for determination of follicle-stimulating hormone (FSH) and estradiol (E2), weight, height, and abdominal measurements. Women signed informed consent forms after the study objectives were explained and anonymity and confidentiality were guaranteed.

In this study, 43.5% of the women reported having no problems with sleep, 18.2% reported problems of light intensity, 10.2% of moderate intensity and 28.2% very intense problems. Regarding the influence of the factors included in the final model on the probability of a woman reporting uncomfortable sleep problems, the forward (LR) logistic regression revealed statistically significant effects on the logit of that probability for the socio-demographic and socioeconomic factor level of education (b(basic education) = 0.933, χ²Wald(1) = 4.386, p = 0.035, OR = 2.222); for the psychosocial factors meaning attributed to menopause (b(positive meaning) = -0.504, χ²Wald(1) = 6.262, p = 0.012, OR = 0.604), satisfaction with social support (b(family support) = -0.154, χ²Wald(1) = 10.849, p = 0.001, OR = 0.857), attitudes and beliefs regarding menopause (b(changes with health/aging) = -0.207, χ²Wald(1) = 10.634, p = 0.001, OR = 0.813; b(physical changes) = 0.130, χ²Wald(1) = 5.282, p = 0.022, OR = 0.878) and having life projects (b(do not have projects) = -0.662, χ²Wald(1) = 9.907, p = 0.002, OR = 0.516); and for the lifestyle factor number of meals (b(number of meals) = -0.285, χ²Wald(1) = 10.658, p < 0.001, OR = 0.752), according to the adjusted logit model (G²(10) = 173.916, p < 0.001; χ²Wald(8) = 6.484, p = 0.593; R²CS = 0.252; R²N = 0.342; R²MF = 0.218).
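The forward (LR) logistic regression above can be reproduced in outline. Below is a minimal sketch, not the authors' code: it fits a logistic model to simulated data with two illustrative predictors (basic education, satisfaction with family support) and reports the coefficients, Wald chi-square statistics and odds ratios in the form quoted above. All variable names and values are invented assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 600
basic_education = rng.integers(0, 2, n)        # 1 = basic education only (hypothetical)
family_support = rng.normal(15.0, 4.0, n)      # satisfaction-with-support score (hypothetical)
X = sm.add_constant(np.column_stack([basic_education, family_support]).astype(float))

# Simulate the binary outcome "reported uncomfortable sleep problems"
logits = -0.5 + 0.9 * basic_education - 0.15 * family_support
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

fit = sm.Logit(y, X).fit(disp=False)
wald = (fit.params / fit.bse) ** 2             # Wald chi-square, 1 df per term
print("b:", fit.params)
print("Wald chi2:", wald)
print("OR:", np.exp(fit.params))               # OR = exp(b)
```

Because OR = exp(b), a negative coefficient (here, family support) yields an OR below 1, matching the protective factors reported in the abstract.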

Sleep problems can be considered a negative indicator of transition processes in perimenopausal women. The model suggests some modifiable factors, specifically eating habits, attitudes, beliefs and the meaning attributed to menopause, and the importance of satisfaction with family social support. These aspects should be included in the initial nursing assessment and risk evaluation of women going through this period, so as to adequately manage nursing interventions.

Problems, Sleep, Menopause.

O157 Preschooler’s executive and socio-emotional functioning: effects of two intervention programs - Psychomotor therapy and Creative Dance

Andreia Sarnadinha 1, Catarina Pereira 1,2,3, Ana C Ferreira 1,2,3, Jorge Fernandes 1,2,3, Guida Veiga 1,2,3; 1 Department of Sports and Health, School of Science and Technology, University of Évora, 7000-671 Évora, Portugal; 2 Comprehensive Health Research Center, University of Évora, 7000-671 Évora, Portugal; 3 Research Center in Sports Sciences, Health Sciences and Human Development, University of Beira Interior, 6201-001 Covilhã, Portugal; correspondence: Andreia Sarnadinha ([email protected]).

The preschool years represent a critical period for the development of children's executive functioning and socio-emotional competences [1] and are therefore the ideal period for the stimulation of these competencies [2, 3]. Interventions with preschool-age children should privilege spontaneity, creativity and play as methods of learning and stimulation [4, 5]. Psychomotor therapy and Creative Dance are two therapeutic approaches based on these principles [5, 6]. However, to date, no research has compared the effects of Psychomotor therapy and Creative Dance.

The aim of this study was to examine the feasibility and the impact of two intervention programs, Psychomotor therapy versus Creative Dance, on the executive and socio-emotional functioning of pre-schoolers.

Fifty preschool children (M = 4.04 years; SD = 0.67) were divided into two intervention groups and a control group. One experimental group participated in 24 Psychomotor therapy sessions, mainly involving sensorimotor activities and games with rules. The other experimental group participated in 24 Creative Dance sessions. The control group maintained daily life activities. Cold executive functions, hot executive functions, externalized and internalized behaviours, and aggressiveness were evaluated.

The intervention programs were well tolerated by the preschool-aged children. No significant differences were found in the intra- and inter-group comparisons, except in the control group (p < 0.05). Cold executive functions were negatively correlated with reactive aggressiveness (r = -0.408, p = 0.003).

The results suggest that both programs were feasible and well tolerated in this age group, but their benefits were not evident. Increased working memory was associated with decreased levels of reactive aggression. This study highlights the need for further research focused on preschoolers' executive and socio-emotional functioning, particularly on the effects of intervention programs.

1. Papalia DE, Olds SW, Feldman, RD. O mundo da criança. Lisboa: McGraw-Hill; 2001.

2. Diamond A, Lee K. Interventions shown to aid executive function development in children 4–12 years old. Science, 2011;333(6045):959–964.

3. León CBR, Rodrigues CC, Seabra AG, Dias NM. Funçþes executivas e desempenho escolar em crianças de 6 a 9 anos de idade. Revista Psicopedagogia. 2013;30(92):113-120.

4. Vygotsky LS. A formação social da mente. São Paulo: Martins Fontes; 2010.

5. Traverso L, Viterbori P, Usai MC. Improving executive function in childhood: evaluation of a training intervention for 5-year-old children. Front Psychol. 2015;6:525-536.

6. Gilbert AG. Creative dance for all ages: a conceptual approach. Australia: Shape America; 2015.

Executive functions, Mental health, Mind-body therapies, Psychomotor intervention.

O158 Impact of a 10 km race on inflammatory and cardiovascular markers: comparison between trained and untrained recreational adults

Margarida Carvalho 1, Andreia Noites 2, Daniel Moreira-Gonçalves 3,4, Rita Ferreira 5, Fernando Ribeiro 6; 1 Hospital de Santa Maria, 4049-025 Porto, Portugal; 2 Department of Physiotherapy, School of Allied Health Technologies, Polytechnic Institute of Porto, 4200-072 Porto, Portugal; 3 Research Center in Physical Activity, Health and Leisure, Faculty of Sport, University of Porto, 4200-450 Porto, Portugal; 4 Department of Surgery and Physiology, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal; 5 Mass Spectrometry Group, Department of Chemistry, University of Aveiro, 3810-193 Aveiro, Portugal; 6 School of Health Sciences and Institute of Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal; correspondence: Margarida Carvalho ([email protected]).

Previous studies have found that trained athletes show smaller changes in circulating levels of inflammatory biomarkers and cardiovascular stress than untrained athletes upon prolonged or exhausting exercise. In particular, recreational runners with less training showed a higher risk of cardiac injury and dysfunction after a marathon. A steadily growing number of young and older adults are engaging in running events without professional orientation or training, emphasizing the need to assess biochemical markers that allow evaluation of the acute changes imposed on these recreational athletes.

To compare the immediate and 24-hour effects of a 10-km run on inflammatory and cardiovascular biomarkers between recreational athletes, with and without specific running training.

Eighteen recreational athletes (38.5 ± 14.5 years), 10 men and 8 women, were recruited and divided into a trained and an untrained group. Venous blood samples were taken prior to the 10-km race (48 hours before), immediately after (within 30 minutes), and 24 hours after the race. The following biomarkers were analysed by slot blotting assay: vascular endothelial growth factor (VEGF), interleukin 6 (IL-6), high-sensitivity C-reactive protein (hsCRP), ghrelin, matrix metalloproteinase-2 (MMP-2) and MMP-9.

The trained group completed the race in 50.3 ± 13.0 minutes, compared to 66.8 ± 5.6 minutes for the untrained group (p = 0.003). A significant increase in circulating levels of hsCRP, ghrelin, VEGF and MMP-9 was observed immediately after the race in both groups; the levels of these biomarkers returned to baseline 24 h post-race. A significant increase in IL-6 was also detected after the race in both groups, returning to baseline levels at 24 hours post-race in the untrained group. Regarding MMP-2, a significant increase was detected after the race only in the untrained group, with a return to baseline levels at 24 hours post-race.

The impact of a 10-km race on the inflammatory and cardiovascular markers assessed in this study differed between recreational athletes with and without specific training.

Biomarkers, Cardiovascular system, Exercise, Inflammation, Running.

O159 The health of the informal caregiver of dependent person in self-care

Maria A Dixe 1,2, Ana CS Cabecinhas 2, Maura R Domingues 2, Ana JCF Santos 2, Marina G Silva 2, Ana Querido 1,2; correspondence: Maria A Dixe ([email protected]).

Caring for a caregiver should be a constant concern and a responsibility of all health professionals, so that those who give care do not end up being uncared-for.

This correlational study had the following main aims: to assess the level and prevalence of burden among informal caregivers of persons dependent in self-care, and to determine the relationship between levels of burden and the informal caregiver's perception of their competence to be a caregiver.

Participants were 33 informal caregivers of persons dependent in at least one activity of daily living, who underwent a structured interview at the time of hospital discharge. The interview included socio-demographic and professional data, the informal caregiver's perception of their competence to be a caregiver, and the Portuguese version of the Zarit Burden Interview [1]. This study was approved by the National Data Protection Commission and the ethics committee of the hospital where the study was conducted (nº 24/2017).

The majority of dependent persons were female (60.6%), with a mean age of 81.6 ± 11.3 years, and most were dependent in all self-care activities. The mean age of caregivers was 61.4 ± 12.1 years, and they were mainly female. The family relationship was mostly son/daughter (39.4%) or spouse (33.3%), and caregivers had been taking care of the patient for 63.9 ± 0.93 months on average. All caregivers had previous experience of caring for a dependent family member. The 33 caregivers presented a mean of 53.9 ± 15.8 on the emotional burden scale (maximum possible value of 110), which corresponds to little burden: 30.3% of the caregivers presented no burden, 30.3% mild burden and 39.4% intense burden. Regarding caregiver burden, higher levels of informal caregiver burden were related to lower levels of perceived competence to satisfy needs related to the hygiene of the dependent person (rs = -0.514; p < 0.05).

Caring for a dependent person may pose health risks to the caregiver. Even though the sample is small, a considerable number of caregivers presented intense emotional burden. It is therefore necessary for health professionals to develop interventions to prevent caregiver burden.

Acknowledgments

The current abstract is presented on behalf of the Help2Care research project. This study was funded by COMPETE 2020 under the Scientific and Technological Research Support System, in co-promotion. We acknowledge CiTechCare, the Polytechnic Institute of Leiria, the Polytechnic Institute of Santarém, the Centro Hospitalar de Leiria, the Polytechnic Institute of Castelo Branco, and all other members, institutions and students involved in the project.

1. Sequeira CA. Adaptação e validação da Escala de Sobrecarga do Cuidador de Zarit. Revista Referência. 2010;2(12):9-16.

Emotional burden, Informal caregiver, Self-care, Dependent-person, help2care.

O160 Growth and puberty in adolescent artistic gymnasts: is energy intake a question of concern?

Rita Giro 1, Mónica Sousa 2, Inês T Marques 3, Carla Rêgo 3; 1 Universidade do Porto, 4099-002 Porto, Portugal; 2 Escola Superior de Saúde, Instituto Politécnico de Leiria, 2411-901 Leiria, Portugal; 3 Centro da Criança e do Adolescente, Hospital CUF Porto, 4100-180 Porto, Portugal; correspondence: Rita Giro ([email protected]).

Whether high-intensity training during childhood and adolescence compromises the growth and pubertal development of artistic gymnasts remains a matter of debate. When coupled with low energy availability, however, this hypothesis is strengthened [1].

To characterize growth, sexual maturation, total energy intake and training aspects of competing artistic gymnasts and check for associations.

Convenience sample of 22 competing artistic gymnasts (13.8 ± 1.9 years) of both sexes. Anthropometric evaluation and body composition assessment were performed (InBody230™). Tanner stage was determined for all athletes, and age of menarche and regularity of menses were assessed in females. Total energy intake was quantified (3-day food record) and its adequacy verified against the recommendations [2]. Athletes' training habits were also characterized.

Females showed significantly higher body fat (15.9 ± 3.2 vs 7.2 ± 3.6) and lower skeletal muscle mass (45.7 ± 2.1 vs 51.1 ± 3.0) percentages than males (p < 0.05). No differences were found between sexes for any of the other variables in the study (p ≥ 0.05). Mean age of menarche was 12.6 (± 1.3) years. Short stature was detected in 12.6% of the female gymnasts (z-score < -2), but no cases of low weight-for-height (z-score < -2) were observed. All athletes presented total energy intakes below the recommendations. A high training frequency and intensity were reported (median: 6 days/week, in a total of 20.8 hours), and both training frequency (ρ = +0.765; p = 0.016) and training intensity (ρ = +0.727; p = 0.026) were associated with a later onset of menarche.

Our data suggest metabolic adaptation to chronically insufficient energy intakes in these athletes, which, in line with growing evidence, might play a role in growth and pubertal delay.

The authors would like to thank and acknowledge the contribution of Cristina Côrte-Real and Manuel Campos to the development of this investigation.

1. Mountjoy M, Sundgot-Borgen J, Burke L, Carter S, Constantini N, Lebrun C, et al. The IOC consensus statement: beyond the Female Athlete Triad--Relative Energy Deficiency in Sport (RED-S). Br J Sports Med. 2014;48(7):491-497.

2. EFSA Panel on Dietetic Products Nutrition and Allergies (NDA). Scientific Opinion on Dietary Reference Values for energy. EFSA Journal. 2013;11(1):3005.

Athletes, Energy availability, Health, Nutrition assessment, Paediatrics.

O161 Communication effectiveness in nursing teams

AntĂłnio calha 1 , liliana grade 3 , olĂ­via engenheiro 4 , sandra sapatinha 5 , eva neto 2, 1 instituto politĂŠcnico de portalegre, 7300-110 portalegre, portugal; 2 centro hospitalar do algarve, 8000-386 faro, portugal; 3 unidade local de saĂşde baixo alentejo, 7801-849 beja, portugal; 4 hospital espĂ­rito santo, 7000-811 ĂŠvora, portugal; 5 unidade local de saĂşde norte alentejano, 7300-074 portalegre, portugal, correspondence: antĂłnio calha ([email protected]).

The communication processes established among nurses are factors influencing the quality and effectiveness of care.

The main objective of this research was to identify how nurses evaluate the different dimensions of the communicative process in the service where they carry out their professional activity, and what the main obstacles are to the proper functioning of that process.

This is an eminently quantitative study of a correlational nature. Data were collected using a questionnaire with closed-ended questions. 75 nurses from four health services (Neonatology, Medicine II, Emergency and Basic Emergency) were surveyed.

Five indices were elaborated to measure the different facets of the communicative process: communication efficiency (α = 0.82); information sufficiency (α = 0.84); information timing (α = 0.85); explicitness of the message (α = 0.90); and practical applicability (α = 0.87). Similarly, five indices were computed to measure the nature of the information: clinical information (α = 0.88); organizational information (α = 0.89); service information (α = 0.91); team information (α = 0.92); and personal information (α = 0.93). All indices ranged between 1, corresponding to the worst possible appraisal, and 5, corresponding to the best. The results show that organizational information fares worst in nurses' appraisal of the communicative process (M = 2.96), especially regarding the timeliness with which information reaches them (M = 2.76). A Kruskal-Wallis test identified differences regarding the consideration of conflict as an obstacle to the communication process (χ²KW(3) = 30.01, p < 0.001) and the overvaluation of personal and professional relations (χ²KW(3) = 12.60, p = 0.006).
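For readers unfamiliar with the two statistics reported above, the following hedged sketch shows how a Cronbach's alpha for a multi-item index and a Kruskal-Wallis test across the four services could be computed; the item scores and group data below are simulated, not the study's data.

```python
import numpy as np
from scipy.stats import kruskal

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed index
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
org_info = rng.integers(1, 6, size=(75, 5))       # hypothetical 5-item index, 75 nurses
print(f"alpha = {cronbach_alpha(org_info):.2f}")

# Kruskal-Wallis across the four services (hypothetical obstacle ratings)
groups = [rng.normal(3, 1, 20) for _ in range(4)]
h, p = kruskal(*groups)
print(f"chi2_KW(3) = {h:.2f}, p = {p:.3f}")
```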

The research identifies some communication weaknesses in the clinical context, related mainly to the way organizational information is disseminated in the services.

Communication, Efficiency, Nursing, Teams.

O162 Factors affecting interpersonal conflict in nursing teams

AntĂłnio calha 1 , marĂ­lia ferreira 2 , sĂ­lvia alminhas 2 , telmo pequito 2, 1 instituto politĂŠcnico de portalegre, 7300-110 portalegre, portugal; 2 hospital espĂ­rito santo, 7000-811 ĂŠvora, portugal.

Teamwork is one of the foundations of nursing, exposing the profession to the vulnerabilities of group dynamics. In this context, conflict-management skills in working teams are particularly relevant.

This research aimed to: I) identify how often nurses deal with conflict situations; II) identify the main causes of conflict mentioned by nurses; and III) assess the strategies adopted to deal with conflict situations.

This is an exploratory, quantitative and correlational study. Data were collected using a questionnaire with closed-ended questions. The sample consisted of 35 nurses from the emergency department of a hospital of the Portuguese NHS.

Five indices were computed to evaluate the different strategies for dealing with conflict situations: I) commitment strategy (α = 0.745); II) avoidance strategy (α = 0.699); III) accommodation strategy (α = 0.745); IV) confrontation strategy (α = 0.618); and V) collaboration strategy (α = 0.698). All indices ranged between 1, corresponding to the least frequent possible appraisal, and 5, corresponding to the most frequent. Most of the nurses reported that they were rarely involved in conflict situations; however, 57.1% stated that they sometimes observed such situations. Results show that nurses mostly indicated the use of two strategies to deal with conflict: accommodation (M = 3.11) and confrontation (M = 3.07). Data analysis revealed that the accommodation strategy had a statistically significant positive correlation with incompatibility of personalities (rs = 0.400, p < 0.05) and a negative correlation with scarcity of material (rs = -0.358, p < 0.05) as causes of conflict.

The results obtained allow us to conclude that the nature of the conflict determines the way it is managed by nurses. The data reveal, in particular, that the scarcity of material resources strengthens confrontation in the nursing team, contributing to the degradation of the organizational environment. Conflict management is thus an essential skill and tool that nurses can, and should, use as a basis for the sustainability and development of nursing practice.

Interpersonal conflict, Team work, Nursing, Emergency service.

O163 Numerical modelling of electrical stimulation on scaffolds for tissue engineering

Paula Pascoal-Faria 1,2, Pedro C Ferreira 1, Abhishek Datta 3,4, Nuno Alves 1; 1 Centre for Rapid and Sustainable Product Development, Polytechnic Institute of Leiria, 2411-091 Leiria, Portugal; 2 School of Technology and Management, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Soterix Medical Inc., 10001 New York, New York, United States of America; 4 City College of New York, 10031 New York, New York, United States of America; correspondence: Paula Pascoal-Faria ([email protected]).

Preliminary experimental in vitro studies on tissue engineering applications have shown the advantage of using different types of stimuli, namely mechanical, electrical, magnetic, or combinations of these, to enhance cell behaviour. In these studies, cell proliferation and differentiation change significantly when electrical stimulation is applied to cells placed inside scaffold systems within a bioreactor. We have established an ambitious research program on the numerical modelling of stimuli on scaffolds for tissue engineering. In this study, we develop a new finite element-based (FE) multiphysics framework that allows numerical optimization of the parameters involved when electrical stimulation is applied to bioscaffolds with different geometries and characteristics. The framework predicts the electrical stimulation as a function of the scaffold geometry and its electrical characteristics, which may contribute to accelerating cell proliferation and differentiation.
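As an illustration of the kind of computation such a framework performs, here is a minimal one-dimensional finite element sketch, not the authors' framework: it solves the steady-state potential equation -d/dx(σ(x) dV/dx) = 0 across a scaffold with spatially varying conductivity and electrodes (fixed potentials) at both ends. The geometry, conductivities and voltages are invented assumptions.

```python
import numpy as np

n_el = 100                       # number of linear elements
L = 1e-2                         # scaffold thickness (m), assumed
x = np.linspace(0.0, L, n_el + 1)
h = L / n_el
# Hypothetical conductivity per element: scaffold material vs. culture medium (S/m)
sigma = np.where((x[:-1] + h / 2) < L / 2, 0.02, 1.5)

# Assemble the global stiffness matrix from linear-element contributions
K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):
    ke = sigma[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += ke

# Dirichlet boundary conditions: 1 V on one electrode, 0 V on the other
V = np.zeros(n_el + 1)
V[0], V[-1] = 1.0, 0.0
free = np.arange(1, n_el)
rhs = -K[np.ix_(free, [0, n_el])] @ V[[0, n_el]]
V[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
print(f"potential at mid-scaffold: {V[n_el // 2]:.3f} V")
```

A 3D multiphysics solver generalizes this same assemble-and-solve pattern to realistic scaffold geometries and coupled field equations.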

Finite element model, Bioscaffolds, Electrical stimulation, Tissue engineering.

O164 Motor competence in preschoolers with and without hearing loss

Guida Veiga 1,2,3, Mariana Santos 1, Brenda S Silva 4, Catarina Pereira 1,2,3; 1 Department of Sports and Health, School of Science and Technology, University of Évora, 7000-671 Évora, Portugal; 2 Comprehensive Health Research Center, University of Évora, 7000-671 Évora, Portugal; 3 Research Center in Sports Sciences, Health Sciences and Human Development, University of Beira Interior, 6201-001 Covilhã, Portugal; 4 Developmental Psychology, Leiden University, 2311 EZ Leiden, Netherlands; correspondence: Guida Veiga ([email protected]).

Several studies have demonstrated the developmental consequences of early childhood hearing loss in terms of language, communication, academic, and social-emotional functioning [1]. However, there is still no consensus on whether young children with hearing loss (HL) are comparable to hearing peers regarding their motor competence. Whereas some studies associate hearing loss with poorer motor development [2], others show that children with HL are as proficient as their hearing peers [3]. Nevertheless, most of these studies have focused on older children, and to date no study has examined the motor competence of Portuguese children with HL.

This study aimed to examine motor competence of children with HL in comparison with hearing children.

A total of 35 children participated in the study: 13 (mean age 4.73 years) with HL and 22 (mean age 5.09) hearing children. Children were tested with the Movement Assessment Battery for Children - Second Edition (MABC-2).

Children with HL showed worse performance on manual dexterity, ball skills and balance than hearing peers; however, these differences were only significant for balance (p = 0.006).

Children with HL are at greater risk for balance deficits.

1. Joint Committee on Infant Hearing. 2007 position statement: Principles and guidelines for early hearing detection and intervention programs. Pediatrics. 2007;120:898–921.

2. Hartman E, Houwen S, Visscher C. Motor skill performance and sports participation in deaf elementary school children. Adapted Physical Activity Quarterly. 2011;28(2):132-145.

3. Engel-Yeger B, Weissman D. A comparison of motor abilities and perceived selfefficacy between children with hearing impairments and normal hearing children. Disability And Rehabilitation. 2009;31(5):352-358.

Motor skills, Emotion understanding, Empathy, Deafness, Cochlear implant.

O165 Determinants of the beginning of breastfeeding in the first half hour of life

Dolores Sardo ([email protected]).

Breastfeeding in the first hour of life is associated with a longer duration of breastfeeding, a reduction in infant deaths in low-resource countries and improvement of the health status of child and mother. This practice favours early contact between mother and baby, and Step 4 of the ten steps required to be considered a Baby-Friendly Hospital recommends helping mothers initiate breastfeeding within the first half hour after birth (WHO & UNICEF).

To evaluate the rate of breastfeeding initiation in the first half hour and first two hours of the child's life; and to identify the determinants of breastfeeding in the first half hour and two hours of life and their relationship to the prevalence of exclusive breastfeeding at four months.

We performed a quantitative, descriptive and cross-sectional study in a Portuguese population. The sample was non-probabilistic and intentional (n = 150): 89.3% of mothers were married or living with a partner, the average age was 31.1 years, and 43.3% had a normal delivery. Data were collected through a self-report questionnaire administered to mothers 4 months after delivery.

The rates of breastfeeding initiation in the first half hour and first two hours of the child's life were 48% and 88%, respectively. Determinants related to obstetric history, the newborn and breastfeeding were studied: only caesarean delivery revealed statistical significance in the first half hour (χ²(1) = 6.141; p = 0.010), and there was a statistically significant association between eutocic delivery and the initiation of breastfeeding within two hours of life. The prevalence of exclusive breastfeeding at 4 months was predicted only by the weight of the newborn (b(small for gestational age) = 4.821; χ²Wald(1) = 21.616; p < 0.001; OR = 124.038).

In this study, only the type of delivery had a statistically significant influence on the initiation of breastfeeding in the first half hour of life. We also found that the initiation of breastfeeding in the first two hours of life did not predict its prevalence at four months.

Breastfeeding, Prevalence, First half-hour.

O166 Body image acceptance as a protector against the effect of body image-related shame memories on emotional eating in women with Binge Eating Disorder

Cristiana Duarte 1, José Pinto-Gouveia 2; 1 School of Psychology, Faculty of Medicine and Health, University of Leeds, LS2 9JT Leeds, United Kingdom; 2 Cognitive and Behavioural Centre for Research and Intervention, University of Coimbra, 3001-802 Coimbra, Portugal.

Understanding the factors underlying the public health problem of Binge Eating Disorder (BED) is a pressing need. Research shows that body image-related shame memories are significantly associated with the severity of binge eating symptomatology in women with BED [1]. These individuals may eat as a means of temporarily avoiding or reducing negative emotions related to body weight and shape [2, 3]. Evidence suggests that body image flexibility (i.e., the ability to accept and fully experience body image-related internal experiences, such as thoughts, emotions and memories, when doing so is consistent with valued living [4]) may deter engagement in emotional eating [5].

The current study examined the moderator effect of body image flexibility on the association between shame memories and emotional eating in a sample of women with BED.

109 women with a diagnosis of BED participated in this study. Mean (SD) participant age was 37.39 (10.51) years and BMI was 33.69 (7.75) kg/m². Participants were assessed through the Eating Disorder Examination 17.0D and the Shame Experiences Interview (SEI), and completed self-report measures assessing the centrality of the recalled shame memories, emotional eating and body image flexibility.

Descriptive statistics showed that body image-related shame memories were the most frequently recalled memories. Correlational analysis revealed that the extent to which the recalled shame memory is central to identity is significantly associated with emotional eating. Body image flexibility was negatively associated with the centrality of the recalled shame memory and with emotional eating. Moderation analysis showed that body image flexibility significantly moderated the association between the centrality of the recalled shame memory and emotional eating. This suggests that patients with BED who have a greater trait-like ability to accept body image-related negative internal experiences present a lower tendency to use food as a form of experiential avoidance.
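Moderation of this kind is commonly tested with an interaction term in a regression model. The sketch below is an assumption rather than the authors' analysis: it simulates data for the three variables and tests whether the centrality × flexibility interaction predicts emotional eating.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 109
centrality = rng.normal(0, 1, n)    # centrality of the shame memory (standardized)
flexibility = rng.normal(0, 1, n)   # body image flexibility (standardized)
# Simulate emotional eating with a buffering (negative) interaction effect
eating = (0.5 * centrality - 0.3 * flexibility
          - 0.4 * centrality * flexibility + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack(
    [centrality, flexibility, centrality * flexibility]))
fit = sm.OLS(eating, X).fit()
# A significant interaction coefficient indicates moderation: the
# centrality-eating slope weakens as flexibility increases.
print(fit.summary(xname=["const", "centrality", "flexibility", "interaction"]))
```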

Findings support a conceptual integrative model of binge eating symptomatology that clarifies the role contextual and interpersonal variables play in the development and persistence of difficulties in regulating eating behaviour in BED, and how body image flexibility may have a protective role in these associations. These results also have important clinical implications, supporting the relevance of contextual-behavioural interventions that address the role of shame memories and body image difficulties and that help patients to develop body image flexibility.

1. Duarte C, Pinto-Gouveia J. The impact of early shame memories in Binge Eating Disorder: The mediator effect of current body image and cognitive fusion. Psychiatry Research. 2017;258:511-517.

2. Duarte C, Pinto-Gouveia J, Ferreira C. Ashamed and fused with body image and eating: Binge eating as an avoidance strategy. Clin Psychol Psychother. 2015. doi: 10.1002/cpp.1996.

3. Heatherton T, Baumeister R. Binge eating as escape from self-awareness. Psychol Bull. 1991;110(1):86-108.

4. Sandoz E, Wilson K, Merwin R, Kellum K. Assessment of body image flexibility: The Body Image-Acceptance and Action Questionnaire. Journal of Contextual Behavioral Science. 2013;2(1-2):39-48.

5. Duarte C, Pinto-Gouveia J. Returning to emotional eating: The psychometric properties of the EES and association with body image flexibility. Eating and Weight Disorders. 2015;20 (4):497-504.

Body image-related shame memories, Shame memories centrality, Emotional eating, Body image flexibility, Binge Eating Disorder.

O167 Psychometric properties of the Psychological Welfare Scale

Rosa M Freire, Filipe Pereira, Teresa Martins, Maria R Grilo; correspondence: Rosa M Freire ([email protected]).

Pleasure and happiness have been associated with well-being, and obtaining maximum enjoyment is seen as a life goal. Nowadays, well-being is understood as either subjective well-being (SWB) or psychological well-being (PWB). SWB involves global evaluations that affect quality of life, whereas PWB examines perceived existential challenges of life [1].

To measure psychological welfare and validate the constructs of the 42 items of the Psychological Welfare Scale.

A quantitative, exploratory, cross-sectional study was conducted with a convenience sample of 252 participants. The measuring instrument was developed using the Qualtrics program. The study received favourable appraisal and consent from the Nursing School of Porto (ESEP), which provided a computer platform giving participants access to the measurement instrument. The construct analysed in this study was developed by Ryff [2] and is intended to measure psychological welfare on six subscales: autonomy, environmental mastery, personal growth, positive relations with others, purpose in life, and self-acceptance. Originally, each subscale was composed of 20 items, but the author suggested shorter versions, preferably the version with seven items per subscale. Respondents rated statements on a scale of 1 to 6, with 1 indicating strong disagreement and 6 indicating strong agreement. Participants were, on average, 31.38 years old (SD = 12.88) and 81% were female. The analysis of the scale items comprised item descriptive analysis, confirmatory factor analysis, internal consistency analysis of each subscale and factorial validity using AMOS (version 22, IBM SPSS).

The descriptive analysis of the items showed that response tendencies were above the midpoint. In testing the fit of the theoretical model to the empirical data, we verified that the model constituted by one factor with six manifest variables, each resulting from the sum of the seven items of its subscale, showed good fit indices. The internal consistency values of the subscales ranged from 0.69 to 0.82, suggesting reasonable to good internal consistency. All correlations were significant at the 0.01 level.

The values found suggest that the concepts of the subscales of the measuring instrument are related but without redundancy of measurement. The Psychological Welfare Scale is a theoretically grounded instrument which specifically focuses on measuring multiple facets of psychological welfare. We consider this scale to be a valid and reliable tool for measuring psychological welfare in the Portuguese context.

1. Sweta SM. Cross-cultural Validity of Ryff’s Well-being Scale in India. Asia-Pacific Journal of Management Research and Innovation. 2013;9(4):379–387.

2. Ryff C. Happiness is everything, or is it? Explorations on the meaning of psychological wellbeing. American Psychological Association, Inc. Journal of Personality and Social Psychology. 1989;57(6):1069–1081.

Assessment, Psychological Welfare, Scale, Psychometrics properties, Validation Studies.

O168 The influence of statins in the skeletal muscle of individuals with hypercholesterolemia - ultrasound study

Alexandra andrĂŠ 1 , joĂŁo p figueiredo 1 , carlos af ribeiro 2 , gustavo f ribeiro 3 , paula tavares 3, 1 escola superior de tecnologia da saĂşde de coimbra, instituto politĂŠcnico de coimbra, 3046-854 coimbra, portugal; 2 instituto biomĂŠdico de investigação da luz e da imagem, 3000-548 coimbra, portugal; 3 faculdade de ciĂŞncias do desporto e educação fĂ­sica, universidade de coimbra, 3040-248 coimbra, portugal, correspondence: alexandra andrĂŠ ([email protected]).

Skeletal muscle (SM) is a tissue with the capacity to adapt to different stimuli. Drug intake can cause alterations and incapacity to exercise, so macroscopic evaluation of muscle measurements is important. Pennation angle and fascicle length are determinants of muscle strength; thickness is important for evaluating states of atrophy or hypertrophy; and echogenicity evaluates fat infiltration in atrophy.

Statins interfere with SM function and are the most widely prescribed medications in the world for the prevention and treatment of high cholesterol. A side effect of these drugs is myotoxicity. Creatine kinase (CK) is a biomarker used to evaluate the severity of muscle damage, and it can vary with gender, age and physical activity. Exercise intensity also influences CK: if it exceeds metabolic capacity, the resulting alterations lead to CK release into the blood circulation and sarcomere degradation.

The aim of this study was to evaluate and analyse muscle structures macroscopically in patients with hypercholesterolemia, in order to interpret the pathophysiological mechanisms by which statins induce myotoxicity. The sensitivity and acuity of ultrasound were used to evaluate the muscle.

The study was performed on 47 individuals aged between 50 and 65 years, divided into three groups: a control group, individuals taking statins, and individuals taking statins who exercise. The gastrocnemius muscle was examined bilaterally using a 13 MHz probe to determine the pennation angle, fascicle length and thickness. A questionnaire and an informed consent form were signed by all individuals. The study was approved by the Ethics Committee for Health of the Regional Health Administration (study nº 45-2015).

Differences in dimensions were observed, with significant differences for the pennation angle and fascicle length. The main differences were observed between the control group and the experimental groups taking statins.

Ultrasound has the sensitivity and specificity to analyse muscle tissue macroscopically. Individuals taking statins have lower values for the evaluated structures, and differences were observed in the pennation angle on both sides in the statin group. The literature is unanimous in stating that individuals who take statins develop muscle changes known as myalgia, and the side effects of statins may increase with exercise. Ultrasonography is an imaging modality not yet widely used in this type of approach.

Muscle, Ultrasound, Statins, Hypercholesterolemia.

O169 Maximum expiratory pressure (MEP) increase through a program with a sportive blowgun in institutionalized women with intellectual disability

Marisa Barroso 1,2, Rui Forte 2, Rui Matos 1,2, Luís Coelho 1,2, David Catela 1; 1 Life Quality Research Center, 2001-904 Santarém, Portugal; 2 School of Education and Social Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; correspondence: Marisa Barroso ([email protected]).

The blowgun is a long tube through which projectiles are shot [1]. The propulsive power is limited by the strength of the user's respiratory muscles and the vital capacity of the lungs [2]. People with intellectual disability have limited maximum oxygen consumption and ventilatory capacity [3].

The aim of this study is to investigate the effect of a Sportive Blowgun program on the respiratory capacity of institutionalized women with intellectual disability.

The sample was divided into a control group (n = 9; age 49.80 ± 8.50) and an experimental group (n = 9; age 44.80 ± 13.30). The experimental group underwent a 12-week Sportive Blowgun program, with 2 weekly sessions in which each subject threw 40 darts at a target from a distance that progressed from 4 m to 10 m. Maximum expiratory pressure (MEP) values (L/min) were collected with a Micro Medical MicroRPM portable spirometer. Five measurements were taken for both groups: one before the beginning of the program, three during the program and one at the end. Informed permission from the institution and the participants' assent were obtained.

In the experimental group, a significant difference (p = 0.008) was observed between the mean initial (MEP1 = 48.11 L/min) and final (MEP5 = 73.56 L/min) maximum expiratory pressure values. There was a significant difference between the initial measurement and all other measurements, suggesting a positive evolution, but no significant difference between the intermediate measurements, which suggests a stagnation of values between the second, third and fourth measurements. In the control group, except for the second measurement, there was no significant difference between the initial maximal expiratory pressure and the other measurements, which suggests an initial adaptation to the test that did not last. Comparing the groups, all measurements except the first showed a significant difference, with a very significant difference in the third (p = 0.005) and final (p = 0.005) measurements, where the experimental group always presented higher values than the control group.
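The abstract does not name the tests used for these comparisons; one plausible analysis, sketched below with simulated MEP values, pairs a Wilcoxon signed-rank test for the within-group MEP1 vs MEP5 comparison with a Mann-Whitney U test for a between-group comparison at a given measurement.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(4)
exp_mep1 = rng.normal(48, 6, 9)              # experimental group, baseline (hypothetical)
exp_mep5 = exp_mep1 + rng.normal(25, 8, 9)   # experimental group, final measurement
ctrl_mep5 = rng.normal(50, 6, 9)             # control group, final measurement

# Paired within-group comparison (same subjects measured twice)
stat, p_within = wilcoxon(exp_mep1, exp_mep5)
print(f"within-group MEP1 vs MEP5: W = {stat:.1f}, p = {p_within:.3f}")

# Independent between-group comparison at the final measurement
u, p_between = mannwhitneyu(exp_mep5, ctrl_mep5, alternative="two-sided")
print(f"between-group final MEP: U = {u:.1f}, p = {p_between:.3f}")
```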

The results suggest that the Sportive Blowgun program increases maximum expiratory pressure (MEP), providing a gain in respiratory capacity in institutionalized women with intellectual disability. A more extensive study is suggested to verify the possibility of considering the Sportive Blowgun program a complementary non-clinical therapy for respiratory insufficiency.

1. Mariñas AP, Higuchi H. Blowgun Techniques: The Definitive Guide to Modern and Traditional Blowgun Techniques. Vermont: Tuttle Publishing; 2010.

2. Nagasaki T, Okada H, Kai S, Takahashi S. Influence of Blowgun Training on Respiratory Function. Rigakuryoho Kagaku 2010; 25(6): 867–871.

3. McGrother CW, Marshall B. Recent trends in incidence, morbidity and survival in Down’s syndrome. Journal of Mental Deficiency Research. 1990;34:49–57.

Blowgun, Intellectual disability, Respiratory therapy.

O170 Respiratory control technique and attention deficit hyperactivity disorder in children

David Catela 1, Isabel Piscalho 2, Rita Ferreira 2, Ana Victorino 2, Bárbara Cerejeira 2, Nicole Marques 2, Sara Dias 2; 1 Life Quality Research Center, 2001-904 Santarém, Portugal; 2 Education School, Polytechnic Institute of Santarém, 2001-904 Santarém, Portugal; correspondence: David Catela ([email protected]).

Attention deficit hyperactivity disorder (ADHD) comprises a persistent pattern of symptoms of hyperactivity, impulsiveness and/or lack of attention [1] and can cause significant impairment in academic activities [2]. ADHD has a prevalence rate ranging from 3% to 7% among school-age children [3]. Respiratory sinus arrhythmia (RSA) is higher among children with typical development than in children with ADHD [4], and ADHD in childhood is associated with abnormal parasympathetic mechanisms [5], with significantly higher mean heart rates, significantly shorter mean R-R intervals (lower heart rate variability) and a significantly higher LF/HF ratio than in children with typical development [6-10].

The purpose of this study is to verify whether children with ADHD can increase heart rate variability (HRV) through breath control.

Vital signs of ten potential ADHD children (11.22 ± .42 years old, 3 girls) were collected for 6 minutes, in the supine position, under two conditions: I) normal breathing (NB); and II) slow abdominal breathing (AB), cf. [11]. HRV data acquisition was carried out with a Polar V800 [12]. HRV analysis was performed with the gHRV software [13-15].

During AB, children significantly reduced breathing frequency (BF) (8.9±4.2, Md=8) compared to NB (18.3±5.9, Md=19) (Z=6.439, p<.001, r=.81), as well as diastolic pressure (DP) (AB – 61.3±8.7, Md=61; NB – 63.6±6.4, Md=63; Z=2.146, p<.05, r=.29); and significantly augmented the standard deviation of HR (AB – 7.4±1.4; NB – 5.9±1.5; Z=2.310, p<.05, r=.89), the standard deviation of the mean RR interval (AB – 80.5±22.9ms; NB – 64.3±27.7ms; Z=2.192, p<.05, r=.73), and SD2 (AB – 106.9±29.7; NB – 82.5±35.6; Z=2.192, p<.05, r=.73). Also, during AB, children reduced systolic pressure (101.3±16.4, Md=97; NB – 104.9±17.1, Md=103; ns) and heart rate (79.8±11.7bpm, Md=81.8; NB – 81.7±11.8bpm, Md=83.9; ns); the mean RR interval was greater (766.9±117.8ms, Md=734.8; NB – 749.1±115.2ms, Md=715.8; ns), as were rMSSD (112.9±53.3ms, Md=108.1; NB – 97.5±64.1ms, Md=79.9; ns) and the HRV index (18.3±4.7ms, Md=18.9; NB – 15.6±6.1ms, Md=14.1; ns); and ApEn became positive (greater) (.0001±.001ms, Md=.0001; NB – (-).001±.002ms, Md=(-).0001; ns).
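
For readers unfamiliar with the reported indices, the following minimal sketch computes the standard time-domain and Poincaré HRV measures (SDNN, rMSSD, SD1/SD2) from an RR-interval series. These are textbook definitions; gHRV's internal processing (e.g., artifact filtering) may differ, and the example RR series is synthetic.

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Time-domain and Poincare HRV indices from RR intervals (ms).

    Standard textbook definitions; gHRV may differ in preprocessing details.
    """
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)                          # successive RR differences
    sdnn = rr.std(ddof=1)                    # SD of all RR intervals
    rmssd = np.sqrt(np.mean(d ** 2))         # root mean square of differences
    sd1 = np.sqrt(0.5 * d.var(ddof=1))       # Poincare short-term variability
    sd2 = np.sqrt(2 * rr.var(ddof=1) - 0.5 * d.var(ddof=1))  # long-term
    return {"mean_rr": rr.mean(), "sdnn": sdnn,
            "rmssd": rmssd, "sd1": sd1, "sd2": sd2}

# Illustrative RR series (ms) roughly matching the reported group means.
rr_example = 750 + 60 * np.sin(np.linspace(0, 12, 360)) \
             + np.random.default_rng(1).normal(0, 20, 360)
print(hrv_metrics(rr_example))
```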

Consequently, with one short training session, these children were able to adopt a significantly slower BF, near 8 breaths per minute, cf. [11], which resulted in a reduction of HR, SP and DP, and in an augmentation of various HRV parameters. All these changes point to an increase of vagal activity during AB. If vagal activity was reinforced during AB [16, 17], a process of bottom-up adjustment of attention and emotional responses may perhaps be promoted, cf. [18-20].

1. American Psychiatric Association. Diagnostic and statistical manual of mental disorders (DSM-5ÂŽ). American Psychiatric Pub; 2013.

2. Cantwell DP, Baker L. Association between attention deficit-hyperactivity disorder and learning disorders. J Learn Disabil. 1991;24(2):88-95.

3. Rash JA, Aguirre-Camacho A. Attention-deficit hyperactivity disorder and cardiac vagal control: a systematic review. ADHD Attention Deficit and Hyperactivity Disorders. 2012;4(4):167-77.

4. Buchhorn R, Conzelmann A, Willaschek C, StĂśrk D, Taurines R, Renner TJ. Heart rate variability and methylphenidate in children with ADHD. ADHD Attention Deficit and Hyperactivity Disorders. 2012;4(2):85-91.

5. Musser ED, Backs RW, Schmitt CF, Ablow JC, Measelle JR, Nigg JT. Emotion regulation via the autonomic nervous system in children with attention-deficit/hyperactivity disorder (ADHD). J Abnorm Child Psychol. 2011; 39(6):841-852.

6. Tonhajzerova I, Ondrejka I, Adamik P, Hruby R, Javorka M, Trunkvalterova Z, et al. Changes in the cardiac autonomic regulation in children with attention deficit hyperactivity disorder (ADHD). Indian Journal of Medical Research. 2009;130:44-150.

7. Griffiths KR, Quintana DS, Hermens DF, Spooner C, Tsang TW, Clarke S, et al. Sustained attention and heart rate variability in children and adolescents with ADHD. Biological psychology. 2017;124:11-20.

8. Imeraj L, Antrop I, Roeyers H, Deschepper E, Bal S, Deboutte D. Diurnal variations in arousal: a naturalistic heart rate study in children with ADHD. European child & adolescent psychiatry. 2011;20(8):381-392.

9. Rukmani MR, Seshadri SP, Thennarasu K, Raju TR, Sathyaprabha TN. Heart rate variability in children with attention-deficit/hyperactivity disorder: a pilot study. Annals of neurosciences. 2016;23(2):81-88.

10. de Carvalho TD, Wajnsztejn R, de Abreu LC, Vanderlei LCM, Godoy MF, Adami F, et al. Analysis of cardiac autonomic modulation of children with attention deficit hyperactivity disorder. Neuropsychiatric disease and treatment. 2014;10:613.

11. Lehrer PM, Vaschillo E, Vaschillo B. Resonant frequency biofeedback training to increase cardiac variability: Rationale and manual for training. Applied psychophysiology and biofeedback. 2000;25(3):177-191.

12. Giles D, Draper N, Neil W. Validity of the Polar V800 heart rate monitor to measure RR intervals at rest. European Journal of Applied Physiology. 2016, 116(3):563–571.

13. Rodríguez-Liñares L, Lado MJ, Vila XA, Méndez AJ, Cuesta, P. gHRV: Heart rate variability analysis made easy. Computer Methods and Programs in Biomedicine. 2014;116(1):26–38.

14. Rodríguez-Liñares L, Méndez AJ, Vila XA, Lado MJ. gHRV: A user friendly application for HRV analysis. In: Information Systems and Technologies (CISTI); 2012. p. 1–5.

15. Vila J, Palacios F, Presedo J, Fernández-Delgado M, Felix P, Barro S. Time-frequency analysis of heartrate variability. IEEE Engineering in Medicine and Biology Magazine. 1997;16(5):119–126.

16. Levy MN. Autonomic interactions in cardiac control. Annals of the New York Academy of Sciences. 1990;601(1):209-221.

17. Uijtdehaage SH, Thayer JF. Accentuated antagonism in the control of human heart rate. Clinical Autonomic Research. 2000;10(3):107-110.

18. Ruiz-Padial E, Sollers JJ, Vila J, Thayer JF. The rhythm of the heart in the blink of an eye: Emotion modulated startle magnitude covaries with heart rate variability. Psychophysiology. 2003;40(2):306-313.

19. Thayer JF, Brosschot JF. Psychosomatics and psychopathology: looking up and down from the brain. Psychoneuroendocrinology. 2005;30(10):1050-1058.

20. Thayer JF, Lane RD. Claude Bernard and the heart–brain connection: Further elaboration of a model of neurovisceral integration. Neuroscience & Biobehavioral Reviews. 2009;33(2):81-88.

ADHD, Children, HRV, Breathing Technique.

O171 Can older adults accurately perceive affordances for a stepping forward task? Differences between faller and non-faller community-dwelling older adults

Gabriela Almeida, Jorge Bravo, Hugo Rosado, Catarina Pereira, Department of Sports and Health, School of Science and Technology, University of Évora, 7000-671 Évora, Portugal, correspondence: Gabriela Almeida ([email protected]).

Different studies framing an ecological approach to perception [1] have tried to understand how people, mostly children [2] and adults [3], perceive their action limits, in other words, what an environment affords relative to individual characteristics. However, studies on older adults are scarcer [4-7], particularly studies on whether faller and non-faller older adults can accurately perceive affordances for a stepping-forward task.

The purpose of this study was to determine if older people could accurately perceive affordances for the task of stepping forward. The relationship between real and estimated maximum distance was explored in community-dwelling older adults, comparing fallers with non-fallers.

A sample of 347 community-dwelling older adults (age 73.02 ± 6.40 yr; non-fallers: 57.9%, fallers: 42.1%) without cognitive impairment participated in the study. Participants were asked to predict their maximum stepping-forward distance prior to performing the task. Absolute Percent Error (APE), Absolute Error (AE) and Error Tendency (ET) were calculated accordingly [2, 8]. APE measured the percentage deviation from accurate perception, AE indicated the discrepancy (in cm) between estimation and real performance, and ET indicated the direction of the error (under- or overestimation bias).
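
A minimal sketch of these error metrics, assuming APE is the absolute error normalised by real performance, as is usual in the cited methodology [2, 8]; the example values are taken from the reported non-faller means.

```python
def estimation_errors(estimated_cm, real_cm):
    """AE, APE and error tendency for one estimation/performance pair.

    Assumed definitions: AE = |estimated - real| (cm),
    APE = AE / real * 100 (%), ET = under- vs. overestimation.
    """
    est, real = float(estimated_cm), float(real_cm)
    ae = abs(est - real)
    ape = ae / real * 100
    et = "underestimation" if est < real else "overestimation"
    return ae, ape, et

# Example near the reported non-faller means (63.7 cm estimated, 70.7 cm real):
# AE = 7.0 cm, APE about 9.9%, underestimation.
print(estimation_errors(63.7, 70.7))
```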

On average, non-faller older adults estimated (63.7 ± 15.5 cm) and performed (70.7 ± 14.9 cm) greater distances than fallers (estimation: 57.1 ± 14.5 cm; real: 61.7 ± 14.6 cm). No statistically significant differences were observed in APE (fallers: 7.2 ± 12.4%; non-fallers: 9.6 ± 12.5%). However, differences in AE were significant between faller (6.7 ± 5.9 cm) and non-faller (9.6 ± 12.5 cm) older adults (p = .001). Participants showed a strong tendency to underestimate (77.2%) the maximum distance achieved in stepping forward. The results show a significant association between ET and faller status (χ²(1) = 6.407, p = .01). Although participants in general exhibited an underestimation tendency, this tendency was greater in non-fallers (61.6% vs 38.4%). Further, there were fewer non-fallers than fallers overestimating their ability to step forward (45.6% vs 54.4%).

Older adults displayed a tendency to underestimate the maximum distance they can step forward. The overestimation bias was more frequent in fallers, whereas persons who underestimated tended not to fall, suggesting a protective behaviour that helps avoid falls. The data provide evidence that older adults can perceive what the environment affords, in agreement with an ecological perspective on perception and action.

This study was funded by ESACA Project (Grant ALT20-03-0145-FEDER-000007).

1. Gibson J. The ecological approach to visual perception. New Jersey: Lawrence Erlbaum; 1979.

2. Almeida G, Luz C, Martins R, Cordovil R. Differences between Estimation and Real Performance in School-Age Children: Fundamental Movement Skills. 2016;2016:3795956.

3. Cole WG, Chan GLY, Vereijken B, Adolph KE. Perceiving affordances for different motor skills. Exp brain Res. 2013;225(3):309–319.

4. Konczak J, Meeuwsen HJ, Cress ME. Changing affordances in stair climbing: the perception of maximum climability in young and older adults. J Exp Psychol Hum Percept Perform. 1992;18(3):691–697.

5. Cesari P, Formenti F, Olivato P. A common perceptual parameter for stair climbing for children, young and old adults. Hum Mov Sci. 2003;22(1):111–24.

6. Luyat M, Domino D, Noel M. Surestimer ses capacités peut-il conduire à la chute? Une étude sur la perception des affordances posturales chez la personne âgée. Psychol NeuroPsychiatr Vieil. 2008;6(4):286–297.

7. Noel M, Bernard A, Luyat M. La surestimation de ses performances : Un biais spécifique du vieillissement? Geriatr Psychol Neuropsychiatr Vieil. 2011;9(3):287–294.

8. Almeida G, Luz C, Martins R, Cordovil R. Do Children Accurately Estimate Their Performance of Fundamental Movement Skills. J Mot Learn Dev. 2017;5(2):193-206.

Aging, Falling, Perception of affordances, Gait.

O172 A new affordance perception test to explain fall occurrence: preliminary results of a stepping-forward task

Catarina LN Pereira, Jorge Bravo, Hugo Rosado, Gabriela Almeida, correspondence: Catarina LN Pereira ([email protected]).

Falls cause injury, dependence, and death. Identifying subjects who are potential fallers is essential for successful prevention. Researchers have developed several models and tests to diagnose an individual's risk of falling [1, 2]. Risk factors such as environmental hazards, strength, balance or dual tasking are commonly tested. However, their discriminative power is limited, indicating a gap that these tests do not address. Assessing the perception of affordances, i.e., the individual's ability to perceive the critical action boundary [3, 4], may fill this gap.

To analyse the appropriateness of a new stepping-forward test, which assesses perception and action boundaries, to explain fall occurrence in community-dwelling adults.

Participants were 266 women and 81 men aged 73.0 ± 6.4 years. They were assessed for fall occurrence (yes vs. no) and for stepping-forward and perception boundaries. Participants judged their maximum stepping-forward distance prior to performing the task. Absolute Error (AE) [|estimated – real|] (cm) and Absolute Percent Error (APE) (%) were computed, and the Error Tendency (ET) was classified (underestimation vs. overestimation) [5, 6].

Univariate binary regression analysis showed that all the described variables significantly explain fall occurrence (p < 0.05). For each additional cm estimated in the stepping-forward test, the likelihood of falling decreased by 2.9%, OR: 0.971 (95%CI: 0.957-0.986), and for each additional cm performed in the test, this likelihood decreased by 4.0%, OR: 0.960 (95%CI: 0.945-0.975). Furthermore, for each additional cm computed as AE, the likelihood of falling decreased by 3.6%, OR: 0.964 (95%CI: 0.933-0.996), and for each additional 1% computed as APE this likelihood decreased by 0.9%, OR: 0.991 (95%CI: 0.969-1.013). Finally, subjects reporting an ET of underestimation were 47.7% less likely to fall, OR: 0.523 (95%CI: 0.315-0.867), than subjects showing an ET of overestimation.
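
These odds ratios follow from a logistic model in which each additional centimetre multiplies the odds of falling by OR, so the percentage decrease is 1 − OR (e.g., 1 − 0.971 ≈ 2.9%). The sketch below reproduces this kind of analysis on synthetic data; the coefficients and sample are illustrative, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic data mimicking the design: fall (0/1) vs. estimated distance (cm).
n = 347
distance = rng.normal(61, 15, n)
p_fall = 1 / (1 + np.exp(-(1.5 - 0.03 * distance)))   # assumed true model
fell = rng.binomial(1, p_fall)

# Univariate binary logistic regression: odds ratio per additional cm.
X = sm.add_constant(distance)
fit = sm.Logit(fell, X).fit(disp=0)

or_per_cm = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR per additional cm: {or_per_cm:.3f} "
      f"(95% CI {ci_low:.3f}-{ci_high:.3f})")
```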

The new stepping-forward affordance perception test proved useful for determining the risk of fall occurrence. A higher estimated maximum distance or a higher real performance on the test was associated with a lower risk of falling. Further, a higher AE and an underestimation tendency were associated with a decreased risk of falling. This suggests that it is the margin of safety provided by a higher performance ability, in contrast with a lower perceived affordance, that is protective and helps avoid falls.

1. Pereira CLN, Baptista F, Infante P. Role of physical activity in the occurrence of falls and fall-related injuries in community-dwelling adults over 50 years old. Disabil Rehabil. 2014;36(2):117–124.

2. Lohman M, Crow R, DiMilia P, Nicklett E, Bruce M, Batsis J. Operationalisation and validation of the Stopping Elderly Accidents, Deaths, and Injuries (STEADI) fall risk algorithm in a nationally representative sample. J Epidemiol Community Health. 2017;71(12):1191–1197.

3. Luyat M, Domino D, Noel M. Surestimer ses capacités peut-il conduire à la chute? Une étude sur la perception des affordances posturales chez la personne âgée. Psychol NeuroPsychiatr Vieil. 2008;6(4):286–297.

4. Noel M, Bernard A, Luyat M. La surestimation de ses performances : Un biais spécifique du vieillissement? Geriatr Psychol Neuropsychiatr Vieil. 2011;9(3):287–294.

5. Almeida G, Luz C, Martins R, Cordovil R. Differences between Estimation and Real Performance in School-Age Children: Fundamental Movement Skills. 2016;2016:3795956.

6. Almeida G, Luz C, Martins R, Cordovil R. Do Children Accurately Estimate Their Performance of Fundamental Movement Skills. J Mot Learn Dev. 2017;5(2):193-206.

Aging, Falling risk, Boundary action, Perception.

Poster Communications

P1 Prevalence of low back pain in surfers: associated factors

Beatriz Minghelli 1,2, Inês Sousa 1, Sara Graça 1, Sofia Queiroz 1, Inês Guerreiro 1, 1 School of Health Jean Piaget – Algarve, Instituto Piaget de Silves, 8300-025 Silves, Portugal; 2 Research in Education and Community Intervention, Piaget Institute, 1950-157 Lisbon, Portugal, correspondence: Beatriz Minghelli ([email protected]).

Paddling is the movement surfers perform most during practice; this repeated movement, associated with a spinal hyperextension posture, may predispose them to injury.

The aim of this study was to verify the prevalence of low back pain in surfers, and its associated factors.

The sample consisted of 50 Algarve surfers, 40 (80%) males, aged between 9 and 57 years (24.26 ± 12.41 years). The measurement instruments consisted of a questionnaire and the KINOVEA software for movement analysis. The questionnaire contained questions about the socio-demographic characteristics of the population and about the occurrence of low back pain (at the moment, over a 12-month period and during all surfing practice). Surfers were demarcated with tape markers on D8 and at the base of the sacrum. They were filmed while performing the paddling movement, in the sea, using their own boards. The recorded videos were analysed: a line was drawn between the two markers, while another line was projected on the board, establishing an angle. Data analysis was performed through binary logistic regression (enter method), with the prevalence of low back pain during all surfing practice as the binary outcome variable.
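
The angle extraction reduces to vector arithmetic between the trunk line (D8 to sacrum markers) and the board line. A minimal sketch, with hypothetical pixel coordinates from a single video frame:

```python
import numpy as np

def angle_between_lines(p1, p2, q1, q2):
    """Acute angle (degrees) between the undirected lines p1-p2 and q1-q2."""
    v = np.subtract(p2, p1)
    w = np.subtract(q2, q1)
    cos_a = abs(np.dot(v, w)) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))

# Hypothetical pixel coordinates (image y-axis points down):
d8, sacrum = (340, 259), (460, 310)        # trunk markers (D8, sacrum base)
board_a, board_b = (200, 320), (700, 320)  # two points along the board line

# Prints about 23 degrees, within the reported 14-38 degree range.
print(f"hyperextension angle: "
      f"{angle_between_lines(sacrum, d8, board_a, board_b):.1f} deg")
```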

Eight (16%) surfers reported low back pain at the moment of data collection, 16 (32%) reported low back pain in the last 12 months, and 23 (46%) reported having felt low back pain at some point during their surfing practice. Spinal hyperextension angles varied between 14° and 38° (23.04° ± 4.73°). Female surfers presented higher odds of low back pain than males (OR = 1.36; 95%CI: 0.33-5.55; p = 0.671); individuals who had surfed for less than five years had 2.6 times the odds (95%CI: 0.82-8.20; p = 0.103) of those who had surfed for more than 5 years; surfers aged 18 years or older had 1.15 times the odds (95%CI: 0.38-3.49; p = 0.811) of younger surfers; those who did not participate in championships had 1.57 times the odds (95%CI: 0.50-4.83; p = 0.442) of those who participated; and surfers with a spinal hyperextension angle above 23° were 1.04 times more likely (95%CI: 0.34-3.19; p = 0.945) to develop low back pain.

There was a high prevalence of low back pain in the surfers analysed. Thus, a more detailed biomechanical analysis of the paddling movement in surfing is needed.

Low back pain, Prevalence, Surf.

P2 Assessment of the risk of work-related musculoskeletal disorders in nurses according to the RULA method

Paula C Santos 1,2, Sofia Lopes 1,3,4, Vanessa Silva 1, Pedro Norton 5, João Amaro 5, Cristina Mesquita 1,4, 1 Department of Physiotherapy, School of Allied Health, Polytechnic Institute of Porto, 4050-313 Porto, Portugal; 2 Research Centre in Physical Activity, Health and Leisure, Faculty of Sport, University of Porto, 4050-313 Porto, Portugal; 3 Escola Superior de Saúde de Vale de Sousa, 4585-116 Gandra, Portugal; 4 Centro de Estudos do Movimento e Atividade Humana, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 5 Centro Hospitalar de São João, 4200-319 Porto, Portugal, correspondence: Cristina Mesquita ([email protected]).

Regarding occupational health, the most frequent injuries are musculoskeletal disorders, which are commonly associated with risk factors such as task repetition and load handling. Among health professionals, nurses are the most affected.

This study evaluates work-related musculoskeletal disorders (WMSDs) in nurses from a central hospital, using the Rapid Upper Limb Assessment (RULA) method.

This is an observational, cross-sectional study with a sample of 34 nurses from the surgery department. Data were collected by observing the tasks performed by nurses while applying the RULA. The final score, which ranges from 1 (no intervention required) to 7 (immediate intervention required), indicates the need for intervention to prevent WMSDs. Descriptive analysis of the partial and final scores was performed, as well as the Mann-Whitney test, Fisher's exact test and the chi-square test.

The tasks with the highest risk were bed hygiene and transfers. Among the evaluated tasks, the majority of the final scores obtained were 6 and 7, which refers to a need for intervention soon or immediately, respectively. There were no significant associations between the risk of injury and gender, age or length of service.

It was concluded that most of the tasks performed by nurses presented a high final score, according to the RULA method, and the body segments with the highest risk are shoulders, neck and trunk, suggesting the need for immediate intervention.

Musculoskeletal disorders, RULA, Occupational Health.

P3 After disaster: conceptualising the extent and length of the psychological impact

Alice Morgado ([email protected]), University of Northampton, Northampton, NN2 6JU, United Kingdom.

Psychosocial responses to disasters have been widely explored in psychological and psychiatric literature. However, some issues have not yet been clarified with regards to conceptualizing disasters and addressing the long-term effects of disasters through a perspective focused on developmental and positive psychology principles.

The aim of this study is to explore existing research regarding psychological dimensions of exposure to disaster.

A literature review was conducted focusing on disaster conceptualisations and long-term adaptive functioning of those who have and have not been identified as individuals at risk for adverse outcomes. Focusing on conceptions of disaster and trauma, the extent of the impact in different populations was also considered, along with existing knowledge regarding reactions to disaster and possible factors involved.

There has been significant effort in designing immediate and short-term relief and assistance in disasters, addressing the most common effects of exposure to trauma [1-3]. Developmental considerations have outlined differential psychological outcomes through the lifespan [4-7]. An important body of research has focused on resilience in relation to trauma [8-11], nevertheless, studies regarding long-term consequences and adaptive functioning are still scarce [12]. Efforts seem to focus more on preventing relatively immediate severe symptoms of psychopathology [13, 14] rather than on promoting long-term psychological adjustment.

Research aimed at understanding the long-term psychological effects of exposure to disasters, examining individuals who did and did not show psychopathology following the incident, seems a sensible topic to develop. It is equally important to understand how individuals at different life stages deal with adversity and to design interventions able to support individuals in dealing with the less visible long-term effects of trauma. In addition to focusing on the absence of psychopathology, researchers should bear in mind the promotion of positive development throughout the lifespan.

Researchers should develop measures that assess exposure to disaster/trauma, taking into consideration not only the type of event, dates and duration, but also the type of exposure and involved stressors, attempting to capture disaster exposure in its complexity. At the same time, research should acknowledge the importance of the meaning that individuals attribute to an event and its consequences, more than the event itself [2, 3], and consider perceived individual, family and community resources in relation to it.

1. Briere, J & Elliott, D. Prevalence, characteristics, and long-term sequelae of natural disaster exposure in the general population. Journal of Traumatic Stress. 2000, 13: 661-679.

2. Norris, F H & Wind, L H. The experience of disaster: Trauma, loss, adversities, and community effects. In Y Neria, S Galea & F H Norris (eds.) Mental health and disasters. Cambridge: Cambridge University Press. 2010. pp.29-44.

3. Park, C L. Meaning making in the context of disasters. Journal of Clinical Psychology. 2016, 72: 1234-1246.

4. Gurwitch, R H et al. When disaster strikes: Responding to the needs of children. Prehospital and disaster medicine. 2004, 19:21-28.

5. Reijneveld, S A, Crone, M R, Verhulst, F C, & Verloove-Vanhorick, S P. The effect of a severe disaster on the mental health of adolescents: A controlled study. The Lancet. 2003, 362: 691-696.

6. Vernberg, E M, Hambrick, E P, Cho, B, & Hendrickson, M L. Positive psychology and disaster mental health: Strategies for working with children and adolescents. Journal of Clinical Psychology. 2016, 72: 1333-1347.

7. Wooding, S & Raphael, B. Psychological impact of disasters and terrorism on children and adolescents: Experiences from Australia. Prehospital and disaster medicine. 2004, 19: 10-20.

8. Bonanno, G A, & Gupta, S. Resilience after disaster. In Y Neria, S Galea, & F H Norris (eds.) Mental health and disasters. Cambridge: Cambridge University Press. 2010. pp.145-160.

9. Cox, R S, Perry, K E. Like a fish out of water: Reconsidering disaster recovery and the role of place and social capital in community disaster resilience. American Journal of Community Psychology. 2011, 48: 395-411.

10. Norris, F H, Stevens, S P, Pfefferbaum, B, Wyche, K F, & Pfefferbaum, R. Community resilience as a metaphor, theory, set of capacities, and strategy for disaster readiness. American Journal of Community Psychology. 2008, 41: 127-150.

11. Schulenberg, S E. Disaster mental health and positive psychology – Considering the context of natural and technological disasters: An introduction to the special issue. Journal of Clinical Psychology. 2016, 72: 1223-1233.

12. Juen, B. State of the art on psychosocial interventions after disasters. Communication at OPSIC (Operationalising Psychossocial Support In Crisis). Tel Aviv, 13th January, 2014.

13. Briere, J & Elliott, D. Prevalence, characteristics, and long-term sequelae of natural disaster exposure in the general population. Journal of Traumatic Stress. 2000, 13: 661-679.

14. North, C S. Current research and recent breakthroughs on the mental health effects of disasters. Current Psychiatry Reports. 2014, 16: 481-489.

Disaster, Psychological impact, Trauma, Resilience, Development.

P4 Evaluation of Portuguese athletes knowledge regarding doping in sports

Marco Jardim 1, André Ruivo 2, Catarina Jesus 3, David Cristóvão, 1 School of Health, Polytechnic Institute of Setúbal, 2915-503 Setúbal, Portugal; 2 Portuguese Sports Physiotherapy Interest Group, Portuguese Association of Physiotherapists, 2785-679 São Domingos de Rana, Portugal; 3 Hospital de Loulé, 8100-503 Loulé, Portugal, correspondence: Marco Jardim ([email protected]).

Doping is no longer an issue exclusive to sports and has been recognized as a worldwide public health problem. A relevant share of doping violations has been detected in athletes of all ages and at every competitive level, often motivated by their limited knowledge about doping in sports. Portuguese athletes' knowledge about doping rule violations is far from known, and an accurate picture of their state of knowledge seems to be an important basis for developing effective educational anti-doping programs.

The purpose of this study was to evaluate the knowledge of Portuguese athletes towards doping in sports, regarding substances and methods on the prohibited list, health consequences, athletes’ rights and responsibilities and doping control procedures.

A cross-sectional study was performed in several Portuguese sports institutions. A total of 374 non-professional athletes (83% response rate) were evaluated regarding their knowledge of doping in sports. A self-administered, pretested questionnaire was used to collect sociodemographic data and doping knowledge. Descriptive statistics were used to express the athletes' sociodemographic information and mean doping knowledge score. Chi-square tests were used to assess the association between study variables and doping knowledge questions. Inferential statistics (Mann-Whitney U and Kruskal-Wallis tests, p < 0.05) were used to examine differences between study variables.

Only 21% of the athletes demonstrated good global knowledge of doping. The overall mean knowledge score was 56.8 ± 13.8; the highest mean knowledge was observed for rights and responsibilities (62.7 ± 21.5), while the lowest was for doping control procedures (54.0 ± 18.4). Higher global knowledge of doping was associated with female athletes aged between 19 and 21 years with a university educational level. No differences were found between team-sport and individual-sport athletes.

No national-level data had been reported so far, and this study can provide useful information regarding gaps and trends in doping practices in the country. Doping knowledge among the participants was poor, particularly in terms of prohibited substances and doping control procedures. The results suggest that educational anti-doping programs should be intensified and made more effective among Portuguese sports populations.

Public Health, Doping, Doping Knowledge, Sports Population, Education, Portugal.

P5 Do children with Specific Language Impairment (SLI) present implicit learning (IL) deficits? Evidence from an Artificial Grammar Learning (AGL) paradigm

Ana P Soares 1, Andreia Nunes 1, Paulo J Martins 2, Marisa Lousada 3, 1 Psychology Research Center, School of Psychology, University of Minho, 4710-057 Braga, Portugal; 2 Center for Humanistic Studies of the University of Minho, University of Minho, 4710-057 Braga, Portugal; 3 Center for Health Technology and Services Research, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal, correspondence: Ana P Soares ([email protected]).

Specific Language Impairment (SLI) is a neurodevelopmental disorder involving language deficits in the absence of other associated conditions [1]. The aetiology of SLI is hotly debated, with explanations ranging from representational deficits in grammar to impairments in the cognitive processes that underlie language acquisition. Recent research suggests that SLI difficulties may arise from implicit learning (IL) deficits, i.e., impairments in the cognitive mechanisms that allow children to extract the structural regularities present in the input and generalize them to new contexts [2]. IL studies have been conducted mainly with adults and unimpaired children using the Serial Reaction Time Task (SRTT). The few studies conducted with language-impaired children produced inconsistent results [3]. Since performance of this task involves a motor component that also seems to be impaired in SLI, it is critical to conduct studies using other tasks and paradigms.

To analyse whether IL deficits are core to SLI using an Artificial Grammar Learning (AGL) task. The AGL task is particularly suited to studying IL deficits in SLI because it mimics language acquisition more closely than the SRTT and avoids its motor component. In an AGL task, participants are first exposed to strings that conform to the rules of an artificial grammar (learning phase). Then, they are asked to decide whether new strings conform to these rules or not (test phase). Performance is typically better for grammatical (G) than for non-grammatical (NG) strings, indicating that participants learned the grammar even without conscious awareness of it.
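
As an illustration of the paradigm, the sketch below generates grammatical strings from a small finite-state grammar modelled on the classic Reber grammar often used in AGL research; the grammar actually used in the study is not given in the abstract.

```python
import random

# Finite-state grammar: state -> list of (emitted symbol, next state).
# A simplified Reber-style grammar, used here purely for illustration.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
    5: [],  # accepting state: stop emitting symbols
}

def generate_string(rng=random):
    """Random walk from the start state to the accepting state."""
    state, out = 0, []
    while GRAMMAR[state]:
        symbol, state = rng.choice(GRAMMAR[state])
        out.append(symbol)
    return "".join(out)

random.seed(3)
print([generate_string() for _ in range(5)])  # grammatical (G) strings
```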

Fourteen Portuguese children participated in this study (Mage = 4.86, SD = .66): 7 with an SLI diagnosis matched in age, sex, and non-verbal IQ with 7 children with typical development (TD). All children were asked to perform a visual AGL task presented as a computer game. Written consent was obtained from all parents.

Results showed that TD children outperformed SLI children in the test phase. More hits were also observed for G strings with higher (rather than lower) similarity to the strings presented in the learning phase. Furthermore, the analysis of the children's performance showed that, while TD children revealed an increasing number of correct responses and a decreasing number of attempts to achieve a correct response during the learning phase, SLI children did not.

Children with SLI reveal deficits in their IL abilities, as indexed by worse performance in both the learning and test phases of a visual AGL task. IL malfunctioning should be considered in the aetiology of the disorder.

1. Bishop DVM. What Causes Specific Language Impairment in Children? Curr Dir Psychol Sci. 2006;15(5):217–21.

2. Lum JAG, Conti-Ramsden G, Morgan AT, Ullman MT. Procedural learning deficits in specific language impairment (SLI): a meta-analysis of serial reaction time task performance. Cortex. 2014;51(100):1–10.

3. Ullman MT, Pierpont EI. Specific language impairment is not specific to language: the procedural deficit hypothesis. Cortex. 2005;41(3):399–433.

Specific Language Impairment, Implicit learning, Artificial grammar learning, Language impaired children.

P6 Fatal road accidents: behavior and the use of safety equipment

Christine B Godoy 1, Maria HPM Jorge 2, Jackeline G Brito 1, 1 Faculty of Nursing, Universidade Federal de Mato Grosso, 78060-900 Cuiabá, Mato Grosso, Brazil; 2 School of Public Health, Universidade de São Paulo, 01246-904 São Paulo, Brazil, correspondence: Christine B Godoy ([email protected]).

At present, road accidents represent the second major cause of death in Brazil, striking mainly the younger population [1, 2]. Several factors contribute to this reality, among them: the accelerated urbanization process, with significant population growth and an increase in the number of vehicles in circulation; impunity for violators; lack of proper supervision; an old vehicle fleet; poor maintenance of public roads; poor signage; the combination of alcohol and driving; non-use of safety equipment; and improper behaviour of pedestrians and vehicle drivers [3-6]. Considering that death is the maximum expression of a given problem in a society [7, 8], learning about the factors associated with casualties from road accidents may direct prevention actions and contribute to higher effectiveness [9, 10].

This study examines the factors related to fatal road accidents involving children, adolescents and youngsters in CuiabĂĄ, the capital of Mato Grosso, in 2009.

This is a descriptive household survey. In the first stage of the research, data were collected from death certificates (DO), mainly to identify the victims and their addresses. In the second stage, a household survey was conducted with the families of the victims, collecting information on the use of safety equipment and the victims' behaviour in traffic, according to the families' reports. The analysis was performed with the EpiInfo software.

In the period and population studied, deaths occurred only due to land transport accidents (codes V01 to V89 of ICD-10), generally referred to as traffic accidents; there were no fatalities from other types of transport (such as air or boat transport). We studied 22 deaths due to traffic accidents, most of the victims male (86.4%). Among the motorcycle driver victims, some were not wearing a helmet (44.4%), many did not respect traffic signs (55.5%), and some used to combine alcohol consumption and driving (33.3%). Among the car driver victims, 85.7% were not using seat belts, and many used to combine alcohol consumption and driving (57.1%). Among the pedestrian victims, 50.0% were not using the zebra crossing and 50.0% did not respect the red light at the pelican crossing.

The results point to the need to intervene directly in risk factors in order to reduce road casualties.

1. Nukhba Z, Uzma RK, Junaid AR, Prasanthi P, Adnan AH. Understanding unintentional childhood home injuries: pilot surveillance data from Karachi, Pakistan. Inj Prev 2012; 18(Suppl 1): A97-A97.

2. Aguilera SLV, Moysés ST, Moysés SJ. Intervenções de segurança viária e seus efeitos nas lesões causadas pelo trânsito: uma revisão sistemática. Rev Panam Salud Publica 2014; 36(4): 1-13.

3. Wei Y, Chen L, Li T, Ma W, Peng N, Huang L. Self-efficacy of first aid for home accidents among parents with 0 to 4 year-old children at a metropolitan community health center in Taiwan. Accid Anal Prev. 2013; 52:182-7.

4. Fraga AMA, Fraga GP, Stanley C, Costantini TW, Coimbra R. Children at danger: injury fatalities among children in San Diego County. Eur J Epidemiol 2010;25(3):211–217.

5. Koizumi MS, Leyton V, Carvalho DG, Coelho CA, Mello Jorge MHP, Gianvecchio V et al. Alcoolemia e mortalidade por acidentes de trânsito no município de São Paulo, 2007/2008. ABRAMET – Associação Brasileira de Medicina de Tráfego 2010;28(1):25-34.

6. Mello Jorge MHP, Koizumi MS. Acidentes de trânsito como objeto de estudo da medicina de tráfego. O papel da epidemiologia. In: Moreira FDL (org). Medicina do Transporte. Rio de Janeiro: Arquimedes, 2010. p. 355-375.

7. Afzali S, Saleh A, Seif Rabiei MA, Taheri K. Frequency of alcohol and substance abuse observed in drivers killed in traffic accidents in Hamadan, Iran. Arch Iran Med. 2013;16(1):240-242.

8. Andrade SSCA, Mello Jorge MHP. Estimativa de sequelas físicas em vítimas de acidentes de transporte terrestre internadas em hospitais do Sistema Único de Saúde. Rev Bras Epidemiol 2016; 19(1): 100-111.

9. Bravo MS. Aprender a dirigir aos 18 anos de idade: uma visão da psicologia nessa fase da adolescência. Boletim de Psicologia 2015; LXV(43): 147-155.

10. Ivers RQ, Sakashita C, Senserrick T, Elkington J, Lo S, Boufous S, Rome L. Does an on-road motorcycle coaching program reduce crashes in novice riders? A randomised control trial. Accident Analysis and Prevention 2016; 86(1): 40-46.

Road accidents, External cause, Mortality, Risk factor.

P7 Education using simulated practice: gains in performing gastric intubation

Marta Assunção 1, Susana Pinto 1, Lurdes Lopes 2, Claudia Oliveira 1, Helena José 3,4, 1 Institute of Health Sciences, Universidade Católica Portuguesa, 4200-374 Porto, Portugal; 2 Iberoamerican University Foundation, 1990-083 Lisbon, Portugal; 3 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 4 University of Lisbon, 1649-004 Lisbon, Portugal, correspondence: Marta Assunção ([email protected]).

Patient safety is an important issue and a challenge in today's health care practice to reduce adverse events [1]. One strategy to minimize this problem is clinical simulation, a context in which doubt and error are allowed [2] without jeopardizing the person's integrity [3].

To analyse the students' technical evolution in the performance of a nursing intervention: gastric intubation.

Quasi-experimental study without a control group, using medium-fidelity simulators. A twenty-one-item observation grid focused on the nursing intervention (gastric intubation) was built for data collection. Sampling was by accessibility. The inclusion criteria were: being a registered nurse; studying in a course of the Centro de Formação de Saúde Multiperfil (CFS, Luanda, Angola) with a minimum of two years of professional experience; and participating in the three study moments. The study population consisted of 37 nurses, but 7 were excluded because they were not present in all phases of the study (n = 30). The first study moment occurred in a laboratory context (in the CFS laboratories), in a realistic scenario in which a clinical situation requiring gastric intubation was presented, based on the knowledge held by the student. In the second moment, students attended a theoretical approach to the procedure and trained it under simulated practice. In a third moment (a few days after the second), a second observation was made. Data were analysed and compared using descriptive statistics.

Of the participants, 56.7% were female. Age ranged from 28 to 52 years, with a mean of 39.27 (± 16.97) years. Half (50%) of the students were from the province of Luanda, the rest from other provinces of Angola. While 10% of the students did not obtain gains with the simulated practice, 90% presented a positive evolution from the first to the second observation. The most significant changes were in the following actions: head positioning, flexion nullification and swallowing request.

In line with what is reported in the literature about the gains obtained from simulated practice in realistic scenarios, gains were also observed in this study in the performance of a nursing intervention. The use of simulated practice in nursing education, specifically for the development of instrumental skills, contributes to successful teaching, which can translate into better performance and, subsequently, less risk to the patient.

1. World Health Organization. Patient safety curriculum guide: Multiprofessional edition. Geneva, Switzerland: WHO; 2011. Available from: http://apps.who.int/iris/bitstream/10665/44641/1/9789241501958_eng.pdf.

2. Teixeira CRS, Kusumota L, Braga FTMM, Gaioso VP, Santos CB, Silva VLS, et al. O Uso de Simulador no Ensino e Avaliação Clínica em Enfermagem. Texto Contexto Enferm (Florianópolis) [serial on the Internet]. 2011 [cited 2017 October 11]; 20: 187-93. Available from: http://dx.doi.org/10.1590/S0104-07072011000500024.

3. Ferreira C, Carvalho JM, Carvalho FLQ. Impacto da Metodologia de Simulação Realística, Enquanto Tecnologia Aplicada a Educação nos Cursos de Saúde. STAES [serial on the Internet]. 2015 [cited 2017 October 11]; 32-40. Available from: www.revistas.uneb.br/index.php/staes/article/view/1617/1099.

Simulation training, Education, Nursing, Clinical Competence, Intubation, Gastrointestinal, Patient safety.

P8 Emotional labour in paediatric nursing: a proposed model for practice guidance

Paula Diogo ([email protected]), Unidade de Investigação & Desenvolvimento em Enfermagem, Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal.

Health-disease processes experienced by children, youth and their families are often associated with intense emotionality and, simultaneously, entail a great emotional challenge for the nurses caring for them, requiring emotional labour with a triple centrality: on the client, the nurse and the nurse-client relationship [1]. Nurses perform this emotional labour according to their personal resources and learning from the day-to-day experience of care [2]. Moreover, this emotional dimension of nursing care continues to be undervalued by health institutions, and by nurses themselves, so that emotional labour is not always the object of reflection and/or supported by scientific evidence [3]. For this reason, conceptual models are needed to guide and strengthen nurses in their practice, especially in a context as particular as paediatric care.

Diogo [1] presented an explanatory hypothesis of the process of therapeutic use of emotions in paediatric nursing, arguing that emotional labour in paediatric nursing translates into actions of positive transformation of the emotional experience within care interactions with the paediatric client, through five categories of intervention: 1) promoting a safe and affectionate environment; 2) nurturing care with affection; 3) facilitating the client's emotional management; 4) building stability in the relationship; and 5) regulating one's own emotional disposition to care. This Emotional Labour Model in Paediatric Nursing was developed based on the nursing paradigm of transformation [4], whose central concept is Caring, supported by Watson's Human Care theory [5] and theorizing about “personal knowing” [6]. The Model also integrates the principles of family-centred care and non-traumatic care in paediatric nursing, as well as a holistic and humanized perspective on health. At the heart of the proposed Model is the Emotional Labour of Nursing conception [2].

1. Diogo P. Trabalho com as emoçþes em Enfermagem Pediåtrica: Um processo de metamorfose da experiência emocional no ato de cuidar. 2ª ed. Loures: Lusodidacta; 2015.

2. Smith P. Emotional Labour of Nursing Revisited. Can nurses Still Care? 2ÂŞ ed. Hampshire: Palgrave Macmillan; 2012.

3. Diogo P, compilador. Investigar os Fenómenos Emocionais da Pråtica e da Formação em Enfermagem. Loures: Lusodidacta; 2017.

4. Kérouac S, Pepin J, Ducharme F, Duquette A, et al. El pensamiento enfermero. Barcelona: Masson; 1996.

5. Watson J. Nursing: The Philosophy and Science of Caring. Boulder: University Press of Colorado; 2008.

6. Fawcett J, Watson J, Neuman BH, Fitzpatrick JJ. On nursing theories and evidence. J Nurs Scholarsh. 2001; 33(2): 115-119.

Emotions, Emotional Labour, Conceptual Model, Paediatric Nursing.

P9 Cardiovascular risk factors in patients with ischemic and hemorrhagic stroke

Ilda Barreira 1, Matilde Martins 2, Leonel Preto 2, Norberto Silva 1, Pedro Preto 3, Maria E Mendes 2.

Stroke is the second most common cause of death worldwide and the main cause of functional disability [1]. Early identification and treatment of modifiable risk factors can reduce the risk of stroke. In stroke patients, the identification of cardiovascular risk factors is also important for preventing another stroke [2].

To assess the prevalence of cardiovascular risk factors in stroke patients.

Analytical, retrospective cohort study. Data were collected from the electronic health records of all patients with stroke admitted to an emergency department over seven years (2010 to 2016). The research protocol was approved by an ethics committee.

The electronic health records of 756 patients with ischemic stroke (78.6 ± 10.7 years) and 207 with intracerebral haemorrhage (76.1 ± 11.9 years) were analysed. In ischemic stroke, the most common risk factors were hypertension (66.7%), hypercholesterolemia (30.7%), diabetes mellitus (26.5%), atrial fibrillation (25.4%), obesity (11.4%) and smoking (5.2%). In haemorrhagic stroke, the most prevalent risk factors were hypertension (57.0%), diabetes (25.6%), dyslipidaemia (23.7%), atrial fibrillation (17.4%), obesity (15.5%) and smoking (9.2%).

Hypertension was more prevalent in ischemic stroke and was associated with stroke type (χ² = 6.633, df = 1, p = 0.010). Atrial fibrillation also prevailed in thromboembolic events, with statistical significance (p = 0.016). Diagnosis and control of cardiovascular risk factors is a fundamental objective for the primary and secondary prevention of stroke.
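
The reported association can be reconstructed from the published percentages: a 2×2 table of stroke type by hypertension status, tested without continuity correction. The counts below are derived from 66.7% of 756 and 57.0% of 207 and are therefore approximate.

```python
from scipy.stats import chi2_contingency

# Counts reconstructed from the reported percentages; rounding may shift
# the statistic slightly from the published chi2 = 6.633.
table = [[504, 252],   # ischemic: hypertensive, not hypertensive
         [118,  89]]   # haemorrhagic: hypertensive, not hypertensive

chi2, p, df, expected = chi2_contingency(table, correction=False)
print(f"chi2(df={df}) = {chi2:.3f}, p = {p:.3f}")   # about 6.6, p about 0.010
```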

1. Donnan GA, Fisher M, Macleod M, Davis SM. Stroke. Lancet. 2008;371(9624):1612-23.

2. Arboix A. Cardiovascular risk factors for acute stroke: Risk profiles in the different subtypes of ischemic stroke. World J Clin Cases. 2015;3(5):418-29.

Prevalence, Cardiovascular risk factors, Ischemic stroke, Hemorrhagic stroke.

P10 Topical oxygen therapy in wound healing: a systematic review

João L Simões, Dilsa A Bastos, Raquel V Grilo, Marta L Soares, Sílvia S Abreu, Juliana R Almeida, Elsa P Melo, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal, correspondence: João L Simões ([email protected]).

Oxygen is recognised as an essential element in the wound healing process, and it has been suggested that the topical application of oxygen may be a promising therapy in wound care. The importance of oxygen in tissue healing is evident, namely in ATP synthesis; in the production of reactive oxygen species, which stimulate vascular endothelial growth factor synthesis; and in microbial growth inhibition through the promotion of macrophage chemotaxis and increased leukocyte activity. Moreover, oxygen increases the rate of collagen deposition, an important step in healing that supplies the matrix for angiogenesis and tissue maturation. Following the P.I.C.O. model for clinical questions, this systematic review intends to answer the research question “In chronic wounds, how does topical oxygen therapy affect wound healing?”. Chronic wounds were considered the “patient population or disease of interest”, topical oxygen therapy the “intervention or issue of interest” and wound healing the “outcome”; a “comparison intervention or group” and a “time frame” were not applicable.

The aim of this study was to conduct a systematic review of the current evidence for this therapy through the analysis of primary research studies published between January 2006 and December 2016.

Published literature was identified using Scopus, B-On, Scielo, Pubmed, Ebsco Host and Medline databases. Exclusion criteria and quality indicators were applied and a total of 11 articles with different designs were included in the review.

The studies analysed emphasise the evidence for additional O2 use in wound care, since it reduces hypoxia and triggers mechanisms that are essential for the healing process. The analysed literature presents the results of its effects in its various forms: pressurized, continuous and dissolved. Although there are still questions about the exact mechanisms of this treatment and randomised studies are needed, the current results suggest that this therapy plays an important role in restoring the O2 balance in the wound bed, which is necessary for healing.

These findings show the potential of this therapy for promoting the healing of chronic wounds and improving people's quality of life. In addition, there are many other potential advantages related to its use, such as low cost, apparent safety, no associated adverse effects and the possibility of providing this care to a diverse population at any health organisation or even at the patient's home.

Oxygen, Topical administration, Wound Healing, Wounds and Injuries.

P11 Microbiological characterization of bathing areas of a county in the Northern region

Joana Mendes, Marlene Mota, António Araújo, Cecília Rodrigues, Teresa Moreira, Manuela Amorim, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal.

The management of bathing water aims at the protection of human health and the preservation, protection and improvement of environmental quality [1, 2]. To control the quality of these waters for recreational use, microbiological indicators of faecal contamination are monitored, according to Decree-Law 135/2009 of June 3rd [1]. The microbiological indicators of faecal contamination used are Escherichia coli and Enterococcus spp., since they are commensals of the gastrointestinal flora of humans and most animals [3].

This study aimed to characterize the results of the E. coli and intestinal Enterococcus parameters of the inland bathing waters of a county in the northern region of Portugal during 2016.

A retrospective descriptive study was performed using database records from a northern laboratory. The microbiological parameters studied to characterize the inland bathing waters were CFU/100 mL of E. coli and CFU/100 mL of intestinal Enterococcus. The results were classified as “Bad”, “Acceptable”, “Good” or “Excellent”, according to Decree-Law 135/2009 of June 3rd [1].
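
As an illustration of the classification logic, the sketch below applies the inland-water limits of Directive 2006/7/EC (which Decree-Law 135/2009 transposes) to a set of sample counts. This is a simplification: the directive classifies on log-normal percentile estimates accumulated over bathing seasons, and the thresholds shown are assumptions taken from the directive's Annex I.

```python
import numpy as np

# Assumed Annex I inland-water limits (CFU/100 mL):
# parameter -> (excellent at 95th pct, good at 95th pct, sufficient at 90th pct)
LIMITS = {"intestinal_enterococci": (200, 400, 330),
          "e_coli": (500, 1000, 900)}

def classify(samples_cfu, parameter):
    """Classify one parameter from a set of CFU/100 mL results.

    Simplified sketch: plain percentiles instead of the directive's
    log-normal percentile evaluation over up to four bathing seasons.
    """
    excellent, good, sufficient = LIMITS[parameter]
    p95 = np.percentile(samples_cfu, 95)
    p90 = np.percentile(samples_cfu, 90)
    if p95 <= excellent:
        return "Excellent"
    if p95 <= good:
        return "Good"
    if p90 <= sufficient:
        return "Acceptable"   # "sufficient" in the directive's wording
    return "Bad"

season = [120, 340, 90, 800, 1500, 260, 430]   # hypothetical E. coli counts
print(classify(season, "e_coli"))              # -> "Bad" for this series
```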

Of the 26 inland bathing waters under study, 6 (23.1%) obtained a quality of “Acceptable” or better. The remaining 20 bathing waters (76.9%) were classified as “Bad”. In 17 samples this result was due to both parameters, intestinal Enterococcus and E. coli; in the other three, the “Bad” classification was due only to the Enterococcus results. The months with the highest counts of E. coli were September (45.69%), June (43.30%) and May (39.62%); for Enterococcus, they were May (52.83%), June (52.58%) and July (32.35%).

In this initial study, applying criteria that will have to be extended over a longer period, there is a first indication that most of the inland bathing waters under study present “Bad” quality (76.9%). Since all bathing waters should have at least “Acceptable” quality, these provisional results indicate an urgent need to take measures to counteract this situation and increase the number of bathing waters classified as “Excellent” or “Good”. The different E. coli and intestinal Enterococcus counts observed in different months show that climatic, environmental, social and urban factors could be involved in these differences, and this deserves attention in future studies [2, 4]. The quality of bathing water is fundamental in terms of public health. In this sense, the results of this study are worrisome; however, such studies should be conducted over a longer time frame.

1. Portugal. Decreto-Lei n.º 135/2009, de 3 de junho de 2009. Estabelece o regime de identificação, gestão, monitorização e classificação da qualidade das águas balneares e de prestação de informação ao público sobre as mesmas. Diário da República n.º 107/2009. 3460-3468.

2. Portugal. Decreto-Lei n.º 113/2012, de 23 de maio de 2012. Gestão da qualidade das águas balneares, e ao seu ajustamento ao quadro institucional resultante da publicação do Decreto-Lei n.º 7/2012, de 17 de janeiro, que define a orgânica do Ministério da Agricultura, do Mar, do Ambiente e do Ordenamento do Território, e do Decreto-Lei n.º 56/2012, de 12 de março, que define a orgânica da Agência Portuguesa do Ambiente, I.P.. Diário da República, 1ª série, n.º 100. 2715-26.

3. Boehm AB, Sassoubre LM. Enterococci as Indicators of Environmental Fecal Contamination. In: Gilmore MS, Clewell DB, Ike Y, Shankar N, editors. Enterococci From Commensals to Leading Causes of Drug Resistant Infection. Boston: Massachusetts Eye and Ear Infirmary; 2014.

4. McMichael AJ. Environmental change, climate and population health: a challenge for inter-disciplinary research. Environmental Health and Preventive Medicine. 2008, 13(4):183-186.

Inland bathing water, Fecal contamination indicators, Escherichia coli, Enterococci intestinal.

P12 Microbiological characterization of food handlers in school canteens

Diana Gomes, Teresa Moreira, Marlene Mota, Cecília Rodrigues, António Araújo, Manuela Amorim.

Foodborne illness is a major public health concern, given that food can be the source of various hazards (biological, physical and chemical). Approximately 20% of outbreaks of foodborne illness are associated with the personal hygiene of food handlers. The personal hygiene of handlers is one of the best ways to block bacterial contamination and its spread to new areas [1, 2].

To evaluate the microbiological profile of the hands of food handlers in school canteens of the northern region of Portugal during 2016 and to verify the efficiency of the hygiene processes.

Handlers and utensils were tested using a swab soaked in Maximum Recovery Diluent-Histidine Lecithin and Polysorbate (MRD-HLPS), rubbed against areas where food might be retained, following ISO 18593:2004 [3]. The parameters evaluated were coliforms at 37 °C/24 h, Escherichia coli at 44 °C/24 h and coagulase-positive Staphylococcus at 37 °C/48 h, according to ISO 4832:2006 [4], ISO 16649-2:2001 [5] and ISO 6888-1:1999 [6], respectively. A statistical analysis of the results of the microbiological profile of the hands of food handlers in public primary school canteens was carried out.

Our results showed that 9.95% of the samples analysed had bacterial contamination. Most contaminated samples were positive for coliforms, followed by coagulase-positive Staphylococcus. Only one sample was positive for E. coli. No significant difference was found in the proportions of samples with bacterial contamination (positive for coliform bacteria or coagulase-positive Staphylococcus) between the distribution line and the kitchen over the several months.

The food handler is an important and recognized source of bacterial contamination of foodstuffs [1, 2]. The results of the present study indicate the necessity to implement measures to control bacterial contamination in the hands of manipulators of school canteens, aiming at correcting possible flaws encountered. Food legislation, the Hazard Analysis and Critical Control Point (HACCP) system, and reference documents such as the Codex Alimentarius and the Food Code present guidelines to promote improved food hygiene and personal hygiene for handlers [2, 7].

1. Arduse L, Brown D. HACCP and Sanitation in Restaurants and Food Service Operations. Florida: Atlantic Publishing Company; 2005.

2. World Health Organization, Food and Agriculture Organization. Codex Alimentarius: Higiene dos alimentos. 3rd ed. Brasília: Agência Nacional de Vigilância Sanitária; 2006.

3. International Organization for Standardization. ISO 18593:2004, Microbiology of food and animal feeding stuffs — Horizontal methods for sampling techniques from surfaces using contact plates and swabs; 2004.

4. International Organization for Standardization. ISO 4832:2006, Microbiology of food and animal feeding enumeration of coliforms — Colony-count technique; 2006.

5. International Organization for Standardization. ISO 16649-2:2001, Horizontal method for the enumeration of B-glucuronidase-positive Escherichia coli; 2001.

6. International Organization for Standardization. ISO 6888-1:1999, Horizontal method for the enumeration of coagulase-positive staphylococci (Staphylococcus aureus and other species); 2003.

7. Food and Drug Administration. Food Code. Virginia: United States Department of Health and Human Services; 2013.

Food safety, Food handlers, Hand hygiene, Microbiological evaluation, Bacterial contamination.

P13 Exploring the effectiveness of digital psychoeducational interventions on depression literacy: a scoping review

Karin Panitz, Jennifer Apolinário-Hagen, Department of Health Psychology, Institute for Psychology, University of Hagen, 58097 Hagen, Germany, correspondence: Karin Panitz ([email protected]).

Depression is a huge burden requiring efficient strategies for prevention and treatment [1]. Psychoeducation can improve health literacy and help to reduce the stigma of help-seeking. In recent years, the Internet has been suggested as a way to deliver mental health interventions to a broader range of persons and to reduce barriers to seek help from face-to-face services. However, little is known about the effectiveness of digital psychoeducational interventions on health literacy and psychological outcomes, such as help-seeking intentions [2].

To derive practical implications for health professionals, this scoping review aimed to explore the effectiveness of different digital psychoeducational interventions on depression literacy or knowledge (primary outcome) and on stigmatizing attitudes and help-seeking attitudes, intentions and behaviour (secondary outcomes). This review is conceptualized as an update and expansion of previous research [2], with a focus on a broad range of interventions.

In May 2017, a systematic search through electronic databases (e.g. PsycINFO and PSYNDEX) was performed to identify longitudinal studies on the effectiveness of digital interventions targeting depression-related mental health literacy among adults published between 2007 and 2017 in peer-reviewed English journals.

Overall, 19 studies met the inclusion criteria, mostly stemming from Australia. Of the 17 included studies evaluating mental health literacy, 13 revealed significant increases in depression literacy. Pure dissemination of information via websites, e-mails or psychoeducational interventions yielded primarily positive findings. Both Internet-based cognitive behavioural therapy and online game programs were found to be knowledge-enhancing, except for one study using a simulated dialogue. Findings on digital interventions targeting stigmatization, in terms of both individual and perceived attitudes towards mental illness, were inconsistent. Concerning perceived stigma, 4 of 8 studies showed positive results in reducing stigma, whereas the other results were inconsistent. Likewise, the effects of interventions on help-seeking (n = 8 studies) with respect to attitudes (n = 5 studies), intentions (n = 6 studies) and behaviour (n = 4 studies) were inconclusive.

The evidence base on mental health literacy interventions is promising, but still limited. Various digital interventions are, overall, comparably effective in strengthening depression literacy and reducing stigmatizing attitudes. Given several limitations, future research should compare subpopulations to understand what works best for whom in clinical practice. Furthermore, the comparability of knowledge levels between healthy and depressed persons should be considered. Finally, the eHealth literacy of clients and health professionals should be explored and, where required, promoted with evidence-based information.

1. Kessler RC. The Costs of Depression. The Psychiatric Clinics of North America. 2012;35(1):1-14. doi:10.1016/j.psc.2011.11.005.

2. Brijnath B, Protheroe J, Mahtani KR, Antoniades J. Do Web-based Mental Health Literacy Interventions Improve the Mental Health Literacy of Adult Consumers? Results From a Systematic Review. Journal of Medical Internet Research. 2016;18(6):e165. doi:10.2196/jmir.5463.

Depression, Mental health, Health literacy, eHealth, Review.

P14 Family nurse as a privileged caregiver of families of patients with wounds in a domiciliary context: nurses' perspective

Maria FMS Nunes 1, João L Simões 2, Marília S Rua 2. 1 Unidade de Saúde Familiar Flor de Sal, Agrupamento de Centros de Saúde do Baixo Vouga, 3800-039 Aveiro, Portugal; 2 Escola Superior de Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal.

Population ageing is a reality that has contributed to the increase in chronic diseases and in the number of dependent people with wounds requiring home care. This issue has implications for family dynamics: it is important to care not only for the person with the wound but also for their family. These new health needs led to the reorganization of primary health care, in which family nurses emerged as essential professionals.

This study aims to understand how the family nurses of ACeS Baixo Vouga perceive their care for families of patients with wounds in a domiciliary context, and the importance they assign to this nursing practice with families. It also aims to identify the factors that nurses consider barriers or facilitators in their work with families.

A quantitative, descriptive and correlational study was carried out. The data collection instrument was a two-part questionnaire: the first part characterized the sample through the participants' sociodemographic and professional data, while the second comprised two questions and the Perception about Family Nursing scale. The sample consisted of 150 nurses working in primary health care, in USF or UCSP units of ACeS Baixo Vouga, ARSC. Data were processed through descriptive and inferential analysis using the Statistical Package for the Social Sciences (SPSS) and through qualitative content analysis.

For the subscale Perception of Family Nursing Practice (PFNP), nurses selected "often" in most of the questions. The PFNP was not affected by sociodemographic or professional variables; it was affected only by the nurses' training: nurses with curricular training in family nursing showed a higher level of applicability of family nursing in practice. For the subscale Importance Assigned to Family Nursing (IAFN), the most relevant category was "important". The IAFN was affected by sociodemographic, professional and training variables; nurses with a higher degree of education gave more importance to family nursing.

Nurses attribute a higher level of importance to the nurse's care for families of patients with wounds in the domiciliary context than to the applicability of family nursing in practice. The characteristics of nursing care are the most relevant facilitating factor for family nursing practice, while the characteristics of the institution are the most mentioned barrier.

Family nurse, Family, Home care, People with wounds.

P15 Physical resilience as a key concept in the prevention of frailty in the elderly

Rafael Bernardes, Cristina L Baixinho, Lisbon Nursing School, 1700-063 Lisbon, Portugal. Correspondence: Rafael Bernardes ([email protected]).

The concept of frailty has been presented in the literature in a variety of ways [1-5] and in close relation to negative health outcomes, such as gait difficulty, falls and weight loss [1, 5]. The correct assessment of frailty in the elderly and the design of an adequate care plan are essential for the provision of personalized care and for assistance to caregivers [1-3]. Recent studies associate this concept with physical resilience, a personal characteristic that determines the capacity to resist functional decline or to restore physical health, and a central aspect of active aging [6].

To identify the characteristics of physical resilience that modify (positively or negatively) frailty in the elderly.

An integrative literature review was conducted to answer the question "How can physical resilience influence frailty in the elderly?".

Frailty, although a significant syndrome linked to the natural aging process, can be modified [2]. The contextual factors of each person, if well evaluated and controlled, can improve functionality not only physically but also cognitively and socially. Interventions that reduce vulnerability and adverse outcomes reduce the risk of hospitalization. The ability of an elderly person to withstand external stress is strongly related to physiological reserve [5, 6]. Given that one of the main components of the frailty phenotype is sarcopenia, many interventions must be operationalized to prevent it [6]. The loss of functionality and muscle mass is not an isolated phenomenon [5] and produces negative functional outcomes such as difficulty in climbing stairs, getting up from a chair or bed, and lifting heavy objects. Physical resilience can be optimized through the design of programs of physical exercise, nutrition, medication reconciliation, psychoeducational support and support by health professionals.

Physical resilience is also influenced by factors common to frailty; the main constraint of physical resilience that affects frailty is physiological reserve. Resilience can be quantified in three ways: by determining functional trajectories, "resistant" (no functional changes after adverse events) versus "resilient" (functional decline with subsequent recovery); by physical resilience levels, "frail phenotype" versus "robust phenotype"; and by comparing "chronological age" with "biological age". This possibility of quantification opens the door to the development of interventions to treat frailty.

1. Anzaldi LJ, Davison A, Boyd CM, Leff B, Kharrazi H. Comparing clinician descriptions of frailty and geriatric syndromes using electronic health records: a retrospective cohort study. BMC Geriatr. 2017;17:248. DOI 10.1186/s12877-017-0645-7.

2. Fairhall N, Langron C, Sherrington C, et al. Treating frailty: a practical guide. BMC Med. 2011;9:83. doi: 10.1186/1741-7015-9-83.

3. Bieniek J, Wilczynski K, Szewieczek J. Fried frailty phenotype assessment components as applied to geriatric inpatients. Clin Interv Aging. 2016;11:453-59. doi: 10.2147/CIA.S101369. eCollection 2016.

4. Bongue B, Buisson A, Dupre C, Beland F, Gonthier R, Crawford-Achour E. Predictive performance of four frailty screening tools in community-dwelling elderly. BMC Geriatr. 2017;17(1):262. doi: 10.1186/s12877-017-0633-y.

5. Zaslavsky O, Cochrane BB, Thompson HJ, Woods NF, LaCroix A. Frailty: a review of the first decade of research. Biol Res Nurs. 2013;15(4):422-32. doi: 10.1177/1099800412462866.

6. Whitson HE, Duan-Porter W, Schmader KE, Morey MC, Cohen HJ, Colón-Emeric CS. Physical resilience in older adults: systematic review and development of an emerging construct. J Gerontol A Biol Sci Med Sci. 2016;71(4):489-95. doi: 10.1093/gerona/glv202.

Motor activity, Nursing, Sarcopenia, Frail elderly, Dependence.

P16 Safety Protocol for Nasolaryngoscopic Evaluation of Swallowing: cultural and linguistic validation and adaptation for the European Portuguese language

Liliana Abreu 1, Pedro S Couto 2, Susana Mestre 3. 1 Faculty of Medicine, Lisbon University, 1649-028 Lisbon, Portugal; 2 Center for Research and Development in Mathematics and Applications, Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal; 3 University Hospital Center of Algarve, 8000-386 Faro, Portugal. Correspondence: Pedro S Couto ([email protected]).

In practice, a speech therapist works with several neurological diseases that present changes in swallowing, especially after acute stroke. These changes, called dysphagia, can lead to the patient's death through malnutrition, dehydration, tracheal aspiration and recurrent pneumonia [1]. Since most of these cases are diagnosed in a hospital setting, it becomes increasingly important to create working tools that help health professionals to perform more rigorous therapeutic evaluations and interventions.

The present study aims to contribute to the cultural and linguistic validation and adaptation of the Safety Protocol for Nasolaryngoscopic Evaluation of Swallowing (PSAND).

The study comprises two parts: a qualitative part, corresponding to the translation and adaptation of the protocol to the European Portuguese language, and a quantitative part, in which the psychometric characteristics of the protocol were studied. Further details about the translation and adaptation of the protocol, especially the content validity procedures and its application in a pilot study, can be found in [2]. A severity assessment scale [3] was used for the functional evaluation of swallowing safety, classifying each subject's swallow as normal, penetration or aspiration. For data collection, the Portuguese adaptation of the PSAND and a nasolaryngoscope were used as evaluation tools. The content validity index (CVI) was calculated for the qualitative part, and Student's t-tests or chi-squared tests were used for comparisons between severity groups.
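As an illustration of the content validity step, the sketch below computes an item-level content validity index from a hypothetical panel of expert relevance ratings; the item names, panel size and scores are invented for illustration, and only the CVI > 0.80 retention criterion comes from this abstract.

```python
# Minimal sketch (hypothetical ratings): item-level content validity index
# (I-CVI) as the proportion of experts rating an item as relevant
# (e.g. 3 or 4 on a 4-point scale); items with CVI > 0.80 are retained,
# matching the criterion reported below.
ratings = {  # hypothetical panel of 5 experts, 4-point relevance scale
    "oral control":          [4, 4, 3, 4, 4],
    "laryngeal sensitivity": [3, 4, 4, 4, 3],
    "pharyngeal residues":   [4, 3, 4, 2, 4],
}

for item, scores in ratings.items():
    cvi = sum(s >= 3 for s in scores) / len(scores)
    print(f"{item}: I-CVI = {cvi:.2f} ({'retain' if cvi > 0.80 else 'review'})")
```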

The sample consisted of twenty subjects, all with a clinical diagnosis of acute stroke, with or without dysphagia. Ages ranged from 31 to 85 years, and 16 subjects were male. The results obtained by the panel of experts allowed us to conclude that all the parameters are relevant to the evaluation of swallowing and important to determine safe feeding for each case (CVI > 0.80). By applying the PSAND, it was possible to study two groups: "penetration" (13 patients) and "aspiration" (5 patients). There were statistically significant differences (p < 0.05) between the two groups for the variables: dependent or independent feeding; poor oral control; large amounts of residue; reduction of laryngeal sensitivity; leaking of the bolus; and difficulty in clearing pharyngeal residues.

In summary, the application of this protocol is an asset for diagnosing the presence of dysphagia under any clinical diagnosis, evaluating the swallowing function, verifying the risk of penetration and aspiration, and classifying dysphagia severity.

This work was supported in part by the Portuguese Foundation for Science and Technology (FCT-Fundação para a Ciência e a Tecnologia), through Center for Research and Development in Mathematics and Applications (CIDMA), within project UID/MAT/04106/2013.

1. Michou E, Hamdy S. Cortical input in control of swallowing. Current Opinion in Otolaryngology & Head and Neck Surgery. 2009 June;17:166-71.

2. Abreu L. Protocolo de Segurança na Avaliação Nasolaringoscópica da Deglutição (PSAND): contributo para a validação cultural e linguística do português Europeu [Master Thesis] [Portuguese]. Escola Superior de Saúde do Alcoitão; 2016.

3. Rosenbek JC, Robbins JA, Roecker EB, Coyle JL, Wood JL. A penetration-aspiration scale. Dysphagia. 1996;11(2):93-8.

Swallowing, Dysphagia, Stroke, Evaluation, PSAND.

P17 Trend in obesity in an aging society: estimate of obese elderly in Brazil in 2030

Adriane Carvalho, Roger S Rosa, Scheila Mai, Rita Nugem, Ronaldo Bordin, Federal University of Rio Grande do Sul, 90040-060 Porto Alegre, Rio Grande do Sul, Brazil. Correspondence: Adriane Carvalho ([email protected]).

Population aging and the increasing longevity of older people are phenomena of growing worldwide relevance [1]. Along with ageing, a significant increase in the prevalence of obesity among the elderly is also occurring [2, 3].

To estimate the increase in the number of obese individuals, due exclusively to population aging in Brazil from 2014 to 2030.

The number of obese adult Brazilians was obtained by extrapolating the prevalence estimated by VIGITEL (Surveillance System for Risk and Protective Factors for Chronic Diseases by Telephone Inquiry) [4] in Brazilian capitals in 2014 to the entire Brazilian population. The population projection for 2030 by age group was obtained from IBGE (Brazilian Institute of Geography and Statistics) [5]. The prevalence obtained by VIGITEL in 2014 was applied to the population projections for 2030, keeping all other variables constant, with 95% confidence intervals (95% CI).
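A minimal sketch of this extrapolation step is given below. The point prevalence (17.9%) and the projected 2030 adult population follow the figures reported in this abstract; the interval bounds are back-computed from the reported CI and are shown for illustration only.

```python
# Minimal sketch: applying a fixed prevalence estimate (with its 95% CI)
# to a population projection, as in the method described above.
prevalence = 0.179               # adult obesity prevalence, 2014 (abstract)
ci_low, ci_high = 0.172, 0.187   # illustrative bounds back-computed from the CI
pop_2030 = 175.2e6               # projected adult population for 2030 (abstract)

obese_2030 = pop_2030 * prevalence
print(f"Projected obese adults in 2030: {obese_2030 / 1e6:.1f} million "
      f"(95% CI {pop_2030 * ci_low / 1e6:.1f}-{pop_2030 * ci_high / 1e6:.1f} million)")
```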

The Brazilian adult population (18+ years) corresponded to 144.5 million people in 2014, of whom 15.5 million (10.7%) were 65 years of age or older. Obese adults accounted for 25.9 million (95% CI 24.9-27.0 million) of the entire adult population (17.9%), of whom 3.1 million (95% CI 2.8-3.3 million) were obese elderly. The obese elderly corresponded to 11.9% of adults with obesity. In 2030, it is estimated that the Brazilian adult population will reach 175.2 million people, of whom 30.0 million (17.1%) will be elderly. Obese adults will correspond to 31.4 million (95% CI 30.1-32.8 million) Brazilians, of whom 5.9 million (95% CI 5.4-6.4 million) will be obese elderly. That is, exclusively due to aging, an increase of 5.5 million obese people is expected for the entire population, an estimated 2.8 million of them in the age group 65 and over. The percentage of elderly among obese adults is therefore expected to rise from 11.9% in 2014 to 18.9% in 2030.

Considering only the effect of aging at current levels of obesity prevalence, an increase of almost 3 million obese elderly people is estimated in Brazil by 2030. The impact of an increase in prevalence itself was not considered, which would make the prospect even more worrying, given the impact on chronic non-communicable diseases and on the use of health services.

1. Ministério da Saúde (BR). Secretaria de Atenção à Saúde. Estatuto do Idoso. Brasília: Ministério da Saúde; 2013.

2. Ferreira VA, Magalhães R. Obesidade no Brasil: tendências atuais. Rev Port Saude Publica. 2006;24(2):71-81.

3. Mártires MAR, Costa MAM, Santos CSV. Texto Contexto Enferm, Florianópolis. 2013 Jul-Set;22(3):797-803.

4. Malta DC, Bernal RI, Nunes ML, Oliveira MM, Iser BM, Andrade SC, et al. Prevalência de fatores de risco e proteção para doenças crônicas não transmissíveis em adultos: estudo transversal, Brasil 2012. Epidemiol Serv Saúde, Brasília. 2014 Dez;23(4):609-22.

5. Instituto Brasileiro de Geografia e Estatística [homepage on the internet]. Projeção da População do Brasil por sexo e idade: 2000-2060 [accessed 10 Dec 2017]. Available from: https://ww2.ibge.gov.br/home/estatistica/populacao/projecao_da_populacao/2013/default_tab.shtm.

Obesity, Aging, Trends, Population projection, Demography.

P18 Nursing interventions towards the hospitalized elderly patient with delirium – a systematic review of literature

Marta Bento, Rita Marques, Universidade Católica Portuguesa, 1649-023 Lisboa, Portugal.

Delirium is one of the most prevalent neuropsychiatric syndromes in the hospital setting, particularly in debilitated elderly patients. It is a cognitive alteration of sudden onset, developing in a matter of hours or days, which is interspersed with periods of lucidity and characterized by disturbances in attention, memory and behaviour. It is also identified by the worsening of symptoms at night and by changes in the sleep-wake cycle. The presence of this syndrome hinders holistic care, disrupting effective communication between the patient and the nurse or family. Although it may be considered common for an elderly person, given their age, to appear confused, this should not be considered normal; there is thus an urgent need for studies that characterize these mental changes and determine which interventions are most appropriate for this vulnerable group. Nurses, who are in a privileged position, are responsible for the early recognition of and intervention in this neurological condition. There is an emerging need to implement non-pharmacological strategies to reduce the occurrence of delirium and thus avoid great suffering.

This study aimed to identify the nursing interventions directed to the hospitalized elderly for the control and prevention of delirium.

Using the methodology recommended by the Cochrane Centre, this systematic literature review was guided by the following research question: "What is the scientific evidence regarding nursing interventions directed to the hospitalized adult/elderly for the control of delirium?" Using the PICO framework as reference, a review of articles published between 2012 and 2017 was carried out. The search was conducted in the B-ON and EBSCOhost research databases.

Five studies were selected in this review. In common, they mainly present non-pharmacological strategies adopted by nurses, of a preventive character, addressing the predisposing and precipitating factors of delirium. The role of nursing in carrying out preventive actions was important in maintaining sensorial balance (frequent reorientation and encouraging the use of visual and hearing aids improve patients' sensorium), optimizing the circadian rhythm (minimizing night procedures, allowing periods of rest), managing the local environment (limiting background noise and light), as well as in assessing mental status and pain, and monitoring hydration, nutrition and the stimulation of early mobility.

The implementation of nursing delirium-preventive measures by duly sensitized professionals proves effective in reducing the incidence of delirium. Further research is imperative to recognize and validate which interventions best control delirium and thus reduce its consequences.

Delirium, Nursing interventions, Hospitalized adult patients, Evidence-based practice.

P19 Distribution of gamma camera nuclear equipment is associated with the distribution of physicians in the state of Rio Grande do Sul, Brazil

Patrícia Silva, Roger S Rosa, Rita Nugem, Adriane Carvalho, Ronaldo Bordin, Federal University of Rio Grande do Sul, 90040-060 Porto Alegre, Rio Grande do Sul, Brazil. Correspondence: Patrícia Silva ([email protected]).

The use of effective technologies extends the problem-solving capacity of health services. However, over-supply can create incentives for over-use of services, which is not without risk to patients. Nuclear medicine equipment has been increasingly used, and knowing the associations with its spatial distribution can contribute to interventions aimed at reducing inequalities.

To measure the association between the mean number of gamma camera units, population, Gross Domestic Product and number of physicians, by health region of Rio Grande do Sul, a state in southern Brazil.

Observational, cross-sectional, descriptive study based on public data from each of the 30 health regions for 2013, the most recent year available at the time of the survey (2016-2017). Data were managed in Microsoft Excel®. Pearson's linear correlation coefficient and multiple linear regression analysis were used, with Statistica 12.5® software, at a significance level of 5%. The outcome variable was the monthly mean number of gamma camera units (GamaC); the predictor variables were (I) population (POP), expressed in number of inhabitants; (II) Gross Domestic Product (GDP), expressed in the national currency (Real); and (III) the number of physicians registered in the CNES - National Register of Health Establishments (MED), by health region of the State Health Secretariat, in 2013.
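The study used Statistica; purely as an illustration of the modelling strategy (simple correlations followed by a standardized multiple regression), here is a minimal Python sketch on synthetic data. All values are hypothetical and do not reproduce the study's results.

```python
# Minimal sketch (hypothetical data): Pearson correlations followed by a
# multiple linear regression of gamma camera availability on standardized
# predictors, mirroring the modelling strategy described above.
import numpy as np
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 30                                    # 30 health regions
med = rng.normal(1000, 300, n)            # physicians per region (hypothetical)
pop = med * 250 + rng.normal(0, 20000, n)
gdp = pop * 30 + rng.normal(0, 1e5, n)
gamac = 0.002 * med + rng.normal(0, 0.2, n)

for name, x in [("POP", pop), ("GDP", gdp), ("MED", med)]:
    r, p = pearsonr(x, gamac)
    print(f"{name}: r = {r:.2f} (p = {p:.3f})")

# standardize all variables and fit the joint model
X = np.column_stack([(v - v.mean()) / v.std() for v in (pop, gdp, med)])
y = (gamac - gamac.mean()) / gamac.std()
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```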

The predictor variables POP, GDP and MED were each highly correlated with GamaC (r = 0.94, 0.92 and 0.98, respectively). Simple linear regressions with each independent variable were elaborated; POP, GDP and MED each significantly affected the GamaC variable (adjusted R² of 0.89, 0.84 and 0.96, respectively). In the final model, in which the variables were standardized and GamaC was considered simultaneously dependent on the predictor variables POP, GDP and MED, the POP variable lost significance (p > 0.05). GDP presented a negative coefficient (-0.54, p < 0.01), while MED presented a positive one (1.27, p < 0.01).

The health regions of the state with the highest number of physicians had the highest mean number of scintigraphic cameras. The growth in the supply of medical equipment such as nuclear medicine improves the population's access to services, but the greater supply in Rio Grande do Sul was associated more with the number of medical professionals available than with the gross domestic product or the number of residents in the territory.

Nuclear medicine, Supply, Health needs, Demand for health services.

P20 Family experiences of the hospitalized person in a situation of critical illness: an integrative review

Raquel MV Ramos, Ana CR Monteiro, Sílvia P Coelho, Instituto de Ciências da Saúde, Universidade Católica Portuguesa, 4169-005 Porto, Portugal. Correspondence: Raquel MV Ramos ([email protected]).

The admission of a patient to a critical care unit is usually traumatic for the family and has a major impact on their life, which can result in a moment of crisis that heightens anxiety. Fear of death, uncertainty about the future, emotional disturbances, financial worries, changing roles and routines, and the hospital environment are some of the sources of anxiety for the family of a person in critical illness [1].

To identify the existing evidence on the family experiences of the person hospitalized in a situation of critical illness.

Integrative literature review using the databases CINAHL, MEDLINE, Nursing & Allied Health Collection: Comprehensive, Cochrane Library, Information Science & Technology Abstracts and MedicLatina, with the MeSH descriptors "family", "needs assessment" and "critical illness". All English-language articles available in full text, with abstract and references, published between 2002 and 2017 were included; articles in the paediatrics area were excluded.

In total, 7 articles were selected, of which 4 were analysed in full. From the literature, it emerges that the family of the person hospitalized with a critical illness has experiences and needs arising from this situation, and professional intervention is needed to support and encourage them during this traumatic transition in family life [2]. The family has its own needs, and these must be met for the family to effectively manage the instability of the family member's situation. Since the family directly influences the evolution of the person's condition in a critical illness situation, it is important to see the family also as a target of care, within a holistic view of caring [3]. The main areas of need experienced by the family are: information on the clinical situation; assurance of patient safety; support from health professionals; and the opportunity to be close to the patient [2].

Health professionals should be aware that the family is also a target of care and that, within the multidisciplinary team, nurses are the most qualified professionals to plan and develop interventions to meet the needs of the family of the person hospitalized in critical illness [4]. The team must be able to respond to the identified family needs through interventions that attenuate the experience and help families live through the moment of hospitalization, making it as untraumatic as possible, involving relatives in care, clarifying doubts and helping to manage emotions and expectations [3].

1. Leske J. Interventions to decrease family anxiety. Critical Care Nurse. 2002;22(6):61-5.

2. Kinrade T, Jackson A, Tomnay J. The psychosocial needs of families during critical illness: comparison of nurses' and family members' perspectives. Australian Journal of Advanced Nursing. 2009;27(1):82-8.

3. Henneman E, Cardin S. Family-centered critical care: a practical approach to making it happen. Critical Care Nurse. 2002;22(6):12-9.

4. Fortunatti C. Most important needs of family members of critical patients in light of the Critical Care Family Needs Inventory. Invest Educ Enferm. 2014;32(2):306-16.

Needs assessment, Family, Critical illness.

P21 Cannabidiol oil vs ozonized extra virgin olive oil in the treatment of category II pressure ulcers

Carla Jimenez-Rodriguez 1, Francisco J Hernández-Martínez 2, María C Jiménez-Díaz 1, Juan F Jiménez-Díaz 3, Bienvenida C Rodríguez-de-Vera 3. 1 Universidad de Jaén, 23071 Jaén, Spain; 2 Cabildo de Lanzarote, 35500 Lanzarote, Las Palmas, Islas Canarias, Spain; 3 Universidad de Las Palmas de Gran Canaria, 35015 Las Palmas de Gran Canaria, Spain. Correspondence: Carla Jimenez-Rodriguez ([email protected]).

Category II pressure ulcers (UPP) are shallow open wounds. Phytotherapeutic treatments for them rely on healing and antiseptic actions, effects produced by cannabidiol oil. Ozonized extra virgin olive oil (EVOO) also has repairing properties with germicidal power.

To determine the effectiveness of cannabidiol oil versus ozonized EVOO in the treatment of category II UPP.

Clinical trial with 60 users with category II UPP. After the patients' informed consent, data collection took place in September 2017. Inclusion criterion: each user had to have at least two chronic wounds of the same category (category II), so that a different product could be applied to each one. Users with vascular disease or in situations of extreme severity were excluded. Each user included in the study was followed for 20 days. Skin assessment and initial risk assessment were performed with the Braden scale by the principal investigator and another investigator of the team. Subsequently, the skin condition of the patients was evaluated daily, before the application of the product, by the nurse who attended them; additionally, the patients were evaluated every 7 days by two investigators. The SPSS 25.0 program was used for statistical calculations, considering a significance level of p < 0.05.

Average age was 71.45 ± 1.27 years. Of a total of 137 chronic wounds, 56.93% were located in the lower limbs. Regarding wound resolution, no significant differences were found between the two products: 68.61% of the lesions improved significantly with both products before 72 hours, and all of them healed within at most 8 days. No topical allergic skin reaction appeared with the use of either product, and the application of cannabidiol oil on the wound was very well tolerated by patients (p < 0.37).

Cannabidiol oil is shown to be as effective as ozonized EVOO in the treatment of category II UPP, both being good alternatives to traditional therapies. In addition, the moisturizing, emollient and anti-inflammatory properties of the two products keep the perilesional skin in perfect condition. Cannabidiol oil achieves a more favourable analgesic response in patients during wound healing.

Pressure ulcers, Cannabidiol oil, Ozonized extra virgin olive oil, Traditional therapies.

P22 Microbial colonization of experimental ulcers in laboratory animals treated with cannabidiol oil

Carla Jiménez-Rodríguez 1, Carmelo Monzón-Moreno 2, Juan F Jiménez-Díaz 2, María-del-Carmen Jiménez-Díaz 1, Bienvenida-del-Carmen Rodríguez-de-Vera 2. 1 Universidad de Jaén, 23071 Jaén, Spain; 2 Universidad de Las Palmas de Gran Canaria, 35015 Las Palmas de Gran Canaria, Spain. Correspondence: Carla Jiménez-Rodríguez ([email protected]).

One of the most undesirable complications in the healing process is infection in the bed of wounds or ulcers.

To verify the microbial colonization of experimental ulcers in laboratory animals treated with topically applied cannabidiol oil (CBD).

Experimental study with a control group (physiological saline, to maintain hydration conditions) and a group with extra virgin olive oil (EVOO, to avoid bias from the oleic excipient), to check mesophilic microbial colonization after topical application of CBD on experimental full-thickness skin ulcers in adult male white rats of the Sprague Dawley strain. Ten animals were used per group, under standard laboratory conditions. After anaesthesia with isoflurane, a full-thickness skin wound was made in the region of the back with a disposable 8 mm surgical punch. The animals were then distributed in individual cages, to prevent them from licking each other, with sufficient height to prevent friction between the cutaneous ulcer and the cage enclosure. 0.15 ml of the respective product was applied daily to the ulcers. The microbiological analysis was carried out by studying the variation of the bacterial microbiota. The colony-forming units of each wound were determined by plate counting, after obtaining a full-thickness skin sample and a superficial sweep. The organic samples were placed in sterile tubes containing 1 ml of physiological saline, vortexed for 30 seconds, and serial ten-fold dilutions of the samples to be titrated were made. Six plates of Tryptic Soy Agar (TSA) were then labelled, one for each dilution obtained, and 0.1 ml of each dilution was added and spread over the surface of the plate with an inoculation loop. Plates were incubated at 37 °C for twenty-four hours, and the colony-forming units were then counted.
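As an illustration of the plate-count arithmetic, the sketch below estimates CFU/ml from hypothetical colony counts on the serial ten-fold dilutions, using the 0.1 ml plated volume described above; the counts and the 25-250 countable-plate rule are illustrative assumptions, not data from this study.

```python
# Minimal sketch (hypothetical counts): estimating colony-forming units per
# millilitre from plate counts on serial ten-fold dilutions, following the
# plating procedure described above (0.1 ml spread per plate).
PLATED_VOLUME_ML = 0.1

# hypothetical colony counts per dilution (10^-1 ... 10^-6)
plate_counts = {1: None, 2: None, 3: 187, 4: 21, 5: 2, 6: 0}  # None = too many to count

for dilution, colonies in plate_counts.items():
    if colonies is None or not (25 <= colonies <= 250):
        continue  # keep only countable plates (common 25-250 rule, an assumption)
    # CFU/ml = colonies / (plated volume * dilution factor), dilution factor = 10^-d
    cfu_per_ml = colonies * (10 ** dilution) / PLATED_VOLUME_ML
    print(f"10^-{dilution} plate: {cfu_per_ml:.2e} CFU/ml")
```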

Two hundred and fourteen different colonies were obtained. The majority genus was Staphylococcus. There was no difference in microbial colonization attributable to the products used in each group, i.e., physiological saline, EVOO and CBD.

The analysis of the mesophilic cutaneous microbiota shows microbial colonization rich in gram-positive organisms, the majority being coagulase-negative staphylococci (CNS), which behave as opportunistic pathogens in breaks of the skin.

Microbial colonization, Cannabidiol, Skin, Ulcer, Rat.

P23 The impact of dermatological and cosmetic counselling - case study

Stefany Moreira 1, Ana Oliveira 2, Rita Oliveira 2,3, Cláudia Pinho 2, Agostinho Cruz 2. 1 Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 2 Centro de Investigação em Saúde e Ambiente, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 3 Secção Autónoma de Ciências da Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal. Correspondence: Stefany Moreira ([email protected]).

Community pharmacy professionals (CPPs) have been recognized as the most accessible and best-positioned health professionals for the provision of pharmaceutical counselling [1]. This is due to the easy access to pharmacies and to the fact that their interventions translate into beneficial clinical results, user satisfaction, cost reduction and the prevention of problems or negative reactions to medicines [1, 2]. Dermatological products account for a considerable share of pharmacy sales, and symptoms associated with skin problems generate a considerable share of requests for advice [3].

To demonstrate the importance of CPPs through a quantitative evaluation of the impact of dermatological and cosmetic counselling, and to determine which dermatological/cosmetic areas affect most people and what motivates them to seek this type of counselling.

A prospective, longitudinal, observational case study, conducted in a pharmacy in the city of Porto between January and April 2017. It had 3 phases: I) invitation (in which the objectives and methodology were explained); II) first interview: completion of Part I of the questionnaire (description of the situation and the advice provided by the CPP); III) second interview: completion of Part II of the questionnaire (evaluation of the result of the counselling).

Of the 16 situations analysed: 62.50% were resolved and/or the people were satisfied, 31.25% were in the process of improvement, and 6.25% were not resolved and/or the people were not satisfied. The three dermatological/cosmetic areas most mentioned in requests for counselling were: daily skin care (37.50%); marks, spots, comedones, pimples or signs on the skin (18.75%); and sun protection (12.50%).

CPPs proved very valuable in providing counselling on dermatological products and cosmetics, with a positive overall impact. The dermatological/cosmetic area most represented among the requested situations was daily skin care.

1. Curley LE, Moody J, Gobarani R, Aspden T, Jensen M, McDonald M, et al. Is there potential for the future provision of triage services in community pharmacy? J Pharm Policy Pract. 2016;9(29):1-22.

2. Coelho RB, Costa FA. Impact of pharmaceutical counseling in minor health problems in rural Portugal. Pharmacy Practice. 2014 Oct;2(4).

3. Tucker R, Stewart D. Why people seek advice from community pharmacies about skin problems. Int J Pharm Pract. 2015;23:150-3.

Community pharmacy professionals, Counselling, Dermatologic products, Cosmetics.

P24 Ability of clients for self-management of the medication regime: specification of nursing diagnoses

There is a growing concern to understand the experience of living with multiple morbidities and the need to manage a medication regime [1, 2] among people experiencing one or more health/disease transitions [3], in order to assist them in this process. Since human responses to different transitions are the object of the nursing discipline, nurses must identify and represent clients' nursing care needs in the Nursing Information Systems in use, which are a repository of the discipline's knowledge.

To identify and specify the nursing diagnoses centred on the ability for self-management of the medication regime, as a type of self-care in situations of health deviation.

Qualitative study. All nursing documentation customised in the Portuguese nursing information systems - SAPE® (2012) and SClínico (2016) - was subjected to content analysis. After conducting the content analysis, the authors presented the results to a group of 14 nursing experts in the field to reach consensus.

From the analysis of the national customisations, we inferred a set of nursing diagnoses related to the person's abilities to manage the medication regime. These diagnoses focus on the potential to improve the ability for: self-management of the medication regime; self-management of the medication regime using devices; administering medication; administering subcutaneous medication; administering insulin; administering inhaled medication; administering oxygen therapy; self-monitoring in relation to the medication regime; self-monitoring of capillary glycaemia; self-monitoring of heart rate in relation to administered medication; self-monitoring of blood pressure in relation to administered medication; and self-monitoring of urine.

The specified diagnoses reflect the nursing care needs of people challenged to live with chronic illnesses, particularly regarding the skills they need to develop in order to manage the medication regime. Nurses must identify these needs in order to prescribe interventions that improve the person's ability to administer medication, with or without the use of devices and by different routes, and to monitor physiologic parameters related to the medication taken. We believe this will be a first contribution to the representation of nursing knowledge in this area.

1. Meranus M, Engstrom G. Experience of self-management of medications among older people with multimorbidity. J Clin Nurs. 2015; 24: 2757-2764.

2. Duguay C, Gallagher F, Fortin M. The experience of adults with multimorbidity: a qualitative study. J Comorbidity. 2014;4:11-21.

3. Meleis A, Sawyer L, Im E, Hilfinger Messias DK, Schumacher K. Experiencing transitions: an emerging middle-range theory. Advances in Nursing Science. 2000;23(1):12-28.

Self-management, Medication regime, Nursing diagnosis, Nursing information systems.

P25 Antioxidant activity of Artemisia annua L.

Rita Vieira 1, Cláudia Pinho 2, Ana I Oliveira 2, Rita F Oliveira 2,3, Agostinho Cruz 2. Correspondence: Cláudia Pinho ([email protected]).

Tea infusions of Artemisia annua are known for their prophylactic and therapeutic efficacy against malaria [1]. However, recent studies have revealed that A. annua possesses a variety of pharmacological activities, such as antibacterial, cytotoxic and antioxidant [2, 3].

This study aims to evaluate the antioxidant activity of A. annua plants obtained from two different manufacturers, prepared using different solvents.

A. annua leaves (obtained from two manufacturers) were extracted with two solvents (water and 70% ethanol), and the antioxidant activity of the extracts was screened using superoxide and 1,1-diphenyl-2-picrylhydrazyl (DPPH•) radical scavenging assays, and metal chelating activity.
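IC50 values such as those reported below are commonly obtained by fitting a dose-response curve to inhibition measurements at increasing extract concentrations. As a hedged illustration (the abstract does not state which fitting procedure was used), here is a minimal sketch using a four-parameter logistic fit on invented data.

```python
# Minimal sketch (hypothetical data): estimating an IC50 by fitting a
# four-parameter logistic curve to percentage inhibition measured at
# increasing extract concentrations.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (ic50 / c) ** hill)

conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)      # ug/mL (hypothetical)
inhibition = np.array([12, 22, 41, 58, 74, 85], dtype=float)  # % (hypothetical)

params, _ = curve_fit(logistic4, conc, inhibition,
                      p0=[0, 100, 40, 1], maxfev=10000)
print(f"Estimated IC50 = {params[2]:.1f} ug/mL")
```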

The extracts tested not only showed the ability to bind iron ions but also demonstrated the ability to inhibit free radicals. Antioxidant activity increased with increasing concentrations of the extracts studied. The IC50 values of the A. annua aqueous extract (infusion) obtained from manufacturer A, for DPPH and superoxide radical scavenging activities and Fe2+ chelating activity, ranged from 29.3 to 176.6 μg/mL; for the hydroalcoholic extract, IC50 values ranged from 28.0 to 262.1 μg/mL (all above the standards). The IC50 values of the A. annua aqueous extract (infusion) obtained from manufacturer B, for superoxide and DPPH radical scavenging activities and Fe2+ chelating activity, ranged from 6.9 to 282.0 μg/mL; for the hydroalcoholic extract, IC50 values were 40.4, 46.8 and 50.5 μg/mL for Fe2+ chelating activity, superoxide and DPPH radical scavenging activities, respectively. Only the aqueous extract obtained from manufacturer B showed an IC50 value (6.9 μg/mL) for superoxide radical scavenging activity lower than that of the positive control (20.6 μg/mL, ascorbic acid).

This study confirms the differences in antioxidant activities obtained with different solvents, suggesting that the solvent effect should be taken into account in the evaluation of the antioxidant potential of any sample. However, the origin of the plants, including pre- and post-harvesting practices, can also be important for their chemical composition, resulting in different values for the same antioxidant assays and solvents.

1. van der Kooy F, Verpoorte R. The content of artemisinin in the Artemisia annua tea infusion. Planta Med. 2011, 77(15):1754-6.

2. Kim WS, Choi WJ, Lee S, Kim WJ, Lee DC, Sohn UD, Shin HS, Kim W. Anti-inflammatory, antioxidant and antimicrobial effects of artemisinin extracts from Artemisia annua L. Korean J Physiol Pharmacol. 2015;19(1):21-7.

3. Singh NP, Ferreira JF, Park JS, Lai HC. Cytotoxicity of ethanolic extracts of Artemisia annua to Molt-4 human leukemia cells. Planta Med. 2011, 77(16):1788-93.

Artemisia annua , Antioxidant activity, Solvent extraction, DPPH, Superoxide anion radical, Metal chelating activity.

P26 Swimming pool users and behaviors: practices and motivations

Daniel A Marinho 1,2, Luís Faíl 1, Mário C Marques 1,2, António Sousa 1,2, Henrique P Neiva 1,2. 1 Department of Sport Sciences, University of Beira Interior, 6201-001 Covilhã, Portugal; 2 Research Center in Sports Sciences, Health Sciences and Human Development, University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal. Correspondence: Daniel A Marinho ([email protected]).

Health and sports professionals have recommended water-based exercise as an alternative to traditional dry-land exercise, leading to an increase in physical exercise performed in an aquatic context. The properties of the aquatic environment, combined with the resistance of the water during all movements, make it beneficial for health-related parameters and physical fitness [1]. However, research is needed to understand the practices of different populations, according to the specificity of some activities. Little is known about people's practices in these particular activities.

The purpose of this study is to characterize Portuguese practices and motivations for using swimming pools and performing in-water physical activities.

Swimming pool users from the interior region of Portugal completed a questionnaire of 33 questions, focused on the characterization of their usual in-water activities and main motivations.

To date, 418 swimming pool users have answered the questionnaire, ranging from 18 to 79 years old (44.7% females, 55.3% males). Most were active, and only 67 subjects were retired. Most had practised aquatic activities for more than 2 years (60%), the majority twice a week, preferring the evening to attend the swimming pool. Among the various types of swimming pool use, 31% performed water aerobics, 48% attended swimming classes and 31% used free-time schedules. More than half of the sample performed only aquatic activities (54%); participants aimed to improve health (47%) or physical fitness (31%), and 11% sought stress relief. Curiously, only 1% wanted to learn how to swim. They classified the physical activities performed in-water in the last few weeks as mostly of moderate/vigorous intensity. People who attend swimming pools are persistent and committed to aquatic exercise, practising for more than two years. Although most participate in swimming or water aerobics lessons, there is still a considerable number of free-time users, and swimming pools must be prepared for this fact. Interestingly, the majority attend the swimming pool to improve health and physical fitness.

This pilot study will be extended to several other regions of the country, which will allow us to understand the motivations and needs of users, improve the services offered, and support other areas of research (e.g., the development of technological devices).

This project was supported by the Project NanoSTIMA: Macro-to-Nano Human Sensing, Towards Integrated Multimodal Health Monitoring and Analytics, NORTE-01-0145-FEDER000016, co-financed by European Fund for Regional Development (FEDER) - NORTE 2020.

1. Barbosa TM, Marinho DA, Reis VM, Silva AJ, Bragada JA. Physiological assessment of head-out aquatic exercises in healthy subjects: a qualitative review. J Sports Sci Med. 2009, 8(2): 179-189.

In-water activities, Questionnaire, Physical activity.

P27 Critical patient's comfort: strategies to reduce environmental noise levels

Telma Ramos, Filipa Veludo, School of Nursing, Institute of Health Sciences, Universidade Católica Portuguesa, 1649-023 Lisbon, Portugal. Correspondence: Telma Ramos ([email protected]).

Noise may have harmful effects. For critically ill patients, the main consequences highlighted are cardiovascular disorders, reduction of arterial oxygen saturation, increase in gastric secretion, stimulation of the pituitary gland, sleep disturbance, immunosuppression and impairment of the healing process [1]. Noise has an overall negative impact on patients' recovery. The identification and dissemination of strategies to reduce environmental noise empowers nurses to change their professional practice.

To identify evidence in the literature of nursing care strategies to reduce environmental noise in critical patient care.

This research was conducted in two phases. 1st phase: through an integrative literature review (16/04/2017), we searched the following databases: Academic Search Complete; Complementary Index; CINAHL Plus with Full Text; Directory of Open Access Journals; Supplemental Index; Psychology and Behavioural Sciences Collection; SPORTDiscus with Full Text; RCAAP; SciELO; Europeana; Business Source Complete; Education Source; IEEE Xplore Digital Library; MedicLatina; JSTOR Journals; PsycARTICLES; ScienceDirect. Descriptors: (TI (noise* or sleep*)) AND (nurs*) AND (intervention or care or patient care or care plan* or critical care), with no time restriction. Inclusion criteria: primary, secondary and opinion/reflection studies. Exclusion criteria: paediatric context, REM, pharmacological intervention. Of the 441 articles initially obtained, we excluded 391 by reading abstracts, 22 by summary and 15 by full text, concluding with a final sample of 13 articles. 2nd phase: content analysis according to [2], in order to categorize the results.

We identified 6 feasible categories for environmental noise reduction, which we present as the main strategies: behavioural changes (raising awareness of the importance of the tone of voice and of the silent handling of equipment and materials); material and equipment management (volume configuration of mobile phones, televisions and radios; determination of correct parameters for alarm configuration); management of silence-promoting care (implementation of periods of silence, avoidance of noisy tasks); training in environmental noise (behavioural change programmes and health education about the negative effects of noise); care quality control (use of ear plugs); and others (infrastructural adaptations, encouraging suppliers to produce quieter products).

This study systematizes strategies to be implemented by nursing professionals in order to reduce environmental noise within health facilities and improve patient comfort. The implementation of a culture of silence enables a physical environment that is adequate and essential to patient recovery [3]. Empowering nurses with the identified strategies allows the improvement of people's quality of life. The shortage of published research reflects the need for further investigation.

1. Christensen M. Noise levels in a general intensive care unit: a descriptive study. Nursing in Critical Care. 2007;12(4):188-97.

2. Bardin L. Análise de Conteúdo. Lisboa: Edições 70; 2016.

3. Nightingale F. Notas Sobre Enfermagem: o que é e o que não é. Loures: Lusociência; 2005.

Noise, Comfort, Integrative literature review, Content analysis.

P28 Nurse-patients’ family interaction in ICU and the establishment of effective therapeutic partnerships: vulnerability experienced and clinical competence

Anabela Mendes ([email protected]).

When faced with a negative event such as a critical illness, nurses and the patient's family build their interaction on a daily basis [1-3]. The closeness and the joint interest in finding answers to their common issues together motivate them along a common path based on trust [4]. Nurses' time in clinical practice and their clinical competence can influence this process [5, 6].

To analyse how the family perceives the interaction between nurses and the family; to understand how daily interaction is built when facing a critical illness, and which steps reveal the existing trust. Supported by Benner's theoretical framework, we also seek to understand the relationship between time spent in critical care and nurses' clinical competence.

Qualitative study. Data were collected through open interviews with 12 family members of adult persons hospitalized in an ICU. The content of the interviews was analysed according to the phenomenological approach suggested by Van Manen. NVivo software was used for the qualitative data analysis; it saved time and allowed the relationships between the data to be explored carefully [7].

Family members recognized how decisive the interaction with nurses was for their daily life in the ICU. They reported the careful construction of discourse and the effective presence with the sick person as nurses' strategies for interaction. The need to know the situation better and to find out what would happen motivated families to initiate the interaction. Trust was revealed in well-founded solicitude and compassion. Families know that nurses are vulnerable to their suffering. During the interaction, family members noticed that clinical competence is inherent to the nurse as a person and not related to length of practice.

Co-existence commits nurses and family to the construction of an effective therapeutic partnership. They recognized that the information they hold about the sick person, arising from different circumstances, must be shared, considering professional ethics, beliefs and values, as well as its relevance to the therapeutic process. It is in and through interaction that they discover vulnerability and comfort, and come to trust each other.

1. Curtis J. Caring for patients with critical illness and their families: the value of the integrated clinical team. Respiratory Care. 2008;53(4):480-7.

2. Mendes A. A informação à família na unidade de cuidados intensivos: desalojar o desassossego que vive em si. Lisboa: Lusodidacta; 2015.

3. Mendes A. Sensibilidade dos profissionais face à necessidade de informação: a experiência vivida pela família na unidade de cuidados intensivos. Texto Contexto Enferm. 2016;25(1):2-9. http://dx.doi.org/10.1590/0104-07072016004470014.

4. Benner P, Kyriakidis P, Stannard D. Clinical wisdom and interventions in acute and critical care: a thinking-in-action approach. 2nd ed. New York: Springer Publishing Company; 2011.

5. Benner P, et al. Educating nurses: a call for radical transformation. San Francisco: The Carnegie Foundation for the Advancement of Teaching; 2010.

6. Benner P, Tanner C, Chesla C. Expertise in nursing practice: caring, clinical judgement & ethics. New York: Springer Publishing Company; 2009.

7. Forte E, et al. A Hermenêutica e o Software Atlas.ti: união promissora. Texto Contexto Enferm. 2017;26(4).

Family, Nursing, Intensive care, Interpersonal relations, Communication.

P29 Effectiveness of vein visualization technologies on peripheral intravenous catheterization: a systematic review protocol

Anabela S Oliveira 1, João Graveto 1, Nádia Osório 2, Paulo Costa 1, Vânia Oliveira 1, Luciene Braga 4, Isabel Moreira 5, Fernando Gama, Daniela Vidal, João Apóstolo 6, Pedro Parreira 1. 1 Health Sciences Research Unit, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 2 Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 3 Coimbra Hospital and University Centre, 3000-075 Coimbra, Portugal; 4 Federal University of Viçosa, Minas Gerais, 36570-900, Brazil; 5 Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 6 Portugal Centre for Evidence Based Practice: a Joanna Briggs Institute Centre of Excellence, 3046-851 Coimbra, Portugal. Correspondence: Pedro Parreira ([email protected]).

The insertion of a peripheral vascular catheter (PVC) is the most frequently performed invasive procedure in hospital settings [1-3]. During hospitalization, 33.0-96.7% of patients need to have a PVC inserted [4-6]. These devices are not risk-free, affecting patients' safety and well-being; in fact, up to 72.5% of PVCs are removed due to complications [6]. Healthcare professionals should consider using specific technologies that help select the vein to puncture and reduce the number of attempts and catheter-related mechanical complications.

This review aims to identify and synthesize the effectiveness of the use of vein visualization technologies (near-infrared light or ultrasonography) in patients who need peripheral intravenous catheterization when compared with the traditional technique.

Methodology proposed by the Joanna Briggs Institute [7]. A three-step search strategy was used in this review: (I) an initial limited search was undertaken, followed by an analysis of the words contained in the titles and abstracts and of the index terms used to describe the articles; (II) a second search using all identified keywords and index terms was undertaken across all included databases; (III) the reference lists of all identified reports and articles were searched for additional studies. Studies of quantitative evidence published between 1999 and 2017 were considered for inclusion in this review. This review included patients of all ages, in any clinical setting; however, studies where patients displayed a previous vascular access device in situ were excluded. The assessment of methodological quality, data extraction and synthesis will be conducted by two independent reviewers using standardized tools recommended by the Joanna Briggs Institute [7]. Any disagreements will be resolved through discussion or with a third reviewer.

An initial limited search of MEDLINE via PubMed and CINAHL was undertaken, using specific terms such as: catheters; cannula; "vascular access devices"; "peripheral intravenous catheterization"; "peripheral venous catheterization"; "peripheral access"; "peripheral intravenous access"; "venous access"; NIR*; near-infrared*; infra-red*; light*; device*; machine*; ultrasonograph*; technolog*; sonography*; ultrasound*. As a result, 2,699 studies were retrieved, written in English, Portuguese, Spanish and French. Keywords and index terms are being identified in order to generate a more comprehensive search strategy (step two).

The critical analysis of existing data will contribute to the dissemination of the best evidence available on the subject. It is expected that this dissemination will be reflected in the definition of guidelines regarding PVC management and, consequently, in the optimization of current practices.

1. Marsh N, Webster J, Mihala G, Rickard CM. Devices and dressings to secure peripheral venous catheters to prevent complications. Cochrane Database Syst Rev. 2015; 6:1-14.

2. Wallis MC, McGrail M, Webster J, Marsh N, Gowardman J, Playford EG et al. Risk factors for peripheral intravenous catheter failure: a multivariate analysis of data from a randomized controlled trial. Infection control and hospital epidemiology. 2014;35(1):63-8.

3. Webster J, Osborne S, Rickard C, New K. Clinically-indicated replacement versus routine replacement of peripheral venous catheters. Cochrane Database Syst Rev. 2015;8. Art. No.: CD007798.

4. GrĂźne F, Schrappe M, Basten J, Wenchel H, Tual E, StĂźtzer H. Phlebitis Rate and Time Kinetics of Short Peripheral Intravenous Catheters. Infection. 2004;32(1):30-32.

5. Pujol M, Hornero A, Saballs M, Argerich M, Verdaguer R, Cisnal M et al. Clinical epidemiology and outcomes of peripheral venous catheter-related bloodstream infections at a university-affiliated hospital. Journal of Hospital Infection. 2007;67(1):22-9.

6. Braga LM. Práticas de enfermagem e a segurança do doente no processo de punção de vasos e na administração da terapêutica endovenosa [PhD Thesis]. Universidade de Lisboa; 2017.

7. Peters M, Godfrey C, McInerney P, Baldini Soares C, Khalil H, Parker D. Chapter 11: Scoping Reviews. In: Aromataris E, Munn Z, ed. by. Joanna Briggs Institute Reviewer's Manual [Internet]. The Joanna Briggs Institute; 2017 [cited 14 December 2017]. Available from: https://reviewersmanual.joannabriggs.org/.

Peripheral intravenous catheterization, Near-infrared light, Ultrasonography, Traditional technique.

P30 Falls Efficacy Scale-International: how does it “behave” with users of adult day care centres?

Daniela Figueiredo 1,2, Martina Neves 1, 1 School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 2 Center for Health Technology and Services Research, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal, correspondence: Daniela Figueiredo ([email protected]).

The Falls Efficacy Scale-International (FES-I) is a highly reliable instrument to assess fear of falling among older adults. However, most validation studies of the FES-I are conducted with independent and relatively healthy community-dwelling older people, which limits extrapolation to those receiving adult day care services. Adult day care users commonly present higher disability and frailty than non-users.

This study presents preliminary findings of the psychometric properties of the European Portuguese version of the FES-I in a sample of older users of day care centres.

A cross-sectional study with a convenience sample was conducted. Data collection included a socio-demographic questionnaire, and the Portuguese versions of the FES-I and the Activities-specific Balance Confidence Scale (ABC). Descriptive and inferential statistical analyses were performed.

A total of 100 older users of day-care centres (81.94 ± 6.43 years old; 77% female) participated in the study. The FES-I had excellent internal consistency (α = 0.970) and test-retest reliability (ICC(2,1) = 0.979). A significant negative correlation was found between the FES-I and the ABC (rs = -0.828; p < 0.001), indicating good concurrent validity. FES-I scores were significantly higher among those who were older, female and less educated.
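For readers wishing to reproduce these psychometric indices, the sketch below shows how Cronbach's alpha and a Spearman correlation of the kind reported above can be computed in Python with numpy and scipy. The item matrix is synthetic toy data (the FES-I has 16 items scored 1-4), not the study's dataset.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Toy data standing in for 100 respondents answering the 16 FES-I items (1-4);
# a shared latent factor makes the items internally consistent.
latent = rng.normal(size=(100, 1))
fes_items = np.clip(np.rint(2.5 + latent + rng.normal(scale=0.5, size=(100, 16))), 1, 4)

fes_total = fes_items.sum(axis=1)
abc_total = -fes_total + rng.normal(scale=3.0, size=100)  # ABC runs opposite to FES-I
rho, p = stats.spearmanr(fes_total, abc_total)
print(f"alpha = {cronbach_alpha(fes_items):.3f}, rho = {rho:.3f} (p = {p:.4f})")
```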

The FES-I seems to be a reliable and valid measure of fear of falling for older people who are clients of adult day care services. The findings are highly comparable with those previously reported for non-users of day-care centres. The FES-I can also be used to help prevent falls in this type of care setting.

This paper was supported by ERDF (European Regional Development Fund) through the operation POCI-01-0145-FEDER-007746 funded by the Programa Operacional Competitividade e Internacionalização – COMPETE2020 and by National Funds through FCT - Fundação para a Ciência e a Tecnologia within CINTESIS, R&D Unit (reference UID/IC/4255/2013).

Falls Efficacy Scale-International, Older people, Adult day care, Fear of falling, Psychometric properties.

P31 Function-Focused Care: validation of self-efficacy, outcomes expectations and knowledge scales

Lénia Costa 1, Pedro Sá-Couto 2, João Tavares 3,4, correspondence: Lénia Costa ([email protected]).

The nursing assistant (NA) plays an important role in maintaining the health and independence of institutionalized older adults (OA) [1]. These professionals are required to help OA achieve and maintain their highest level of function. Function-Focused Care (FFC) is a philosophy of care that promotes the restoration and/or maintenance of physical function. In the institutional context, it is important to empower NAs to adopt this philosophy [2].

This study intends to analyse the perception of NAs in relation to FFC through scales of self-efficacy, outcomes expectations and knowledge, as well as to assess the validity- and reliability-related properties of these scales.

A quantitative, descriptive/correlational, cross-sectional approach was used. A self-report questionnaire consisting of sociodemographic and professional variables and the scales of self-efficacy, expectations and knowledge was applied. Further details about the scales used can be found in Costa [3]. The validation/reliability procedures for each scale consisted of exploratory factor analysis, Cronbach's alpha, and the intra-class correlation coefficient (ICC) for test/retest purposes. Correlations between the scales themselves, feelings related to the care of the elderly, and sociodemographic and professional variables were tested using the Spearman rank correlation.

The sample consisted of 73 NAs (100% women) with a mean age of 46.4 (± 9.9) years, from 5 different institutions. The self-efficacy scale showed a three-factor model explaining 73.4% of total variance, with Cronbach's alpha = 0.852 and ICC = 0.80. The outcomes expectations scale presented one factor, with Cronbach's alpha = 0.952 and ICC = 0.97. The knowledge scale obtained only 44.7% of correct answers. It was not possible to develop predictive models relating these scales in a pre-intervention situation. Also, the low correlations between the scales and feelings related to the care of OA (difficulty, gratification, physical overload and emotional overload) or sociodemographic and professional variables (age, years of experience, and self-knowledge) indicated a weak dependence between them. Finally, the institution variable was not a confounding variable (that is, it did not influence these results).

The Portuguese versions of the scales analysed showed satisfactory validity and reliability. These results suggest that the Portuguese versions of these scales can be used to evaluate the FFC performed by NAs. They also point to the importance of implementing an FFC program in institutions and analysing its impact on OA care and on NAs.

1. Gray-Stanley JA, Muramatsu N. Work stress, burnout, and social and personal resources among direct care workers. Research in Developmental Disabilities. 2011;32(3):1065-74. http://doi.org/10.1016/j.ridd.2011.01.025

2. Resnick B, Boltz M, Galik E, Pretzer-Aboff I. Restorative Care Nursing for Older Adults. 2nd ed. New York: Springer Publishing Company; 2012.

3. Costa L. Cuidado centrado na funcionalidade: validação das escalas de autoeficácia, expectativas e conhecimento [Master Thesis] [Portuguese]. Universidade de Aveiro; 2016.

Aging, Functionality, Function-focused care, Nursing assistant.

P32 Determinant factors for the development of student competencies in the context of clinical training: an ecological perspective

Marília Rua 1, Isabel Alarcão 2, Wilson Abreu 3, 1 Health School, University of Aveiro, 3810-193 Aveiro, Portugal; 2 University of Aveiro, 3810-193 Aveiro, Portugal; 3 Nursing School of Porto, 4200-072 Porto, Portugal, correspondence: Marília Rua ([email protected]).

The growing complexity of health care settings, and of care itself, requires that training in this area be conceived as a dynamic process of integrating and implementing knowledge in each context, which is only possible in close collaboration between the school and a real context of clinical practice [1]. In the light of the bio-ecological perspective [2], the development of students' skills in this context may be influenced by several factors related to the person, the process, the context and time.

To understand the factors that influence the development of students' skills in clinical training.

We selected a qualitative methodology, using a case study [3] of the Nursing Degree at the University of Aveiro. Data emerged from narratives of students and supervisors about their experiences in the clinical context.

The final results allow us to conclude that the development of abilities occurs in an integrated way, synergistically combining different dimensions and factors related to the PPCT model. For the Person, the activities, the contact with suffering/death and the affective-relational climate emerge. For the Process, the proximal process stands out, as well as supervision strategies. Regarding Context, the specificities of each setting emerge in the microsystem; in the mesosystem, the importance lies in multicontextual participation; in the exosystem, in the interinstitutional relationship; and, at the macrosystem level, in the influence of hospital management policies. With respect to Time, the importance of the continuity of the proximal processes and the periodicity of the clinical teaching was observed.

The students' skills development is a dynamic, dialectical and progressive process which implies: continuity over time; progressive interaction with the people of the context and the process; and contexts that establish themselves as important elements in the development of students' skills at different levels.

1. Rua M dos S. De aluno a enfermeiro - Desenvolvimento de Competências em Contexto de Ensino Clínico. Loures: Lusociência; 2011.

2. Bronfenbrenner U, Morris P. The Ecology of Developmental Process. In: Pedro JG, editor. Stress and Violence in Childhood and Youth. Lisboa: Faculdade de Medicina, Universidade de Lisboa; 1999. p. 21–96.

3. Yin R. Estudo de Caso. Planejamento e Métodos. 3rd ed. Porto Alegre: Artmed Editora; 2005.

Bioecological model, Student, Competencies, Clinical training.

P33 Phytochemical screening of Rosmarinus officinalis and Ginkgo biloba leaf extracts

Ana França 1, Diana Silva 2, Ana I Oliveira 3, Rita F Oliveira 3,4, Cláudia Pinho 3, Agostinho Cruz 3, 1 Farmácia Holon, Baguim do Monte, 4435-668 Gondomar, Portugal; 2 Farmácia Higiénica, Fão, 4740-323 Esposende, Portugal; 3 Centro de Investigação em Saúde e Ambiente, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 4 Secção Autónoma de Ciências da Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal, correspondence: Ana França ([email protected]).

Currently, drug therapy with oral antidiabetic agents is capable of inducing normoglycaemia levels able to decrease the risk of complications associated with diabetes mellitus. However, it is also known that the various existing oral antidiabetic agents may trigger a large number of adverse events, either alone or in combination. Some of these tolerability and safety issues related to oral antidiabetics are reported by patients and can negatively influence satisfaction with treatment, glycaemic control, and therapeutic adherence and maintenance. The role of patients in monitoring adverse events related to the use of oral antidiabetic drugs is therefore very important in order to optimize treatment and improve the quality of life of patients with type 2 diabetes (DM2).

The aim of this study is to determine the prevalence of adverse events associated with the use of oral antidiabetics and to assess their impact on the Health-related Quality of Life (HRQoL) of diabetic patients followed in primary health care.

The results show that the highest prevalence of adverse events is in the Dipeptidyl Peptidase-4 inhibitors, followed by the Metformin+Sitagliptin (fixed dose) and Metformin+Vildagliptin (fixed dose) therapeutic classes. We also found that all the correlations between the different variables are statistically significant (p < 0.001).

Thus, we conclude that patients who show a greater number of adverse events tend to have a poorer health profile, worse general health and lower health-related quality of life.


Rosmarinus officinalis, Ginkgo biloba, Phytochemical screening, Leaf extract.

P34 Systematic review - how comfort and comfort in nursing are characterized

Ana R Sousa 1, Eliette Castela 2, Patrícia Pontífice-Sousa 2, Teresa Silveira, 1 Centro Hospitalar de Setúbal, 2910-446 Setúbal, Portugal; 2 Universidade Católica Portuguesa, 1640-023 Lisboa, Portugal, correspondence: Ana R Sousa ([email protected]).

Comfort is an important concept and a fundamental value of nursing. It is assumed to be a multidimensional, dynamic and intersubjective concept, and nursing intervention measures are used to satisfy specific comfort needs; comforting thus constitutes a competence of the nurse. Recognizing the importance of scientific evidence in practice, and the importance of characterizing and understanding the ways and means of comfort centred on the needs of the client, an exploratory study was carried out with the purpose of knowing the meaning of comfort, as well as the ways and forms of comfort, in order to define effective interventions that promote it.

To understand how comfort and comfort in nursing are characterized in the nursing scientific literature.

Systematic review of the literature based on the recommendations of the Joanna Briggs Institute, the PICO strategy and the PRISMA recommendations. The search was performed in the CINAHL Plus, MEDLINE, Nursing & Allied Health Collection and MedicLatina databases, from January 2010 to November 2017, combining the following descriptors: Comfort* AND Nurs* AND research NOT Psyquiatric.

Eleven studies involving people with chronic and acute illness were included in the review. The studies showed that being socially accepted, being physically comfortable, feeling safe and being close to significant people are some of the characteristics that qualify comfort. Regarding comfort in nursing, the findings analysed demonstrate numerous comforting strategies, namely effective and empathetic presence, touch, smile and family integration in the care process, among others.

The main results, while providing data that allow us to characterize comfort and comfort in nursing, also highlight the need for further investigation of this focus.

Characterization of the comfort term, Comfort care, Nursing care.

P35 Routines of life and health of institutionalized young people

Tiago Machado 1, João Serrano 1, Sergio Ibanez 2, Helena Mesquita 1, Pedro Pires 1, 1 School of Education, Polytechnic Institute of Castelo Branco, 6000-266 Castelo Branco, Portugal; 2 Faculty of Sports Science, University of Extremadura, 10003 Cáceres, Spain, correspondence: Tiago Machado ([email protected]).

Today's society creates many limitations in the daily lives of children and young people, which have an impact on their well-being, health and quality of life. This is aggravated in the case of institutionalized children.

To study the leisure activities of young people institutionalized in Homes for Children and Youth and to know if these activities contribute to the development of healthy lifestyles.

A questionnaire, submitted to validation by specialists, was used as the instrument. The questionnaires were completed by the young people in the presence of the researcher. The sample consisted of 100 young people aged between 10 and 18 years old, belonging to 6 Homes for Children and Youth.

During free time at school, we found that the most frequent recess activities were talking with friends (86.4%), playing on the sports fields (47.5%) and dating (23.3%). Most of the young people do not participate in school sports (76.7%). The most frequent activities after school were: watching TV (100%), cleaning the institution (96.2%), listening to music (89.3%), studying (87.4%), playing on the computer and on Facebook (82.6%) and doing physical activity (table tennis, football, among others) (73.8%). The young people reported doing these activities daily or at least 2 to 3 times a week. When asked if they would like to occupy their free time within the institutions differently, the young people were divided, although the majority responded that they would not (53.4%). As the young people are in an open regime, we asked which activities were most frequently carried out outside the institution; they answered walking with their friends (87.4%), daily or at least 2 to 3 times a week. When asked if they would like to spend their free time away from the institutions in a different way, most of the young people said no, given that the activities carried out are their preference.

The results showed that the young people in the study carried out activities considered healthy, which contribute to their quality of life and well-being; however, we verified that they present limitations, namely in relation to physical activity, since most of the activities performed both inside and outside the institutions had low levels of intensity and frequency, being carried out sporadically.

Life and health routines, Free Time and Leisure, Children and young people at risk.

P36 Evaluation of pain in patients intubated orotracheally: BPS and CPOT

Ana RPQ Pinheiro 1, Rita Marques 2, 1 Instituto de Ciências da Saúde, Universidade Católica Portuguesa, 1649-023 Lisbon, Portugal; 2 Escola Superior de Saúde da Cruz Vermelha Portuguesa, 1300-906 Lisbon, Portugal, correspondence: Ana RPQ Pinheiro ([email protected]).

Pain is a symptom that is difficult to assess and characterize for health care providers caring for orotracheally intubated (OTI) patients unable to communicate verbally, so its assessment is critical to an effective management of care and therapy. An OTI patient is exposed to a variety of painful procedures and, if the pain is not controlled, it can lead to multiple complications, both physical (cardiovascular, neurological and pulmonary) and psychological (stress, anxiety and delirium), so nurses must have credible instruments for evaluation and monitoring.

This systematic literature review (SLR) aimed to identify the most reliable tool for pain assessment by analysing the validity and reliability of the BPS and CPOT scales, as well as their ease of application.

Using the methodology recommended by the Cochrane Centre, this SLR was guided by the following research question: “Which is the most appropriate scale, BPS or CPOT, for pain assessment in patients unable to communicate verbally?” The seven included studies resulted from a search in EBSCO, using the terms “behavioural pain scale” and “critical care pain observation tool”, combined with the Boolean operator “and”, in full text, published between 2007 and 2017.

The 7 selected studies, all of a quantitative nature, with 23 to 117 participants, concluded that both scales are reliable and valid for the assessment of pain in this population [1-5]. One study [4] reports that although the BPS is more sensitive in identifying the patient's response, the CPOT is a good alternative. It was also verified that both instruments are sensitive to painful procedures, with an increase in several indicators [1, 2, 6]. There was also a statistically significant correlation between blood pressure values and the performance of a painful procedure (the higher the pain value, the higher the blood pressure) [5, 6].

Both scales (BPS and CPOT) are suitable for the evaluation of pain in OTI patients and, according to nurses, both are easy to apply and useful for care delivery [5, 7]. However, the literature does not indicate which scale is more adequate, suggesting that further studies are needed.

1. Rijkenberg S, Stilma W, Endeman H, Bosman RJ, Straaten HMO. Pain measurement in mechanically ventilated critically ill patients: Behavioral Pain Scale versus Critical-Care Pain Observation Tool. Journal of Critical Care. 2015;30:167-172.

2. Liu Y, Li L, Herr K. Evaluation of Two Observational Pain Assessment Tools in Chinese Critically Ill Patients. Pain Medicine. 2015; 16: p. 1622-1628.

3. Rahu MA, Grap MJ, Ferguson P, Joseph P, Sherman S, Elswick, Jr RK. Validity and Sensitivity of 6 Pain Scales in Critically Ill, Intubated Adults. American Journal of Critical Care. 2015 Nov; 24(6): p. 514-523.

4. Darwish ZQ, Hamdi R, Fallatah S. Evaluation of Pain Assessment Tools in Patients Receiving Mechanical Ventilation. AACN Advanced Critical Care. 2016 4-6; 27(2): p. 162-172.

5. Vadelka A, Busnelli A, Bonetti L. Comparison between two behavioural scales for the evaluation of pain in critical patients, as related to the state of sedation: an observational study. SCENARIO. 2017; 34(2): p. 4-14.

6. Damström DN, Saboonchi F, Sackey PV, Björling G. A preliminary validation of the Swedish version of the critical-care pain observation tool in adults. Acta Anaesthesiol Scand. 2011;55:379-386.

7. Fothergill Bourbonnais F, Malone-Tucker S, Dalton-Kischel D. Intensive care nurses’ assessment of pain in patients who are mechanically ventilated: How a pilot study helped to influence practice. Canadian Journal of Critical Care Nursing. 2016; 27(3): p. 24-29.

Behavioral pain scale, Critical care pain observation tool, Pain rating scales, Nursing.

P37 Analgesic effect of the topical use of cannabidiol oil in experimental ulcers in the laboratory animal

María-del-Carmen Jiménez-Díaz 1, Carla Jiménez-Rodríguez 1, María-del-Pino Quintana-Montesdeoca 2, Juan F Jiménez-Díaz 3, Francisco J Hernández Martínez 3, 1 Universidad de Jaén, 23071 Jaén, Spain; 2 Universidad de Las Palmas de Gran Canaria, 35015 Las Palmas de Gran Canaria, Spain; 3 Cabildo de Lanzarote, 35500 Lanzarote, Las Palmas, Islas Canarias, Spain, correspondence: María-del-Carmen Jiménez-Díaz ([email protected]).

The topical use of the Cannabis sativa L. plant for the treatment of haemorrhages, inflammation, oedema and various types of pain, among others, has been known since ancient times. In fact, tincture and cannabis extract were sold without restriction in European and American pharmacies until the beginning of the 20th century.

To study the analgesic effect of cannabidiol oil (CBD) applied topically to experimental ulcerative skin lesions in the laboratory animal.

Experimental study with a control group (given physiological saline to maintain hydration conditions) and a group given extra virgin olive oil (EVOO, to avoid bias from the oleic excipient), to check the analgesic effect of CBD applied topically on ulcerative lesions. Full-thickness experimental skin wounds were produced in adult male white rats of the Sprague Dawley strain. Ten animals were used for each group under standard laboratory conditions. After anaesthesia with 100% isoflurane, a full-thickness skin wound was performed in the region of the back with a disposable surgical punch of 8 mm diameter. The animals were then distributed in individual cages, to prevent them from licking each other, with sufficient height to prevent rubbing of the skin ulcer against the enclosure. 0.15 ml of the respective product was applied daily to the ulcers. The analgesic response was assessed by measuring the tail-withdrawal latency of the animal under a thermal stimulus with the LE7106 Tail-flick® meter (Letica®): after topical application of the respective product to the tail by friction and a 15-minute wait, the tail of each animal was subjected to the infrared beam. The same procedure was followed with the other products studied, the EVOO and the CBD. The analgesic response was assessed twice, on different days. The statistical program SPSS 25.0 was used, considering a level of significance of p < 0.05.

There was no significant difference between the products applied to the different animals. The highest average latency corresponded to the application of CBD in observation and measurement 1. In observation and measurement 2, no significant difference was detected and the average values were similar between EVOO and CBD.

The most favourable analgesic response was obtained under the influence of cannabidiol oil, with pain tolerance increasing as the experimentation progressed.

Analgesia, Cannabidiol, Skin, Ulcer, Rat.

P38 Polypharmacy in elderly patients in a community pharmacy

Sónia Lopes 1, Clara Rocha 2, Rui Cruz 1, 1 Pharmacy Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Complementary Sciences Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal, correspondence: Sónia Lopes ([email protected]).

The definition of polypharmacy is not consensual, but all authors refer to it as the simultaneous and chronic use of several drugs by the same person. Polypharmacy mainly affects the elderly, owing to the high number of chronic diseases in this population and the consequent need to take medications to control them.

Characterization and quantification of polypharmacy in a rural elderly population; identification of the most prescribed pharmacotherapeutic classes; evaluation of the association between polypharmacy and elderly characteristics.

An observational, retrospective, cross-sectional and analytical study was carried out in a community pharmacy in Pombal. A total of 230 individuals aged 65 years or older were surveyed, and data collection was made through a questionnaire prepared for this purpose. Major polypharmacy was defined as the chronic consumption of at least 5 different drugs.

The elderly took, on average, 6.20 ± 2.91 drugs daily. The prevalence of major polypharmacy was 70.4%. The most prescribed pharmacotherapeutic groups were cardiovascular and Central Nervous System drugs. There were statistically significant differences between age and the number of medications taken, as well as between the number of drugs and the way medication is identified, knowledge of therapeutic indications, the occurrence of mistakes or doses taken outside the advised time, and self-perceived health state (p ≤ 0.05).

In view of the results obtained, it is concluded that polypharmacy is very high in the Portuguese population under study. The oldest individuals are those who consume the greatest number of drugs, and the elderly with fewer academic qualifications are those who have more difficulty in identifying medication and the respective therapeutic indications. It is necessary to adopt strategies to reduce polypharmacy, with both prescribers and pharmacy professionals playing a preponderant role in this task.

Polypharmacy, Elderly, Chronic medication.

P39 Salivary detection of the topical use of cannabidiol oil in experimental ulcers in the laboratory animal?

Bienvenida-del-Carmen Rodríguez-de-Vera 1, Carla Jiménez-Rodríguez 1, María C Jiménez-Díaz 2, Juan F Jiménez-Díaz 1, Francisco J Hernández-Martínez 3, 1 Universidad de Las Palmas de Gran Canaria, 35015 Las Palmas de Gran Canaria, Spain; 2 Universidad de Jaén, 23071 Jaén, Spain; 3 Cabildo de Lanzarote, 35500 Lanzarote, Las Palmas, Islas Canarias, Spain, correspondence: Bienvenida-del-Carmen Rodríguez-de-Vera ([email protected]).

Cannabidiol (CBD) is a phytocannabinoid for which there are no well-documented reports of topical use in ulcerative lesions and skin wounds.

To determine if the topical application of CBD in ulcers exerts a cumulative effect in the organism of the experimental animal, with undesirable effects at the level of the Central Nervous System (CNS), in a similar way to its general use.

Experimental study applying cannabidiol oil to full-thickness ulcerative skin lesions in adult male white rats of the Sprague Dawley strain, to check whether it accumulates in the body and can be detected in salivary secretion. Ten animals were used under standard laboratory conditions. After anaesthesia with 100% isoflurane, a full-thickness skin wound was performed in the region of the back with a disposable surgical punch of 8 mm diameter. The animals were then distributed in individual cages, to prevent them from licking each other, with sufficient height to prevent rubbing of the skin ulcer against the enclosure. 0.15 ml of cannabidiol oil was applied daily to the ulcers. The study of the body accumulation of the drug under study, the cannabidiol oil, was done by qualitative detection of the drug and its metabolites in the saliva of the experimental animal. This type of assay provides a preliminary analytical result, using monoclonal antibodies to selectively detect high levels of a specific drug; a positive result would require confirmation by other chemical methods to quantify the accumulation of the product. The test, drogotest®, is made up of spongy lollipop-like collectors which, after being soaked for 3 minutes in the mouth of the animal treated with cannabidiol oil, release their contents by manual compression through a strainer into a collecting chamber. Finally, 3 drops of the animal's saliva, contained in said collecting chamber, are poured over a detector sample in cassette wells that will show the presence or absence of the drug within less than 10 minutes, by direct visual observation of colour changes of the cassette sample.

The detection by the drogotest® test of the general accumulation of cannabis in the organism of the animals studied was negative.

The benefit of the topical use of cannabidiol in ulcerative lesions is confirmed without the undesirable effects that cannabis causes in the CNS.


Detection, Saliva, Cannabidiol, Skin, Ulcer, Rat.

P40 Humour and nurses’ stress: humour contributions to stress management. A systematic literature review

Maria I Santos 1, Rita Marques 2, 1 Health Sciences Institute, Catholic University of Portugal, 1649-023 Lisbon, Portugal; 2 Escola Superior de Saúde da Cruz Vermelha Portuguesa, 1300-125 Lisbon, Portugal, correspondence: Maria I Santos ([email protected]).

Nurses experience high levels of work-related stress due to their daily contact with critical situations, suffering, negative emotions and death. This stress overload imposes negative consequences at an individual and at an organizational level, with direct and indirect costs. On the contrary, humour, when used in an adaptive way, seems to produce benefits for health, job satisfaction and group cohesion. There is scientific evidence that humour may constitute an effective coping strategy for the management of work-related stress that nurses can use for their own benefit.

The aim of the present study is to build a systematic review framework on the relationship between humour and nurses’ work-related stress.

Using the methodology recommended by the Cochrane Centre, this systematic review was guided by the following research question: What is the contribution of humour to nurses’ stress management? The search was performed through the EBSCO and Google Scholar search engines, in the bibliographic databases CINAHL®Complete, MEDLINE Complete, Cochrane Controlled Trials Register, MedicLatina, Pubmed, Scielo and RCAAP, from 2007 to 2017, using the following conjugations of descriptors and Boolean operators: humour (OR humor) AND stress AND nurses (OR nursing) NOT children NOT patients NOT students.

Four articles responding to our research question were selected. The empirical studies were developed in four different countries (USA, Canada, UK and Portugal) with diverse designs, using both qualitative and quantitative approaches. Samples varied from 15 to 61 nurses. All studies demonstrated that humour expressions are used by nurses to deal with stressful situations [1-4].

The use of adaptive forms of humour promotes detachment and reassessment of the situation, contributing to stress management [3, 4]. It also strengthens interpersonal relationships, improves communication and group cohesion, and contributes to job satisfaction [1, 2]. Therefore, evidence shows that humour can be an effective tool in stress management. Moreover, one study points out that humour can also arise in response to stress, signalling an increased stress level [4]. Nevertheless, the scarce empirical evidence found suggests that this subject is not yet well established within Nursing Science.

1. Scott T. Expression of humour by emergency personnel involved in sudden deathwork. Mortality. 2007, 12 (4): 350-364.

2. Dean R, Major J. From critical care to comfort care: the sustaining value of humour. Journal of Clinical Nursing. 2008, 17: 1088-1095.

3. Harris T. Caring and Coping. Exploring How Nurses Manage Workplace Stress. Journal of Hospice & Palliative Nursing. 2013, 15 (8): 446-454.

4. Santos M, JosĂŠ H, Capelas M. O Humor e o Stresse dos Enfermeiros que Cuidam com Pessoas em Fim de Vida. Revista Servir. 2016, 59 (4): 69-74.

Humor/humour, Stress, Nurses.

P41 Socio-clinical relationships among nursing students in practice context

Laura Reis ([email protected]), Porto Nursing School, 4200-072 Porto, Portugal.

It is in the clinical context that students attribute greater meaning to their peers. This is related to a set of new experiences that are significant for these actors. Supported by the literature consulted, we are led to say that these relationships last even beyond the end of the course, unlike the relationships established in the classroom.

To analyse the relationships established among students over two consecutive clinical placements.

The study was carried out with students from a 2nd-year group of a Nursing Degree who were undertaking their first clinical experience in a hospital context, namely 10 weeks in an internal medicine service and 10 weeks in a surgery service. We chose an ethnographic study within the qualitative paradigm, in a longitudinal approach following the logic of the case study. As data collection techniques, we used participant observation and semi-structured interviews.

We verified that the relationships established between students differed according to whether it was the first or the second clinical teaching. The knowledge that the group acquired about themselves and about the methodologies adopted by the tutors was different in the different contexts. In the first clinical teaching (medicine), despite how little the students knew each other, the relationship established was very cooperative and cordial, based on a spirit of help. This was due to a number of factors: it being a first clinical experience, leading to a strong need to share information and knowledge related to the context and clinical practices; the lack of knowledge about the tutors; and the lack of confidence in the care of patients. The relationships established in this space were, therefore, strong and cohesive. In the clinical teaching of surgery, the relationships established were different, showing more divergences/heterogeneities. In the students' opinion, this was related to the work methodology/distribution of students and the academic work requested by the teachers.

We verified that in the 2nd clinical teaching, the relationships established between students were influenced by logics of interpersonal competition. As is known, the need to form more restricted groups can unleash dynamics of alliance and opposition among students, giving rise to behaviours of competitiveness, of helping others, or even to the development of individualized work. Another strongly conditioning aspect was the methodological strategies adopted by the tutors of the different clinical contexts.

Clinical supervision, Clinical teaching, Nursing students, Socio-clinical relations.

P42 Stress management in undergraduate nursing students

Marco Oliveira, Andreia Santos, Rafaela Barbosa, Diana Portovedo, Isabel Oliveira, Escola Superior de Enfermagem da Cruz Vermelha Portuguesa, 3720-126 Oliveira de Azeméis, Portugal, correspondence: Isabel Oliveira ([email protected]).

Admission to higher education is seen by some students as an opportunity for growth, to explore new environments and build new relationships; other students, however, perceive it as potentially anxiogenic. Some of the stressful factors are examinations, clinical teaching, academic results, a competitive environment and the experience of transition and adaptation to a new academic environment. Therefore, it is necessary to work on the causes of stress and promote coping strategies. Since this subject is so important for undergraduate students, who experience it so intensely, it is crucial that students also take an active part in promoting the efficient management of stress.

Thus, in order to contribute to the adoption of coping strategies and promote the efficient management of stress, a participatory action research in health was developed in a nursing school. The strategies were: the implementation of a student support line (peer support); monthly meetings, addressed to the student population, on topics related to stress and its management; and tutorials for 1st-year undergraduate students by 3rd- and 4th-year students, through a structured program of integration into the nursing school. The effectiveness of the activities was followed up through a questionnaire measuring stress levels. This research foresees an initial evaluation (before implementation), intermediate evaluations and a final one, to measure the achievement of the objectives initially stated.

The preliminary findings show a mean of 8.52 and a standard deviation of 2.73 in the score of the domain Sleep/Stress of the Portuguese version of the Fantastic Lifestyle Assessment.

With the implementation of this participatory action research, a reduction of stress levels is expected, as well as enabling students to adopt coping strategies to manage their stress. It will also allow a better integration of students and better academic development, both in theoretical evaluations and in clinical teaching, consolidating the relational skills that are so important during clinical teaching and later in professional life.

Nursing students, Stress, Coping skills, Participatory action research.

P43 A contribution to the validation of the Volume-Viscosity Swallow Test (V-VST) – Portuguese version

Catarina Camões 1, Marília Dourado 1, Maria A Matos 2, 1 Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 2 School of Health, University of Aveiro, 3810-193 Aveiro, Portugal, correspondence: Marília Dourado ([email protected]).

The prevalence of swallowing disorders after stroke is well described in the literature [1,2]. The early identification of these alterations, resorting to non-invasive and easily administered instruments, can minimize their consequences and reduce morbidity and mortality among these patients [1,4]. The V-VST exhibits good psychometric properties, allowing the early identification of patients at risk of developing respiratory and nutritional complications. Its use also allows preventive dietary recommendations to be made to patients until the diagnosis is confirmed by instrumental examinations [2].

The goal of this study is to contribute to the validation of the V-VST – Portuguese version in patients with subacute stroke.

In phase I, the V-VST – Portuguese version [3], as well as its instructions, was presented to a panel of experts constituted by six speech and language therapists, in order to assess its content validity. In phase II, after ethical approval, it was applied to thirty-three patients with subacute stroke, to analyse its psychometric properties, namely its internal consistency and reliability (inter- and intra-rater). Criterion validity was assessed through the simultaneous application of the 3-oz water swallow test (3Oz WST). Collected data were analysed with IBM SPSS version 24.0. Intraclass correlation, Cronbach's alpha and Cohen's kappa values were calculated.
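As an illustration of the inter-rater agreement statistic named above, Cohen's kappa can be computed from paired categorical ratings; the sketch below uses scikit-learn with invented ratings, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: two raters classifying the same patients on a
# binary V-VST safety outcome (0 = safe swallow, 1 = impaired safety).
rater_a = [0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0]
rater_b = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0]

# Kappa corrects raw agreement for the agreement expected by chance.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.3f}")
```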

Results of phase I demonstrate a very good agreement between all members of the panel of experts, both for the constituent items of the V-VST (I-CVI/Ave = 0.95) and for its instructions (I-CVI = 0.83). Preliminary results of phase II showed that the V-VST presents very good intra-rater (ICC = 0.816) and inter-rater (ICC = 0.837) correlation coefficients. Values obtained from the comparison between the V-VST and the 3Oz WST gave similar results (I-CVI = 0.83).

The V-VST - Portuguese version seems to be a valid, reliable and practical tool for assessing dysphagia in patients with subacute stroke. Further studies need to be done in the future.

We would like to thank Nutricia Portugal, the Faculty of Pharmacy of the University of Coimbra, as well as all the patients and members of the Neurology C service of the Coimbra Hospital and University Centre, for all the support and availability shown during this study.

1. Belafsky PC, Mouadeb DA, Rees CJ, Pryor JC, Postma GN, Allen J, Leonard RJ. Validity and Reliability of the Eating Assessment Tool (EAT-10). Ann Otol Rhinol Laryngol. 2008;117(12):919-924.

2. Clavé P, Arreola V, Romea M, Medina L, Palomera E, Prat MS. Accuracy of the volume-viscosity swallow test for clinical screening of oropharyngeal dysphagia and aspiration. Clin Nutr. 2008 Dec;27(6):806-815.

3. Nogueira DS, Ferreira PS, Reis EA, Lopes IS. Measuring Outcomes for Dysphagia: Validity and Reliability of the European Portuguese Eating Assessment Tool (P-EAT-10). Dysphagia. 2015;30(5):511-520.

Deglutition disorders, Bedside examination, Dysphagia, Aspiration.

P44 Strategies to improve hand hygiene practices: an integrative literature review

Ana C Mestre, Filipa Veludo, Susana Freitas, correspondence: Ana C Mestre ([email protected]).

Healthcare-associated infections (HAIs) are a global concern and pose a real threat to patient safety; many of them are preventable [1]. Knowing that the hands of healthcare professionals are one of the main vehicles for the transmission of microorganisms, hand hygiene (HH) is recognized as the easiest and most effective measure to prevent and reduce HAIs [2]. However, despite all the evidence available, and although 98% of healthcare professionals consider HH the most important basic precaution in preventing HAIs, compliance is poor, remaining below 40% [3,4].

To identify, in the literature, the most effective strategies to promote HH compliance.

An integrative review was carried out between September and October 2017 with the Boolean strategy: [(TI Title) hand hygiene AND (AB Abstract) nurse AND (AB Abstract) infection AND (AB Abstract) strategy OR compliance OR adherence] in CINAHL®, Science Direct and Academic Search Complete. A total of 396 articles were initially identified. After applying the inclusion criteria (primary and secondary studies with a qualitative or quantitative approach, available in full text in Portuguese, English, French or Spanish) and the exclusion criteria (studies published before 2016), a sample of 12 articles was included for analysis.

Of the 12 articles analysed, 10 showed the importance of a multimodal approach to the improvement of HH practices, with a consequent increase in compliance with this behaviour. The combination of interventions addressing knowledge (education), awareness and the context of action (reminders in the workplace), together with the involvement and support of leaders and managers in building an institutional safety culture (social influence), stands out as the most effective way to ensure greater HH compliance.

In order to improve HH practices and, consequently, adherence to this behaviour, the adoption of a multimodal strategy proved more successful than single interventions. At an early stage, it is essential to understand the reasons for non-adherence to HH and then design interventions based on the identified barriers. The approach should be global, including not only healthcare professionals but also leaders and managers.

1. World Health Organization. WHO Guidelines on Hand Hygiene in Health Care. 2009. Accessed 20-11-2017. Available from: http://apps.who.int/iris/bitstream/10665/44102/1/9789241597906_eng.pdf

2. Direção-Geral da Saúde. Circular Normativa nº 13/DQS/DSD de 14/06/2010 (2010). Orientação de Boa Prática para a Higiene das Mãos nas Unidades de Saúde. Lisboa: Direção-Geral da Saúde. Accessed 20-11-2017. Available from: https://www.dgs.pt/directrizes-da-dgs/normas-e-circulares-normativas/circular-normativa-n-13dqsdsd-de-14062010.aspx

3. Piras SE, Lauderdale J, Minnick A. An elicitation study of critical care nurses’ salient hand hygiene beliefs. Intensive and Critical Care Nursing. 2017;42:10–16.

4. Farhoudi F, Sanaei Dashti A, Hoshangi Davani M, Ghalebi N, Sajadi G, Taghizadeh R. Impact of WHO Hand Hygiene Improvement Program Implementation: A Quasi-Experimental Trial. Biomed Res Int. 2016;2016:7026169.

Hand hygiene, Healthcare-associated infections, Multimodal strategy, Integrative literature review.

P45 Conception and implementation of a nursing intervention program for family caregivers

Ricardo Melo 1,2, Marília Rua 2, Célia Santos 3, 1 Centro Hospitalar de Gaia/Espinho, 4400-129 Gaia, Portugal; 2 Escola Superior de Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal; 3 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal, correspondence: Ricardo Melo ([email protected]).

With the increase in people's longevity and in the prevalence of diseases resulting in situations of dependency [1], a greater need emerges for supportive care to meet the needs expressed [2]. Family caregivers are very important elements in caring for a family member with self-care dependency in the home context [3, 4]. This is an exhausting process, with serious consequences for the caregiver's perceived general state of health, as well as for the burden manifested [3, 5, 6]. A structured and contextualized intervention program [7] aimed at the qualification and support of family caregivers is essential for the transition to, and adequate performance of, the functions inherent in this role.

To develop and implement a Nursing Intervention Program with family caregivers of dependent persons, in a home context.

This process began with an integrative review of the literature, in order to identify the main needs evidenced by family caregivers. Electronic databases were used, namely EBSCO and B-on, with the following descriptors: Caregiver; Family Caregivers; Needs; Dependent. The second stage corresponded to the adaptation of the Intervention Program, using the Delphi technique with a group of experts. The last phase corresponded to a quasi-experimental study, with pre- and post-intervention evaluation, implementing the program with 70 family caregivers through home visits.

The literature review yielded 21 articles (ten quantitative studies, five qualitative studies, four systematic literature reviews, one literature review and one mixed study). The needs evidenced were organized according to the Transition Theory: community and social resources; knowledge and preparation; personal meaning, beliefs and attitudes; and socioeconomic condition. The consensus technique allowed the structuring of a Nursing Intervention Program with 93 interventions, divided into emotional and instrumental support. The implementation of the Intervention Program involved, on average, 6 home visits per caregiver, the provision of emotional support, and caregiver training.

A Nursing Intervention Program with structured and contextualized interventions in the home context, with family caregivers of dependent persons, is a facilitator of the transition experienced by caregivers, as well as an important instrument for the work developed by nurses. Thus, it provides the emotional support and skills that enable caregivers to optimize care delivery.

1. INE - Instituto Nacional de Estatística IP. Censos 2011 Resultados Definitivos - Portugal. XV recenseamento geral da população; V recenseamento geral da habitação. Lisboa: Instituto Nacional de Estatística; 2012.

2. Figueiredo D. Cuidados Familiares ao Idoso Dependente. Lisboa: Climepsi Editores; 2007.

3. Imaginário C. O Idoso Dependente em Contexto Familiar: Uma Análise da Visão da Família e do Cuidador Principal. 2ª ed. Coimbra: Formasau; 2008.

4. Marques SCL. Os Cuidadores Informais de Doentes com AVC. Coimbra: Formasau - Formação e Saúde Lda.; 2007.

5. Martins T. Acidente Vascular Cerebral: Qualidade de Vida e bem-estar dos doentes e familiares cuidadores. Coimbra: Formasau – Formação e Saúde Lda.; 2006.

6. Sequeira C. Cuidar de Idosos com Dependência Física e Mental. Lisboa: Lidel; 2010.

7. ICN. CIPE Versão 2 - Classificação Internacional para Prática de Enfermagem. Lisboa: International Council of Nurses; 2011.

Family caregivers, Intervention program, Transition, Needs, Dependency.

P46 Antimicrobial activity of natural extracts and commercial elixirs in oral pathogens

Maria J Alves, Marta Pereira, Sara Fraga, Isabel Ferreira, Maria I Dias, Centro de Investigação de Montanha, Instituto Politécnico de Bragança, 5300-253 Bragança, Portugal, correspondence: Maria J Alves ([email protected]).

Although Streptococcus mutans has been regarded for decades as the etiological agent of dental caries, recent evidence indicates a high prevalence of S. mutans in dental biofilms where Candida albicans resides, which suggests that the interaction between these two species may mediate cariogenic development [1].

To evaluate the antimicrobial activity against C. albicans and S. mutans of three chemical elixirs of different commercial brands and two aqueous extracts obtained from plants (Chamomilla recutita L. and Foeniculum vulgare Mill.).

Percent growth inhibition was quantified by measuring optical density (OD) at 600 nm in a microplate reader.
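The abstract does not state the exact formula; a common definition of percent growth inhibition from OD600 readings, relative to an untreated control, is (1 − OD_treated/OD_control) × 100. A minimal Python sketch under that assumption:

```python
import numpy as np

def percent_inhibition(od_treated, od_control, od_blank=0.0):
    """Growth inhibition (%) from OD600 readings, blank-corrected.

    Uses the common definition (1 - treated/control) * 100; the exact
    formula used by the authors is not stated in the abstract.
    """
    treated = np.asarray(od_treated, dtype=float) - od_blank
    control = np.asarray(od_control, dtype=float) - od_blank
    return (1 - treated / control) * 100.0

# Illustrative triplicate wells for one extract against S. mutans
print(percent_inhibition([0.031, 0.028, 0.035], [0.742, 0.758, 0.750]).round(1))
```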

Both the extracts and the elixirs presented antimicrobial activity against the two microorganisms mentioned above. Among the elixirs tested, the one with the highest antimicrobial activity against S. mutans was Colgate (100%), followed by Eludril and a white-label brand (≥ 99%). For C. albicans, Eludril (100%) gave the highest activity, followed by Colgate (99%). A Chamomilla recutita extract (10 mg/ml) showed a growth inhibition percentage of 96% for S. mutans, very similar to that of the antibiotic (97%). The growth inhibition percentage decreased for C. albicans (87%), although it was higher than that of the antifungal fluconazole (84%).

The two extracts showed less antimicrobial activity than the elixirs; however, they had higher growth inhibition percentages than the drugs tested against both microorganisms. Of the two extracts, Chamomilla recutita presented the highest antimicrobial activity against the two microorganisms tested, compared with Foeniculum vulgare.

1. Metwalli KH, Khan SA, Krom BP, Jabra-Rizk MA. Streptococcus mutans, Candida albicans, and the Human Mouth: A Sticky Situation. PLOS Pathogens. 2013;9(10):1-5.

Oral biofilm, Antimicrobial activity, Elixir, Foeniculum vulgare Mill., Chamomilla recutita L.

P47 The effects of water walking on body composition – a study with children between 6 and 12 years old

Samuel Honório 1,2, João Oliveira 3, Marco Batista 1,2, João Serrano 1,4, Jorge Santos 3, Rui Paulo 1,2, Pedro Duarte-Mendes 1,2, 1 Department of Sports and Well-being, Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal; 2 Research on Education and Community Intervention, 4411-801 Arcozelo, Vila Nova de Gaia, Portugal; 3 Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal; 4 Centre for the Study of Education, Technologies and Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal.

Aquatic activities have been recommended as a frequent practice due to the physical properties of water, especially flotation and the reduction of joint impact, with improvements in the body composition of children.

The present research aims to verify whether there are differences in body composition between children aged 6 to 12 years who practice swimming complemented with water walking at the end of each session and those who only practice swimming.

The sample consisted of 28 individuals aged 6 to 12 years, divided into two groups: a swimming group (SG) with 9 children and a swimming complemented with water walking group (SWWG) with 19 children. In this twelve-week study, with three evaluation moments and two 45-minute sessions per week, we sought to identify the benefits in body composition (weight, muscle mass, fat mass, body water, BMI, waist circumference and body percentiles). For that purpose, we used a Targa Z29777A bio-impedance scale and an anthropometric tape to measure waist circumference. The water walking activity occurred at the end of each session for 6 minutes, performed in a straight line with the water level at the children's chest. For statistical procedures, we used the Statistical Package for the Social Sciences, version 20 (SPSS 20.0). We used descriptive statistics (minimum, maximum, means and standard deviations), the Shapiro-Wilk test for testing the normality of the sample, and inferential statistics (non-parametric Mann-Whitney tests and Friedman's ANOVA, with Cohen's d for the magnitude of the effect).
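As an illustration of the inferential procedures listed above, the sketch below runs the Mann-Whitney U test, Friedman's ANOVA and a pooled-SD Cohen's d in Python with scipy; all values are invented toy data, with only the group sizes (SG n = 9, SWWG n = 19) taken from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Toy weight data (kg) for the two groups at the three evaluation moments;
# group sizes follow the abstract, the values themselves are invented.
sg = rng.normal(38, 6, size=(9, 3))
swwg = rng.normal(37, 6, size=(19, 3)) - np.array([0.0, 0.4, 0.9])

# Inter-group comparison at the final moment (Mann-Whitney U)
u, p_u = stats.mannwhitneyu(sg[:, 2], swwg[:, 2])

# Intra-group comparison across the three moments (Friedman's ANOVA)
chi2, p_f = stats.friedmanchisquare(swwg[:, 0], swwg[:, 1], swwg[:, 2])

def cohens_d(a, b):
    """Effect size between two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

print(f"Mann-Whitney p = {p_u:.3f}, Friedman p = {p_f:.3f}, "
      f"d = {cohens_d(sg[:, 2], swwg[:, 2]):.2f}")
```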

After data treatment, regarding the inter-group analysis (comparison between the swimming group and the swimming with water walking group), we observed significant differences in the weight variable at the end of the three moments. Concerning intra-group differences (improvements within each group across the three moments evaluated), the SWWG showed significant improvements in weight as well as in muscle mass, fat mass, body water, body mass index (BMI) and body percentiles.

We conclude that the practice of activities such as swimming and water walking has benefits in the variables analysed and that there are differences between the groups; the two activities combined (swimming and water walking), however, produce considerably greater improvements.

NCT03519620

Children, Water Walking, Swimming, Body composition, Bio-impedance.

P48 Health literacy level of students at the time of enrolment to health courses in higher education

Luis S Luis 1,2, Victor Assunção 3, Henrique Luis 4, Helena Melo 5, 1 School of Health Science, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Escola Superior de Saúde, Instituto Politécnico de Portalegre, 7300-074 Portalegre, Portugal; 4 Faculdade de Medicina Dentária, Universidade de Lisboa, 1600-003 Lisboa, Portugal; 5 Escola Superior de Saúde Ribeiro Sanches, 1950-396 Lisboa, Portugal, correspondence: Luis S Luis ([email protected]).

A good level of health literacy is essential for effective communication between health professionals and patients. Health professionals' training should provide them with a set of skills in this area that allows them to communicate successfully with other health professionals and with patients.

The goal of this study was to identify the level of health literacy of students when enrolling at universities and polytechnic institutes. Students from Nursing Degree courses (Portalegre School of Health Sciences and Ribeiro Sanches Health School) and from the Dental Hygiene Degree (Faculty of Dental Medicine of Lisbon and Portalegre School of Health Sciences) had their health literacy level assessed at enrolment in the year 2013/2014.

To evaluate the level of health literacy, the Portuguese version of the NVS (Newest Vital Sign) was administered to 42 dental hygiene students (6 males and 36 females) and 53 nursing students (14 males and 39 females).

The total sample of 95 students was analysed for route of entry into higher education: 83.5% came from high school, 6.6% entered through the “Maiores de 23” process, and 9.9% through other enrolment processes. When analysing these data by degree, most of the Dental Hygiene students (92.9%) accessed higher education through high school completion; for Nursing students this value was 75.5%. Regarding the level of health literacy, 72.63% of the students had a high level, 24.21% a moderate level and 3.16% a low level. There was a statistically significant difference between Nursing and Dental Hygiene students, with the former having better health literacy (p = 0.007). At the time of entry, and considering each course, the data show that 4.8% of Dental Hygiene students had a low level of health literacy, 21.4% a moderate level and 73.8% a high level; for the Nursing Degree these values were, respectively, 1.9%, 26.4% and 71.7%.

Students access higher education with a level of health literacy appropriate to the courses they intend to attend. The results presented are preliminary and form part of a longitudinal study lasting three academic years, which evaluates the evolution of students’ health literacy levels throughout their training.

Keywords: Health literacy, Nursing School, Dental Hygiene School, Students.

P49 Nursing care of vulnerable patients in emergency situations: what does the evidence say?

Marta Pacheco, Maria T Leal. Correspondence: Marta Pacheco ([email protected]).

The quality of nursing care for the critically ill patient (CIP) has improved significantly in recent years because of important technological advances. In pre-hospital, emergency and intensive care settings, however, these advances contribute to the valuation of technical competencies over relational aspects. The urgent need to stabilize the CIP diverts attention away from psychological and emotional support, increasing anxiety and diminishing the patient’s cooperation, making the care experience increasingly hostile. Vulnerability arises in these settings [1] as a condition to be appreciated in a “harmonious conciliation between the technological expertise and the art of caring” [2], because it improves expectations and experiences and increases the positive outcomes of the resuscitation process, giving greater visibility to the nurses’ work. Understanding the experiences of patients who have lived through emergency situations, as well as the nursing interventions that reduce vulnerability, will allow for better care of the CIP [3].

Our goal is to review the available evidence regarding nursing interventions that reduce vulnerability in emergency and resuscitation settings.

We performed an integrative review of the literature available from the MEDLINE and CINAHL databases and from grey literature, to answer the question: “Which nursing interventions promote the reduction of the CIP’s vulnerability in emergency settings?” [4].

Nine qualitative and quantitative studies satisfied the search criteria and were included. These studies come from different countries and cultures, and their analysis identified both the emergency setting and nursing interventions as elements that influence the CIP’s subjective experience of vulnerability. Vulnerability is a permanently present condition in human beings, and nurses’ recognition of this mutual vulnerability is a way of preserving the value of caring.

The experience of the CIP in emergency and resuscitation settings is influenced by organizational, environmental and caring factors. Being cared for by competent professionals [5,6], using a therapeutic relationship and responding to the patient’s main needs are the most valued aspects, conveying safety and a significant decrease in vulnerability. Communication and empathy [6], with explanation of clinical procedures, allow the development of trust and reduce vulnerability, facilitating the patient’s collaboration in the care provided. Recognizing the vulnerability of the CIP and its influence on collaboration, recovery and satisfaction with care allows strategies to be developed and mechanized, merely technical behaviours to be abolished. Vulnerability is a continually present condition in humans, and the recognition of the professionals’ own vulnerability [7] is a way to socially preserve the value of caring in an emergency context.

1. Mitchell M. General anaesthesia and day-case patient anxiety. J Adv Nurs. 2010;66(5):1059–1071.

2. Sá F, Botelho M, Henriques M. Cuidar da Família da Pessoa em Situação Crítica: A Experiência do Enfermeiro. Pensar Enferm. 2015;19(1):31–46.

3. Paavilainen E, Salminen-Tuomaala M, Kurikka S, Paussu P. Experiences of counselling in the emergency department during the waiting period: importance of family participation. J Clin Nurs. 2009;18(15):2217–2224.

4. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual: 2014 Edition [Internet]. Adelaide: The Joanna Briggs Institute; 2014. 197 p.

5. Baldursdottir G, Jonsdottir H. The importance of nurse caring behaviors as perceived by patients receiving care at an emergency department. Heart Lung. 2002;31(1):67–75.

6. Wiman E, Wikblad K, Idvall E. Trauma patients’ encounters with the team in the emergency department: a qualitative study. Int J Nurs Stud. 2007;44(5):714–22.

7. Malone RE. Dimensions of vulnerability in emergency nurses’ narratives. Adv Nurs Sci. 2000;23(1):1–11.

Vulnerability, Critically ill patient, Nursing care, Emergency department.

P50 Sedentary behavior of older people above 75: where, when and with whom

Marta Gameiro, Madalena G Silva. Correspondence: Marta Gameiro ([email protected]).

Sedentary behaviours are understood as activities with a low energy expenditure (<1.5 metabolic equivalents, METs) [1]. Even for those who comply with physical activity recommendations, prolonged time in sedentary behaviours is associated with an increased risk of chronic and cardiovascular diseases; decreased muscle mass, strength and power; decreased functional capacity; and premature death in elderly people [2]. Older people spend long periods of time in sedentary behaviour [3]. A deeper understanding of which activities constitute these behaviours, and when and with whom they take place, is therefore extremely relevant for health professionals who need to promote a reduction in sedentary time among very old people in order to enhance health benefits.

To characterize the time spent in sedentary behaviour by older people in three urban day-care centres, regarding its duration, the type of activities, and where and with whom they take place.

A cross-sectional study was conducted with 54 participants, with an average age of 84.53 ± 5.35 years. An activity diary was used to characterize sedentary activity. Each activity noted in the diary was classified in METs, per the Compendium of Physical Activities: Classification of Energy Costs of Human Physical Activities, and then clustered into four groups: sedentary, light-intensity, moderate-intensity and vigorous-intensity activities. Time spent in activities within each group was summed, giving the total time in sedentary, light, moderate and vigorous intensity activities (min·week⁻¹). Relative and absolute frequencies, as well as means/standard deviations, were used to characterize sedentary behaviour.
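
To make the diary-coding step concrete, here is a minimal sketch, assuming a hypothetical MET lookup per activity; the entries and MET values are illustrative, and in practice would come from the Compendium cited above. The <1.5, <3 and <6 MET cut-points are the conventional intensity thresholds.

```python
import pandas as pd

# Hypothetical diary entries: activity name and total minutes over one week
diary = pd.DataFrame({
    "activity": ["watching TV", "walking", "cooking", "gardening"],
    "minutes": [1310, 120, 210, 60],
})

# MET value per activity (illustrative; taken in practice from the Compendium)
mets = {"watching TV": 1.0, "walking": 3.5, "cooking": 2.0, "gardening": 4.0}
diary["met"] = diary["activity"].map(mets)

def intensity(met):
    # Conventional cut-points: <1.5 sedentary, <3 light, <6 moderate, else vigorous
    if met < 1.5:
        return "sedentary"
    if met < 3.0:
        return "light"
    if met < 6.0:
        return "moderate"
    return "vigorous"

diary["group"] = diary["met"].apply(intensity)
weekly_minutes = diary.groupby("group")["minutes"].sum()  # min per week per group
print(weekly_minutes)
```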

Our sample was mainly female (88.9%), widowed (70.4%), living alone (64.8%) and with a low educational background (61.1%). On a regular week, participants spent an average of 6 h 20 min per day (2608.15 ± 930.67 min·week⁻¹) in sedentary behaviour. Half of this time was spent watching TV (50.2%), alone, in the afternoon and evening. Other activities were talking (11.73%), reading (6.7%), resting (6.4%) and playing board games (4.7%).

We conclude that our sample spent an average of 6 h 20 min per day in sedentary activity, mainly in the afternoon and evening, watching television alone. In order to break sedentary patterns, clinical interventions need to target this particular period of the day, finding alternative strategies for greater energy expenditure among very old people.

1. Ainsworth BE, Haskell WL, Whitt MC, Irwin ML, Swartz AM, Strath SJ, et al. Compendium of Physical Activities: an update of activity codes and MET intensities. Med Sci Sports Exerc. 2000;32(9 Suppl):S498–S504.

2. Dunlop DD, Song J, Arntson EK, Semanik PA, Lee J, Chang RW, et al. Sedentary time in U.S. older adults associated with disability in activities of daily living independent of physical activity. J Phys Act Health. 2015;12(1):93–101.

3. Leask CF, Harvey JA, Skelton DA, Chastin SF. Exploring the context of sedentary behaviour in older adults (what, where, why, when and with whom). Eur Rev Aging Phys Act. 2015;12(1):4.

Sedentary behavior, Context, Very old people.

P51 Women’s perception of the role of the family nurse in the transition to motherhood

Andreia Ferreira, João Simões, Helena Loureiro; Escola Superior de Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal. Correspondence: Andreia Ferreira ([email protected]).

This study, entitled “Women’s perception of the role of the family nurse in the transition to motherhood”, aims to understand the experiences of puerperal women regarding the role of the family nurse in promoting the transition to motherhood, in the year 2016/2017, at USF Rainha Tereza.

The empirical study was authorized by the ARS Centro Ethics Committee. It follows a qualitative paradigm, based on a descriptive phenomenological methodology, and intends to answer the research question: What is the perception of puerperal women regarding the interventions of the family nurse in the transition to motherhood? Eleven first-time mothers, selected in a non-probabilistic manner and by convenience according to previously established inclusion and exclusion criteria, participated in the study. Once the informed consent of all participants was obtained, data were collected through a sociodemographic characterization questionnaire and a semi-structured, audio-recorded interview, which was later transcribed and analysed with the webQDA program, using Bardin’s content analysis technique.

The evidence leads to the conclusion that motherhood, as a life-cycle transition, is approached by the family nurse in a fragmented way. Women perceive that this health professional has an intervening role in the processes of “Being Woman” and “Being Mother”, but as far as “Being Wife” is concerned, the intervention remains scarce. Nevertheless, it became clear that family nurses assume a primordial function in this transition, essentially related to the health-promoter role attributed to them by first-time mothers.

In conclusion, the traineeship report highlighted the importance of relational skills in the delivery of care and the need to provide care holistically, integrating the family as a partner in perinatal follow-up.

Women's Health, Maternal Health, Motherhood, Transition, Nursing.

P52 Self-perception of health status and physical condition of elderly practitioners of hydrogymnastics

Carlos Farinha 1, João Serrano 2, José P Ferreira 1, João Petrica 2, Rui Paulo 3, Pedro Duarte-Mendes 2, Marco Batista 2; 1 Faculty of Sport Sciences and Physical Education, University of Coimbra, 3040-256 Coimbra, Portugal; 2 Higher School of Education, Polytechnic Institute of Castelo Branco, 6000-266 Castelo Branco, Portugal; 3 Sport, Health and Exercise Research Unit, Polytechnic Institute of Castelo Branco, 6000-266 Castelo Branco, Portugal. Correspondence: Carlos Farinha ([email protected]).

Research interest in the self-perception of health status and physical condition of the elderly is decisive in an increasingly ageing society, in which their quality of life should be improved.

To study the self-perception of health status and the impact of a 4-month supervised physical activity program in an elderly population of hydrogymnastics practitioners.

The instruments used were the questionnaire “MOS Short Form Health Survey – 36 items, version 2 (SF-36)”, translated and validated for Portuguese by Ferreira (2000), and the Functional Fitness Test battery. The questionnaires were completed by the elderly in the presence of the investigator, and physical fitness was evaluated following the test protocol. The sample consisted of 83 elderly individuals over 55 years of age. In statistical terms, we used percentage analysis for the questionnaire and, for the evaluation of physical condition, after applying the Kolmogorov-Smirnov test, the non-parametric Wilcoxon test for two paired variables.
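
As a hedged illustration of this pre/post comparison — the abstract does not state the software used, and the data below are invented placeholders — the normality check and paired test could be run as follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical scores on one functional fitness test, before and after the program
pre = rng.normal(12.0, 3.0, size=83)
post = pre + rng.normal(1.0, 1.5, size=83)  # paired measurements on the same people

# Kolmogorov-Smirnov test against a normal distribution fitted to the data
ks_stat, ks_p = stats.kstest(pre, "norm", args=(pre.mean(), pre.std(ddof=1)))

# Non-parametric comparison of the two paired evaluations: Wilcoxon signed-rank
w_stat, w_p = stats.wilcoxon(pre, post)
print(ks_p, w_p)
```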

Regarding the different dimensions studied (physical function, physical performance, pain, general health, vitality, social function and emotional performance), we found that the majority of the elderly present limitations mainly in the most demanding tasks (lifting weights, running, walking fast, more intense housework, among others), and that their state of health interfered with the performance of some daily activities, but not with the time spent performing them. Pain had not been strong in recent times, so they considered their general health moderate and stable, interfering little with their relationships and social activities. The elderly also said that they feel calm and tranquil most of the time and therefore feel happy. As for physical fitness, there were improvements in practically all tests between the 1st and 2nd evaluations, showing a positive impact of the work developed during the 4 months.

Based on the results, we can conclude that the elderly in this study showed a positive perception of their health condition, within the normal parameters for their age. The physical activity program produced improvements in physical fitness levels between the beginning and the end of the program, an aspect that is decisive for the health and autonomy of the elderly, as the research shows.

Gerontomotricity, Physical activity and health, Physical condition, Hydrogymnastics.

P53 Impact of dual-task on older adults’ gait spatiotemporal parameters

Nádia Augusto 1, Rodrigo Martins 2, Madalena G Silva 1, Ricardo Matias 1,3; 1 Physiotherapy Department, School of Health, Polytechnic Institute of Setúbal, 2910-761 Setúbal, Portugal; 2 Red Cross School of Health, 1300-125 Lisbon, Portugal; 3 Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal. Correspondence: Nádia Augusto ([email protected]).

The ageing process induces changes in gait, particularly in its automatic features [1]. Early detection of gait changes can be key to ensuring timely clinical intervention to prevent falls [2], since most falls occur while walking in dual-task situations [3]. Several studies using 3D kinematic analysis systems have suggested that changes in spatiotemporal parameters can be measured before they become noticeable [4,5]. However, the use of such systems has been confined to laboratories [6], which makes transfer into clinical practice difficult [7].

To study the effects of the introduction of a dual task on the spatiotemporal parameters of older adults’ gait, using an ambulatory 3D kinematic analysis system.

An exploratory observational study was performed. Fifteen healthy older adults (age = 75.73 ± 6.03 years) were recruited from a day centre. All participants were instructed to walk 10 meters at a self-selected pace in a single-task condition and in two dual-task conditions (a verbal fluency task and an arithmetic task). Seven spatiotemporal gait parameters were assessed with the Xsens MVN ambulatory system, based on 17 inertial sensors: stride velocity, stride length, stride width, cadence, stance time, swing time and double support time.

The results of the Friedman test revealed statistically significant differences between the temporal parameters of single-task gait (stride velocity: χ² = 11.200, p = 0.004; cadence: χ² = 24.102, p < 0.001; stance time: χ² = 20.133, p < 0.001; swing time: χ² = 17.733, p < 0.001; and double support time: χ² = 19.733, p < 0.001) and each of the two dual-task conditions. No statistically significant differences were observed between the verbal fluency and the arithmetic conditions.
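
A minimal sketch of this repeated-measures comparison, assuming hypothetical cadence data for the three conditions; the Bonferroni-corrected pairwise follow-up shown here is a common companion to the Friedman test, not a procedure reported in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical cadence values (steps/min) for 15 participants, three conditions
single = rng.normal(105.0, 8.0, size=15)
verbal = single - rng.normal(6.0, 2.0, size=15)      # verbal fluency dual task
arithmetic = single - rng.normal(7.0, 2.0, size=15)  # arithmetic dual task

# Omnibus repeated-measures comparison across the three conditions
chi2, p = stats.friedmanchisquare(single, verbal, arithmetic)
print(chi2, p)

# Pairwise follow-up (Wilcoxon signed-rank) with a Bonferroni correction
pairs = [("single vs verbal", single, verbal),
         ("single vs arithmetic", single, arithmetic),
         ("verbal vs arithmetic", verbal, arithmetic)]
for name, a, b in pairs:
    _, p_pair = stats.wilcoxon(a, b)
    print(name, min(p_pair * len(pairs), 1.0))  # Bonferroni-corrected p-value
```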

Our results suggest that the spatiotemporal parameters of gait change significantly under both cognitive dual-task conditions, and that these changes are detectable with the ambulatory 3D kinematic analysis system used. These findings strongly support the use of body-worn sensors for the early detection of changes in gait patterns, promoting timely interventions to prevent falls.

1. Bridenbaugh SA, Kressig RW. Motor cognitive dual tasking: early detection of gait impairment, fall risk and cognitive decline. Z Gerontol Geriatr. 2015;48(1):15–21.

2. Bridenbaugh SA, Kressig RW. Quantitative gait disturbances in older adults with cognitive impairments. Curr Pharm Des. 2014;20(19):3165–3172.

3. Beauchet O, Annweiler C, Dubost V, Allali G, Kressig RW, Bridenbaugh S, Berut G, Assal F, Herrmann FR. Stops walking while talking: a predictor of falls in older adults?. European Journal of Neurology. 2009;16(7):786-795.

4. Hollman JH, Kovash FM, Kubik JJ, Linbo RA. Age-related differences in spatiotemporal markers of gait stability during dual task walking. Gait Posture. 2007;26(1):113-119.

5. Bridenbaugh SA, Kressig RW. Laboratory Review: The Role of Gait Analysis in Seniors’ Mobility and Falls Prevention. Gerontology. 2011;57(3):256–264.

6. Aminian K, Najafi B, Büla C, Leyvraz PF, Robert P. Spatio-temporal parameters of gait measured by an ambulatory system using miniature gyroscopes. J Biomech. 2002;35(5):689–699.

7. Najafi B, Helbostad JL, Moe-Nilssen R, Zijlstra W, Aminian K. Does walking strategy in older people change as function of walking distance?. Gait Posture. 2009;29(2):261-266.

Dual task, Spatiotemporal parameters, Gait, Older adults.

P54 Portuguese workers: perception of wellbeing at work in an industrial company

Marina Cordeiro 1,2, José C Gomes 1,2, Paulo Granjo 3; 1 School of Health Science, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Instituto de Ciências Sociais, Universidade de Lisboa, Lisboa, Portugal. Correspondence: Marina Cordeiro ([email protected]).

People spend a long time working, but work can have a negative impact on wellbeing, especially in situations of poor working conditions and dissatisfaction [1,2]. This may lead to negative consequences for the worker, the company and society [2,3]; it is therefore important to understand Portuguese workers’ perception of wellbeing at work and the variables that may influence it.

To determine workers’ perception of wellbeing at work and its relationship with sociodemographic and occupational characteristics.

A cross-sectional, descriptive and correlational study was performed in a Portuguese industrial company. The non-probabilistic convenience sample was composed of 134 workers. Data were collected through a self-administered questionnaire including sociodemographic and occupational questions and the Perception of Wellbeing at Work Questionnaire (PWW). The study was approved by an ethics committee.

The sample mean age was 38.58 years (SD = 9.52; min = 21; max = 62); it was mainly composed of female workers (77.6%), 44% had completed high school, 61.2% were married/living under common law, and 31.3% had 2 children. Workers had been in the company for about 107.15 months (SD = 51.10), 88.1% had a Permanent Employment Contract (PEC), 87.3% worked ≤ 40 hours a week, 40.3% had a working schedule from 07h00 to 16h00 and 41% were production operators. PWW had a mean of 67.2 (SD = 7.8), the Adaptation and Adequacy to Work (AAW) dimension of 27.2 (SD = 3.2), Organizational Characteristics (OC) of 19.2 (SD = 3.9), Interpersonal Relationship (IR) of 11.2 (SD = 1.7), Structure of Work (SW) of 5.4 (SD = 1.3) and Bond with the Institution (BI) of 4.2 (SD = 1.6). There were no statistically significant differences (p > 0.05) between PWW and sociodemographic or occupational variables. However, statistically significant differences (p < 0.05) were found in IR according to gender (men with a higher average) and type of contractual affiliation (workers with a Term Employment Contract [TEC] had better PWW in IR than those with a PEC), and in SW according to workload (workers working ≤ 40 hours had higher PWW in SW). IR was also negatively correlated with age and with working time in the same function.

The sample presents moderate PWW, which assumes higher values in AAW and lower in BI. Older workers and those who have been working longer at the same function have a lower PWW in IR. Male workers and those with a TEC had a better PWW in the IR, and those who worked ≤ 40 hours per week had higher PWW in SW. These results may be useful for designing interventions to promote wellness at work.

1. International Labour Office. Psychosocial risks, stress and violence in the world of work. International Journal of Labour Research. 2016;8:1-131.

2. Harvey SB, Modini M, Joyce S, Milligan-Saville JS, Tan L, Mykletun A, Bryant RA, Christensen H, Mitchell PB. Can work make you mentally ill? A systematic meta-review of work-related risk factors for common mental health problems. Occupational & Environmental Medicine. 2017; 74(4): 301-310.

3. Cottini E, Lucifora C. Mental Health and Working Conditions in Europe. ILRReview. 2013; 66(4): 958-988.

Workplace, Mental health, Occupational health, Workers wellbeing.

P55 Informal caregivers of oldest old people

Sara Alves 1,2, Constança Paúl 1,2, Oscar Ribeiro 2,3; 1 Unidade de Investigação e Formação sobre Adultos e Idosos, Centro de Investigação em Tecnologias e Serviços de Saúde, Instituto de Ciências Biomédicas Abel Salazar, 4050-313 Porto, Portugal; 2 Centro de Investigação em Tecnologias e Serviços de Saúde, Universidade do Porto, 4200-450 Porto, Portugal; 3 Departamento de Educação e Psicologia, Universidade de Aveiro, 3810-193 Aveiro, Portugal. Correspondence: Sara Alves ([email protected]).

Living longer may lead to a long period of disability and frailty with increasing care demands. Informal caregivers, namely family members, play a very important role in the provision of care to very old individuals. Informal care represents crucial support, but the available knowledge on this topic is still scarce in the Portuguese context.

To provide an overview of caregiving characteristics in a sample of dyads of informal caregivers and oldest old people (80+) from Porto.

A sample of 72 caregiving dyads was considered. Sociodemographic data, information on the caregiving experience (e.g. length of care, relationship with the person cared for) and the disability level of the care receiver (Activities of Daily Living – ADLs – and IADLs, and comorbidities) were obtained.

Informal caregivers had a mean age of 63.9 years (SD = 9.95) and were mostly women (76.4%), with children (70.8%), married (63.9%) and retired (50.0%). Time spent on caregiving was on average 9.73 hours/day (SD = 7.61) and the length of care was around 7 years (SD = 6.15). Most of the sample had formal social support to help in care provision: 54.2% received support from home care services and 34.7% from day centres. Care receivers had a mean age of 92.0 years (SD = 5.28) and were mostly women (73.6%) and widowed (65.3%). Half of the care receivers (n = 36; 50.0%) were completely dependent in ADLs, 30.6% moderately dependent, 16.7% severely dependent, and 2.8% independent. Most of the sample was severely dependent in IADLs (95.8%). The mean number of comorbidities was 6.94 (SD = 1.95), and the mean number of medicines taken was 6.17 (SD = 3.41).

This is an ongoing project, but the current data already show that the amount of time spent on caring activities by women who are themselves of advanced age is very high, and probably exhausting given the high level of functional dependency of the care recipients. A clear picture of the tasks and probable exhaustion of those caring for very old people is crucial in order to plan which formal care to deliver to these dyads and how to deliver it. Such planning is important for preventing/alleviating burden and raising the quality of life of care recipients and their families.

The project was approved by the Ethics Committee of ICBAS.UP (approval number 188/2017) and by the Portuguese Data Protection Authority (CNPD, approval number 1338/2017).

Informal caregiving, Oldest old people, Disability.

P56 Comparison of antioxidant activity for Ginkgo biloba L. and Rosmarinus officinalis L.

Diana Silva 1, Ana França 2, Cláudia Pinho 3, Ana I Oliveira 3, Rita F Oliveira 3,4, Agostinho Cruz 3; 1 Farmácia Holon, Baguim do Monte, Gondomar, Portugal; 2 Farmácia Higiénica, Fão, Esposende, Portugal; 3 Centro de Investigação em Saúde e Ambiente, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 4 Secção Autónoma de Ciências da Saúde, Universidade de Aveiro, 3810-193 Aveiro, Portugal.

Natural antioxidant products have gained popularity worldwide and are increasingly being used to treat various diseases [1]. Leaves of Rosmarinus officinalis L. and Ginkgo biloba L. possess a variety of bioactivities, including antioxidant activity [2].

The present study therefore aims to evaluate the in vitro antioxidant properties of aqueous and hydroalcoholic extracts of three different commercial brands of R. officinalis and G. biloba.

R. officinalis and G. biloba leaves from three different commercial brands were extracted with two solvents (water and 80% ethanol), and the antioxidant activity of the extracts was screened using superoxide anion and 1,1-diphenyl-2-picrylhydrazyl (DPPH•) radical scavenging assays and metal ion chelating capacity.

A comparison of both plant extracts in the DPPH assay and in the Fe²⁺ chelating activity indicated that R. officinalis showed lower IC50 values than G. biloba, ranging from 44.1–61.8 μg/mL (aqueous extracts) and 20.8–23.3 μg/mL (hydroalcoholic extracts) in the DPPH assay, and 93.1–329.0 μg/mL (aqueous extracts) and 33.4–71.0 μg/mL (hydroalcoholic extracts) in the Fe²⁺ chelating activity. For the superoxide radical scavenging activity, the hydroalcoholic extracts of R. officinalis showed the best IC50 values, ranging from 5.1–15.5 μg/mL, with one brand showing an IC50 value lower (5.1 μg/mL) than the positive control (5.8 μg/mL, ascorbic acid). The results also showed that, in both plants and all brands, the highest antioxidant activity was found mainly in the hydroalcoholic extracts, for all the assays tested.
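
For context, the IC50 is the extract concentration producing 50% inhibition, usually estimated by fitting a dose-response curve. The sketch below shows one common way to do this, a four-parameter logistic fit; the concentrations and inhibition values are hypothetical, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Percent inhibition as a function of concentration (4PL dose-response model)."""
    return bottom + (top - bottom) / (1 + (ic50 / conc) ** hill)

# Hypothetical DPPH assay readings: extract concentration (ug/mL) vs % inhibition
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
inhibition = np.array([12.0, 24.0, 45.0, 62.0, 78.0, 88.0])

# Fit the curve and read off the IC50 parameter
params, _ = curve_fit(four_param_logistic, conc, inhibition,
                      p0=[0.0, 100.0, 30.0, 1.0], maxfev=10000)
print(f"Estimated IC50: {params[2]:.1f} ug/mL")
```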

The findings of this study support the view that some medicinal plants are promising sources of potential antioxidants. The different brands and solvent types used in the present study may influence the chemical composition of the rosemary and ginkgo extracts obtained and therefore their antioxidant capacity.

1. Zhang A, Sun H, Wang X. Recent advances in natural products from plants for treatment of liver diseases. Eur J Med Chem. 2013;63:570-577.

2. El-Beltagi HS, Badawi MH. Comparison of Antioxidant and Antimicrobial Properties for Ginkgo biloba and Rosemary (Rosmarinus officinalis L.) from Egypt. Not Bot Horti Agrobo. 2013;41(1):126-135.

Antioxidant activity, Ginkgo biloba, Rosmarinus officinalis, Solvent extraction.

P57 The skills of the wound navigator in the health care team

Raquel Silva, Filipa Veludo. Correspondence: Raquel Silva ([email protected]).

Ageing and its co-morbidities are the main reasons for skin changes, requiring qualified professionals to assist the person with this problem [1,2]. In this context, a still tenuous concept emerges in the literature, the wound navigator, often described as the tissue viability nurse, which may enhance the approach to the person with wounds.

To define the wound navigator and identify their skills.

Integrative literature review using electronic searching (CINAHL®, Nursing & Allied Health Collection, Cochrane Plus Collection, MedicLatina, MEDLINE®) and manual searching in 12 specialty associations in tissue viability, with the following descriptors: (wound OR tissue viability OR ulcer) AND (nurs*) AND (care OR role OR skills OR patient care team OR navigator OR manager OR multidisciplinary OR interdisciplinary OR tissue viability service OR interven* OR pratic*). Inclusion criteria were articles in Portuguese, English, Spanish or French, without temporal limitation, full text and free access. The exclusion criterion was articles that did not address the study phenomenon. The search was conducted on 08/25/2017, retrieving 601 articles from the databases and 145 from the associations. Titles and abstracts were read, followed by full-text reading of the selected publications. The final sample comprised 19 articles (15 from databases and 4 from associations).

Only one article defines the wound navigator: a health professional with specialty knowledge who acts as an advocate of the clients’ interests, combining the needs felt by them, the objectives of the treatment and the health care treatment plan through referral [3]. This professional collects the results achieved in practice and disseminates research in order to highlight nurses’ actions before health care policy [3]. The competences found in the remaining 18 articles were divided into 4 categories: quality (training, auditing, research and elaboration of norms and protocols), management (involvement in product choice, articulation with suppliers, promotion of change and ability to work in multi- and interdisciplinary teams), care (postgraduate knowledge, experience in the area of tissue viability, prescription of specialized care and treatments with advanced therapies) and leadership (communication, supervision and consulting).

There is little literature that precisely defines the wound navigator and their skills, so more research is needed to describe them in detail. Once the term is defined and its competences are known, teams of nurses specialized in the area, holding the general and specific attributes identified, may be formally developed.

1. Bianchi J. Preventing, assessing and managing skin tears. Nurs Times. 2012;108(13):12, 14, 16.

2. Dutton M, Chiarella M, Curtis K. The role of the wound care nurse: an integrative review. Community Wound Care. 2014;Suppl:S39–40, S42–7.

3. Moore Z, Butcher G, Corbet LP, et al. AAWC, AWMA, EWMA Position Paper: Managing Wounds as a Team. J Wound Care. 2014;23(5):S1–S38.

Tissue viability nurse, Skills, Role.

P58 Nursing interventions in the prevention and management of aggressive behaviors in psychiatric context

Aida Bessa 1, Isabel Marques 2, Amorim Rosa 2; 1 Centro Hospitalar e Universitário de Coimbra, 3000-075 Coimbra, Portugal; 2 Escola Superior de Enfermagem de Coimbra, 3046-051 Coimbra, Portugal. Correspondence: Aida Bessa ([email protected]).

Aggressive behaviour emerges in social interaction and is therefore considered a process arising from the relationship between the person and the person’s physical, social and cultural environment over time. In these situations, where the person is experiencing transition processes that generate misaligned human responses, inducing processes of mental suffering such as the manifestation of aggressive behaviours, the need to provide specialized nursing care emerges.

The aim of this study was to analyse the most relevant data from studies about nursing interventions in the prevention and management of aggressive behaviours in the psychiatric context, according to the risk profile of the patients.

An integrative review of the literature was carried out in libraries of national and international organizations, using the B-On search engine and the PubMed database, based on the question: “What nursing interventions are used in the prevention and management of aggressive behaviours in the psychiatric context, according to the risk profile of the patients?” The following keywords were used: Nursing Interventions, Violence, Aggression, and Risk Psychiat*. A total of 343 articles were considered, published between 2004 and 2016, in Portuguese, Spanish, French or English and with full text. After applying the inclusion and exclusion criteria, we obtained 10 articles, which were analysed according to the previously defined protocol.

Five articles addressed the prevention and management measures implemented to deal with incidents of aggression, while all the others emphasized the implementation of interventions to prevent aggressive behaviour. The most valued nursing interventions are non-restrictive (communication and de-escalation techniques). Observation emerges as the first intervention, including risk assessment. Environmental and chemical containment appear among the containment measures.

It was concluded that risk assessment of aggressive behaviours, followed upon stratification by the implementation of preventive/management measures adapted to the risk level, can be easily incorporated into service routines. It therefore contributes both to reducing the incidence and severity of these behaviours and to improving the management/reduction of coercive measures.

Aggressive behaviors, Psychiatry Context, Prevention, Nursing Interventions.

P59 An advanced nursing practice model proposal to improve the outcomes of heart failure patients under mechanical circulatory support

Teresa Pessoa 1,2, Maria T Leal 2; 1 Departamento de Cardiologia, Hospital de Santa Maria, Centro Hospitalar de Lisboa Norte, 1649-035 Lisboa, Portugal; 2 Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal. Correspondence: Teresa Pessoa ([email protected]).

The use of left ventricular assist devices has grown rapidly in recent years for patients with end-stage heart failure (HF) [1]. Patient-centred care for HF patients under Mechanical Circulatory Support (MCS) assumes that nurses have the professional competence, knowledge and ability to make decisions and prioritize care [2]. The ability to make decisions and to prioritize multiple nursing interventions depends on clinical judgement and on professional and personal experience [3].

To identify evidence-based nursing interventions that improve the outcomes of patients with HF under MCS and organize them in the form of an Advanced Nursing Practice Model.

Integrative literature review based on a systematic search of original articles and literature reviews published between January 1st, 2010 and August 31st, 2016, in the MEDLINE, CINAHL and Cochrane databases, complemented by a manual search on Google and ResearchGate. Studies related to adult HF patients with formal indication for, or under, MCS, in which the first author was a nurse, were included. Studies related to patients under MCS devices such as intra-aortic balloon counterpulsation or venovenous extracorporeal membrane oxygenation were excluded.

From the 41 articles included, 43 categories emerged through content analysis. These were grouped into four areas of care in which nurses should intervene. The 15 nursing interventions included in the Model are not mutually exclusive, so the same intervention can be applied to more than one area of care. There are no validated protocols for each intervention. The model was constructed based on the review results and on the Strong Model of Advanced Nursing Practice [4], Clinical Judgement [5] and Patient-Centred Care [2] frameworks.

Patient-centred care for HF patients under MCS is complex and requires teamwork and relational skills. It depends on the best available evidence-based scientific knowledge, on stakeholders’ life experience and on the specificity of context and environment. Patient-related outcomes can be improved through the application of the proposed model. To make it operational, there is a need to standardize practices and to develop protocols, guidelines and training programs that improve advanced nursing practice.

1. Creaser JW, Rourke D, Vandenbogaart E, Chaker T, Nsair A, Cheng R, et al. Outcomes of biventricular mechanical support patients discharged to home to await heart transplantation. J Cardiovasc Nurs. 2015;30(4):E13–20.

2. McCormack B, McCance TV. Development of a framework for person-centred nursing. J Adv Nurs. 2006;56(5):472–9.

3. Benner P, Tanner CA, Chesla CA. Expertise in nursing practice: caring, clinical judgment and ethics. 2nd ed. New York: Springer Publishing Company; 2009.

4. Ackerman MH, Norsen L, Martin B, Wiedrich J, Kitzman HJ. Development of a model of advanced practice. Am J Crit Care. 1996;5(1):68–73.

5. Tanner CA. Thinking like a nurse: a research-based model of clinical judgment in nursing. J Nurs Educ. 2006;45(6):204–11.

Heart failure, Mechanical circulatory support, Nursing interventions, Patient-related outcomes, Advanced nursing practice.

P60 Acute effects of aerobic exercise on motor memory consolidation in older people

André Ramalho 1, Pedro Duarte-Mendes 1, Rui Paulo 1, João Serrano 1,2, António Rosado 3, João Petrica 1,2; 1 Department of Sports and Well-being, Polytechnic Institute of Castelo Branco, 6000-266 Castelo Branco, Portugal; 2 Centre for the Study of Education, Technologies and Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal; 3 Faculty of Human Kinetics, University of Lisbon, 1499-002 Lisbon, Portugal. Correspondence: André Ramalho ([email protected]).

Scientific evidence suggests that an aerobic exercise session promotes improvements in the consolidation of motor memory in adults.

In this sense, the main purpose of this study was to investigate whether an aerobic training session could improve motor memory consolidation in older people.

The participants were 33 subjects of both genders (M = 68 years; SD = 4.2 years), divided into two groups: a control group and an experimental group. Participants performed a Soda Pop test before the aerobic training session (baseline). The training session lasted 45 minutes and consisted of running exercises. After the training session, motor memory consolidation was assessed at three different stages: training; 1 hour after training; and 24 hours after training. The Shapiro-Wilk test was applied to check the normality of the data distribution, and the one-way ANOVA test was used for parametric statistics.
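
A minimal sketch of this group comparison, assuming hypothetical Soda Pop completion times; with two groups, the one-way ANOVA reported above reduces to comparing the experimental and control arrays.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical Soda Pop test completion times (s), 24 hours after the session
control = rng.normal(22.0, 3.0, size=16)
experimental = rng.normal(21.0, 3.0, size=17)

# Shapiro-Wilk normality check per group, then one-way ANOVA between groups
_, p_control = stats.shapiro(control)
_, p_experimental = stats.shapiro(experimental)
f_stat, p_anova = stats.f_oneway(control, experimental)
print(p_control, p_experimental, f_stat, p_anova)
```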

The results indicated that, although the experimental group presented better performance in motor memory consolidation 1 hour and 24 hours after training, the differences were not significant (p ≥ 0.05).

Thus, it seems that an aerobic training session does not significantly improve motor memory consolidation in older people.

NCT03506490

Aerobic exercise, Motor memory consolidation, Learning, Aging.

P61 Fundamentals of care in the critically ill person in ICU: an integrative literature review

Maria J Pires 1, Helga R Henriques 2, Maria C Durão 3; 1 Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal; 2 Departamento de Fundamentos de Enfermagem, Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal; 3 Departamento de Enfermagem Médico-Cirúrgica, Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal. Correspondence: Maria J Pires ([email protected]).

In the critically ill person, the priority of nursing interventions is, in most cases, directed exclusively to the management of the clinical situation, without valuing the fundamentals of care [1,2]. These universal activities are essential to the maintenance of life; they are present in each person, regardless of health condition, whenever there is the necessary strength, will or knowledge [3,4]. In acute or chronic illness, in injury or even in the healthy person, the fundamentals of care may be disturbed, which motivates the nurses’ intervention [3,5].

To know the evidence related to the fundamentals of care in the recovery of the critically ill person.

Integrative literature review, using a PICO-model clinical question and carried out through a search of articles in MEDLINE, CINAHL and grey literature databases.

We identified 1,268 results, of which 11 documents were selected for the final sample. The analysis of the documents showed an invisibility of the fundamentals of care in the ICU, caused by the predominance of the biomedical model and of technological care. Some studies, however, highlight the importance of nurses’ intervention in patient recovery in areas of fundamental care such as sleep [6], breathing, skin, nutrition, communication, patient and family education, mobilization, positioning and hygiene [7,8].

The fundamentals of care contribute to reducing complications during ICU hospitalization, such as malnutrition, the development of pressure ulcers or infection. In this context of care, the fundamentals are neglected in favour of technological care, which is why this issue needs to be investigated.

1. Feo R, Kitson A. Promoting patient-centred fundamental care in acute healthcare systems. Int J Nurs Stud. 2016;57:1–11.

2. Henneman EA. Patient Safety and Technology. AACN Advanced Critical Care, 2009;20(2):128–132.

3. Henderson V. Princípios Básicos dos Cuidados de Enfermagem do CIE. Loures: Lusodidacta; 2007.

4. Kitson A, Conroy T, Wengstrom Y, Profetto-McGrath J, Robertson-Malt S. Defining the fundamentals of care. Int J Nurs Pract. 2010;16(4):423–434.

5. Clares JWB, Freitas MC, Galiza FT, Almeida PC. Necessidades relacionadas ao sono/repouso de idosos: estudo fundamentado em Henderson. Acta Paul Enferm. 2012;25(Número Especial 1):54–59.

6. Eliassen K, Hopstock L. Sleep promotion in the intensive care unit-A survey of nurses’ interventions. Intensive and Critical Care Nursing. 2011;27(3):138–142.

7. Shahin E, Dassen T, Halfens R. Pressure ulcer prevention in intensive care patients: Guidelines and practice. J Eval Clin Pract. 2009;15(2):370–374.

8. Curtis K, Wiseman T. Back to basics-Essential nursing care in the ED, Part 2. Australasian Emergency Nursing Journal. 2008;11(2):95–99.

Critical patient, Fundamentals of care, Recovery, Safety.

P62 Adaptation and validation for the Portuguese population of the Quality of the Carer-Patient Relationship (QCPR) scale: preliminary results

Rosa Silva 1, Paulo Costa 2, Isabel Gil 3, Hugo Neves 4,5, Nele Spruytte, João Apóstolo 2,5; 1 Universidade Católica Portuguesa, Institute of Health Sciences, 4200-374 Porto, Portugal; 2 The Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 3 Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 4 School of Health Science, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 5 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 6 Centre for Evidence Based Practice: a Joanna Briggs Institute Centre of Excellence, 3000-232 Coimbra, Portugal. Correspondence: Rosa Silva ([email protected]).

The number of elderly people with dementia is increasing rapidly. Informal (family) carers are urged to perform more activities in order to meet the elderly person’s health needs. Assessing the quality of the relationship within the dyad (a family carer and a patient with dementia) is necessary to ensure that care plans are well adjusted. The Quality of the Carer-Patient Relationship (QCPR) [1] emerges as a specific scale for assessing the quality of the bidirectional relationship of the dyad.

To adapt the QCPR scale to Portuguese and determine its psychometric properties.

Phase I: transcultural adaptation according to the international recommendations of the American Association of Orthopedic Surgeons [2], composed of four steps. Phase II: a pilot study was conducted to assess the preliminary psychometric properties in the new cultural context, with a convenience sample composed of patients with dementia (n = 30) and their caregivers (n = 30). The factorial structure of the QCPR in this sample was evaluated through confirmatory factor analysis (CFA) with the AMOS software (v.24, SPSS Inc, Chicago, IL).

Phase I: the initial translations proved to be close to the original content (first step). Less consensual terminology was critically analysed and synthesized by the translators and members of the research team (second step), resulting in the first version. The versions obtained in the back-translation process (third step) presented good semantic and idiomatic equivalence. After expert consensus (fourth step), the scale was revised in its entirety and punctually reformulated. This resulted in a second translated version, which maintained the original structure (an individualized questionnaire for each element of the dyad, with 14 items rated on a 5-point Likert-type scale).

Phase II: the CFA showed no violation of normality in any variable, with factorial weights below 0.5 in items 10 and 11 of the caregiver’s version and in items 13 and 14 of the patient’s version. Regarding the goodness of fit of the model, both the caregiver’s version (χ²/df = 1.304; CFI = 0.922; GFI = 0.731; RMSEA = 0.097, P[RMSEA ≤ 0.05] > 0.05; MECVI = 6.739) and the patient’s version (χ²/df = 1.152; CFI = 0.955; GFI = 0.760; RMSEA = 0.069, P[RMSEA ≤ 0.05] > 0.05; MECVI = 6.475) present a model with questionable to good adjustment.

A strong and positive relation between the two latent variables was found, contrary to what was expected. The fact that only two observations evidenced conflict may indicate the need to strengthen the number and diversity of the participants sampled, in order to clarify these preliminary results.

This study is part of the project “Cognitive stimulation in the elderly: intervention in cognitive fragility and promotion of self-care”, funded by the Nursing School of Coimbra.

1. Spruytte N, Audenhove C, Lammertyn F, Storms G. The quality of the caregiving relationship in informal care for older adults with dementia and chronic psychiatric patients. Psychology and Psychotherapy: Theory, Research and Practice. 2002;75(3):295-311.

2. Beaton D, Bombardier C, Guillemin F, Ferraz M. Guidelines for the Process of Cross-Cultural Adaptation of Self-Report Measures. Spine. 2000;25(24):3186-3191.

Patient-carer relationship, Quality of the relationship, Dementia.

P63 Association between Mediterranean diet and mood in young volunteers

Rafael Bravo 1, Nuria Perera 1, Lierni Ugartemendia 1, Javier Cubero 2, Ana B Rodríguez 1, Maria A Gómez-Zubeldia 1; 1 Department of Physiology, Faculty of Science, University of Extremadura, 06071 Badajoz, Spain; 2 Health Education Laboratory, Experimental Science Education Area, University of Extremadura, 06071 Badajoz, Spain; 3 Department of Physiology, Faculty of Medicine, University of Extremadura, 06071 Badajoz, Spain. Correspondence: Rafael Bravo ([email protected]).

The Mediterranean diet (MD) is characterized by a high intake of vegetables, a moderate intake of fish and poultry and a low intake of meat and alcohol. The MD is considered a healthy dietary pattern in terms of morbidity and mortality. These benefits have been associated with the ingested levels of olive oil, fibre, non-simple carbohydrates and proteins obtained from vegetables.

In recent years, it has been reported that dietary patterns may influence several psychological aspects such as mood, anxiety or depression. Therefore, our aim was to elucidate whether the MD may have a positive effect on the mood of young volunteers.

Fifty-two male volunteers and 80 female volunteers participated in this study. Participants completed the following scales: the Mediterranean Diet Adherence Questionnaire (MDAQ), Beck’s Anxiety Inventory, Beck’s Depression Inventory, the Ruminative Response Scale and the Well-Being Index (WHO-5). After data collection, every psychological variable was correlated with the score obtained in the MDAQ.
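
A minimal sketch of this correlation step, with invented scores in place of the real questionnaire data; Spearman's rank correlation is assumed here as a reasonable choice for ordinal questionnaire scores, since the abstract does not state which coefficient was used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical scores: MD adherence (MDAQ) and two psychological scales
mdaq = rng.integers(0, 15, size=132).astype(float)  # 52 men + 80 women
anxiety = 20 - 0.5 * mdaq + rng.normal(0, 3, size=132)
wellbeing = 10 + 0.4 * mdaq + rng.normal(0, 3, size=132)

# Correlate each psychological variable with the MDAQ score
for name, var in [("anxiety", anxiety), ("well-being", wellbeing)]:
    rho, p = stats.spearmanr(mdaq, var)
    print(name, round(rho, 2), round(p, 4))
```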

Regarding male participants, our results showed a negative correlation between the MDAQ and personal dysfunction (p < 0.05). Regarding women, anxiety showed a negative correlation (p < 0.05) and well-being a positive correlation (p < 0.05) with the MDAQ.

In our sample, we observed that MD may have positive effects on the mood of both young men and women.

The authors are grateful to Junta de Extremadura (Fondos FEDER – GR 15051).

Nutrition, Mediterranean diet, Mood.

P64 Health education, interprofessional collaboration and infection control in a house of support in southern Brazil

Andressa T Hoffmann, Adriele Timmen, Alzira MB Lewgoy, Nadia M Kuplich, Ester D Schwarz; Hospital de Clínicas de Porto Alegre, 90035-903 Porto Alegre, Rio Grande do Sul, Brasil. Correspondence: Andressa T Hoffmann ([email protected]).

Interprofessional collaboration can be understood as the interaction of professionals from different fields of knowledge aiming at comprehensive health care. In this sense, health education constitutes a means for health promotion, with a transforming role on individual practices [1,2]. The emergence of multidrug-resistant (MDR) bacteria is a public health problem, and measures to prevent their spread are essential, together with awareness-raising actions on infectious diseases and basic measures of personal and environmental hygiene.

To describe the interprofessional collaboration carried out by a group of professionals from the social work, nursing and pharmacy areas of the hospital infection control commission (HICC) with the staff and the children and relatives housed in a support house of a university hospital in Brazil.

Experiences reported from July 2016 to December 2017. After identifying the need for theoretical deepening in infection prevention and control, especially in the management of children and adolescents with MDR bacteria and their relationships in the daily routine of the home, joint planning of actions was started between the infection control team and the support house staff, focusing on health education. Initially, monthly meetings were held among the professionals for reflection, problematization of learning and participatory observation, making the situational diagnosis of the house feasible. In these meetings, subjects such as hand hygiene, bacterial transmission, management of individuals with MDR bacteria, matrix-based strategies and the concepts of hospital and support house were discussed, enabling a better understanding of the differences between care inside and outside the hospital. After the development and consolidation of these concepts, the house staff were encouraged to produce health education actions for their public.

A theatrical play was performed by both teams in June 2017, in which the professionals’ consolidation of the content covered during the period of theoretical deepening was evident. After this action, monthly workshops with the hosted public were started, carried out with the participation of both teams, in which issues related to infection prevention and control were discussed.

Health education and interprofessional collaboration enabled the development of the identity of the support house, the strengthening of the team in infection control topics, and the planning and carrying out of activities with the house’s public. Such health promotion actions enabled employees and family members to become multipliers of good practices in infection prevention and control.

1. Matuda CG, Pinto NRS, Martins CL, Frazão P. Colaboração interprofissional na Estratégia Saúde da Família: implicações para a produção do cuidado e a gestão do trabalho. Ciência & Saúde Coletiva. 2015;20(8):2511-2521.

2. Candeias NMF. Conceitos de educação e de promoção em saúde: mudanças individuais e mudanças organizacionais. Rev. Saúde Pública. 1997; 31(2):209-213.

Health education, Infection control, Interprofessional collaboration.

P65 Promotion of patient safety in nursing practice: what strategies?

Tânia Ferreira 1, Rita Marques 2; 1 Universidade Católica Portuguesa, 1649-023 Lisboa, Portugal; 2 Escola Superior de Saúde da Cruz Vermelha Portuguesa, 1300-906 Lisboa, Portugal. Correspondence: Tânia Ferreira ([email protected]).

Patient safety is a priority for the World Health Organization (WHO), which considers it fundamental to develop a safety culture among health professionals in general, and nurses in particular, ensuring the safety and quality of health care.

The purpose of this literature review (LR) was to identify nursing strategies for promoting patient safety.

Using the methodology recommended by the Cochrane Centre, this LR was guided by the research question: “What are the strategies that promote patient safety associated with nursing practice?” A search was carried out in scientific databases, EBSCOhost (CINAHL®, MEDLINE®) and SciELO, for publications dated between January 2012 and December 2017, with the descriptors “patient safety”, “safety culture” and “nursing care”, which yielded 47 articles. After reading titles and abstracts, 8 articles were selected to answer the research question.

The identification of risks to patients during nursing care and the incorporation of good practices and of trust in teamwork contribute to the improvement and development of a safety culture in health care [1,2]. Professional satisfaction, communication between professionals and the institution, and the administrations’ support to the team are strategic factors for ensuring patient safety [3,4]. Nurses’ perceptions of the patient safety culture and the intention to report adverse events are significant in promoting patient safety [5-7]. Coaching behaviour showed a significant and positive correlation with safety culture, and coaching behaviours of team leaders were associated with higher degrees of perceived safety culture and stronger intentions to report adverse events [2,8]. Openness of communication and non-punitive responses to mistakes, as well as teamwork, influence the patient safety culture [5,7].

Through this LR it was possible to perceive that the safety culture is still underdeveloped in health organizations, where nurses play an essential role. Nursing practice is related to professionals’ perception of patient safety, which in turn is related to a set of strategies that minimize the risk of adverse events.

1. Oliveira R, Leitão I, Silva L, Figueiredo S, Sampaio R, Gondim M. Estratégias para promover segurança do paciente: da identificação dos riscos às práticas baseadas em evidências. Esc Anna Nery. 2014;18(1):122-129.

2. Hwang J. What are hospital nurses’ strengths and weaknesses in patient safety competence? Findings from three Korean hospitals. International Journal for Quality in Health Care. 2015; 27(3): 232-238.

3. Rigobello MCG, Carvalho REFL, Cassiani SHDB, Galon T, Capucho HC, Deus NN. Clima de segurança do paciente: percepção dos profissionais de enfermagem. Acta Paul Enferm. 2012; 25(5): 728-35.

4. Mayeng LM, Wolvaardt JE. Patient safety culture in a district hospital in South Africa: an issue of quality. Curationis. 2015;38(1).

5. Ammouri AA, Tailakh AK, Muliira JK, Geethakrishnan R, Al Kindi SN. Patient safety culture among nurses. International Nursing Review. 2015; 62: 102-110.

6. Costa TD, Salvador PTCO, Rodrigues CCFM, Alves KYA, Tourinho FSV, Santos VEP. Percepção de profissionais de enfermagem acerca de segurança do paciente em unidades de terapia intensiva. Rev Gaúcha Enferm. 2016; 37 (3).

7. Khater WA, Akhu-Zaheya LM, AL-Mahasneh SI, Khater R. Nurses’ perceptions of patient safety culture in Jordanian hospitals. International Nursing Review. 2015; 62: 82-91.

8. Ko Y, Yu S. The Relationships Among Perceived Patients’ Safety Culture, Intention to Report Errors, and Leader Coaching Behavior of Nurses in Korea: A Pilot Study. J Patient Saf. 2015; 00.

Patient safety, Safety culture, Strategies, Nursing.

P66 End-of-life person’s evaluation criteria in decision-making regarding artificial nutrition

Tânia Afonso 1, Filipa Veludo 1, Patrícia P Sousa 1, Sónia Santos 2; 1 Instituto de Ciências da Saúde, Escola de Enfermagem, Universidade Católica Portuguesa, 1649-023 Lisboa, Portugal; 2 Unidade de Cuidados Intensivos Polivalente, Hospital Prof. Doutor Fernando Fonseca, 2720-276 Amadora, Portugal. Correspondence: Tânia Afonso ([email protected]).

Artificial nutrition at the end of life is regarded as a medical intervention; however, a large percentage of patients and families consider it basic care [1]. Thinking about artificial nutrition for the person at the end of life, that is, the person with advanced, incurable and progressive disease and a survival expectancy of 3 to 6 months [2], raises a set of issues. The discussion is controversial, involving the quality of life such interventions offer and the ethical questions they raise [3]. It is relevant to regard the patient and family as a unit, which calls for the urgent intervention of nurses in decision-making support.

To identify scientific evidence on the evaluation criteria for the person at the end of life to be considered in nurses' decision-making about artificial nutrition.

Literature review (15-06-2017) following the PRISMA guidelines for reviews [4] in Academic Search Complete, Complementary Index, CINAHL Plus with Full Text®, Psychology and Behavioural Sciences Collection, SciELO, MEDLINE®, Directory of Open Access Journals, Supplemental Index, ScienceDirect, Education Source, Business Source Complete and MedicLatina. Inclusion/exclusion criteria: nurses who care for adult/elderly persons at the end of life, excluding nurses who care for children; articles about nurses' intervention in nutrition care for the person at the end of life and the person's evaluation criteria; full text; in French/Spanish/English/Portuguese; peer-reviewed; published between 2000 and 2017. A sample of 11 articles was selected.

The evaluation criteria to be considered when making decisions on artificial nutrition are: the evaluation of symptoms/problems; the emotional value of food; the meaning of the diet for the person at the end of life; and the definition of the prognosis [3,5-6]. Every decision should consider the existence of a clinical indication for treatment, a therapeutic objective, and the informed consent of the patient or legal guardian.

It is concluded that the decision on artificial nutrition should include the person at the end of life and the family, and be taken by an interdisciplinary team, considering the definition of the prognosis and the effectiveness of the treatment applied [3]. The nurse's intervention is understood as primordial, based on the best evidence and on a relationship of proximity [5], while simultaneously considering the principles of autonomy, beneficence, non-maleficence and justice. There is little evidence on end-of-life nutrition, and new studies on the role of nurses within the interdisciplinary team are suggested.

1. Stiles E. Providing artificial nutrition and hydration in palliative care. Nursing Standard. 2013, 27: 35-42.

2. Barbosa A, et al. Manual de Cuidados Paliativos. 2.ª ed. Lisboa: Núcleo de Cuidados Paliativos, Centro de Bioética da Faculdade de Medicina da Universidade de Lisboa; 2010.

3. Alves P. Intervenção do Enfermeiro que Cuida da Pessoa em Fim de Vida com Alterações do Comer e Beber. Pensar Enfermagem. 2013, 17(1): 17-30.

4. Moher D, et al. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Ann Intern Med. 2009, 151(4): 264-269.

5. Bryon E, et al. Decision-making about artificial feeding in end-of-life care: literature review. Journal of Advanced Nursing. 2008, 63(1): 2-14.

6. Holmes S. Withholding or withdrawing nutrition at the end of life. Nursing Standard. 2010, 25(14): 43-46.

Nursing, Artificial nutrition, Therapeutic obstinacy, Integrative review.

P67 Psychometric properties of the Portuguese version of the Personal Outcomes Scale for Children and Adolescents: an initial study

Cristina Simões 1,2, Célia Ribeiro 1; 1 Economics and Social Sciences Department, Portuguese Catholic University, 3504-505 Viseu, Portugal; 2 Research Centre on Special Education, Faculdade de Motricidade Humana, 1495-687 Cruz Quebrada, Portugal.

The quality of life (QOL) assessment of children and adolescents has become particularly important in the field of intellectual disability (ID) in recent years. Special education needs a systematic approach to the assessment of the QOL domains in order to implement a social-ecological model and to promote full inclusion in all contexts of life. It is important to develop a scale that provides both self-report and report-of-others measures, to gather information based on a multiperception strategy and to encourage person-centred planning.

This research aims to analyse the validity and reliability of the Portuguese version of the Personal Outcomes Scale for Children and Adolescents (POS-C).

Data were collected from 54 children and adolescents with ID (M age = 12.48, SD = 2.93) and their respective proxies (M age = 46.59, SD = 5.68). After the cross-cultural adaptation stage, the validity (content, construct) and reliability (test-retest, Cronbach's alpha, inter-rater) properties of the POS-C were examined.
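For readers less familiar with the reliability statistics used here, the following minimal Python sketch shows how Cronbach's alpha and a test-retest Pearson correlation are typically computed; the data are simulated, and the item count and scoring range are illustrative assumptions, not the POS-C's actual structure.

```python
# Minimal sketch (not the authors' code): Cronbach's alpha and test-retest
# reliability, computed on simulated Likert-type data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
items = rng.integers(1, 4, size=(54, 12))  # hypothetical: 54 respondents x 12 items

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))

# Test-retest reliability: Pearson correlation between scores at two time points.
test = items.sum(axis=1)
retest = test + rng.integers(-2, 3, size=54)  # hypothetical retest scores
r, p = stats.pearsonr(test, retest)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```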

All items of the POS-C were considered relevant by 10 experts, who agreed on a Portuguese version of the scale. The content validity index (CVI) scores of each item (≥ .80), the scale CVI-universal agreement (≥ .84), the scale CVI-average (≥ .99) and Cohen's kappa (≥ .44) showed suitable content validity of the scale. Correlations between the self-report domains and the total score ranged from moderate (r = .42 in emotional well-being) to high (r = .82 in social inclusion). For the report-of-others, Pearson's coefficients ranged from moderate (r = .49 in emotional well-being) to high (r = .85 in interpersonal relations). Test-retest scores were high for practitioners (r = .95) and for family members (r = .90). The internal consistency of the self-report domains ranged from .41 (interpersonal relations) to .70 (self-determination), and of the report-of-others domains from .54 (physical well-being) to .79 (emotional well-being). The overall scale demonstrated good Cronbach's alpha scores (α = .81 in self-report and α = .87 in report-of-others). Inter-rater reliability of the domains ranged from .47 (interpersonal relations) to .81 (personal development).

This initial research on the psychometric properties of the scale introduces the POS-C as a useful measure of personal outcomes for Portuguese children and adolescents with ID. The POS-C is an important tool to improve personalized support plans, based on self-report and report-of-others measures.

Quality of life, Intellectual disability, Cross-cultural adaptation, Validity, Reliability.

P68 Development and validation of a multi-domain digital cognitive stimulation program for older adults with mild to moderate cognitive decline

Filipa C Couto 1, Maria A Dixe 1,2,3, Jaime Ribeiro 2,3,4, Mónica Braúna 2,3, Luís Marcelino 5, João Apóstolo 6; 1 The Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3000-232 Coimbra, Portugal; 2 School of Health Science, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 4 Research Centre on Didactics and Technology in the Education of Trainers, Department of Education and Psychology, University of Aveiro, Campus Universitário, 3810-193 Aveiro, Portugal; 5 Informatics Engineering Department, School of Technology and Management, Polytechnic of Leiria, 2411-901 Leiria, Portugal; 6 The Health Sciences Research Unit: Nursing, Portugal Centre for Evidence Based Practice: a Joanna Briggs Institute Centre of Excellence, Nursing School of Coimbra, 3000-232 Coimbra, Portugal. Correspondence: Filipa C Couto ([email protected]).

Frail older adults present a decline in their cognitive function. Cognitive interventions are related to the maintenance of cognitive function and are associated with independence and well-being [1,2]. A cognitive intervention can be considered a complex intervention, since it contains several interacting components [3]. The MIND&GAIT project aims to develop a structured digital cognitive stimulation program to be used with older adults with mild to moderate cognitive decline.

To develop and validate an elderly-friendly multi-domain digital cognitive stimulation program.

To develop the program, the research team followed the Medical Research Council guidelines for complex interventions. The development process comprises four phases [3]: a Preliminary Phase (I), a Modelling Phase (II), a Field Test Phase (III) and a Consensus Conference Phase (IV).

The digital cognitive stimulation program has 8 individual sessions and 1 group session. The program has already completed Phase I, which corresponds to an initial conceptualization of the program design and its support materials. It is currently in Phase II, being presented to a panel of experts to gather opinions and evaluations from specialists in the area of cognitive interventions. Each session of the program will later be evaluated in Phase III. All the contributions and analyses resulting from the previous phases will be synthesized in Phase IV. It is expected that the critical evaluation provided by specialists in the area of cognitive interventions will result in a well-founded and structured intervention to be applied to older adults with mild to moderate cognitive decline. The program's construction, supported by guidelines and based on elderly-friendly digital components, intends to respond to the challenge of an increasingly aged population by allying e-health. The program is to be used by health professionals and informal caregivers, as a possible way to prevent or minimize cognitive decline.

Cognitive interventions have an impact on cognitive decline, a condition that assumes greater importance because it is related to frailty in older adults. Being a multidomain program, it also empowers people for activities of daily living. As a complex intervention, this program allows health professionals and others to apply nonpharmacological interventions, which can represent the implementation of best practice towards the needs of an ageing population.

This abstract is presented on behalf of a research group. It is part of the MIND&GAIT project, “Promoting independent living in frail older adults by improving cognition and gait ability and using assistive products”, a Portuguese project supported by COMPETE 2020 under the Scientific and Technological Research Support System, in the co-promotion phase. We acknowledge The Health Sciences Research Unit: Nursing (UICISA:E) of the Nursing School of Coimbra, the Polytechnic of Leiria, and the other members, institutions and students involved in the project.

2. Mewborn CM, Lindbergh CA, Miller LS. Cognitive interventions for cognitively healthy, mildly impaired and mixed samples of older adults: a systematic review and meta-analysis of randomized-controlled trials. Neuropsychol Rev. 2017;27(4):403-439.

3. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Aged, Cognitive decline, Cognitive stimulation, Frailty, Complex intervention.

P69 Tele-enfermeiro evolution

Telmo Sousa 1, Pedro Brandão 1,4, Paulino Sousa 2,3, João Rodrigues 4,5; 1 Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal; 2 Centro de Investigação em Tecnologias e Serviços de Saúde, 4200-450 Porto, Portugal; 3 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal; 4 Instituto de Telecomunicações, 1049-001 Lisboa, Portugal; 5 Administração Regional de Saúde do Norte, 4000-099 Porto, Portugal. Correspondence: Telmo Sousa ([email protected]).

Despite the strong technological growth envisaged in the health area, the use of technologies is still scarce in Primary Healthcare (PHC). PHC is very important because it allows for proximity interventions, such as home nursing care, resulting in an improvement of the health of individuals, families, and communities. Nursing records in health centres are kept in a system called S-Clinic, developed by the Serviços Partilhados do Ministério da Saúde (SPMS). However, the current process presents some problems, such as the time spent collecting patient data, the introduction of the intervention data into the system, and the lack of a support structure for the data records collected from the patient by the nurse.

Combining technological evolution, the importance of PHC and the difficulties of the nursing process at home, we propose the development of a mobile application that allows nurses to import patient data, record the interventions carried out in electronic format, and export these records back to the system.

The application will thus facilitate the nurse's work: it replaces paper records, allowing better collection and structuring of the data, increasing the efficiency of the work activity, and reducing the time spent collecting and entering data in S-Clinic. We studied the essential contents of the home nursing process as implemented in the system, in order to create a data structure resembling S-Clinic as closely as possible. To obtain this information, a meeting was held with nursing experts, who contributed their knowledge in this area.

In this meeting, the contents considered essential for a home visit were addressed; the key points were the nursing focus or diagnosis and the nursing intervention. The data model was implemented to cover all these contents (a hypothetical sketch is given below). Some security measures that could be implemented to protect the data were also discussed. After the application development was complete, a meeting was held with some of the nursing experts present at the first, requirements-gathering meeting, in order to evaluate the system.
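Purely as an illustration of the kind of record structure described above, the Python sketch below models a home-visit record; all field names are hypothetical assumptions, since the actual S-Clinic data model is not detailed in this abstract.

```python
# Hypothetical sketch of a home-visit record structure; not the project's
# actual data model. Field names are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NursingIntervention:
    description: str          # intervention carried out during the home visit
    focus: str                # nursing focus/diagnosis it addresses

@dataclass
class HomeVisitRecord:
    patient_id: str           # pseudonymous identifier, never free-text identity
    visit_date: str           # ISO 8601 date, e.g. "2018-05-02"
    focuses: List[str] = field(default_factory=list)
    interventions: List[NursingIntervention] = field(default_factory=list)

    def to_export(self) -> dict:
        """Serialize the visit for later export to the central system."""
        return {
            "patient": self.patient_id,
            "date": self.visit_date,
            "focuses": self.focuses,
            "interventions": [vars(i) for i in self.interventions],
        }

record = HomeVisitRecord("PT-0001", "2018-05-02", ["wound"],
                         [NursingIntervention("dressing change", "wound")])
print(record.to_export())
```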

The feedback was very positive, encouraging the research team to continue this development, as the experts see in it a good solution for the future of PHC in the home environment.

Nursing, Health information system, Digital health, Innovation, Development.

P70 Construction of parenthood - role of the family nurse

Andreia MJS Azevedo, Elsa MOP Melo, Assunção DL Almeida. Correspondence: Andreia MJS Azevedo ([email protected]).

The birth of the first child is presented as the most challenging responsibility facing the family, requiring the adaptation of its interactions, with a singular and holistic impact. The first six months emerge as a transient, predictable and irreversible milestone. The transition to parenthood is marked by changes and repercussions that accompany the child's growth and development, preceding and precipitating others in the family life cycle. Supporting families in transition is a competence of nurses, namely through their intervention in the field of family health nursing, empowering parents in the construction of parenthood.

To understand how parents build their parenting model during the first six months of life of the first child; to analyse parents' expectations and constraints/difficulties in the transition to parenthood; and to explore the identity figures and resources mobilized by parents in the transition to parenthood.

We developed a qualitative phenomenological study with a non-probabilistic convenience sample of 11 subjects, parents whose first child completed six months of life between October and December 2016 and who were enrolled in USF Rainha D. Tereza. We conducted semi-structured interviews, obtaining narratives of the experiences and a deeper understanding of them. We complied with ethical procedures and analysed the information collected using webQDA software.

We highlight the experience of parenthood in the desire to be a parent and in the expectations created during pregnancy, both of which contribute to the parental model. This model is determined by factors such as the characteristics of the child, the characteristics and previous experiences of the parents, and the family dynamics. Parents face difficulties in providing care for the child and in reconciling parental, marital, familial and social roles. Faced with these difficulties, parents use human, community and monetary resources. We highlight the community resource of health care support, which is valued by parents. The family nurse, when identified and recognized, is described as an effective and accessible resource in adapting to parenthood.

The results of this research allowed us to understand the experience of the transition to parenthood of the parents interviewed and to affirm the role of the family nurse in supporting parents' capacity to build their own model of parenthood, as well as to contribute knowledge to be valued in nursing interventions.

Nursing, Family, Transition, Parenthood.

P71 Antioxidant activity of the garlic (Allium sativum L.) submitted to different technological processes

Carla Sousa 1, Catarina Novo 2, Ana F Vinha 1,3; 1 Unidade de Investigação em Energia, Ambiente e Saúde, Centro de Estudos em Biomedicina, Fundação Fernando Pessoa, 4249-004 Porto, Portugal; 2 Universidade Fernando Pessoa, 4249-004 Porto, Portugal; 3 REQUIMTE/LAQV, Departamento de Ciências Químicas, Faculdade de Farmácia da Universidade do Porto, 4051-401 Porto, Portugal. Correspondence: Ana F Vinha ([email protected]).

Garlic has been extensively investigated for its health benefits. Several therapeutic activities are attributed to garlic, namely antioxidant, hepatoprotective, anticancer and antitumor properties, among others [1-3].

In this work, the total phenolic content (TPC) and total flavonoid content (TFC) were determined, as well as the antioxidant properties, of extracts of the different forms of presentation/parts of garlic available on the market (bulb, powder and tablets/capsules), using the 2,2-diphenyl-1-picrylhydrazyl radical (DPPH•) method and the ferric reducing antioxidant power (FRAP) assay. The scavenging capacity of the same extracts against reactive species (O2•−, H2O2, NO•) was also evaluated. Finally, the biological activity of the market presentation forms of garlic was compared with that of the garlic peel, considered food waste, also taking into account two variables that can influence the properties of the bulb: boiling and freezing.
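The calculations behind these assays are straightforward; the following Python sketch (not the authors' code, with invented absorbance values) shows DPPH• scavenging expressed as percent inhibition and a FRAP value read off a Fe(II) calibration line.

```python
# Minimal sketch of the assay arithmetic named above; all absorbance and
# concentration values are made up for illustration.
import numpy as np

def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """% inhibition = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

print(f"DPPH inhibition: {dpph_inhibition(0.820, 0.335):.1f} %")

# FRAP: linear calibration with Fe(II) standards, then interpolate the sample.
std_conc = np.array([100, 200, 400, 600, 800])   # hypothetical FeSO4 standards, umol/L
std_abs = np.array([0.11, 0.21, 0.42, 0.63, 0.83])
slope, intercept = np.polyfit(std_conc, std_abs, 1)  # least-squares line
sample_abs = 0.50
frap = (sample_abs - intercept) / slope
print(f"FRAP value: {frap:.0f} umol Fe(II)/L")
```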

TPC was highest in the frozen chopped garlic sample and lowest in the garlic tablets. Cooked garlic presented a lower TPC than the raw chopped bulb. These results indicate that cooking and freezing directly affect the total phenolic content, but in opposite ways. The cooked garlic extract had the highest TFC, with the garlic tablets showing the lowest content. The DPPH• and FRAP methods showed that the cooked garlic extract had the highest antioxidant activity. This result can be explained by cell wall rupture caused by heating, leading to the release of antioxidant substances, the formation of new and/or stronger antioxidant substances, or the inhibition of oxidant enzymes [4]. The frozen chopped garlic extract presented the highest scavenging capacity for the three reactive species studied. In general, the higher the total phenolic content, the greater the capacity to inhibit the reactive species NO•, O2•− and H2O2.

This study showed that the diverse forms of presentation/parts of garlic possess a high content of bioactive compounds and, consequently, antioxidant activity, presenting health benefits.

1. Banerjee SK, Mukherjee PK, Maulik SK. Garlic as an antioxidant: the good, the bad and the ugly. Phytother Res. 2003, 17(2): 97-106.

2. Naji KM, Al-Shaibani ES, Alhadi FA, Al-Soudi SA, D’souza MR. Hepatoprotective and antioxidant effects of single clove garlic against CCl4-induced hepatic damage in rabbits. BMC Complement Altern Med. 2017, 17: 411.

3. Oommen S, Anto RJ, Srinivas G, Karunagaran D. Allicin (from garlic) induces caspase-mediated apoptosis in cancer cells. Eur J Pharmacol. 2004, 485(1-3): 97-103.

4. Ali M, Mahsa M, Barmak MJ. Effect of boiling cooking on antioxidant activities and phenolic content of selected Iranian vegetables. Res J Pharm Biol Chem Sci. 2015, 6(3): 663-641.

Allium sativum L., Bioactive compounds, Antioxidant activity, Reactive species.

P72 Influence of gamma irradiation in the antioxidant potential of pumpkin seeds and mung beans

Anabela Macedo 1, Carla Sousa 2, Ana F Vinha 2,3; 1 Universidade Fernando Pessoa, 4249-004 Porto, Portugal; 2 Unidade de Investigação em Energia, Ambiente e Saúde, Centro de Estudos em Biomedicina, Fundação Fernando Pessoa, 4249-004 Porto, Portugal; 3 REQUIMTE/LAQV, Departamento de Ciências Químicas, Faculdade de Farmácia da Universidade do Porto, 4051-401 Porto, Portugal.

Food conservation is a challenge for the food industry. A high respiration rate, lack of physical protection against water loss, and changes due to microbial attack are often associated with loss of food quality, contributing to deterioration through browning, weight loss and texture changes [1]. Furthermore, bacteria, moulds, enzymatic activity (mainly polyphenol oxidase) and biochemical changes can cause spoilage during storage [2]. The use of ionizing energy for preservation has been widely studied by the food industry. However, studies evaluating the effects of ionizing radiation are mostly available for cultivated species, while reports on wild species and food waste, considered added-value foods, are scarce. In this regard, food technology is making progress towards improving food preservation and contributing to a reduction in the incidence of food-related diseases. Previous studies assessing the potential of gamma irradiation as a suitable technique to increase the shelf-life of natural products focused on nutritional and chemical parameters, including bioactive compounds and their antioxidant activity [3]. Many natural compounds found in edible food wastes (seeds) or grains (beans) present antioxidant activity. Among the most important natural antioxidants are phenolic compounds (flavonoids, phenolic acids and tannins), nitrogenous compounds (alkaloids, amino acids, peptides, amines and chlorophyll by-products), carotenoids, tocopherols and ascorbic acid.

In the present work, the effects of gamma radiation dose (0, 0.5, 1.0, 1.5 and 5.0 kGy) on the chemical composition (total phenolics and total flavonoids) of pumpkin seeds and mung beans were evaluated. The antioxidant activity was studied using the DPPH• and FRAP assays. A slight increase in the content of bioactive compounds, as well as in antioxidant activity, was observed with irradiation doses below 1.5 kGy. The final results showed that irradiation may be a viable technique to preserve the content of bioactive compounds, as well as their biological properties, including antioxidant activity.

1. Singh P, Langowski HC, Wani AA, Saengerlaub S. Recent advances in extending the shelf life of fresh Agaricus mushrooms: a review. J Sci Food Agric. 2010, 90: 1393-1402.

2. Fernandes Â, Barreira JCM, Antonio AL, Bento A, Botelho ML, Ferreira ICFR. Assessing the effects of gamma irradiation and storage time in energetic value and in major individual nutrients of chestnuts. Food Chem Toxicol. 2011, 49: 2429-2432.

3. Antonio AL, Fernandes Â, Barreira JCM, Bento A, Botelho ML, Ferreira ICFR. Influence of gamma irradiation in the antioxidant potential of chestnuts (Castanea sativa Mill.) fruits and skins. Food Chem Toxicol. 2011, 49: 1918-1923.

Food conservation, Gamma irradiation, Pumpkin seeds, Mung beans, Antioxidants.

P73 The effects of swimming and swimming complemented with water walking on spirometry values

Pedro Duarte-Mendes 1,2, Samuel Honório 1,2, João Oliveira 1, João Petrica 1,3, André Ramalho 1,2, António Faustino 1, Rui Paulo 1,2; 1 Department of Sports and Well-being, Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal; 2 Research on Education and Community Intervention, 4411-801 Arcozelo, Vila Nova de Gaia, Portugal; 3 Centre for the Study of Education, Technologies and Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal. Correspondence: Pedro Duarte-Mendes ([email protected]).

Spirometry is a standard pulmonary function test that measures how an individual inhales or exhales volumes of air as a function of time. It is the most important and most frequently performed pulmonary function testing procedure, having become indispensable for the prevention, diagnosis and evaluation of various respiratory impairments. However, there have been only a few studies addressing the effect of physical activity on pulmonary function test results and investigating the association between body composition and respiratory parameters in sports activities [1-3].

The objective of this study was to verify whether spirometry values differ between children aged 6 to 12 years who practice swimming complemented with water walking at the end of each session and those who practice swimming only.

In this study, 28 subjects (mean age 7.68 ± 1.16 years) participated and were divided into two groups: a swimming group (SG) (N=9) and a swimming complemented with water walking group (SWWG) (N=19). The study was performed over 12 weeks with 3 evaluation moments (M1, M2 and M3) and two 45-minute sessions per week; we aimed to identify benefits in pulmonary function: Forced Vital Capacity (FVC), Forced Expiratory Volume in 1 second (FEV1) and Peak Expiratory Flow (PEF). The water walking activity occurred at the end of each session for 6 minutes, performed in a straight line with the water level at the children's chest. The spirometry tests were performed with the microQuark Spirometer®. For the analysis of the results, we used descriptive statistics, the Shapiro-Wilk test to assess the normality of the sample and, for inferential statistics, the Mann-Whitney test, Friedman's ANOVA and Cohen's d for the magnitude of effect.
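As an illustration of this test battery, the Python sketch below runs Shapiro-Wilk, Mann-Whitney, Friedman and Cohen's d on simulated data; the group sizes follow the abstract (9 vs 19), but all values are invented.

```python
# Minimal sketch (simulated data) of the statistical tests described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sg = rng.normal(1.60, 0.25, 9)     # hypothetical FVC values, swimming group
swwg = rng.normal(1.85, 0.25, 19)  # hypothetical FVC values, swimming + water walking

print(stats.shapiro(sg))                     # normality check per group
print(stats.mannwhitneyu(sg, swwg))          # inter-group comparison at one moment

m1, m2, m3 = swwg, swwg + 0.05, swwg + 0.12  # hypothetical repeated measures M1-M3
print(stats.friedmanchisquare(m1, m2, m3))   # intra-group comparison across moments

def cohens_d(a, b):
    """Pooled-SD standardized mean difference."""
    sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                 / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / sp

print("Cohen's d:", round(cohens_d(swwg, sg), 2))
```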

The results show that in the inter-group analysis (comparison between the SG and the SWWG) there were significant differences in the FVC (M2, p=0.025), FEV1 (M2, p=0.01; M3, p=0.008) and PEF (M1, p=0.033; M2, p=0.012; M3, p=0.037) values. Concerning intra-group differences (improvement in the SG and the SWWG across the three evaluation moments), the SWWG showed significant differences in FVC (p=0.003) and FEV1 (p=0.008), and the SG showed significant differences in FEV1 (p=0.034) and PEF (p=0.013).

These results show that both “swimming” and “swimming complemented with water walking” improve spirometry values in children, with the swimming complemented with water walking group showing better results.

Trial registration: NCT03506100

1. Durmic T, Lazovic B, Djelic M, Lazic J, Zikic D, Zugic V, Dekleva M, Mazic S. Sport-specific influences on respiratory patterns in elite athletes. J Bras Pneumol. 2015, 41: 516-522.

2. Vaithiyanadane V, Sugapriya G, Saravanan A, Ramachandran C. Pulmonary function test in swimmers and non-swimmers: a comparative study. Int J Biol Med Res. 2012, 3: 1735-1738.

3. Lopes M, Bento PC, Lazzaroto L, Rodacki AF, Leite N. The effects of water walking on the anthropometrics and metabolic aspects in young obese. Rev Bras Cineantropom Desempenho Hum. 2015, 17: 235-237.

Spirometry, Swimming, Water walking.

P74 Preliminary translation and validation of Movement Imagery Questionnaire – Children (MIQ-C) to Portuguese

Pedro Duarte-Mendes 1,2, Daniel Silva 1, João Petrica 1,3, Daniel Marinho 4,5, Bruno Travassos 4,5, João Serrano 1,3; 1 Department of Sports and Well-being, Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal; 2 Research on Education and Community Intervention, 4411-801 Arcozelo, Vila Nova de Gaia, Portugal; 3 Centro de Estudos em Educação, Tecnologias e Saúde, Instituto Politécnico de Viseu, 3504-510 Viseu, Portugal; 4 Department of Sport Sciences, University of Beira Interior, 6201-001 Covilhã, Portugal; 5 Research Center in Sports Sciences, Health Sciences and Human Development, University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal.

The ability to perform movement imagery has been shown to influence motor performance and learning in sports and rehabilitation. Imagery is a cognitive process that can play an important role in the planning and execution of movements or actions. Several instruments have been developed to evaluate imagery ability in adults, such as the MIQ-3, validated with Portuguese athletes [1]. However, no imagery ability questionnaire covering the three modalities (kinesthetic, internal visual and external visual imagery) existed for Portuguese children.

The objective of this study was to translate and preliminarily validate the Movement Imagery Questionnaire for Children [2] for the Portuguese child population, determining its initial psychometric qualities through an exploratory factor analysis model.

In this study, 162 subjects of both genders (124 male, 38 female) with a mean age of 10.1 years (SD = 16) participated. To develop the Portuguese adaptation of the evaluation instrument, a two-phase methodology was followed: (1) translation and cultural adaptation of the questionnaire and (2) application of the exploratory factor analysis method to the instrument. In the statistical analysis of the data, we used the Kaiser-Meyer-Olkin (KMO) and Bartlett tests to evaluate the quality of the correlations, and an exploratory factor analysis (EFA) to determine the number of factors to be retained, the number of items associated with them and their internal consistency. The rotation adopted was the oblique Promax rotation.
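The EFA workflow described here can be reproduced, for illustration only, with the third-party factor_analyzer package (an assumption; the authors do not name their software). The data below are simulated placeholders with the abstract's dimensions (162 respondents, 12 items).

```python
# Minimal sketch of the EFA workflow: Bartlett sphericity, KMO, and a
# 3-factor extraction with oblique Promax rotation. Data are simulated.
import numpy as np
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(2)
x = rng.normal(size=(162, 12))  # placeholder item scores

chi2, p = calculate_bartlett_sphericity(x)  # do correlations differ from identity?
kmo_per_item, kmo_total = calculate_kmo(x)  # sampling adequacy
print(f"Bartlett chi2={chi2:.1f}, p={p:.3f}; KMO={kmo_total:.3f}")

fa = FactorAnalyzer(n_factors=3, rotation="promax")  # oblique rotation, as in the study
fa.fit(x)
print(fa.loadings_.round(2))  # items are retained by their loadings (> 0.4, say)
```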

First, the translation and adaptation procedures produced a Portuguese version of the MIQ-C similar to the original version. Second, the psychometric qualities confirmed the suitability of the adaptation performed (KMO=0.822, Bartlett test p<0.001), demonstrating that its factor structure is the same as the original version (12 items grouped into 3 factors, with 4 items per factor), with quite acceptable levels of validity and reliability (Cronbach's alpha: 0.85 for the overall MIQ-C, 0.79 for kinesthetic imagery, 0.74 for internal visual imagery and 0.76 for external visual imagery).

The results showed that the Portuguese version of the Movement Imagery Questionnaire for Children, which aims to assess imagery ability in three modalities (kinesthetic, internal visual and external visual imagery), has quite acceptable indices for its validation.

1. Mendes P, Marinho D, Petrica J, Silveira P, Monteiro D, Cid L. Translation and Validation of the Movement Imagery Questionnaire – 3 (MIQ-3) with Portuguese Athletes. Motricidade. 2016, 12: 149-158.

2. Martini R, Carter M, Yoxon E, Cumming J, Ste-Marie M. Development and validation of the Movement Imagery Questionnaire for Children (MIQ-C). Psychology of Sport and Exercise. 2016, 22: 190-201.

Imagery, Translation and validation, Children, Exploratory factor analysis.

P75 New emerging point-of-care platforms for Clostridium difficile testing

Isabel Andrade 1, Chantal Fernandes 2,3, Teresa Gonçalves 2,3; 1 Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Institute of Microbiology, Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 3 Center for Neuroscience and Cell Biology, University of Coimbra, 3004-504 Coimbra, Portugal. Correspondence: Isabel Andrade ([email protected]).

Since 2000, the incidence and severity of Clostridium difficile (C. difficile) infections have increased, justifying the need for rapid, sensitive and specific methods of diagnosis. The existing rapid diagnostic tests vary widely in clinical usefulness, which is evaluated in terms of sensitivity, specificity, turnaround time (TAT), cost and availability. Moreover, there is no generally accepted gold standard or single optimal approach for C. difficile testing. This has challenged various stakeholders to develop point-of-care (POC) platforms with the best test performance characteristics, ease of use and rapid TAT, the latter a critical element of POC testing for improving the clinical management of this infectious disease. POC testing can be done at home, at the primary care level, by hospital staff in emergency or operating rooms and intensive care units, as well as in extreme environments, such as remote or low-resource settings, or in conditions following emergency crises or natural disasters.

To review current evidence regarding emerging POC platforms for C. difficile testing.

The PubMed database was searched for publications since 2000 using keywords relevant to POC testing of C. difficile; 10 articles were included out of 17 initially identified and selected for full review.

The findings show that during the last decade extensive research efforts were under way to develop stand-alone platforms suitable for POC testing of C. difficile infections. Multistep algorithms using the polymerase chain reaction test for C. difficile toxin gene(s) have the best test performance characteristics, and in POC platforms a trend seems to exist in favour of molecular tests over immunoassays. A feature common to all POC devices is the rapid TAT, within seconds to minutes or a few hours, which is crucial for managing C. difficile infection. Some of these POC devices can run multiplex tests on a single sample, or test multiple samples. Although the majority of these stand-alone POC platforms for C. difficile testing are still prototypes, they may be seen as a step towards more rapid, miniaturized, portable and easier-to-use test devices with the potential to affect healthcare decisions at their earliest stage.

In conclusion, research efforts show an increasing number of technologies evolving for the development of POC platforms for C. difficile testing.

Clostridium difficile, Point-of-care testing, Turnaround time, Stand-alone platform.

P76 Contributions for the validation of the Portuguese version of the Cohen-Mansfield Agitation Inventory (CMAI)

Rafael Alves 1, Daniela Figueiredo 2,3, Alda Marques 2,4; 1 Psychiatric Hospital Centre of Lisbon, 1749-002 Lisbon, Portugal; 2 School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 3 Institute of Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal; 4 Center for Health Technology and Services Research, University of Aveiro, 3810-193 Aveiro, Portugal. Correspondence: Rafael Alves ([email protected]).

Dementia represents one of the greatest health challenges due to its incidence and frequency, costs, and impacts on the individual, family and society. The Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-V), considers that behavioural changes, i.e., the non-cognitive aspects of dementia, should be diagnosed; however, this is often not common practice. Behavioural and Psychological Symptoms of Dementia (BPSD) can be identified and quantified objectively using the Cohen-Mansfield Agitation Inventory (CMAI). The CMAI is a reliable and valid instrument used in clinical and research practice [3]; however, it has never been validated for European Portuguese.

To contribute to the adaptation and validation of the CMAI for the European Portuguese population.

The study was conducted in two phases. The first phase consisted of the translation and cultural and linguistic validation of the CMAI according to the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) recommendations, using the following steps: a) translation, b) reconciliation, c) back-translation, d) harmonization, e) cognitive debriefing/analysis and f) spelling review. In phase 2, internal consistency was calculated using Cronbach's alpha. Intra- and inter-rater reliability were determined using the Intraclass Correlation Coefficient (ICC), with the intra-rater (1,1) and inter-rater (2,1) equations. An exploratory factor analysis of construct validity was conducted with 101 people with dementia (83.3 ± 8.0 years; n=83, 82.2% female). Statistical analysis was performed with the Kaiser-Meyer-Olkin (KMO) test on the behaviours manifested by more than 10% of the sample, and only items loading more than 0.4 were included in the extracted factors.
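For illustration, the reliability analysis can be sketched with the third-party pingouin package (an assumption; the authors applied the ICC(1,1) and ICC(2,1) equations directly, to which pingouin's single-rater forms ICC1 and ICC2 correspond). The ratings below are simulated.

```python
# Minimal sketch (simulated ratings) of an ICC computation for two raters
# scoring the same targets; not the authors' code or data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
n_targets, n_raters = 20, 2
base = rng.normal(50, 10, n_targets)                 # "true" target scores
df = pd.DataFrame({
    "target": np.repeat(np.arange(n_targets), n_raters),
    "rater": np.tile(["A", "B"], n_targets),
    "score": np.repeat(base, n_raters) + rng.normal(0, 3, n_targets * n_raters),
})

# Returns all six ICC forms; ICC1 and ICC2 are the single-rater (1,1)/(2,1) forms.
icc = pg.intraclass_corr(data=df, targets="target", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```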

The Portuguese version of the CMAI revealed good inter-rater reliability (ICC > 0.4 for 22/29 items), excellent intra-rater reliability (ICC > 0.75 for 21/29 items) and good internal consistency (α = 0.694). The exploratory factor analysis was applied to the 21 items that met the criteria. A three-factor structure, “non-aggressive physical behaviour” (67.3%), “aggressive behaviour” (66.3%) and “verbal agitation behaviour” (63.4%), was found, with reasonable quality (KMO = 0.664) and reasonable internal consistency values (0.754; 0.633; 0.714).

This study contributed to the availability of a European Portuguese measurement instrument that can be used in clinical or research contexts with people with dementia. Future studies should address the remaining validation processes (criterion validity and confirmatory factor analysis).

Dementia, Instrument, BPSD, Agitation.

P77 Analysis of the activations of the Intra-Hospital Emergency Team

Patient safety is extremely important; the Intra-Hospital Emergency Team (IHET) emerged to respond to situations of clinical deterioration in hospitalized patients. When a patient's clinical condition deteriorates, he or she must be examined in a timely manner by a team that provides the highest level of care. This seems to be the best way to avoid the occurrence of critical events, such as death, cardiac arrest and unplanned admission to intensive care units.

This study aims to determine the characteristics of the IHET activations of Centro Hospitalar de Leiria (CHL).

This exploratory study analysed the records of activations that occurred in the second half of 2011, totalling 325 records. The sociodemographic and clinical characteristics of the patients and the characteristics of the activations were analysed using the chi-square test and ANOVA.

This study showed IHET activations mainly for male patients (56%), with a mean age of 74.48 ± 13.34 years, an admission diagnosis related to respiratory diseases (34%) and “worried professional” as the main activation criterion (33%). Despite the lack of records on the presence of previous signs of clinical deterioration, it was found that only 27% of patients presented them. The relationship between the presence of previous signs of deterioration and the outcome of the activation stands out (χ2 = 18.695; p ≤ 0.001).
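The association reported above rests on a chi-square test of a contingency table; the Python sketch below mirrors that procedure on an invented 2x2 table (previous signs of deterioration by activation outcome), since the study's raw counts are not given in the abstract.

```python
# Minimal sketch of a chi-square test of independence; the counts are
# invented for illustration and do not reproduce the study's data.
from scipy.stats import chi2_contingency

table = [[60, 28],    # previous signs present: outcome A / outcome B
         [95, 142]]   # previous signs absent:  outcome A / outcome B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```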

It was found that the IHET was activated in a timely manner by nurses who were knowledgeable about their patients and able to anticipate critical events, which enabled the best possible outcomes.

Hospital rapid response team, Nursing, Emergency situation, Clinical deterioration.

P78 Virtual assistant to facilitate self-care of older people with type 2 diabetes: preliminary study protocol

Mara P Guerreiro 1, Adriana Henriques 1, Isabel CE Silva 1, Anabela Mendes 1, Ana P Cláudio 2, Maria B Carmo 2, João Balsa 2, Susana Buinhas 3, Nuno Pimenta 4, Afonso M Cavaco 5; 1 Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal; 2 Instituto de Biossistemas e Ciências Integrativas, Universidade de Lisboa, 1749-016 Lisboa, Portugal; 3 Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal; 4 Escola Superior de Desporto de Rio Maior, Instituto Politécnico de Santarém, 2040-413 Rio Maior, Portugal; 5 Faculdade de Farmácia, Universidade de Lisboa, 1649-003 Lisboa, Portugal. Correspondence: Mara P Guerreiro ([email protected]).

More than a quarter of people aged 60-79 years are diabetic [1]. The management of type 2 diabetes (T2D) includes diet, physical activity and, often, medicines; it requires daily self-care and constant lifestyle-related choices [2]. It has been estimated that glycaemic control is achieved in less than 50% of T2D patients due to low self-care behaviour [2]. Sustained hyperglycaemia causes complications and premature death, as well as significant costs [1]. Improving self-care and T2D management is therefore crucial.

Technology-based interventions, such as text messages, have successfully been used in T2D management; nevertheless, challenges remain, such as acceptability to users and attrition [3,4]. Relational agents, which are computational artefacts designed to establish rapport and trust by simulating face-to-face counselling in long-term interactions, may overcome such challenges. In particular, they have shown acceptability to older and low-literacy patients in other contexts [5,6]. There is a paucity of research on the use of relational agents in older people with T2D.

To develop a viable prototype of relational agent software to assist older T2D patients in self-care, and to test its use in this patient group.

This is a mixed-methods study, grounded in the Medical Research Council framework for developing and evaluating complex interventions [7]. The first stage, development of the optimal intervention, will comprise the definition of pre-requisites, content production (e.g., dialogue creation guided by behaviour-change theories) and empowering existing virtual humans [8] with artificial intelligence. Users and health care professionals will be involved iteratively. The software is expected to run on a tablet, independently of internet connections, targeting adherence to physical activity, diet and medication use, without assistance from health care professionals. The second stage will be a non-randomised, non-controlled feasibility trial. Eligible subjects enrolled in diabetes nursing consultations in primary care will be invited to participate. Main outcome measures include software use (timing and frequency, usability, patient satisfaction), reported self-care and reported medication adherence. Acceptability will be researched through focus groups. In both stages, quantitative data will be analysed with the aid of SPSS; qualitative data will be transcribed verbatim and thematically analysed with NVivo software.
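To make the idea of a relational agent concrete, here is a purely hypothetical, minimal rule-based dialogue turn of the kind such software might use for medication-adherence support; none of this is the project's actual code, and all prompts and rules are invented.

```python
# Hypothetical sketch: one offline, rule-based dialogue turn selecting an
# empathic, behaviour-change-oriented utterance from simple patient state.
from dataclasses import dataclass

@dataclass
class PatientState:
    name: str
    missed_doses_week: int  # doses missed in the past week

def agent_turn(state: PatientState) -> str:
    """Return the agent's next utterance based on adherence this week."""
    if state.missed_doses_week == 0:
        return f"Well done, {state.name}! You took all your medicines this week."
    if state.missed_doses_week <= 2:
        return (f"{state.name}, you missed {state.missed_doses_week} dose(s). "
                "Would linking your medicine to breakfast help you remember?")
    return (f"I noticed several missed doses, {state.name}. "
            "Shall we talk about what is making it difficult?")

print(agent_turn(PatientState("Maria", 1)))
```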

Data collection will start once ethical approval is obtained.

The study is expected to yield a software prototype to facilitate self-care of older T2D patients, with potential to become an effective, scalable and sustainable intervention.

The study is funded by Compete 2020 and FCT (024250, 02/SAICT/2016).

1. Sociedade Portuguesa de Diabetologia. Diabetes: Factos e Números - O ano de 2015. 2016.

2. García-Pérez L-E, Alvarez M, Dilla T, Gil-Guillén V, Orozco-Beltrán D. Adherence to therapies in patients with type 2 diabetes. Diabetes Ther. 2013;4:175-94.

3. Arambepola C, Ricci-Cabello I, Manikavasagam P, Roberts N, French DP, Farmer A. The Impact of Automated Brief Messages Promoting Lifestyle Changes Delivered Via Mobile Devices to People with Type 2 Diabetes: A Systematic Literature Review and Meta-Analysis of Controlled Trials. J. Med. Internet Res. 2016;18.

4. Stellefson M, Chaney B, Barry AE, Chavarria E, Tennant B, Walsh-Childers K, et al. Web 2.0 chronic disease self-management for older adults: A systematic review. J. Med. Internet Res. 2013;15:e35.

5. Bickmore T, Caruso L, Clough-Gorr K, Heeren T. 'It's just like you talk to a friend': relational agents for older adults. Interact. Comput. 2005;17:711-35.

6. Bickmore TW, Pfeifer LM, Paasche-Orlow MK. Health Document Explanation by Virtual Agents. In: Pelachaud C, Martin J-C, André E, Chollet G, Karpouzis K, Pelé D, editors. Intell. Virtual Agents. Springer; 2007:183-96.

7. Möhler R, Köpke S, Meyer G. Criteria for Reporting the Development and Evaluation of Complex Interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16:1-9.

8. Cláudio AP, Carmo MB, Pinto V, Guerreiro MP. Virtual Humans for Training and Assessment of Self-medication Consultation Skills in Pharmacy Students. Proc. IEEE ICCSE 2015 - 10th Int. Conf. Comput. Sci. Educ. 2015;175-80.

Diabetes, Self-care, Relational agents, Technology, Elderly.

P79 Use of maggot therapy in a hard-to-heal wound care unit: application and home follow-up protocol

Rodrigo C Ferrera 1, María AF Fernández 1, Pablo G Molina 2, Evelin B Lopez 2, Alberto P Paredes 3, Adán A Ordiales 4,5; 1 Nursing Department, University of Las Palmas de Gran Canaria, 35015 Canary Islands, Spain; 2 Nursing Department, University of Valencia, 46010 Valencia, Spain; 3 Hospital Clínico Universitario, 46010 Valencia, Spain; 4 Hard-to-heal Nursing Care Unit, Hospital Clínico Universitario, 46010 Valencia, Spain; 5 Nursing Department, University of Valencia, 46010 Valencia, Spain. Correspondence: Alberto P Paredes ([email protected]).

Larval debridement therapy, also called “maggot” therapy or biosurgery, is the use of fly larvae for the removal of non-viable tissue. Debridement is achieved through the action of proteolytic enzymes secreted by the larvae, which liquefy the protein material on the surface of the wound; this material is subsequently used by the larvae as a nutrient. Healthy tissues are not affected, making this the most selective debridement method among those available. In addition, this therapy helps to fight infection and supports the normalization and closure of injuries. Larval debridement has been successfully used in a large range of chronic hard-to-heal wounds with non-viable tissue of many aetiologies, such as pressure ulcers, venous or ischaemic wounds, or diabetic foot ulcers [1-5].

To ensure continuity of care, with the aim of obtaining the best results from larval debridement therapy while allowing the treatment to be administered in any healthcare context, especially the patient's own home.

The procedure for the use of maggot therapy was elaborated and coordinated by the Nursing Unit of Ulcers and Complex Wounds of the Hospital Clínico de Valencia. The protocol includes the administrative process, the selection of dressing size, and the procedure for applying and caring for the therapy itself. At the same time, an information brochure with guidance on daily surveillance and application of the therapy was prepared for family members, patients and care professionals.

The protocol is in the process of being implemented, having been applied, with successful results, to several patients with hard-to-heal wounds of different aetiologies within home follow-up care.

The implementation of a protocol for the use of maggot debridement therapy seems to be effective in ensuring continuity in the treatment and follow-up of patients with hard-to-heal wounds in a home care context.

1. Ballester Martínez L, Martínez Monleón E, Serra Perucho N, Palomar Llatas F. Utilización de la Terapia Larval en Heridas Desvitalizadas: Revisión Bibliográfica [Use of Maggot Therapy in Necrotic Wounds: Literature Review]. Enf Derm. 2016, 10: 27-33.

2. McCaughan D, Cullum N, Dumville J, VenUS II Team. Patients' perceptions and experiences of venous leg ulceration and their attitudes to larval therapy: an in-depth qualitative study. Health Expect. 2015, 18(4): 527-541.

3. Mudge E, Price P, Walkley N, Harding KG. A randomized controlled trial of larval therapy for the debridement of leg ulcers: results of a multicenter, randomized, controlled, open, observer blind, parallel group study. Wound Repair Regen. 2014, 22(1): 43-51.

4. EWMA Document. Larvae debridement therapy. J Wound Care. 2013, 22(1): 522-525.

5. Whitaker IS, Twine C, Whitaker MJ, Welck M, Brown CS, Shandall A. Larval therapy from antiquity to the present day: mechanisms of action, clinical applications and future potential. Postgrad Med J. 2007, 83(980): 409-413.

Maggot therapy, Wound care, Debridement, Home-care settings, Protocol.

P80 Adverse reactions and dietary supplements

Andreia Barros 1, Cláudia Pinho 2, Ana I Oliveira 2, Rita F Oliveira 2,3, Agostinho Cruz 2. Correspondence: Rita F Oliveira ([email protected]).

Over the last years, the use of dietary supplements has increased substantially [1]. Although these products are considered safe and can be beneficial, some carry risks. Manufacturers are not required to demonstrate their safety and efficacy, so it is essential that consumers have good knowledge about dietary supplements [2]. Attributing an injury to a specific supplement can be challenging, especially because of the multiple ingredients, the variability in quality and content, and the widespread underreporting of adverse reactions [3].

This study aims to identify the main adverse reactions, and the knowledge about reporting adverse events, associated with the use of dietary supplements among the population of Porto (Portugal).

A descriptive, cross-sectional study was performed through an anonymous, confidential and voluntary questionnaire to 404 adult participants from the municipality of Porto (Portugal). Data were analysed quantitatively using SPSS version 24.0.

Of the 404 participants, 54.7% (221) were female and 45.3% (183) were male. The results revealed that 55.9% (226) of the participants were users of dietary supplements, the most common reasons for consumption being to improve memory and concentration and to reduce fatigue. Of the 226 consumers of supplements, only 1.3% (3) identified adverse reactions, after taking multivitamin supplements used for insomnia and anxiety. Of the 404 participants, 21.5% (87) reported knowing that it has been possible to report an adverse reaction associated with dietary supplements in Portugal since 2014. Only 8.9% (36) reported knowing which entity is responsible for adverse reactions associated with supplements, and of these 36 participants only 5.6% (2) correctly named that entity, the Direção-Geral de Alimentação e Veterinária (DGAV).

The findings of this survey indicate the need to provide knowledge on reporting adverse events associated with dietary supplement use. It is essential to provide adequate information to facilitate a better understanding of the risks associated with the use of these products.

1. Kantor ED, Rehm CD, Du M, White E, Giovannucci EL. Trends in dietary supplement use among US adults from 1999–2012. JAMA. 2016, 316:1464–1474.

2. Axon DR, Vanova J, Edel C, Slack M. Dietary Supplement Use, Knowledge, and Perceptions Among Student Pharmacists. Am J Pharm Educ. 2017, 81(5): 92.

3. Felix TM, Karpa KD, Lewis PR. Adverse Effects of Common Drugs: Dietary Supplements. FP Essent. 2015, 436:31-40.

Dietary supplements, Risks, Adverse reactions reporting, DGAV.

P81 Urinary tract infections and dietary supplements: counselling in pharmacy

Marta Novais 1, Cláudia Pinho 2, Ana I Oliveira 2, Rita F Oliveira 2,3, Agostinho Cruz 2.

Urinary tract infections (UTIs) are among the most common bacterial infections [1]. Treatment usually involves antibiotics, and recurrence is a major concern [2]. Therefore, identifying new and effective strategies to control UTIs, such as the use of botanical dietary supplements, is a high priority. It is also important to provide health professionals with adequate knowledge about the use of dietary supplements and other complementary and/or alternative medicines.

This study aims to evaluate the counselling practices of pharmacy professionals working in Barcelos (Portugal) regarding the use of botanical dietary supplements in the prevention and/or treatment of urinary infections.

A descriptive, cross-sectional study was performed through an anonymous, confidential and voluntary questionnaire to a convenience sample of 108 pharmacy professionals from Barcelos (Portugal). Data were analysed using SPSS version 24.0.

Of the 108 participants, 67.6% were female and 32.4% were male. The results showed that 96.3% of the professionals usually advise the use of dietary supplements for the prevention and/or treatment of lower urinary tract infections. The common reasons for recommending supplements included the efficacy and safety of these products and their lower price. It was also observed that 64.8% of pharmacy professionals considered their knowledge sufficient to recommend dietary supplements for the prevention and/or treatment of urinary infections. Products containing Vaccinium macrocarpon L. were the most recommended for the prevention of urinary tract infections, whereas products containing Arctostaphylos uva-ursi L. were the most recommended for their treatment. In general, the main plants sold by pharmacy professionals for the control of urinary tract infections included Vaccinium macrocarpon L., Arctostaphylos uva-ursi L., Vaccinium myrtillus L., Equisetum arvense L. and Hibiscus sabdariffa L.

The findings of this study revealed that pharmacy professionals recommend dietary supplements for the control of urinary tract infections and consider their knowledge sufficient to advise on these products properly. Because evidence on the efficacy of dietary supplements is often scarce or controversial, providing consistent recommendations about these products to patients can be challenging for healthcare professionals.

1. Stamm WE, Norrby SR. Urinary tract infections: disease panorama and challenges. J Infect Dis. 2001, 183 (Suppl 1):S1-S4.

2. Guay DR. Contemporary management of uncomplicated urinary tract infections. Drugs. 2008, 68(9):1169-205.

Urinary tract infections, Botanical dietary supplements, Counselling, Pharmacy professionals.

P82 Nurse’s intervention – end of life nutrition approach protocol

Tânia S Afonso 1, Filipa Veludo 1, Patrícia P Sousa 1, Dulce Oliveira 2; 1 Instituto de Ciências da Saúde, Escola de Enfermagem, Universidade Católica Portuguesa, 1649-023 Lisboa, Portugal; 2 Unidade de Medicina Paliativa, Hospital de Santa Maria, Centro Hospitalar Lisboa Norte, 1649-035 Lisboa, Portugal. Correspondence: Tânia S Afonso ([email protected]).

Knowing that nutrition in present-day society is increasingly associated with life maintenance and comfort helps us to understand the complexity of this subject when approaching the end of life. Artificial nutrition remains controversial in a palliative context, given the questioning about the quality of life it offers [1]. Protocols help nurses in the decision-making process and increase their competences.

To present an end-of-life nutrition approach protocol for palliative care.

This study is the result of three integrative literature reviews that intended to answer: which nursing interventions promote end-of-life nutrition in people without artificial nutrition criteria?; what are the evaluation criteria for the end-of-life person in the nurse's decision-making about starting, not starting or suspending artificial nutrition?; and do the nurse's interventions at the end of life reduce the risk of therapeutic obstinacy associated with artificial nutrition? Based on the SPIKES communication protocol of Baile and Buckman [2], the results were integrated into a protocol form and submitted to the opinion of 13 experts from 18 October to 6 November 2017, and the respective changes were made. Inclusion criteria for experts were: being health professionals; palliative care experience and/or work on nutrition subjects.

Our experts have a mean age of 37 years; 10 work in palliative settings, 8 of whom have advanced training in palliative care. Our protocol considers: I) setting - preparing the environment; II) perception - assessing the person's/family's prior knowledge about nutrition, their preferences and considerations regarding the future commitment of feeding, with active listening to understand what the person/family wants to know, especially the meaning of nutrition and what that moment represents, and inviting them to address the subject; III) knowledge - providing adequate information in phases, contextualizing the present symptoms within the disease process (prognosis) and discussing the evaluation criteria before starting artificial nutrition; IV) emotions - attending to emotions and providing realistic hope; V) strategy - interventions based on the patient's needs, presented in algorithm form, promoting oral feeding for as long as possible. Throughout the process, the autonomy of the person and family in decision-making is preserved. At each step, we identified an element to avoid in the communication process [1,2].

This set of nursing interventions for the end-of-life nutrition approach systematizes the elements to be considered in decision-making and underlines the importance of nurses' contribution to reducing the risk of therapeutic obstinacy.

1. Alves P. Intervenção do Enfermeiro que Cuida da Pessoa em Fim de Vida com Alterações do Comer e Beber. Pensar Enfermagem. 2013, 17(1): 17-30.

2. Baile W [et al.]. SPIKES — A Six-Step Protocol for Delivering Bad News: Application to the Patient with Cancer. Oncologist. 2000, 5(4):302-311.

Nursing, Nutrition, SPIKES protocol, Communication, Palliative care.

P83 Factors associated with polymedication in elderly people followed by the Family Health Strategy of the city of Palhoça, Santa Catarina, Brazil

Fabrícia M Almeida 1, Mônica R Moraes 1, Giovanna G Vietta 1, Roberta TS Shirasaki 2, Ísis M Sousa 2, Pedro F Simão 1, Bárbara O Gama 1, Fabiana O Gama 1, Paulo F Freitas 1, Márcia Kretzer 1; 1 Universidade do Sul de Santa Catarina, 88704-900 Tubarão, Santa Catarina, Brasil; 2 Unidade Básica de Saúde Ponte do Imaruim, Palhoça, Santa Catarina, 88130-300, Brasil; correspondence: Fabrícia M Almeida ([email protected]).

Aging implies an increase in the number of morbidities, which require medical treatment and may result in the use of multiple medications. Polymedication is associated with a risk of loss of quality of life, negative health outcomes and an increased risk of mortality.

To evaluate the factors associated with polymedication in the elderly followed up by the Family Health Strategy of the city of Palhoça, Santa Catarina, Brazil.

A cross-sectional study carried out with elderly people followed at two Basic Health Units in Palhoça. Data were collected between August and November 2017, using a questionnaire covering sociodemographic and clinical data and the Geriatric Depression Scale (GDS). Polymedication was defined as the use of 5 or more drugs on an ongoing basis. Analyses were performed in SPSS 20.0, using the chi-square and Fisher's exact tests and Prevalence Ratios (PR) with 95% Confidence Intervals (CI), at a significance level of p < 0.05. The project was approved by the Research Ethics Committee of the Southern University of Santa Catarina.
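
For illustration only, the prevalence-ratio analysis described above can be reproduced from a 2x2 exposure-outcome table, as in this minimal Python sketch (the counts shown are hypothetical, not the study's data):

```python
import math

def prevalence_ratio(a, b, c, d):
    """PR and 95% CI for a 2x2 table:
    a = exposed and polymedicated, b = exposed and not,
    c = unexposed and polymedicated, d = unexposed and not."""
    p_exposed = a / (a + b)      # prevalence among exposed
    p_unexposed = c / (c + d)    # prevalence among unexposed
    pr = p_exposed / p_unexposed
    # standard error of log(PR), delta method
    se = math.sqrt(b / (a * (a + b)) + d / (c * (c + d)))
    low = math.exp(math.log(pr) - 1.96 * se)
    high = math.exp(math.log(pr) + 1.96 * se)
    return pr, (low, high)

# hypothetical counts: hypertension (exposure) vs polymedication (outcome)
pr, ci = prevalence_ratio(40, 20, 18, 57)
print(f"PR = {pr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```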

135 individuals were interviewed, with a mean age of 69.9 years (standard deviation 8 years; range 60 to 97 years). Most were female (70.0%), white (81.3%), and married (50.4%) or widowed (28.1%). Regarding occupation, 80.7% were retired and 54.6% received up to 250 Euros monthly. Schooling was predominantly up to elementary school (74.6%), with 9.7% illiterate, 4.5% with higher education and 0.7% with postgraduate education. The majority (63.7%) did not practice physical exercise and 6.8% were smokers. The frequency of depression was 39.6%, with 5.2% categorized as severe depression. The number of drugs used by the interviewees ranged from none (4.4%) to 25 different medications per day (0.7%), with 42.7% using 5 or more medications. Among those who reported the use of medication, 63% used antihypertensives, 32.6% antidepressants, 37.8% antidiabetics and 14.8% analgesics. Self-medication was identified in 23.9%. Polymedication had a significant association (p < 0.001) with the presence of arterial hypertension (PR = 2.85, CI 1.59-5.09), Diabetes Mellitus (PR = 2.31, CI 1.58-3.36), arthritis (PR = 1.60, CI 1.10-2.33), depression (PR = 1.87, CI 1.30-2.69), and cardiovascular diseases (PR = 2.29, CI 1.61-3.26).

Polymedication in the elderly presented a high prevalence and was associated with cardiovascular, endocrine and joint diseases, as well as depression. Symptoms of depression were present in 39.6% of the elderly.

Polymedication, Associated factors, Comorbidity, Elderly.

P84 Lack of Vitamin D in elderly individuals: case study – Figueira da Foz

Ana Azul 1, Cristina Santos 1, António Gabriel 2, João P Figueiredo 3, Ana Ferreira 1; 1 Department of Environmental Health, Coimbra Health School, 3046-854 Coimbra, Portugal; 2 Department of Laboratory Biomedical Sciences, Coimbra Health School, 3046-854 Coimbra, Portugal; 3 Department of Complementary Sciences, Coimbra Health School, 3046-854 Coimbra, Portugal; correspondence: Ana Azul ([email protected]).

Nowadays, population aging is becoming a reality worldwide, due to decreasing fertility and mortality rates and the consequent increase in average life expectancy [1]. This population aging brings different health, economic and social needs, making the elderly dependent and often leading to institutionalization [2]. In turn, institutionalization favours the decline of their physical and cognitive functions, which consequently weakens the old-aged. For all these reasons, it is very likely that this age group will have a vitamin D deficiency, which is considered a serious public health problem worldwide. Therefore, supplementation must be considered, since this age group tends to have little exposure to solar radiation, has little mobility and manifests a reduced ability to synthesize this hormone [3-5].

This investigation intends to evaluate the existence of vitamin D deficiency in institutionalized and ambulatory old-aged people from the region of Figueira da Foz.

Application of a questionnaire and collection of blood samples.

When the 25(OH)D concentrations of the two studied groups were compared, the non-institutionalized group showed higher 25(OH)D values than the institutionalized group. This result was statistically significant, with a p-value of 0.003. Other variables were also compared, such as diet, sun exposure, chronic diseases and intake of vitamin supplements, to assess whether they had any influence on 25(OH)D levels. Nevertheless, no substantial differences were found for these parameters, as the p-values were always higher than 0.05.
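
The between-group comparison reported above can be illustrated as follows; the abstract does not name the statistical test used, so a Mann-Whitney U test on hypothetical 25(OH)D values is assumed in this sketch:

```python
from scipy import stats

# hypothetical serum 25(OH)D concentrations (ng/ml), illustration only
institutionalized = [10.2, 8.7, 12.1, 9.5, 11.0, 7.8, 10.9, 9.1]
non_institutionalized = [14.3, 16.8, 12.9, 15.1, 17.4, 13.6, 18.0, 14.9]

# non-parametric comparison of two independent groups
u_stat, p_value = stats.mannwhitneyu(
    institutionalized, non_institutionalized, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```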

With this study, we conclude that the geriatric population presents a high prevalence of vitamin D deficiency, both in the institutionalized group and in the ambulatory group (although the latter shows higher 25(OH)D values).

1. Nogueira P, Afonso D, Alves MI, Vicêncio PO, Silva Jd, Rosa MV, et al. Portugal Idade Maior em números, 2014: A Saúde da População Portuguesa com 65 ou mais anos de idade. 2014. p. 223

2. Bårrios MJ, Fernandes AA. A promoção do envelhecimento ativo ao nível local: anålise de programas de intervenção autårquica. Revista Portuguesa de Saúde Pública. 2014;32(2):188-96.

3. Jalal S, Khan NU. Frequency of Vitamin D Deficiency in Elderly Patients Visiting Tertiary Care Hospital in a Low Income Country. 2014;40:44-53.

4. Lanske B, Razzaque MS. Vitamin D and aging: old concepts and new insights. The Journal of nutritional biochemistry. 2007;18(12):771-7.

5. Zumaraga MP, Medina PJ, Recto JM, Abrahan L, Azurin E, Tanchoco CC, et al. Targeted next generation sequencing of the entire vitamin D receptor gene reveals polymorphisms correlated with vitamin D deficiency among older Filipino women with and without fragility fracture. 2017:98-108.

Aging, Vitamin D, Vitamin supplementation, Public health.

P85 Error prevention in nursing: strategies for a safety culture

Teresa Vinagre 1, Rita Marques 2; correspondence: Rita Marques ([email protected]).

Safety culture is increasingly linked to the quality and excellence of care, being a crucial factor in the prevention of health errors. Nursing assumes a crucial role in patient safety, being at the forefront of patient care, and should protect patients' interests and ensure the consolidation of strategies that guarantee safety.

To identify the strategies for an effective safety culture and for error prevention in nursing.

Literature review, following the methodology recommended by the Cochrane Centre, guided by the research question: What are the strategies for an effective safety culture and for error prevention in nursing?

The study included the analysis of articles found in EBSCO (CINAHL, MEDLINE, Nursing & Allied Health Collection, Cochrane Database of Systematic Reviews), in B-ON and in SciELO, with the following descriptors: Nursing; Patient Safety; Errors, within the timeframe 2012 to 2017. The sample resulted in 12 articles.

Team work [1-5,8-12] and communication [1-6,8-11] were referenced in 75.0% of the studies as vital measures for an effective safety culture and error prevention in nursing; 66.7% reinforce the importance of error notification [1,2,4,7-11]; 58.3% argue that continuous improvement/training is essential [1,2,4,5,9,11,12]; 33.3% consider global safety perception [4,10-12] and the importance of trust in superiors and their commitment to their subordinates [4,10-12] as effective methods; and 25.0% highlight the importance of error feedback to health professionals [5,10,11]. Finally, fewer than 10% of the analysed studies refer to working conditions [4], critical reflection [6], supervision and the existence of standards [3], conflict management [3], and assuming the person as the centre of health care [3] as important strategies.

Many strategies are used for an effective safety culture and error prevention in nursing, the most significant being team work and communication, followed by error notification and continuous improvement/training. Besides the aspects mentioned above, two crucial factors were identified in every article analysed: the direct relation between the existence of a safety culture and the decrease in adverse events in health, and the need to make the system safer, instead of trying to change human conditions, as a means of ensuring safety and quality care provision.

1. Marinho M, Radünz V, Tourinho F, Rosa L, Misiak M. Intervenções Educativas e seu Impacto na Cultura de Segurança: Uma Revisão Integrativa. Enferm Foco. 2016. 7 (2): 72-77.

2. Mendes C, Barroso F. Promover uma cultura de segurança em cuidados de saúde primários. Rev Port Saúde Pública. 2014. 32 (2): 197-205.

3. Silva E, Rodrigues F. Segurança do doente e os processos sociais na relação com enfermeiros em contexto de bloco operatório. Cultura de los Cuidados. 2016. 45: 134-145.

4. Paese F, Sasso G. Cultura de segurança do paciente na atenção primária à saúde. Texto e Contexto Enferm. 2013. 22 (2): 302-310.

5. Minuzz A, Salum N, Locks M. Avaliação da cultura de segurança do paciente em terapia intensiva na perspectiva da equipe de saúde. Texto e Contexto Enferm. 2016. 25 (2): 1-9.

6. Araújo M, Filho W, Silveira R, Souza J, Barlem E, Teixeira N. Segurança do paciente na visão de enfermeiros: uma questão multiprofissional. Enferm Foco. 2017. 8 (1): 52-56.

7. Correia T, Martins M, Forte E. Processes developed by managers regarding the errors. Rev Enferm ReferĂŞncia. 2017. IV (12): 75-84.

8. Cavalcante A, Cavalcante F, Pires D, Batista E, Nogueira L. Cultura de segurança na percepção da enfermagem: Revisão integrativa. Rev Enferm UFPE On Line. 2016. 10 (10): 3890-3897.

9. Wang X, Liu K, You L, Xiang J, Hu H, Zhang L, Zheng J, Zhu X.The relationship between patient safety culture and adverse events: A questionnaire survey. Int J Nurs Stud. 2014. 51: 1114-1122.

10. Noord I, Wagner C, Dyck C, Twisk J, Bruijne M. Is culture associated with patient safety in the emergency department? A study of staff perspectives. Int J Qual Health Care. 2013. 26 (1): 64-70.

11. Ballangrud R, Hedelin B, Hall-Lord M. Nurses’ perceptions of patient safety climate in intensive care units: A cross-sectional study. Intensive Crit Care Nurs. 2012. 28: 344-354.

12. Feng X, Bobay K, Krejci J, McCormick B. Factors associated with nurses’ perceptions of patient safety culture in China: a cross-sectional survey study. J Evid Based Med. 2012. 50-56.

Nursing, Patient Safety, Errors.

P86 Error notification: a strategy for a safety culture

Correspondence: Teresa Vinagre ([email protected]).

Errors are an inevitable condition of the human being and one of the biggest contributors to morbidity and mortality around the world. Error notification has been scientifically shown to be one of the most effective strategies of a safety culture [1-8], being essential for the prevention and detection of the factors that contribute to error. The Operating Room (OR) is one of the health services most prone to the occurrence of adverse events/errors [3]; studies on this issue are therefore vital to contribute to the improvement of health care and to ensure patient safety.

To determine the error notification frequency in the OR, and to characterise the safety culture of the OR.

Exploratory study with a quantitative approach. A survey with 9 closed questions was applied to 33 nursing professionals working in an OR of a Lisbon hospital.

Adverse events that caused damage to the patient were always notified by the nurses in 54.6% of cases; nevertheless, none of the participants reported regularly notifying adverse events that could have resulted in damage to the patient but did not. Of the several adverse events, 55.6% of the occurred cases were not notified, the most frequent justification being the lack of time to notify. A negative correlation was obtained between professional experience and error notification frequency, and this difference was statistically significant (p < 0.05). Regarding the factors that contribute most to error occurrence in the OR, all participants mentioned the pressure to work fast, 87.0% referred to the lack of human resources, 85.2% to absence of motivation, 82.6% to professional inexperience and work overload, and 65.2% considered communication failures a preponderant factor for error. Patient safety perception by the OR nursing professionals was evaluated as "acceptable" by the majority of participants.
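
The inverse relation reported above between professional experience and notification frequency can be sketched as follows (the data are hypothetical, and the abstract does not name the correlation test, so Spearman's rank correlation is assumed):

```python
from scipy import stats

# hypothetical data: years of OR experience vs notifications filed per year
experience_years = [2, 4, 5, 8, 10, 13, 17, 20, 24, 28]
notifications = [9, 8, 8, 6, 6, 5, 4, 3, 2, 2]

rho, p_value = stats.spearmanr(experience_years, notifications)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")  # negative rho: inverse relation
```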

The low notification frequency of adverse events/errors was evident, and professional experience was found to be inversely proportional to error notification. Error notification is a central aspect of health care, particularly in the OR, so it is fundamental to educate teams in establishing strategies that promote a safety culture. It is important to continuously train professionals, as well as to work on errors, turning them into learning opportunities that prevent new errors associated with the same causes, in order to achieve a quality safety culture.

1. Marinho M, Radünz V, Tourinho F, Rosa L, Misiak M. Intervenções Educativas e seu Impacto na Cultura de Segurança: Uma Revisão Integrativa. Enferm Foco. 2016. 7 (2): 72-77.

3. Paese F, Sasso G. Cultura de segurança do paciente na atenção primária à saúde. Texto e Contexto Enferm. 2013. 22 (2): 302-310.

4. Correia T, Martins M, Forte E. Processes developed by managers regarding the errors. Rev Enferm ReferĂŞncia. 2017. IV (12): 75-84.

5. Cavalcante A, Cavalcante F, Pires D, Batista E, Nogueira L. Cultura de segurança na percepção da enfermagem: Revisão integrativa. Rev Enferm UFPE On Line. 2016. 10 (10): 3890-3897.

6. Wang X, Liu K, You L, Xiang J, Hu H, Zhang L, Zheng J, Zhu X. The relationship between patient safety culture and adverse events: A questionnaire survey. Int J Nurs Stud. 2014. 51: 1114-1122.

7. Noord I, Wagner C, Dyck C, Twisk J, Bruijne M. Is culture associated with patient safety in the emergency department? A study of staff perspectives. Int J Qual Health Care. 2013. 26 (1): 64-70.

8. Ballangrud R, Hedelin B, Hall-Lord M. Nurses’ perceptions of patient safety climate in intensive care units: A cross-sectional study. Intensive Crit Care Nurs. 2012. 28: 344-354.

Nursing, Safety Culture, Error Notification.

P87 The use of ultrasound in peripheral venous catheterization

Bruno Santos 1, José Pinho 2, Rogério Figueiredo 2, Pedro Parreira 2, Luciene Braga 3, Anabela Salgueiro-Oliveira 2; 1 Hospital Privado do Algarve, 8500-322 Alvor, Portugal; 2 Escola Superior de Enfermagem de Coimbra, 3046-851 Coimbra, Portugal; 3 Universidade Federal de Viçosa, 36570-900 Viçosa, Minas Gerais, Brasil; correspondence: Anabela Salgueiro-Oliveira ([email protected]).

The insertion of peripheral vascular catheters (PVCs) is the most common procedure performed in clinical settings [1]. The traditional method for detecting and selecting a venous access includes the use of a tourniquet, palpation, and observation. However, when veins are not visible or palpable, this may lead to successive puncture attempts, causing pain to the patient and discomfort to the nurse, and resulting in increased costs [2]. In this regard, nurses should consider using vascular visualization technologies that aid vein identification and selection for difficult intravenous access [3]. However, the technological resources available today are still underutilized in Portugal.

This study aims to explore an alternative method (ultrasound) to assist traditional venous cannulation, in order to ensure the satisfaction of patients and health professionals.

The search method used was the integrative literature review, which analyses relevant research to support decision-making and the improvement of clinical practice [4]. The research question was formulated based on the PICO strategy: How important is the use of ultrasound technology by nurses in patients requiring PVC insertion? The search was conducted between 4 and 10 January 2017, covering the period from 01-01-2011 to 31-12-2016, in order to find only primary scientific studies from the past 5 years, in English, Portuguese or Spanish. The search was conducted in the following databases: Medline, Cinahl, Psychology and Behavioural Sciences Collection, MedicLatina, ERIC, Business Source Complete, Library, Information Science & Technology Abstracts and Academic Search. We used two search strategies, P1 and P2, with the following descriptors, respectively: Ultrasound AND Peripheral catheterization AND Cannulation; Ultrasound AND Peripheral catheterization AND ultrasound guided NOT Paediatric NOT PICC NOT Artery. We found 146 scientific articles and, after reading the title, abstract and full text, retained 8 studies for analysis.

The results confirm that ultrasound-assisted venous cannulation has a higher success rate than traditional venous cannulation. It was also possible to observe a decrease in the number of attempts to puncture the vein, in the time taken by the procedure and in the incidence of central venous catheter placement and, as a consequence, a reduction of possible complications. Patients also presented lower levels of pain and higher degrees of satisfaction.

The implementation of ultrasound in clinical settings, as well as in nurses' training programs, is important for performing ultrasound-guided PVC placement and providing quality care.

1. Webster J, Osborne S, Rickard C, New K. Clinically-indicated replacement versus routine replacement of peripheral venous catheters. Cochrane Database Syst Rev. 2015;8. Art. No.: CD007798.

2. Aponte H, Acosta S, Rigamonti D, Sylvia B, Austin P, Samolitis T. The use of ultrasound for placement of intravenous catheters. AANA Journal. 2007;75(3):212-216.

3. Gorski L, Hadaway L, Hagle M, McGoldrick M, Orr M, Doellman, D. Infusion Nursing Standards of Practice. Journal of Infusion Nursing. 2016; 39(1S): 1-159.

4. Mendes KS, Silveira RC, Galvão CM. Revisão integrativa: Método de pesquisa para a incorporação de evidências na saúde e na enfermagem. Texto e Contexto Enfermagem. 2008;17(4):758-764.

Peripheral venous cannulation, Ultrasound, Nurses.

P88 Nursing care in the person with intestinal elimination ostomy

Igor Pinto 1, Silvia Queirós 1, Célia Santos 2, Alice Brito 2; 1 Universidade Católica Portuguesa, 1649-023 Lisbon, Portugal; 2 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal; correspondence: Igor Pinto ([email protected]).

The construction of an intestinal elimination stoma triggers changes in the person's physical, psychological and social condition, self-care and lifestyle. The way this event is experienced is influenced by several factors. The literature suggests that systematized nursing care, starting in the preoperative period, continuing through the postoperative period and extending after hospital discharge, is associated with a better level of adaptation and a higher quality of life for the person with an intestinal elimination ostomy.

To identify the existing literature on nursing care programs and to map the respective interventions performed with people proposed for stoma construction or living with an intestinal elimination stoma.

A literature review was performed in the Web of Science, CINAHL Plus with Full Text, CINAHL Complete and Scopus databases, based on the Joanna Briggs Institute model for scoping reviews, from inception to April 2017. Two independent reviewers analysed article relevance and performed data extraction and synthesis.

A total of 1,728 articles were identified and only 17 were included for content analysis. It was not possible to find a program covering all phases of perioperative and post-discharge intervention, as the studies focused essentially on one or two specific moments. Among the interventions mentioned in the literature, the most cited were: stoma site marking; preoperative education; postoperative education; and nursing follow-up after hospital discharge. However, there is still no evidence to suggest the timing, methodology and contents that should guide the implementation of each of these interventions.

Systematized nursing care for the person with an intestinal elimination ostomy, covering the perioperative period and follow-up after discharge, has a significant impact on adaptation to the stoma, reduces complications, and increases the perception of self-efficacy as well as quality of life. It is imperative to create and test an intervention program covering all these phases and all the interventions mentioned in the literature. Furthermore, additional studies should be carried out to determine the defining characteristics of these interventions, to support the decision-making process and the nurses' performance.

Ostomy, Nursing Care, Cecostomy, Colostomy, Ileostomy.

P89 Nurses' competencies in catastrophes and disaster nursing

Patrícia MG Godinho, Maria T Leal; correspondence: Patrícia MG Godinho ([email protected]).

Disasters and catastrophes are unpredictable multi-victim events, leading to a sudden demand for emergency health care. An adequate response requires multidisciplinary professionals who are experienced and specialized in the field. There is evidence that nurses are key elements that can contribute positively in a catastrophic situation, because of their broad care-giving skills, which can be applied in a variety of disaster settings with high levels of creativity, adaptability, and leadership [1]. Nevertheless, nurses need to be competent in disaster nursing in order to make a difference [2].

To review the available evidence regarding nursing competencies/interventions that improve victims’ outcomes in the context of catastrophe or multi-victim emergencies.

We completed an integrative review of the literature available in the MEDLINE and CINAHL databases and grey literature, to answer the question: “Which nursing competencies/interventions contribute to improving victims’ outcomes in the context of catastrophes or multi-victim emergencies?” [3].

Eight articles, published between 2012 and 2016, satisfied the search criteria and were analysed. The number of participants varied between 16 and 620 nurses. Most of the articles demonstrate that nurses’ competencies have gaps in this field, due to a lack of knowledge, training, and simulation, both in nursing education and in working contexts. One of the studies emphasized that nurses from military hospitals have superior knowledge and preparation for catastrophes compared with nurses from civilian hospitals, given their military training [4]. Academic training and the inclusion of catastrophe training in the nursing curriculum are also addressed in three of the analysed articles [4–6]. The failure of hospital administrations and nursing leadership to promote training and regular disaster exercises and simulations is also evidenced in three articles [5,7,8].

Globally, the results of this integrative review show that most nurses do not have enough training in disaster nursing and are not prepared to respond adequately in a mass-casualty event. The recommendation is that, both in academic and work contexts, regular training and simulations should be part of disaster preparedness.

1. World Health Organization, International Council of Nurses, editors. ICN Framework of Disaster Nursing Competencies. Geneva: ICN & WHO; 2009.

2. Loke AY, Fung OWM. Nurses’ Competencies in Disaster Nursing: Implications for Curriculum Development and Public Health. In: Kapur GB, Baéz AA, editors. International disaster health care: preparedness, response, resource management, and education. Oakville: Apple Academic Press; 2017. p. 185–203.

3. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual: 2014 Edition [Internet]. Adelaide: The Joanna Briggs Institute; 2014. 197 p. Available from: http://joannabriggs.org/assets/docs/sumari/ReviewersManual-2014.pdf

4. Thobaity A, Plummer V, Innes K, Copnell B. Perceptions of knowledge of disaster management among military and civilian nurses in Saudi Arabia. Australas Emerg Nurs J. 2015 Aug;18(3):156–64.

5. Labrague LJ, Yboa BC, McEnroe-Petitte DM, Lobrino LR, Brennan MGB. Disaster Preparedness in Philippine Nurses. J Nurs Scholarsh. 2016 Jan;48(1):98–105.

6. Khalaileh MA, Bond E, Alasad JA. Jordanian nurses’ perceptions of their preparedness for disaster management. Int Emerg Nurs. 2012 Jan;20(1):14–23.

7. Baack S, Alfred D. Nurses’ preparedness and perceived competence in managing disasters. J Nurs Scholarsh. 2013 Sep;45(3):281–7.

8. Li YH, Li SJ, Chen SH, Xie XP, Song YQ, Jin ZH, et al. Disaster nursing experiences of Chinese nurses responding to the Sichuan Ya’an earthquake. Int Nurs Rev. 2017 Jun;64(2):309–17.

Disaster nursing, Nursing competencies, Catastrophe, Multi-victim emergencies, Mass casualty events.

P90 Patient safety culture: the same functional typology, distinct cultures

Vanda Pedrosa ([email protected]).

Fostering a culture of safety in health organizations should begin by evaluating the current culture. In Primary Health Care, patient safety is particularly important because a considerable proportion of the safety incidents detected in hospitals originate at earlier levels of the system, where most interactions and the largest volume of appointments of the functional units occur. At this level of health care, family health units provide accessible, global and longitudinal follow-up of the health process throughout life, enabling greater health gains and greater proximity to the patient. These are elementary health care units based on multi-professional teams composed of doctors, nurses and administrative staff. Still, many are at very different levels in terms of patient safety culture, although in all of them the patient wants safety.

To describe the patient safety culture of health professionals from two family health units (USF) belonging to the same Health Centre, in order to understand the similarities and/or differences between them.

Qualitative study, with semi-structured interviews (of GPs and nurses) at two USFs, models A and B, of one Health Centre in the Lisbon area. Content analysis was supported by the maturity levels of a patient safety culture, with five levels ranging from 1 (worst culture) to 5 (best culture).
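
As a minimal sketch of how interview responses can be aggregated into a unit-level maturity score (the dimensions and ratings below are hypothetical, not the study's actual coding scheme):

```python
# hypothetical maturity ratings (1-5) assigned to interview dimensions
usf_a = {"reporting": 2, "communication": 2, "leadership": 1,
         "learning": 2, "teamwork": 2}
usf_b = {"reporting": 4, "communication": 4, "leadership": 4,
         "learning": 4, "teamwork": 4}

# average the per-dimension ratings into one score per functional unit
for name, ratings in (("USF model A", usf_a), ("USF model B", usf_b)):
    score = sum(ratings.values()) / len(ratings)
    print(f"{name}: mean maturity level {score:.1f} of 5")
```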

By mapping the responses onto the maturity levels of the patient safety culture of the two functional units, it was observed that the culture oscillated between a score of 1.8 in USF model A, close to a reactive culture in which the organization only cares about safety when problems occur, and a score of 4 in USF model B, close to a proactive culture that takes patient safety measures even without adverse events, near the ideal of an informed and concerned team.

The functional units have the same typology and belong to the same Health Centre, but their patient safety cultures differ in maturity, greater in model B and lesser in model A. In other words, patient safety is not viewed from the same perspective, even though the units operate in the same geographical area. More and better evaluation, information and training are needed for the safety culture to develop.

Functional units, Primary Health Care, Patient Safety Culture, Education and Training in Patient Safety.

P91 Online opinion leaders and weight loss: a literature review based model

Inga Saboia 1, Ana M Almeida 2; 1 Universidade Federal do Ceará, 60020-181 Fortaleza, Ceará, Brasil; 2 Universidade de Aveiro, 3810-193 Aveiro, Portugal; correspondence: Inga Saboia ([email protected]).

Online opinion leaders shape the trends of the current Web 2.0 [1] and eHealth context [2]. In 2013, in the USA, 72% of netizens sought out others with the same health problem [3]. This illustrates a network of users who mutually influence health behaviours and health decisions, creating a new setting of increasing digital health literacy [4], in which patients become agents of their own treatment. One of the most researched topics is weight control [3]. The subject of this study is related to public health, specifically obesity, the epidemic of the 21st century [5].

This study intends to build an analysis model based on a literature review. This model is expected to support a deeper understanding of the role of opinion leaders on social networks, particularly those whose sphere of influence acts on weight loss.

A literature review was conducted through a systematic mapping [6]. The databases searched were Web of Science, Scopus, PubMed, and Google Scholar. The keywords were: Opinion leader OR Digital Influencer OR Powerful Patient OR Community AND Nutrition OR Obesity OR Weight AND Behaviour change, from 2012 to 2017.

Opinion leadership relates to the degree to which an individual can influence others, according to their characteristics and practices, building interpersonal ties [7]. The types of opinion leaders in the current Web 2.0 context are professionals and non-professionals, the latter being mainly patient opinion leaders (POLs) [8] or digital influencers [9]. POLs are patients who share content as support for others [8]. Besides these agents, we see the rise of health professionals who connect directly with their audience [9]. These have conquered the media by promoting the “right way to eat” and legitimating themselves through a scientific discourse [10]. In the digital influencers context, a dichotomy arises: they can be seen as a threat to public health, or as partners fostering communication between doctors and patients [9]. Furthermore, social networks also play an important role in this scenario, as they connect people with a common purpose (weight loss) [10-15].

This study identified two types of opinion leaders: health professionals and non-professionals. The two types behave differently in social networks, but both play an important role in influencing their followers' weight loss experience.

Online opinion leaders, Online social network, Digital influencers, Patient opinion leader, Digital literacy in health, Nutrition, Public health, Obesity, eHealth, Web 2.0.

P92 Pharmacotherapeutic follow-up in institutionalized elderly

Ana Grou 1, Carmen Monteiro 2, Jorge Balteiro 1; 1 Escola Superior de Tecnologia da Saúde de Coimbra, Instituto Politécnico de Coimbra, 3046-854 Coimbra, Portugal; 2 Farmácia Luciano e Matos, 3000-142 Coimbra, Portugal.

Pharmacotherapy follow-up is one of the best methods to diminish health problems and medicine therapy-associated morbidity. This procedure aims to improve the results obtained at the clinical level and to optimize therapeutic plans.

This work aimed to perform the pharmacotherapy follow-up of elderly residents in long-stay institutions, to identify their prevalent pathologies and trends in medicine consumption, and to identify and solve negative outcomes associated with medication (NOM).

Pharmacotherapy follow-up procedures were applied to 38 elderly residents in a long-stay institution. The population analysed reported suffering from a total of 212 pathologies, the majority concerning the circulatory system (n=45) and mental and behavioural disorders (n=38). This population consumes 273 medicines daily, most of them targeting the nervous and cardiovascular systems (n=93 and n=74, respectively).

During the pharmacotherapy follow-up interviews, 88 NOM were identified. Upon pharmacist intervention, 52 of the identified NOM were solved or placed under control. During this pharmacotherapy follow-up procedure, 131 interventions were performed.

The establishment of pharmacotherapy follow-up is greatly advantageous for patients, particularly elderly ones. This procedure optimized the results of the therapeutic plan and decreased the impact of, and solved, NOM.

Pharmacotherapy Follow-up, Elderly Residents in Long-Stay Institutions, Negative Outcomes Associated with Medication, Pharmacist Intervention.

P93 The person with ostomy of intestinal elimination: social representation of nurses

Joana Pinho, Tânia Jesus, Liliana Mota; correspondence: Liliana Mota ([email protected]).

With the surgical formation of an intestinal elimination ostomy, the person is challenged to develop a set of self-care skills to guarantee quality of life throughout the health/illness transition process, and the nurse should act as a facilitator in this process [1]. Nurses have an important role in preparing these patients to return home. This study intends to capture the essence of nurses' points of view about people with an ostomy of intestinal elimination.

To describe the social representation of nurses about the person with intestinal elimination ostomy.

We conducted a qualitative, descriptive-exploratory study. Data were collected with an online form containing five questions focused on the aim of the study; each participant answered each question with five words according to their perception. The sample was a non-probabilistic convenience sample of 64 nurses who answered the online form. Data were collected during November 2017 by sending emails to all nurse contacts in the nursing school's databases. Anonymity was preserved. Data analysis was computed in IRAMUTEQ, where we performed a classic lexicographical analysis.
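
The word-frequency step underlying a classic lexicographical analysis can be sketched as follows (the study used IRAMUTEQ; the five-word responses below are hypothetical):

```python
from collections import Counter

# hypothetical five-word free associations from three respondents
responses = [
    ["smell", "stoma", "self-care", "support", "teaching"],
    ["acceptance", "smell", "family", "self-care", "knowledge"],
    ["support", "teaching", "self-care", "capacity", "smell"],
]

# flatten all answers and count word occurrences
freq = Counter(word for answer in responses for word in answer)
for word, count in freq.most_common(5):
    print(f"{word}: {count}")
```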

Participants were on average 33.08 (± 8.83) years old (range 22 to 58 years). The majority (76.6%) were female and 84.4% of participants held a degree. When nurses think of an ostomy of intestinal elimination, they think of the characteristics of the stoma, among which smell plays an important role. Regarding the person with an intestinal ostomy, nurses are centred on self-care. Nurses consider support (emotional and family) and teaching to be the most important needs of the person with an intestinal ostomy. When they think of the care of these persons, they focus on capacity and knowledge. The preparation to return home is centred on the acceptance of the disease and on the relationship between nurse and patient.

The social representation of nurses about the person with an intestinal elimination ostomy is focused on emancipatory patterns of nursing. The person is the centre of care, and the care plan is focused on helping the person to live well with this new condition. These results are an important contribution to enhancing practice and demonstrating the relevance of nursing in the health/illness transitions of the person with an intestinal elimination ostomy.

1. Mota M, Gomes G, Petuco V, Heck R, Barros E. Facilitadores do processo de transição para o autocuidado da pessoa com estoma: subsídios para enfermagem. Revista da Escola de Enfermagem USP. 2015. 49(1):82-88.

Ostomy, Intestinal elimination, Social representation, Nursing.

P94 A synthesis of Portuguese studies regarding infertile patients

Joana Romeiro, Sílvia Caldeira; Institute of Health Sciences, Universidade Católica Portuguesa, 1649-023 Lisbon, Portugal; correspondence: Joana Romeiro ([email protected]).

Infertility is clinically defined as the inability to conceive and achieve a successful clinical pregnancy after 12 months of regular, unprotected sexual intercourse [1]. In 2010, 48.5 million couples worldwide were reported to have fertility problems [2], affecting both genders in 40% of the cases [3]. Due to this high and broad prevalence, infertility is acknowledged as a public health issue requiring prioritized intervention [4]. The prevalence in the Portuguese population was first established in 2009, when a study estimated that about 260-290 thousand individuals were infertile and that approximately 9% to 10% of couples displayed some type of reproductive impairment [5]. These results triggered scientific interest in the study of Portuguese infertile patients, and a synthesis of the published Portuguese studies regarding infertility seems important for understanding and caring for these patients.

To review scientific health empirical research in the study of Portuguese infertile patients.

Literature review based on a search conducted in December 2017. A total of 12 scientific databases were searched: CINAHL with Full Text, MEDLINE with Full Text, MedicLatina, Academic Search Complete, PubMed, Web of Science, LILACS, SciELO, RCAAP, as well as the ESENFC, Nursing School of Lisbon and Nursing School of Porto databases. No date limit was applied. Studies considered eligible for inclusion were primary studies of Portuguese samples of male or female individuals and/or couples with reproductive impairment, available in full-text format and published in peer-reviewed journals in English, Spanish or Portuguese.

A total of 2,052 results were identified and 101 papers were included. Empirical research on infertile couples started to be published in 1995. To date, 2013 was the year with the highest publication rate (13.8%), with the psychological aspects of the infertility experience being the most explored (57.4%) in comparison with other health aspects, such as those related to nursing (2.9%) and psychiatry (0.9%). Primary studies were published in international journals (53.4%), as original papers (62.3%), and in thesis format (37.6%).

Despite the developments in health research regarding infertile couples, a significant knowledge gap remains, particularly concerning health disciplines other than psychology. This seems to be a global tendency in healthcare, and further investigation is needed to fully understand this phenomenon and consequently allow the provision of effective patient-centred care to these patients.

1. Zegers-Hochschild F, Adamson GD, de Mouzon J, Ishihara O, Mansour R, Nygren K, van der Poel S. The International Committee for Monitoring Assisted Reproductive Technology (ICMART) and the World Health Organization (WHO) Revised Glossary on ART Terminology, 2009. Human Reproduction. 2009, 24(11), 2683–2687.

2. Mascarenhas M N, Flaxman S R, Boerma T, Vanderpoel S, Stevens G A. National, Regional, and Global Trends in Infertility Prevalence Since 1990: A Systematic Analysis of 277 Health Surveys. PLOS Medicine. 2012, 9(12), 1–12.

3. NICE. Fertility, Assessment and treatment for people with fertility problems. 2013. National Institute for Health and Care Excellence.

4. Centers for Disease Control and Prevention. National Public Health Action Plan for the Detection, Prevention, and Management of Infertility. U.S. Department of Health and Human Services. 2014. Retrieved from http://www.cdc.gov/reproductivehealth/infertility/pdf/drh_nap_final_508.pdf

5. Carvalho JLS, Santos A. Estudo Afrodite: Caracterização da Infertilidade em Portugal. Porto: Faculdade de Medicina da Universidade do Porto, Sociedade Portuguesa de Medicina da Reprodução, KeyPoint; 2009. p. 74.

Infertility, Health, Evidence-based, Review.

P95 Knowledge and consumption of vitamin and food supplements among sports and physical exercise practitioners in Coimbra

Adriana Ferreira, Clara Rocha, Jorge Balteiro.

The demand for healthy lifestyles, the concern with health and well-being, and the relentless pursuit of the “ideal body” have been increasing in recent years, as has the prevalence of supplement use, as compensation for an unbalanced diet and in the search for physical/psychological intensity.

In order to evaluate the consumption of and knowledge about vitamin and dietary supplements in Coimbra, a sample of 333 individuals practising sports was studied.

The study lasted nine months, and information was collected through a questionnaire. The study found that 201 (60.4%) subjects had consumed supplementation, with a higher consumption prevalence in males (73.3%). Supplement use was highest among individuals aged 33 to 40 years. The most consumed type of vitamin supplement was multivitamins with minerals (44.3%), and the most consumed food supplement was protein (69%). The most cited reason for consuming supplements was “physical and/or intellectual fatigue” (50.5%). Daily supplementation was frequent (33.7%), with monthly expenditure on supplements most commonly between €10 and €20. The Internet was both the place of purchase and the source from which subjects obtained knowledge about supplements. Knowledge on the subject was classified as “insufficient” by 45.8% of respondents.

In Portugal, the prevalence of supplement consumption is still unknown, so it is necessary to raise awareness among the population about the potential risks associated with improper supplementation, special diets and unbalanced exercise.

Vitamins supplements, Food supplements, Consumption, Knowledge.

P96 Stability of paediatric oral diazepam suspensions

Patrícia Marinho 1, Patrícia Correia 2; 1 Escola Superior de Saúde do Porto, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; 2 Centro de Investigação em Saúde e Ambiente, Escola Superior de Saúde, Instituto Politécnico do Porto, 4200-072 Porto, Portugal; correspondence: Patrícia Marinho ([email protected]).

Currently, hospital pharmacies prepare formulations that adjust medication to the needs of each patient when the pharmaceutical industry is not able to respond to those needs [1]. One of the formulations produced in the hospital pharmacy is a 0.4 mg/ml diazepam suspension for paediatric use, obtained from diazepam tablets. However, the use of tablets or powders in oral liquid formulations may alter the stability of the active ingredients, so these formulations should be submitted to stability studies [2]. Since information on the stability of compounded oral suspensions is scarce [3], this study is relevant.

The main goal of this study is to validate a method for diazepam quantification in suspensions. Additionally, we aim to evaluate the stability of diazepam in suspensions during the 30 days after preparation, in order to establish an expiration date.

The method for quantifying diazepam in oral suspensions was adapted from the method described in the Portuguese Pharmacopoeia [4] for the same active ingredient in tablets. After the method's validation, the stability of diazepam was evaluated weekly over 30 days, with the first analysis performed immediately after preparation of the suspension. During the study period, suspensions were stored refrigerated (4°C).

With an accuracy, evaluated by a mean recovery, of 80%, and a precision, evaluated by the coefficient of variation, between 6.1% and 11.5%, the method proved practicable. Two suspension samples were prepared with similar diazepam concentrations (0.43 mg/ml). The stability study of these suspensions showed that the diazepam concentration decayed linearly and that the suspensions lose about 70% of their active principle within 30 days. Moreover, given the limits indicated by the Portuguese Pharmacopoeia [4] for diazepam tablets, these suspensions were found to comply with those limits only up to 7 days, no longer meeting them within the validity period usually established.
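
The shelf-life estimate implied by the linear decay reported above can be sketched with a simple regression (the weekly assay values below are hypothetical, and the 90%-of-initial-concentration limit is an assumption, not the Pharmacopoeia's exact criterion):

```python
import numpy as np

# hypothetical weekly diazepam assays (mg/ml) over 30 days
days = np.array([0, 7, 14, 21, 30])
conc = np.array([0.43, 0.36, 0.28, 0.21, 0.13])

# fit a straight line: conc = slope * day + intercept
slope, intercept = np.polyfit(days, conc, 1)

# day on which the concentration falls to 90% of the initial value
limit = 0.9 * conc[0]
shelf_life_days = (limit - intercept) / slope
print(f"estimated shelf-life: {shelf_life_days:.1f} days")
```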

Despite all limitations, the adapted method proved practicable, and the results point to a possible instability of diazepam when included in this oral suspension formulation. Given the dosage limits set for diazepam tablets [4], and knowing that the validity period usually attributed to the suspension is 15 days, the results point to a new shelf-life of approximately 7 days. However, for a more reliable validity period to be established, a more detailed stability study is required.

1. Patel VP, Desai TR, Chavda BG, Katira RM. Extemporaneous dosage form for oral liquids. Pharmacophore, 2011, 2(2), 86-103.

2. Schlatter J, Bourguignon E, Majoul E, Kabiche S, Balde IB, Cisternino S, Fontan JE. Stability study of oral paediatric idebenone suspensions. Pharmaceutical Development and Technology, 2016, 22(2), 296-299.

3. Ensom M H H, Kendrick J, Rudolph S, Decarie D. Stability of Propranolol in Extemporaneously Compounded Suspensions. The Canadian Journal of Hospital Pharmacy, 2013, 66(2), 118–124.

4. INFARMED. Farmacopeia Portuguesa. 8ª edição. 2008, 1925-1926; 2224-2225.

Diazepam suspensions, Chemical stability, Validation tests, Dosing method, Expiration date.

P97 First-time grandparents and transition to grandparenthood: integrative review of the literature

Sónia Coelho 1, Rogério Rodrigues 2, Isabel Mendes 2; 1 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 2 Nursing School of Coimbra, 3046-051 Coimbra, Portugal.

Nowadays families are becoming smaller, yet a family now involves several generations (even if they do not live together). Family members' roles change, and the transition of first-time grandparents to grandparenthood needs to be understood.

To systematize an integrative review of the literature related to the transition to grandparenthood in contemporary Western societies.

We conducted an integrative review of the literature in electronic databases (EBSCO®, b-On® and Web of Knowledge®) in order to answer the question: “How is the transition to grandparenthood experienced?” The search was limited to articles published between 2006 and 2016, with the descriptors “grandparents” and “transition”, in English, French, Portuguese or Spanish.

After analysing the abstracts of 179 articles and excluding duplicates and those that did not answer the original question, 13 articles were included in the integrative review. The methodological approach of the studies was level 4; only descriptive and qualitative (non-experimental) studies were included. The results of the literature review were grouped into five themes: grandparenting and gender; becoming a grandfather/grandmother; grandparenting and the transition process; role and health; grandparenting and intergenerational relations.

It was found that the transition to grandparenthood is mostly studied in risk situations and more often in women than in men. Grandparenthood can be seen as a transition or as an adaptive process; as a search for the meaning of life; as an opportunity for personal growth; and as a normative event involving both positive and negative emotions and cognitions. The process of becoming a grandparent can be considered an event of great social impact. Grandparents see their grandchildren as their extension in time, which gives them a more positive view of aging. The perception that grandparents have of themselves may be important in promoting positive and healthy aging.

Grandparenthood, Grandparents, Transition.

P98 Intestinal microbiota - impact on host health

Nastasia Fernandes, Alice Nunes, Maria José Alves; Polytechnic Institute of Bragança, 5300-253 Bragança, Portugal; correspondence: Nastasia Fernandes ([email protected]).

It is now known that, in addition to establishing and maintaining normal intestinal health, the intestinal microbiota can exacerbate a multitude of diseases, ranging from colorectal cancer to autoimmune and allergic diseases [1]. Interest in studying the human microbiome, its diversity and human-microorganism interactions has grown in recent years, and an immense amount of information in this area has become available.

Bibliographic review of the intestinal microbiota: its constitution, what affects it, and its influence on the triggering of some pathologies.

A comprehensive search was performed in PubMed, yielding 112 articles, of which 67 were used in this review.

The intestinal microbiota is considered a “superorganism” and is extremely complex. It is composed of a great diversity of microorganisms, which varies among individuals; however, it is essentially dominated by two phyla, the Bacteroidetes and the Firmicutes. It is dynamic and can be affected by several factors such as diet [2], breastfeeding [3], use of antibiotics [4,5] and type of delivery [6,7]. When an imbalance of the microbiota occurs, known as dysbiosis [1,8], the host is affected, with relations to pathologies such as allergies, obesity and Crohn's disease. Some studies [9-14] have demonstrated that the microbiota participates in the maturation of the immune system and is therefore central to the response to infectious processes. The intestinal microbiota also seems to play a fundamental role in the prevention of allergies [15-26]: inappropriate colonization after birth and excessive hygiene during childhood may promote greater allergic reactions, and rodents without bacteria have been shown to present more severe allergic reactions [22-26]. Regarding obesity, several authors [27-30] have demonstrated that changes in the microbiota are strongly related to the establishment of obesity. The type of microbiota has been shown to influence obesity: rodents with higher amounts of Firmicutes relative to Bacteroidetes present a greater capacity to promote fat deposition, whereas a switch to a less caloric diet produced a change in the microbiota, with a decrease in Firmicutes and an increase in Bacteroidetes. These results are surprising and suggest that future obesity control may target the type of intestinal microbiota.

The intestinal microbiota is of great relevance because it protects against external factors and against the development of certain pathologies. It is therefore important to keep the population informed, so that a microbiota considered “normal” can be maintained from childhood to adulthood.

1. Parnell JA, Reimer RA. Prebiotic fiber modulation of the gut microbiota improves risk factors for obesity and the metabolic syndrome. Gut Microbes. 2012;3(1):29-34.

2. David LA, Maurice CF, Carmody RN, Gootenberg DB, Button JE, Wolfe BE. Diet rapidly and reproducibly alters the human gut microbiome. Nature. 2014;505(7484):559-563.

3. Cox LM, Blaser M. J. Antibiotics in early life and obesity. Nat Rev Endocrinol. 2015;11(3):182-190.

4. Clemente JC, Ursell LK, Parfrey LW, Knight R. The impact of the gut microbiota on human health: an integrative view. Cell. 2012;148(6):1258-1270.

5. Jernberg C, LĂśfmark S, Edlund C. Long-term impacts of antibiotics exposure on the human intestinal microbiota. Microbiology. 2010;156(Pt11):3216-3223.

6. Adlerberth I, Strachan DP, Matricardi PM, Ahrne S, Orfei L, Aberg N, et al. Gut microbiota and development of atopic eczema in 3 European birth cohorts. J Allergy and Clin Immunol. 2007;120(2):343–50.

7. Gronlund MM, Lehtonen OP, Eerola E, Kero P. Fecal microflora in healthy infants born by different methods of delivery: permanent changes in intestinal flora after cesarean delivery. J. Pediatr Gastroenterol Nutr. 1999;28(1):19–25.

8. Blumberg R, Powrie F. Microbiota, Disease, and Back to Health: A Metastable Journey. Sci Transl Med. 2012;4(137):137rv7.

9. Swidsinski A, Loening-Baucke V, Lochs H, Hale LP. Spatial organization of bacterial flora in normal and inflamed intestine: a fluorescence in situ hybridization study in mice. World J Gastroenterol. 2005;11(8):1131-1140.

10. Hartstra AV, Bouter KE, Backhed F, Nieuwdorp M. Insights into the role of the microbiome in obesity and type 2 diabetes. Diabetes Care. 2015;38(1):159-65.

11. Chow J, Lee SM, Shen Y, Khosravi A, Mazmanian SK. Host-bacterial symbiosis in health and disease. Adv Immunol. 2010;107:243–274.

12. O'Hara AM, Shanahan F. The gut flora as a forgotten organ. EMBO Rep. 2006;7:688–693.

13. Purchiaroni F, Tortora A, Gabrielli M, Bertucci F, et al. The role of intestinal microbiota and the immune system. Eur Rev Med Pharmacol Sci. 2013;17(3):323-33.

14. Round JL, Mazmanian SK. The gut microbiota shapes intestinal immune responses during health and disease. Nat Rev Immunol. 2009;9:313–323.

15. Bach JF. The effect of infections on susceptibility to autoimmune and allergic diseases. N Engl J Med. 2002;347:911-20.

16. Pelucchi C, Galeone C, Bach JF, La Vecchia C, Chatenoud L. Pet exposure and risk of atopic dermatitis at the pediatric age: a meta-analysis of birth cohort studies. J Allergy Clin Immunol. 2013;132(3):616-622.e7.

17. Stiemsma LT, Turvey SE. Asthma and the microbiome: defining the critical window in early life. Allergy Asthma Clin Immunol. 2017;13:3.

18. Chieppa M, Rescigno M, Huang AY, Germain RN. Dynamic imaging of dendritic cell extension into the small bowel lumen in response to epithelial cell TLR engagement. J Exp Med. 2006;203(13):2841–52.

19. Ignacio A, Morales CI, Camara NO, Almeida RR. Innate sensing of the gut microbiota: modulation of inflammatory and autoimmune diseases. Front Immunol. 2016;7:54.

20. Round JL, Lee SM, Li J, Tran G, Jabri B, Chatila TA, et al. The Toll-like receptor pathway establishes commensal gut colonization. Science. 2011;332(6032):974-977.

21. Hessle C, Hanson LA, Wold AE. Lactobacilli from human gastrointestinal mucosa are strong stimulators of IL-12 production. Clinical and Experimental Immunology. 1999;116(2):276-282.

22. Herbst T, Sichelstiel A, Schar C, Yadava K, Burki K, Cahenzli J, et al. Dysregulation of allergic airway inflammation in the absence of microbial colonization. Am J Respir Crit Care Med. 2011;184(2):198–205.

23. Trompette A, Gollwitzer ES, Yadava K, Sichelstiel AK, Sprenger N, Ngom-Bru C, et al. Gut microbiota metabolism of dietary fiber influences allergic airway disease and hematopoiesis. Nat Med. 2014;20(2):159–66.

24. Schuijs MJ, Willart MA, Vergote K, Gras D, Deswarte K, Ege MJ, et al. Farm dust and endotoxin protect against allergy through A20 induction in lung epithelial cells. Science. 2015;349(6252):1106–10.

25. Kumar H, Lund R, Laiho A, Lundelin K, Ley RE, Isolauri E, et al. Gut microbiota as an epigenetic regulator: pilot study based on wholegenome methylation analysis. MBio. 2014;5(6):e02113–4.

26. Thorburn AN, McKenzie CI, Shen S, Stanley D, Macia L, Mason LJ, et al. Evidence that asthma is a developmental origin disease influenced by maternal diet and bacterial metabolites. Nat Commun. 2015; 6:7320.

27. Turnbaugh PJ, Ley RE, Mahowald MA, Magrini V, Mardis ER, Gordon JI. An obesity-associated gut microbiome with increased capacity for energy harvest. Nature. 2006;444(7122):1027–1031.

28. Bäckhed F, Ding H, Wang T, Hooper LV, Koh GY, Nagy A, et al. The gut microbiota as an environmental factor that regulates fat storage. Proc Natl Acad Sci U S A. 2004; 101:15718–23.

29. Schwiertz A, Taras D, Schäfer K, Beijer S, Boss NA, Donus C, Hardt PD. Microbiota and SCFA in lean and overweight healthy subjects. Obesity (Silver Spring). 2010;18:190–195.

30. Bäckhed F, Manchester JK, Semenkovich CF, Gordon JI. Mechanisms underlying the resistance to diet- induced obesity in germ-free mice. Proc Natl Acad Sci U S A. 2007; 104:979–84.

Intestinal microbiota, Microbiome, Immune system, Dysbiosis, Obesity, Allergies, Crohn's disease.

P99 Effects of aging on neuromuscular activity during the performance of a ballistic motor skill

António M VencesBrito 1,2,3,4, Mário A Rodrigues-Ferreira 1,2,3; 1 Escola Superior de Desporto de Rio Maior, Instituto Politécnico de Santarém, 2040-413 Rio Maior, Portugal; 2 Unidade de Investigação do Instituto Politécnico de Santarém, 2040-413 Rio Maior, Portugal; 3 Centro de Investigação em Qualidade de Vida, 2040-413 Rio Maior, Portugal; 4 International Martial Arts and Combat Sports Scientific Society, Rzeszów, Poland; correspondence: António M VencesBrito ([email protected]).

Human aging leads to a progressive decline of biological functions that affects the neuromuscular system. Sport practice is associated with health maintenance and a better quality of life in older people.

The aim of this study was to investigate the effects of aging on the neuromuscular reaction time and electromechanical delay during the performance of the karate frontal kick (Mae-Geri).

Nine elite karate athletes (age 21.0 ± 2.47 years; height 175 ± 6.53 cm; weight 72.0 ± 9.25 kg) and nine veteran karate practitioners (age 54.0 ± 3.87 years; height 176 ± 4.72 cm; weight 76.0 ± 9.17 kg) participated in this study. Surface electromyography was recorded from the rectus femoris (RF) and vastus lateralis (VL) portions of the quadriceps femoris, the long head of the biceps femoris (BF), the tibialis anterior (TA) and the gastrocnemius lateralis (GA). Kinematic analysis was performed with the Ariel Performance Analysis System (APAS, Ariel Dynamics-2003). The neuromuscular reaction time was defined as the time interval between the auditory stimulus and the onset of electrical activation of a muscle, while the electromechanical delay was the time interval between the onset of the electrical activity of a muscle and the beginning of joint movement. A two-tailed Student's t-test was used to analyse the differences between groups, with a significance level of p < 0.05 (SPSS 17.0).
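For readers who want to reproduce this kind of group comparison outside SPSS, a minimal Python sketch of a two-tailed independent-samples t-test is given below; the latency values are hypothetical illustrations, not the study data.

import numpy as np
from scipy import stats

# Hypothetical TA neuromuscular reaction times (ms), one value per participant
elite = np.array([122, 95, 140, 110, 150, 98, 130, 115, 138], dtype=float)
veteran = np.array([136, 180, 120, 155, 90, 160, 145, 128, 110], dtype=float)

# Two-tailed independent-samples t-test, equal variances assumed
t_stat, p_value = stats.ttest_ind(elite, veteran)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # difference is significant if p < 0.05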

A tendency toward a longer neuromuscular reaction time of the TA was observed in veteran karate practitioners compared with elite karate athletes (136.00 ± 58.80 vs 122.00 ± 45.94 ms, p = 0.566), whereas a significantly shorter neuromuscular reaction time was found in the RF of veteran karate practitioners (137.00 ± 27.93 vs 184 ± 51.55 ms, p = 0.030). Veteran karate practitioners also presented a significantly longer RF electromechanical delay than elite karate athletes (127.00 ± 59.11 vs 39.00 ± 47.68 ms, p = 0.003).

The results of the study showed that the electromechanical delay increases with the aging process, although no negative impact on the neuromuscular reaction time was observed. Continuous sport practice in veteran karate practitioners therefore seems to attenuate the effects of aging on the neuromuscular system.

Neuromuscular reaction time, Electromechanical delay, Electromyography, Karate.

P100 Nutritional status in institutionalized elderly: is it influenced by polymedication and length of stay?

Maria A Marques 1,2, Ana Faria 3,4, Marisa Cebola 2,5; 1 Santa Casa da Misericórdia de Alvaiázere, 3250-115 Alvaiázere, Portugal; 2 Faculdade de Medicina, Universidade de Lisboa, 1649-028 Lisboa, Portugal; 3 Centro Hospitalar e Universitário de Coimbra, 3000-075 Coimbra, Portugal; 4 Escola Superior de Tecnologia da Saúde de Coimbra, Instituto Politécnico de Coimbra, 3046-854 Coimbra, Portugal; 5 Escola Superior de Tecnologia da Saúde de Lisboa, Instituto Politécnico de Lisboa, 1990-096 Lisboa, Portugal. Correspondence: Maria A Marques ([email protected]).

Aging is frequently associated with conditions, such as malnutrition, that may affect the health status and quality of life of the elderly. Malnutrition has a high prevalence in institutionalized elderly [1]. Polymedication is also frequent in older individuals, impairing appetite and possibly contributing to the development of malnutrition [2].

The aim of this study was to identify nutritional risk in institutionalized elderly and establish a relationship between length of stay in the institution, pharmacotherapy and presence of malnutrition.

Date of admission, number of medications and sociodemographic data were collected from the patients' medical files. Malnutrition and nutritional risk were assessed using the Mini Nutritional Assessment – Short Form (MNA-SF).
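As an illustration of how an MNA-SF total maps to the categories reported below, here is a minimal Python sketch using the standard MNA-SF cut-offs (12-14 normal, 8-11 at risk, 0-7 malnourished); the screening totals are hypothetical.

def classify_mna_sf(score: int) -> str:
    # Standard MNA-SF categories: 0-7 malnourished, 8-11 at risk, 12-14 normal
    if not 0 <= score <= 14:
        raise ValueError("MNA-SF total must be between 0 and 14")
    if score <= 7:
        return "malnourished"
    if score <= 11:
        return "at risk of malnutrition"
    return "normal nutritional status"

# Hypothetical screening totals for three residents
for total in (6, 10, 13):
    print(total, "->", classify_mna_sf(total))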

Seventy-eight individuals, mainly female (59.0%), with a mean age of 81.7 years (SD = 10.2), were evaluated. Mean length of stay was 6.4 years (SD = 7.8). The MNA-SF classified 32 individuals (41.0%) as at risk of malnutrition and 20 (25.6%) as malnourished. According to the elderly Body Mass Index (BMI) classification, 28 were at risk of malnutrition and 23 were malnourished (35.9% and 29.7%, respectively), showing a positive correlation with the MNA-SF results (p < 0.05). Women presented an overall higher risk of malnutrition (p < 0.05). Those who were more dependent when feeding themselves were at risk of malnutrition or malnourished (p < 0.05). Fifty-seven of the subjects (73.0%) were under polymedication, with a mean of 7.5 daily medications (SD = 3.4). A higher number of regular medications was correlated with a higher BMI (p < 0.05). No association was found between malnutrition and either length of stay or number of drugs taken.

In this population, a high prevalence of risk of malnutrition was identified, particularly among women. Although previously described, a correlation between polypharmacy and malnutrition was not found; a closer look at the type of medication might be necessary. A very long length of stay was also found in this sample. Nutritional intervention in this population should begin promptly at admission and be provided regularly, to prevent the development of malnutrition and comorbidities.

1. Cereda E. Mini nutritional assessment. Curr Opin Clin Nutr Metab Care. 2012;15(1):29–41.

2. Jyrkkä J, Mursu J, Enlund H, Lönnroos E. Polypharmacy and nutritional status in elderly people. Curr Opin Clin Nutr Metab Care. 2012;15(1):1–6.

Malnutrition, Elderly, Polypharmacy.

P101 Food safety and public health in canteens of public and private educational establishments and in private institutions of social solidarity

Cristina Santos 1, Esmeralda Santos 2; 1 Department of Environmental Health, Coimbra Health School, 3046-854 Coimbra, Portugal; 2 Agrupamentos de Centros de Saúde do Baixo Mondego, 3150-195 Condeixa, Portugal. Correspondence: Cristina Santos ([email protected]).

Promoting and guaranteeing hygiene and food safety is nowadays a requirement in any service involving the provision of food, as a means of ensuring a high level of protection and consumer confidence. These changes have boosted the growth of the catering industry. However, they also require the evolution of techniques, so as to enable catering establishments and companies to offer food of quality [1-3]. Most cases of food poisoning are due to poor hygiene habits. Structural failures and ignorance or neglect of good hygiene and food safety practices may also lead to food contamination [4,5].

The sample consisted of canteens of public and private educational establishments and of public and private social solidarity institutions, totalling 26 canteens and 127 professionals. Data collection was performed using a diagnostic sheet covering the structural and operating conditions of the facilities.

Measurements of polar compounds in the canteens indicated good oil quality, except for one measurement that indicated a less satisfactory quality. In the evaluation of food temperature, it was found that some foods are served in the “danger zone” (< 65°C). School cafeterias (without on-site food preparation) mostly had deficient installation conditions, because meals were served in activity rooms. For this reason, there were no water baths or meal service facilities.

This work concluded that there are deficiencies in the structural and operating conditions of canteens/refectories, which could be addressed by the construction or enlargement of spaces. Regarding the evaluation of the quality of the oils and the temperature of the meals, flaws were found, with possible repercussions on the quality of the meals served. It is also important to develop skills for the elaboration of menus suited to the different age groups and for the preparation of healthier meals. Emphasis should be placed on the training of food handlers, in order to raise awareness of their role and responsibilities in preventing contamination. Ensuring and promoting food safety is nowadays a requirement of any institution where food is produced or distributed, as a means of promoting high levels of confidence and safeguarding the consumer's health.

1. Baptista P, Antunes C. Higiene e Segurança Alimentar na Restauração. Vol. I. Guimarães: Forvisão – Consultoria em Formação Integrada; 2005.

2. Baptista P, Antunes C. Higiene e Segurança Alimentar na Restauração. Vol. II. Guimarães: Forvisão – Consultoria em Formação Integrada; 2005.

3. Afifi HS, Abushelaibi A. Assessment of personal hygiene knowledge, and practices in Al Ain, United Arab Emirates. Food Control. 2012;25:249-253.

4. Associação da Restauração e Similares de Portugal. Higiene e Segurança Alimentar – Código de boas práticas para a restauração pública; 2006.

5. Lima VT. Educação nutricional na escola. In: Seminário de Alimentação Escolar, 3, 1999, ITAL. Resumos. Campinas, São Paulo. p. 61.

Public Health, Food Safety, Canteens, Promoting Food Safety.

P102 Evaluation of the correlation between height and spinal health in the student population aged 16-19 years - evaluation with Spinal Mouse®

Alexandra Monteiro, Nelson Azevedo, João Silva, Liliana Rodrigues, Gilvan Pacheco; Instituto Superior de Saúde do Alto Ave, 4720-155 Amares, Portugal. Correspondence: Nelson Azevedo ([email protected]).

The increasing number of postural deviations observed in the student population leads to changes in the spine's normal curvatures, which translates into greater vulnerability to mechanical stress and traumatic injuries. Although the causes of these postural deviations are diverse and difficult to analyse, the present study investigated whether students' height may be one of the factors influencing the appearance of postural alterations detected by the non-invasive Spinal Mouse® evaluation method.

To analyse differences in the incidence of postural changes in a sample of students aged 16-19 years with different heights (cm).

Eighty-five (85) students aged 16-19 years from Amares High School (Braga) were selected and submitted to a non-invasive postural evaluation with the Spinal Mouse® device, which assessed the presence of hypomobility, normal mobility and hypermobility in the sagittal plane in three zones of the vertebral column (sacral, thoracic and lumbar), as well as overall tilt in three distinct positions: orthostatic, flexion and extension. Data analysis was performed using the statistical program IBM® SPSS® (Statistical Package for the Social Sciences), version 25. In the statistical tests performed, significance levels of 0.05 (significant) and 0.01 (highly significant) were considered.
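As an illustration of the non-parametric comparison reported in the next paragraph, a Kruskal-Wallis test across the three height groups can be sketched in Python as follows; the mobility scores are hypothetical, not the study data.

from scipy import stats

# Hypothetical sagittal-mobility scores for the three height groups
short = [12.1, 10.4, 11.8, 13.0, 9.7]    # "<159 cm"
medium = [14.2, 15.1, 13.8, 16.0, 14.9]  # "159-177 cm"
tall = [17.3, 16.8, 18.1, 15.9, 17.0]    # ">177 cm"

# Kruskal-Wallis H test: do the three groups differ in distribution?
h_stat, p_value = stats.kruskal(short, medium, tall)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")  # 0.05 significant, 0.01 highly significant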

The results indicate statistically significant differences (Kruskal-Wallis test, H) in the incidence of postural deviations (sagittal plane) in the sacral zone in the flexion position (H = 6.629, p = 0.036), in overall tilt in the flexion position (H = 6.738, p = 0.046), in the thoracic zone in the extension position (H = 11.390, p = 0.003) and in the lumbar zone in the extension position (H = 6.738, p = 0.034) for the different height groups considered (“<159 cm”, “159 - 177 cm” and “> 177 cm”).

The results suggest a significant relationship between postural changes and students' height. It is therefore essential to reconsider the ergonomic design of school support materials in order to adjust them to different postures.

Spine, Postural changes, Adolescent height, Ergonomics, Spinal Mouse.

P103 The contribution of a Portuguese innovation to prevent complications in venous catheterization

Inês Cardoso 1, Anabela Salgueiro-Oliveira 1, Arménio Cruz 1, José M Martins 1, Liliana Sousa 1, Sara Cortez 2, Filipa Carneiro 3, Pedro Parreira 1; 1 Coimbra Nursing School, 3046-851 Coimbra, Portugal; 2 Muroplás - Indústria de Plásticos, 4745-334 Muro, Trofa, Portugal; 3 Innovation in Polymer Engineering, University of Minho, 4800-058 Guimarães, Portugal. Correspondence: Inês Cardoso ([email protected]).

Venous catheterization is one of the most frequent procedures in nursing clinical practice. Despite the procedure's importance for healthcare quality, it has some risks and complications, such as infiltration, bloodstream infection and phlebitis. Healthcare-associated infections are considered a worldwide problem, with a considerable impact on the health and economy of patients and communities [1,2]. According to the European Centre for Disease Prevention and Control, the prevalence of this type of infection is 6.0% in Europe and 10.6% in Portugal. Bloodstream infections related to venous catheters are among the least prevalent, but they may lead to serious consequences [2].

To explore the procedure of venous catheterization as a risk factor for infection, and to explore the role of an innovation in supporting the implementation of a preventive measure, the practice of flushing.

A literature review involved a search of EBSCOhost databases, including articles up to 2017. Some of the terms used were “infection”, “venous catheter”, “catheterization”, “catheter-related infection”, “complication”, “prevention”, “management”, “practices” and “flushing”. Articles regarding haemodialysis and urinary catheters were excluded. Guidelines of the societies regarding infusion nursing practices were also included.

Catheterization is considered a risk factor because it creates a possibility for microorganisms to access the bloodstream. The implementation of an insertion protocol is not enough to eliminate microorganisms and avoid the formation of a biofilm. Integrated in maintenance procedures, flushing contributes to reducing the risk of complications. Studies concluded that the use of an adequate flushing technique may reduce the biofilm and the probability of infection [3]. The Infusion Nurses Society [5] and the Royal College of Nursing [4] include it as a procedure to prevent complications of infusion therapy. Despite the recommendations, the implementation of this practice is not consistent. One of the reasons pointed out to explain the lack of adherence is the complexity of the task [6]. These results support the development of a Medical Device (MD), a double-chamber syringe, that may contribute to adherence to the flushing practice.

It is important to create strategies to improve adherence to guidelines in the clinical context, particularly regarding healthcare-associated infections. The development of an MD that may simplify the accomplishment of a good practice, such as flushing of venous catheters to prevent complications, is proposed in this project.

Work funded by the FEDER fund, through the Operational Programme for Competitiveness and Internationalisation (COMPETE 2020), project POCI-01-0247-FEDER-017604.

1. World Alliance for Patient Safety. The Global Patient Safety Challenge 2005-2006 “Clean Care is Safer Care”. Geneva, World Health Organization; 2005.

2. European Centre for Disease Prevention and Control. Point prevalence survey of healthcare- associated infections and antimicrobial use in European acute care hospitals. ECDC; 2013.

3. Ferroni A, Gaudin F, Guiffant G, Flaud P, Durussel JJ, Descamps P, et al. Pulsative flushing as a strategy to prevent bacterial colonization of vascular access devices. Medical Devices. 2014;7:379-383.

4. Royal College of Nursing. Standards for infusion therapy (4th ed.). London: Royal College of Nursing; 2016.

5. Infusion Nurses Society. Infusion therapy standards of practice. Journal of Infusion Nursing. 2016;39(1S):S1-160.

6. Keogh S, Shelverton C, Flynn J, Davies K, Marsh N, Rickard C. An observational study of nurses’ intravenous flush and medication practice in the clinical setting. 2017;3(1):3-10.

Venous catheter, Infection, Prevention, Flushing.

P104 Patient safety culture: evaluation of multiprofessional teams

Luciane PA Cabral 1, Daniele Brasil 1, Andressa P Ferreira 2, Clóris RB Grden 1, Caroline Gonçalves 1, Guilherme Arcaro 2; 1 Departamento de Enfermagem e Saúde Pública, Universidade Estadual de Ponta Grossa, 4748 Ponta Grossa, Paraná, Brazil; 2 Hospital Universitário Regional dos Campos Gerais, 84031-510 Ponta Grossa, Paraná, Brazil. Correspondence: Luciane PA Cabral ([email protected]).

Patient safety is an essential component of quality of care and is of utmost relevance for managers, health professionals, family members and patients in the effort to provide safe care.

The aim of the study was to evaluate the characteristics of the Patient Safety Culture among professionals of an Intensive Care Unit, through the application of the Hospital Survey on Patient Safety Culture (HSOPSC) instrument.

The study population consisted of 2% (n = 1) resident physicians, 2% (n = 1) nutritionists, 2% (n = 1) administrative assistants, 5% (n = 3) general service aides, 7% (n = 4) social workers, 8% (n = 5) dentists, 17% (n = 10) nurses, 18% (n = 11) therapists and 40% (n = 24) nursing technicians. Of these, 77% (n = 46) were female and 23% (n = 14) male. When questioned about whether their errors were recorded on their functional sheets or used against them in the future, the data were alarming. When questioned about the conduct of their supervisor or boss, a considerable number of respondents chose to abstain from comment, marking the option “I do not agree or disagree”. Regarding communication within the service, the results were partially satisfactory. Regarding the frequency of reported events, there is a lack of notifications in the sector. When asked to rate patient safety in their sector, 3% (n = 2) considered it excellent, 63% (n = 38) very good and 33% (n = 20) regular. Only 5% (n = 3) of respondents answered the open question.

The study allowed the evaluation of patient safety characteristics from the perspective of the multi-professional team of an Intensive Care Unit, indicating that there are many aspects to improve across several dimensions of the patient safety culture. Some areas show greater fragility and need a closer look, such as the lack of notifications on the part of the team, issues concerning the supervisor/chief of the sector and, above all, fear of a punitive culture.

Patient, Safety, Multi-professional.

P105 Innovation in nursing in the creation of medical devices: a Portuguese case study

Pedro Parreira 1, Inês Cardoso 1, Liliana Sousa 1, Arménio Cruz 1, José Martins 1, Sara Cortez 2, Filipa Carneiro 3, Luciene Braga 4, Anabela Sousa-Salgueiro 1; 1 Coimbra Nursing School, 3046-851 Coimbra, Portugal; 2 Muroplás - Indústria de Plásticos, 4745-334 Muro, Trofa, Portugal; 3 Innovation in Polymer Engineering, University of Minho, 4800-058 Guimarães, Portugal; 4 Federal University of Viçosa, 36570-900 Viçosa, Minas Gerais, Brazil.

The administration of intravenous medication is a frequent practice in health units (approximately 90% of hospitalized patients receive intravenous medication). Venous catheterization (peripheral or central) allows the administration of medication directly into the bloodstream through an inserted catheter. Although the flushing procedure is recommended between and after the administration of intravenous medication, it is often not performed.

In order to address this problem, a consortium with complementary experience and skills was created between the Muroplás company, the Innovation in Polymer Engineering Centre (PIEP) and the Coimbra Nursing School (ESEnfC). The consortium developed the “DUO SYRINGE”, a new Medical Device consisting of a sequential-release double-chamber syringe. This new device offers several benefits: greater adherence to flushing by health professionals, a higher level of patient safety and a significant reduction in complications.

After a literature review regarding disease control and prevention, mainly based on the Infusion Nurses Society and Royal College of Nursing guidelines, we established important characteristics for this new MD. A focus group with nurse experts identified and validated the technical characteristics of the device. After developing the preliminary geometry of the syringe through three-dimensional (3D) modelling, a new expert panel of end users evaluated the usability of the MD alpha version, based on the Technology Acceptance Model. In the future, we also intend to carry out laboratory tests for safety validation and to perform clinical studies in a hospital environment.

We identified a set of characteristics that the MD should incorporate, namely syringe size, the volume of the two chambers, the configuration of the syringe barrel and the configuration of the plunger. The literature review and the expert panel's opinion were aligned.

Clinical practice creates new challenges for Nursing daily, and it is crucial to create responses that promote better quality of health care. Identifying problems and creating technological partnerships with companies and technology centres makes it possible to innovate through the development and creation of new MDs. Clinical research with MDs allows the evaluation of safety, making clinical practice more effective and safe. This is a new challenge for health professionals. It is desirable to mobilize the available intellectual capital to generate innovations for citizens, translating into gains in the quality of care.

This work is funded by the FEDER fund through the Operational Programme for Competitiveness and Internationalisation (COMPETE 2020) within project POCI-01-0247-FEDER-017604.

Medical Device, Nursing, Innovation, Syringe.

P106 Nursing home care: nurses' perspective

Tatiana Antunes 1, Teresa Capelo 1, Anabela Salgueiro-Oliveira 1, Luciene Braga 2, Pedro Parreira 1; 1 Coimbra Nursing School, 3046-851 Coimbra, Portugal; 2 Federal University of Viçosa, 36570-900 Viçosa, Minas Gerais, Brazil.

Nursing home care is a growing trend in current society, driven by population aging and shorter hospitalization times, both for economic reasons and to prevent infections. In addition, it has a high potential to improve the quality of care by enabling the self-care of patients in their life contexts, involving their families and taking advantage of existing social support networks.

To analyse the scientific production of the last seven years concerning domiciliary nursing care from the nurses' perspective.

The method used was an integrative literature review. The research question was formulated based on the PICO strategy: What is the nurses' perspective on nursing home care? The search was conducted between 22 and 26 May 2017 in the following databases: SciELO, Medline with full text, CINAHL with full text, Academic Search Complete and Complementary Index. We only looked for primary scientific studies, published in the last six years, in English, Portuguese or Spanish. The selected descriptors were: (Home care nursing OR Domiciliary Care OR Home visits) AND Self-care AND Nurse. We accessed 1,293 scientific articles. After reading the titles, abstracts and full texts, we retained 9 studies for analysis.

The mobilization of different nursing competencies is important because of the patients' profiles and the difficulties associated with the contexts where nursing care is performed [1]. Nurses should demonstrate availability, sensitivity, education and creativity, and attend to the care needs of the person and family [2]. The success of the care process depends on the relationship between nurse and patient and/or family [3]. Unpredictability, the lack of in-home resources, distance and work overload are some of the difficulties reported by nurses [2,4].

Nurses who provide nursing home care, given the limited resources they face, must carry out rigorous care planning, mobilize and inform patients and families about social support networks, and promote self-care.

1. Sherman H, Forsberg C, Törnkvist A. The 75-year-old persons’ self-reported health conditions: a knowledge base in the field of preventive home visits. Journal of Clinical Nursing. 2012;21:3170-3182.

2. Consoni E, Salvaro M, Ceretta L, Soratto M. Os desafios do enfermeiro no cuidado domiciliar. Enfermagem Brasil. 2015;14(4):229-234.

3. Gago E, Lopes M. Cuidados domiciliares – interação do enfermeiro com a pessoa idosa/família. Acta Paulista de Enfermagem. 2012;25(1):74-80.

4. Rodrigues A, Soriano J. Fatores influenciadores dos cuidados de enfermagem domiciliários na prevenção de úlceras por pressão. Revista de Enfermagem Referência. 2011;3(5):55-63.

Patients, Home nursing care.

P107 Knowledge on pharmacogenomics: gaps and needs of educational resources

Andreia Pinho, Marlene Santos. Correspondence: Marlene Santos ([email protected]).

Pharmacogenomics is a science that aims to predict the contribution of genes to an individual's response to the administration of a drug, in order to increase the therapeutic effect and minimize Adverse Drug Reactions (ADR). Health professionals must have knowledge of the subject; however, studies point to a lack of information among future health professionals about the concepts and applications of Pharmacogenomics.

To compare the study plans of Pharmacy, Pharmaceutical Sciences and Medicine courses at a national level, to determine whether topics related to Pharmacogenomics are covered, and to assess the knowledge of students of the Degree in Pharmacy of the Escola Superior de Saúde (ESS) of Porto on the topic of Pharmacogenomics, identifying gaps and needs for educational resources among students of this course.

A questionnaire-based study was carried out: a first questionnaire was applied to the coordinators of the three courses at a national level, and a second to ESS Pharmacy students to assess their knowledge of Pharmacogenomics.

The courses devote between 2.5 hours (Pharmacy) and 60 hours (Pharmaceutical Sciences) to Pharmacogenomics. The students' knowledge of this subject increased from 15.91% in the first year to 95.92% and 97.3% in the 3rd and 4th years, respectively. Between 76% and 86% of the students were not able to identify drugs or drug-metabolizing enzymes whose activity is influenced by genetic variations. Comparing the 3 courses, the workload in the curricular plans is reduced, which is especially evident in the Pharmacy course. There is a significant increase in knowledge about Pharmacogenomics as the undergraduate years progress; the difference between the 3rd and 4th years is not significant, since this subject is taught only in the 2nd year.

The knowledge passed on to undergraduate students and future health professionals is limited, with an insufficient workload, and is not delivered uniformly at a national level. In the case of ESS Pharmacy, there was an increase of knowledge as the degree progresses, despite the few contents taught regarding Pharmacogenomics. In the future, it may be useful to create supplementary courses and training for students on this subject.

Pharmacogenomics, Knowledge, Students, Curricular plan.

P108 Influence of the rs776746 CYP3A5 gene polymorphism on response to immunosuppressant tacrolimus in patients undergoing liver transplantation: a systematic review

Cristiana Rocha, Marlene Santos.

Hepatic transplantation is a lifesaving therapy that has been increasing over the years in Portugal. Its success is largely due to the use of immunosuppressants such as tacrolimus, the first-line immunosuppressant drug for people undergoing liver transplantation. It is a drug with a narrow therapeutic window and great inter-individual variability. This variability is explained in part by polymorphisms of the CYP3A5 gene, which encodes the CYP3A5 metabolizing enzyme. The rs776746 polymorphism affects the CYP3A5 gene and gives rise to a non-functional metabolizing enzyme. The CYP3A5 gene is expressed in both the liver and the gut; that is, the metabolism of tacrolimus is affected by the genotype of the transplanted liver (donor) as well as by that of the gut (recipient). The identification of polymorphisms is especially important in the period immediately after transplantation, in order to avoid acute rejection of the organ.

The objective of this work was to review the influence of the rs776746 polymorphism of the CYP3A5 gene on the pharmacokinetics of tacrolimus.

A systematic review was conducted through a PubMed database search covering 2000 to 2017. Articles that met the study query and the inclusion and exclusion criteria were included for review.

We selected 23 articles that discuss the influence of the rs776746 polymorphism on the pharmacokinetics of tacrolimus. The evidence suggests that individuals with the CYP3A5*3 (non-expressing) allele have a decreased metabolism of tacrolimus and, consequently, higher blood concentrations of the drug compared to individuals carrying the CYP3A5*1 (expressing) allele. The recipient genotype plays a more important role in the first days after transplantation, and the donor genotype becomes more important later, when the transplanted organ begins to function properly.

This review concluded that, in hepatic transplantation, it is important to identify the polymorphisms affecting the metabolism of tacrolimus in both donor and recipient genotypes, for a more effective dose adjustment, especially in the critical period immediately after transplantation.

Transplant, Liver, Polymorphism, rs776746, Tacrolimus, CYP3A5.

P109 The FITWORK European Project - good practices to develop physical activity programs at work

Maria Campos, Alain Massart, Carlos Gonçalves, Luís Rama, Ana Teixeira; Faculty of Sport Sciences and Physical Education, University of Coimbra, 3040-156 Coimbra, Portugal. Correspondence: Maria Campos ([email protected]).

Workplace physical demands have changed widely in the last century. Nowadays, most of the jobs in the European Union (EU) have a low overall energy demand. In this context, the FITWORK project aims to develop good practices to support ergonomics and health by implementing physical activity programs designed to reduce specific ergonomic risks at the workplace. This 2-year project (2017-2018) is co-funded by the Erasmus+ Programme of the European Union and coordinated by the Instituto de Biomecánica de Valencia (IBV), Spain. The partners are the University of Coimbra (UC); Romtens Foundation, Romania; Eindhoven University of Technology (TU/e); the European Network for Workplace Health Promotion (ENWHP) and KOMAG, Poland (http://fitwork.eu/).

Therefore, the general objective of the project is to promote physical activity at work and to raise awareness among workers and health and safety professionals of the significance of health-enhancing physical activity suited to job demands. To meet this objective, FITWORK will identify good practices in occupational risk prevention through physical activity, including motivational aspects, and best practices for implementing workplace health promotion programs (WHPP).

The workout programs are being implemented in two different organizations, the Institute of Mining Technology KOMAG (Poland) and INNEX S.R.L. (Italy), with an experimental group and a control group, over six months, with the following aims: I) to identify and evaluate the worksites and the professional risks within each organization; II) to adapt the WHP Programme to every worksite, identifying the most appropriate exercises to carry out in each worksite and when the workers have to perform them; III) to monitor and collect data using specific instruments and report periodically on the development of the programme; IV) to give recommendations related to good practice and aspects for improving the implementation of the program.

The primary purposes of the analysis of the results are to validate the effect of the designed physical activity programs and to elaborate good practices guidelines in developing and implementing WHP Programs.

There is evidence that behaviour changes are ignited by a complex cocktail of perceived benefits other than health alone, but a lack of evidence still exists on the effectiveness of health promotion activities on productivity, absenteeism or wellbeing. Hence, the desired impact of this European Project is to raise awareness and to engage stakeholders and target groups, sharing solutions and know-how with professional audiences.

FITWORK, Job demands, Workplace, Physical activity programs, Erasmus+ Programme.

P110 Adventitious respiratory sounds to monitor lung function in pulmonary rehabilitation

Cristina Jácome 1,2, Joana Cruz 2,3,4, Alda Marques 2,5; 1 Center for Health Technology and Information Systems Research, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal; 2 Respiratory Research and Rehabilitation Laboratory, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 3 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 4 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 5 Institute of Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal.

Peak expiratory flow (PEF) has been traditionally used to monitor lung function in patients with chronic obstructive pulmonary disease (COPD) before pulmonary rehabilitation (PR) sessions. However, PEF mainly reflects changes in the large airways, and it is known that COPD primarily targets the small airways. Adventitious respiratory sounds (ARS - crackles and/or wheezes) are related to changes within lung morphology and are significantly more frequent in patients with acute exacerbations of COPD. Thus, ARS may also be useful for the routine monitoring of lung function during PR programs.

This study explored the convergent validity of ARS and PEF in patients with COPD.

Twenty-four (24) stable patients (66 ± 9 years; FEV1 71 ± 19% predicted) participating in a PR program were included. Assessments were conducted immediately before one PR session. The presence of ARS (crackles and/or wheezes) at the posterior right chest was first assessed by a physiotherapist using a digital stethoscope (ds32a, ThinkLabs, CO, USA). Resting dyspnoea was collected using the modified Borg scale (0-10) and PEF with a peak flow meter (Micro I, Carefusion, UK). Independent t-tests, Pearson and point-biserial correlations were used.
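A minimal Python sketch of the point-biserial correlation between a dichotomous variable (ARS present/absent) and a continuous one (PEF) is shown below; the values are hypothetical, not the study data.

import numpy as np
from scipy import stats

# Hypothetical data: ARS presence coded 0/1 and PEF in l/min, one pair per patient
ars_present = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1])
pef = np.array([300, 420, 390, 280, 450, 410, 310, 430, 260, 400, 440, 320])

# Point-biserial correlation: association between a binary and a continuous variable
r, p_value = stats.pointbiserialr(ars_present, pef)
print(f"r = {r:.2f}, p = {p_value:.3f}")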

ARS were present in 5 participants (20.8%). Patients with ARS had a lower PEF than patients without ARS (294 ± 62 l/min vs. 419 ± 128 l/min; p = 0.048). PEF was negatively correlated with the presence of ARS (r = -0.41; p = 0.048). Resting dyspnoea was negatively correlated with PEF (r = -0.41; p = 0.039), but not with ARS (r = 0.21; p = 0.32).

Findings suggest that both ARS and PEF offer complementary information before a PR session, but that ARS provide additional information on the patients' respiratory status. Further research correlating ARS and PEF with patients' performance and progression during PR is needed to strengthen the usefulness of assessing these parameters in PR.

Peak expiratory flow, Adventitious respiratory sounds, Crackles, Wheezes, Pulmonary rehabilitation.

P111 Health care services and their influence on the autonomy and quality of life of the elderly

Keila CR Pereira, Silvania Silva, Jefferson L Traebert; Universidade do Sul de Santa Catarina, 88137-272 Palhoça, Santa Catarina, Brazil. Correspondence: Keila CR Pereira ([email protected]).

The performance of Primary Care can be assessed through Ambulatory Care-Sensitive Conditions (ACSC) [1]. These are health problems whose morbidity and mortality can be reduced through resolute and comprehensive care. The performance of, and access to, the health system can delay hospitalizations of the elderly, with all the risks arising from them [2,3].

To analyse the impact of primary care actions directed at the elderly on the rate of hospitalizations for Primary Care Sensitive Conditions (PCSC).

An ecological study of hospitalizations of people over 60 years of age was conducted using hospital admission authorizations (HAA) from the Hospital Information System (HIS) for all municipalities of the state of Santa Catarina (SC) from 2008 to 2015. For the definition of the Primary Care Sensitive Conditions (PCSC), the official report published by the Ministry of Health was used [4]. The crude PCSC rate was calculated as the ratio between the number of PCSC hospitalizations in the elderly and the reference population for the period, multiplied by 10,000. Next, the PCSC hospitalization rates for the elderly were standardized by age using the direct method, with the world population [5] as the standard. To smooth the historical series, given the oscillation of the points, a three-point centred moving average was calculated. The analysis was performed with the Joinpoint program, version 4.3.1, to calculate the variation of the age-adjusted PCSC hospitalization rates in the elderly from 2008 to 2015, yielding the behaviour of the rate over the study period for each municipality of Santa Catarina.
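A minimal Python sketch of the rate calculations described above (crude rate per 10,000, direct age standardization and a three-point centred moving average) is given below; the counts, populations and standard-population weights are illustrative assumptions, not the Santa Catarina data.

import numpy as np

# Hypothetical PCSC hospitalizations and elderly population by age group (60-69, 70-79, 80+)
cases = np.array([120, 210, 340])
population = np.array([50000, 30000, 12000])
std_weights = np.array([0.55, 0.30, 0.15])  # illustrative standard-population weights

# Crude rate: PCSC hospitalizations per 10,000 elderly
crude = cases.sum() / population.sum() * 10000

# Direct standardization: weight the age-specific rates by the standard population
age_specific = cases / population * 10000
adjusted = (age_specific * std_weights).sum()

# Three-point centred moving average to smooth an annual series of rates
series = np.array([72.0, 75.5, 71.2, 80.3, 78.8, 83.1, 79.9, 85.4])
smoothed = np.convolve(series, np.ones(3) / 3, mode="valid")

print(f"crude = {crude:.1f}, age-adjusted = {adjusted:.1f} per 10,000")
print("smoothed series:", np.round(smoothed, 1))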

The analysis showed that for each percentage point of increase in the elderly population rate, the annual PCSC hospitalization rate in the elderly increased by one percentage point (R² = 0.025). The variables describing the care provided to the elderly showed no association with hospitalizations.

The individual's lifestyle may be more determinant for healthy aging than access to services once the individual has aged. For services to act effectively as a reducer of hospitalizations in the elderly, they must be offered before the aging process is established in the individual.

1. Caminal J, Starfield B, Sanchez E, Casanova C, Morales M. The role of primary care in preventing ambulatory care sensitive conditions. Eur J Public Health. 2004 Sep;14(3):246-51

2. Santos M. Epidemiologia do envelhecimento. In: Nunes I M; Ferreri R E L; Santos M. Enfermagem em geriatria e gerontologia. Rio de Janeiro: Guanabara Koogan, 2012. pp. 4-8.

3. Silveira R E, Santos A S, Sousa M C, Monteiro T S A. Gastos relacionados a hospitalizações de idosos no Brasil: perspectivas de uma década. Gestão e Economia em Saúde, São Paulo, 2013. Dec;11(4):514-520.

4. BRASIL. Ministério da Saúde. Secretaria de Atenção à Saúde. Departamento de Atenção Básica. Política Nacional de Atenção Básica / Ministério da Saúde. Secretaria de Atenção à Saúde. Departamento de Atenção Básica. – 2012. Brasília.

5. Doll R, Payne P, Waterhouse J. Cancer Incidence in Five Continents: A Technical Report. Berlin: Springer-Verlag (for UICC), 1966.

Aged, Primary Health Care, Health Promotion, Healthy Aging, Life Style.

P112 Morphological and functional cardiac changes in TAVI follow-up – evaluation through transthoracic echocardiography

Virginia Fonseca, Ana Costa, Inês Antunes, João Lobato; Escola Superior de Tecnologia da Saúde de Lisboa, 1990-094 Lisboa, Portugal.

Aortic Stenosis is a valvular disease with increasing prevalence. Transcatheter aortic valve implantation (TAVI) is a treatment option for patients who cannot undergo surgical valve replacement [1,2,3,4].

The aim of this study was to describe and compare morphological and functional cardiac changes, through transthoracic echocardiography, in the follow-up after TAVI.

Patients aged between 63 and 85 years who underwent TAVI were evaluated by transthoracic echocardiography at 24 to 72 hours and at 1 to 4 months after the procedure. The study variables selected were perivalvular regurgitation, maximum velocity and gradient, left ventricular (LV) function and dimensions, and left atrium (LA) diameter. Statistical analysis of the study variables was performed using descriptive statistics, the Shapiro-Wilk test, the Wilcoxon test and the McNemar test. Results were considered statistically significant when p < 0.05.
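The paired-comparison logic described above can be sketched in a few lines of Python; the velocities below are hypothetical, and the McNemar step (for the paired categorical variables) is omitted for brevity.

from scipy import stats

# Hypothetical maximum aortic velocities (m/s) for the same patients at both exams
first_exam = [2.10, 1.90, 2.40, 2.00, 2.20, 1.80, 2.30, 2.50]
second_exam = [2.41, 2.12, 2.58, 2.33, 2.47, 1.99, 2.42, 2.79]

# Test the paired differences for normality first; if non-normal, use the Wilcoxon test
diffs = [b - a for a, b in zip(first_exam, second_exam)]
sw_stat, p_normal = stats.shapiro(diffs)

w_stat, p_value = stats.wilcoxon(first_exam, second_exam)
print(f"Shapiro-Wilk p = {p_normal:.3f}; Wilcoxon p = {p_value:.4f}")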

A significant increase in maximum velocity and gradient was registered (p = 0.004 and p = 0.010, respectively) from the first to the second echocardiogram. There were no significant differences in LV ejection fraction, LV end-diastolic and end-systolic volumes, or LA diameter. The LV mass index decreased compared with the first echocardiogram (from 157.92 to 142.28 g/m²); however, this difference was not statistically significant. The prevalence of regurgitation (80%) was unchanged between evaluations.

Transcatheter aortic valve implantation is a relatively new procedure for aortic stenosis treatment, with morphological and functional changes in the heart [3]. The studied variables did not demonstrate significant changes, with the exception of maximum velocity and gradient. LV mass decreased on average by 15.71 g/m², which, from a clinical perspective, can have an impact on the patient's prognosis.

1. Pereira E, Silva G, Caeiro D, Fonseca M, Sampaio F, Fonseca C, et al. Cirurgia cardíaca na estenose aórtica severa: o que mudou com o advento do tratamento percutâneo? Revista Portuguesa de Cardiologia. 2013 Oct;32(10):749–56.

2. Gavina C, Gonçalves A, Almeria C, Hernandez R, Leite-Moreira A, Rocha-Gonçalves F, et al. Determinants of clinical improvement after surgical replacement or transcatheter aortic valve implantation for isolated aortic stenosis. Cardiovascular ultrasound. 2014;12:41.

3. Holmes DR, Mack MJ, Kaul S, Agnihotri A, Alexander KP, Bailey SR, et al. 2012 ACCF/AATS/SCAI/STS Expert Consensus Document on Transcatheter Aortic Valve Replacement. Journal of the American College of Cardiology. 2012;59(13):1200–54.

4. Leon MB, Smith CR, Mack M, Miller DC, Moses JW, Svensson LG, et al. Transcatheter Aortic-Valve Implantation for Aortic Stenosis in Patients Who Cannot Undergo Surgery. New England Journal of Medicine. 2010 Oct 21;363(17):1597–607.

TAVI, Transcatheter aortic valve implantation, Aortic stenosis, Transthoracic echocardiography.

P113 Literacy of family caregivers of people with Alzheimer's disease

Rui Moreira 1, Lia Sousa 2, Carlos Sequeira 3; 1 Centro Hospitalar de São João, Pólo Valongo, 4440-563 Valongo, Portugal; 2 Centro Hospitalar de São João, 4200-319 Porto, Portugal; 3 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal. Correspondence: Rui Moreira ([email protected]).

Increasing population ageing brings with it a higher incidence and prevalence of Alzheimer's disease. In addition to incapacitating the sick person, this disease has destabilising effects on the family, particularly because family members are usually the caregivers. Therefore, it is important to know how informed family caregivers are about Alzheimer's disease.

To characterise family caregivers of people with Alzheimer's disease and identify their level of knowledge about the disease.

A quantitative, cross-sectional, descriptive study was carried out. For this, 52 family caregivers of people with Alzheimer's disease were identified through a convenience sample from both private homes and day-care centres in the North region of Portugal. The questionnaire administered covered the sociodemographic characterisation of the caregivers as well as questions on different facets of the disease, ranging from pathophysiology to intervention strategies. The questionnaire was designed for this study, underpinned by consistent theoretical bases, and was sent for evaluation to five experts, all nurses with experience in gerontology/Alzheimer's disease. The questions are essentially closed, with the only open question coming at the end, concerning the difficulties experienced by the family members in caring for the patient.

Focusing on aspects related to literacy, the main results indicate that although 87% of these caregivers know how to define Alzheimer's disease, only about 30% understand what underlies it, while about 50% show some difficulties in identifying risk factors. Most of them (75%) are able to list symptoms of the disease but only half know how to keep the sick person active. It should be noted that only 38% can identify ways to preserve memory and that about 30% of family caregivers are unaware of the purpose of the medication.

It was found that there is considerable uncertainty among family caregivers about several facets of Alzheimer's disease. There was also some lack of knowledge about existing resources and support. The study also highlights the fact that the family members questioned do not often ask nurses for information relevant to the care process.

Health literacy, Family caregivers, Alzheimer’s disease.

P114 Positive effects of a health promotion program in sedentary elderly with type 2 diabetes

Luís Coelho 1,2, Nuno Amaro 1,2, João Cruz 1,2, Rogério Salvador 1,2, Paulino Rosa 1,2, Ricardo Gonçalves 1,2, Rui Matos 1,2; 1 School of Education and Social Sciences, Polytechnic Institute of Leiria, 2411-091 Leiria, Portugal; 2 Life Quality Research Centre, 2001-904 Santarém, Portugal. Correspondence: Luís Coelho ([email protected]).

The WHO projects diabetes to be the 7th leading cause of death by 2030 [1]. Physical activity, along with a healthy diet and medication, is one of the crucial options to prevent and control diabetes. About 40% of the Portuguese population aged 25 to 79 years presents diabetes or intermediate hyperglycaemia (Observatório Nacional da Diabetes, 2016) [2].

To examine the impact of a health promotion program in sedentary elderly with type 2 diabetes.

The program consisted of a 30-minute daily walking routine and a weekly educational session on healthy behaviours, for 8 weeks. All participants were medicated with insulin and anti-diabetics. Twenty-six elderly diabetics (16 male and 10 female), aged 70.1 ± 8.0 years, were assessed for Body Mass (BM), Body Mass Index (BMI), Systolic and Diastolic Blood Pressure (SBP and DBP, respectively), Waist Circumference (WC) and Capillary Glycaemia (CG). The Wilcoxon test was used for inferential analysis of repeated measures (pre-post). The significance level was set at 5%. The effect size for this test was calculated by dividing the z value by the square root of N, N being the number of observations over the two time points [3].
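The effect-size convention used here (r = z divided by the square root of N) can be reproduced as in the following Python sketch; the pre/post glycaemia values are hypothetical, and z is recovered from the two-tailed Wilcoxon p-value via the normal quantile, one common way of obtaining it.

import math
from scipy import stats

# Hypothetical capillary glycaemia (mg/dl) before and after the 8-week program
pre = [190, 175, 210, 160, 185, 200, 170, 195, 180, 205]
post = [130, 121, 144, 116, 128, 141, 119, 134, 125, 140]

w_stat, p_value = stats.wilcoxon(pre, post)

# r = z / sqrt(N), with N the number of observations over the two time points [3]
z = abs(stats.norm.ppf(p_value / 2))
n_obs = len(pre) + len(post)
r = z / math.sqrt(n_obs)
print(f"p = {p_value:.4f}, |z| = {z:.2f}, r = {r:.2f}")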

All measured parameters decreased significantly from the initial to the final assessment: BM from 80.8 ± 8.9 kg to 78.2 ± 9.0 kg (p = 0.000; r = -0.512), BMI from 30.2 ± 3.8 kg/m² to 29.1 ± 3.1 kg/m² (p = 0.000; r = -0.543), SBP from 143.4 ± 10.9 mmHg to 134.3 ± 10.4 mmHg (p = 0.002; r = -0.426), DBP from 79.5 ± 8.6 mmHg to 75.3 ± 8.7 mmHg (p = 0.035; r = -0.292), WC from 100.9 ± 7.8 cm to 96.7 ± 6.7 cm (p = 0.000; r = -0.584), and CG from 182.5 ± 56.3 mg/dl to 124.1 ± 17.7 mg/dl (p = 0.000; r = -0.588).

The inclusion of physical activity and the awareness gained from engaging in healthy behaviours seem to complement medication-based therapy in sedentary elderly with type 2 diabetes. Although physical activity was self-reported, sport sciences can play an important role in the prescription and monitoring of exercise in clinical patients. Multidisciplinary interventions in community health programs, including medical practitioners, physiologists and nutritionists, are recommended in order to achieve stronger and more consistent results.

1. World Health Organization. Diabetes Fact Sheet. Updated November 2017. http://www.who.int/mediacentre/factsheets/fs312/en/.

2. Diabetes: Factos e Números – O Ano de 2015 − Relatório Anual do Observatório Nacional da Diabetes. Sociedade Portuguesa de Diabetologia. Depósito Legal n.º: 340224/12 ISBN: 978-989-96663-2-0.

3. Pallant J. SPSS Survival Manual: A Step by Step Guide to Data Analysis using SPSS for Windows (3rd ed.). Milton Keynes, UK: Open University Press: 2007.

Health promotion, Type 2 diabetes, Active life styles, Elderly.

P115 Prevalence of childhood obesity

Fátima Frade 1, Joana M Marques 1, Luis Sousa 1, Maria J Santos 1, Fátima Pereira 1, Dora Carteiro 1, João MG Frade 2,3,4; 1 Escola Superior de Saúde Atlântica, 2730-036 Barcarena, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 4 Multidisciplinary Unit for Biomedical Research, Institute of Biomedical Sciences Abel Salazar, University of Porto, 4050-313 Porto, Portugal. Correspondence: Fátima Frade ([email protected]).

The prevalence of obesity in children and adolescents has been increasing worldwide [1, 2], having impact on children's physical, psychological and social well-being [3, 4].

To identify the prevalence of childhood obesity worldwide.

A systematic review of the literature began with the question: “What is the prevalence of childhood obesity worldwide?” The search was carried out on EBSCOhost, Google Scholar and B-On, in the scientific databases Medline/Pubmed, LILACS, CINAHL, Nursing & Allied Health Collection, Cochrane Plus Collection, MedicLatina and SciELO. The inclusion criteria were: full-text articles, in English, Portuguese or Spanish, published from 2013 to 2017. The Boolean equation used was: (Pediatric Obesity) OR (Overweight) AND (Children) AND (Prevalence). One hundred and twenty-two (122) articles were found; of these, 24 were selected after comprehensive reading.

Globally, in 2016, 41 million children under 5 years of age were overweight or obese, and 340 million children and adolescents aged 5 to 19 years were overweight/obese [5]. In 2013, in the European region, the prevalence of overweight/obesity was 31.6%, with 17.7% corresponding to pre-obesity and 13.9% to childhood obesity [6,7]. In China, the prevalence of overweight more than doubled, from 13% in 1986 to 27.7% in 2009. In the United States, 31.8% of children were overweight or obese [8]; in New Zealand, 31.7% were overweight or obese, and 2.5% were severely obese [9]. In Mexico City, 30.8% of adolescents, 24.2% of school-age children, 14.5% of infants and 11.5% of preschool children were overweight or obese [2]. In Brazil, 30.59% of the children/adolescents studied were overweight, obese or severely obese [8].

Childhood obesity is one of the major public health problems worldwide; it is urgent to monitor the problem properly and implement preventive measures to reduce this risk.

1. Viveiro C, Brito S, Moleiro P. Sobrepeso e obesidade pediátrica: a realidade portuguesa. Rev Port Saúde Pública. 2016;34(1):30-37.

2. Wollenstein-Seligson D, Iglesias-Leboreiro J, Bernárdez-Zapata I, Braverman-Bronstein A. Prevalencia de sobrepeso y obesidad infantil en un hospital privado de la ciudad de México. Rev Mex Pediatr. 2016;83(4):108-114.

3. Vásquez-Guzmán M, González-Castillo J, González-Rojas J. Prevalencia de período de sobrepeso y obesidad en escolares. Rev Sanid Milit Mex. 2014;68(2):64-67.

4. Salinas-Martínez A, Mathiew-Quirós A, Hernández-Herrera R, González-Guajardo E, Garza-Sagástegui M. Estimación de sobrepeso y obesidad en preescolares – Normatividad nacional e internacional. Rev Med Inst Mex Seguro Soc. 2014;52(Supl 1):S23-S33.

5. World Health Organization - WHO. Obesity and Overweight. [online] 2013. [cited 2013 May 15]. Available from: http://www.who.int/mediacentre/factsheets/fs311/en/.

6. Wijnhoven T, van Raaij J, Breda J. WHO European Childhood Obesity Surveillance Initiative: Implementation of Round 1 (2007/2008) and Round 2 (2009/2010). Copenhagen: WHO Regional Office for Europe, 2014. Available from: www.euro.who.int/__data/assets/pdf_file/0004/258781/COSI-report-round1-and-2_final-for-web.pdf

7. Kulaga Z, Gurzkowska B, Grajda A, Wojtylo M, Gózdz M, Litwin M. The prevalence of overweight and obesity among Polish pre-school-aged children. Dev Period Med. 2016;XX,2:143-149.

8. Cabrera T, Correia I, Oliveira dos Santos D, Pacagnelli F, Prado M, Dias da Silva T, Monteiro C, Fernani D. Analysis of the prevalence of overweight and obesity and the level of physical activity in children and adolescents of a southwestern city of São Paulo. JHGD. 2014;24(1):67-66.

9. Farrant B, Utter J, Ameratunga S, Clark T, Fleming T, Simon D. Prevalence of severe obesity among New Zealand adolescents and associations with health risk behaviors and emotional well-being. J Pediatr. 2013;25:1–7.

Prevalence, Pediatric Obesity, Overweight, Children.

P116 The urgency for a nursing intervention towards sexual education in Cape Verde: university students' perception

Sónia Ramalho 1,2, Carolina Henriques 1,2, Elisa Caceiro 1, Maria L Santos 1.

We know today that knowledge about sexuality is essential for young people to live in society in a way that allows them to develop healthy attitudes and behaviours. To that end, health professionals, namely nurses, should be able to educate for sexuality in order to contribute to the improvement of affective-sexual relationships among the young, to the reduction of possible negative outcomes of sexual behaviour, such as early pregnancy and sexually transmitted infections (STIs), and to conscious decision-making in the area of health education/sex education [1-4].

To evaluate young people's knowledge about sexuality.

A descriptive, cross-sectional study was conducted using a questionnaire covering sociodemographic data and twenty questions on the anatomy of the reproductive system, contraceptive methods and sexually transmitted infections. One hundred and eight (108) young people from the Republic of Cape Verde participated in the study. All formal and ethical procedures were taken into account.

The results show that, in a sample of 108 university students, 81.5% female, with a mean age of 21.26 years, 1.9% reported having been forced by a stranger, family member or older person to have sex, and 10.2% reported having had sex after a party, under the influence of alcohol or drugs. As far as knowledge is concerned, the level of knowledge of young people regarding sexual health is satisfactory, although the questions with most errors concerned male anatomy (40.7%) and female hormonal physiology (25.9%). It was found that 32.4% of the university students did not know or did not answer the questions related to female hormonal processes and their functioning when associated with an oral contraceptive.

It is essential to know what young people know about sexuality, so that specific nursing interventions can be designed to meet their sexual education needs.

1. Barbosa A, Gomes-Pedro J. Sexualidade. Lisboa: Faculdade de Medicina da Universidade de Lisboa; 2000.

2. López F, Fuertes A. Para compreender a sexualidade. Lisboa: Associação para o Planeamento da Família; 1999.

3. Epstein D, O’Flynn S, Telford D. Innocence and Experience: Paradoxes in sexuality and education. In: Richardson D, Seidman S, editors. Handbook of Lesbian and Gay Studies. Thousand Oaks, CA: Sage; 2002. p 271-311.

4. Louro GL. Um corpo estranho: Ensaios sobre sexualidade e Teoria Queer. Belo Horizonte: Autêntica Editores; 2004.

Young people, Knowledge, Sexuality.

P117 Cape Verde young university students: determinants of whether or not to have sex

Carolina Henriques 1,2, Sónia Ramalho 1,2, Elisa Caceiro 1, Maria L Santos 1. Correspondence: Carolina Henriques ([email protected]).

In Cape Verde there is still no national study of the sexual behaviour of its youth; however, data provided by the Cape Verdean Association for Family Protection (VERDEFAM) [1] indicate that more than half of Cape Verdean adolescents and youngsters start their sexual lives before the age of 16, and the determinants of having or not having sex are not known. In this study, we sought to identify some determinants of whether or not young Cape Verdean university students have sex.

To identify the determinants of whether or not young Cape Verdean university students have sex.

A descriptive, cross-sectional study was conducted using a questionnaire with sociodemographic questions and the motivation scale for having or not having sex by Gouveia, Leal, Maroco and Cardoso (2010) [2]. Ninety-eight (98) university students from the Republic of Cape Verde participated in the study. All formal and ethical procedures were taken into account.

Youngsters had a mean age of 21.25 years (SD = 2.76); 62.2% started sexual activity with their boyfriend, and 64.3% used the condom as a contraceptive method. Considering the determinants for having sex, the young people that participated in the study considered it not important to have sex “because my partner wanted to” (51.9%), “to please my partner” (56.6%), “to seduce” (64.8%), “out of curiosity” (53.7%) and “for fun or play” (72.2%). In turn, they considered very important for not having sex: fear of venereal diseases (24.1%); fear of AIDS (37%); fear of pregnancy (28.7%); lack of opportunity or inability to find a partner (25%); and not having known their partner long enough (46.3%).

The Cape Verdean young people who participated in the study emphasize the importance of healthy and safe relationships, placing less emphasis on the partner's desire and on pleasure for its own sake, factors that are strongly associated with the decision of whether or not to have sex.

1. Cape Verdean Association for the Protection of the Family (VERDEFAM). Retrieved from http://www.verdefam.cv/

2. Gouveia P, Leal I, Maroco J, Cardoso J. Escala de Motivação para fazer e para não fazer Sexo. In: Leal I, Maroco J, editors. Avaliação em sexualidade e parentalidade. Porto: LivPsic; 2010. p. 84-99.

Young; Sex; Determinants; Cape Verde.

P118 Attitude of Cape Verdean young university students towards sexuality

Carolina Henriques 1,2, Sónia Ramalho 1,2, Maria L Santos 1, Elisa Caceiro 1.

Cape Verde, a country with about 500,000 inhabitants, has a markedly young population, a fact that places special responsibility on local health professionals in the context of sexual and reproductive health, which includes sexuality issues.

To evaluate the attitudes of Cape Verdean university students towards sexuality.

A descriptive, cross-sectional study using a questionnaire consisting of questions related to sociodemographic data and the sexual attitudes scale of Gouveia, Leal, Maroco and Cardoso (2010) [1]. One hundred and eight (108) university students from the Republic of Cape Verde participated in the study. All formal and ethical procedures were taken into account.

The results show that the participants in this cross-border research had a mean age of 21.26 years (SD = 2.93), were mostly female (81.5%) and started their sexual activity at a mean age of 17.37 years (SD = 1.31). As far as sexual attitudes are concerned, 11.1% agree that "one does not have to be committed to the person to have sex with her/him"; 18.5% agree that "casual intercourse is acceptable"; 76.9% totally disagree with "I would like to have sex with many partners"; 74.1% completely disagree that "it is right to have sex with more than one person during the same time period"; and 36.1% agree that "sex is primarily a physical activity". In addition, 10.2% of the young people agree that "sex, by sex alone, is perfectly acceptable" and 33.3% agree that "sex is primarily a bodily function, just like eating". Regarding the permissiveness of the university students in relation to occasional, non-committed sex, they present a significant level of agreement (M = 14.44, SD = 3.66, Xmax = 24.00), which is related to instrumentality (M = 11.93, SD = 2.62, Xmax = 19.00).

The data show that young Cape Verdean university students seek sexual relations based on respect for their partner, although they accept sex without commitment, a position closely associated with viewing the sexual act as a bodily function.

1. Gouveia P, Leal I, Maroco J, Cardoso J. Escala de Atitudes Sexuais – Versão adolescentes (EAS-A). In: Leal I, Maroco J, editors. Avaliação em sexualidade e parentalidade. Porto: LivPsic; 2010. p. 58-72.

Youth, Sexuality, Attitudes, Cape Verde.

P119 Central auditory processing and sleep deprivation

Diogo Garcia, Carla Silva; Audiology Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; correspondence: Diogo Garcia ([email protected]).

Auditory processing is the natural process by which sound taken in through the ear travels to the language areas of the brain to be interpreted. Sleep deprivation may influence this process.

To evaluate the impact of sleep deprivation, for a 24-hour period, on central auditory processing in healthy young adults.

Fourteen (14) healthy young adults were selected, 9 of whom (64.3%) were female and 5 (35.7%) male, aged 18-29 years. The subjects underwent audiological evaluation using pure-tone audiometry, with thresholds obtained in 1 dB steps at the frequencies of 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz and 8000 Hz, and the Staggered Spondaic Words (SSW) test, a dichotic listening test with alternating disyllabic words. The evaluation of central auditory processing, as well as the determination of the auditory thresholds, was performed in two situations: without sleep deprivation and after 24 hours of sleep deprivation.

On the audiogram, after sleep deprivation the auditory thresholds in the right ear decreased by around 3 dB at 500 Hz and 1000 Hz, increased by between 1 and 9 dB at 250 Hz and 4000 Hz, and remained unchanged at 2000 Hz and 8000 Hz; none of the differences found in the right ear was statistically significant. In the left ear, the auditory threshold increased by about 5 dB at 2000 Hz and decreased by between 1 and 8 dB at 250 Hz, 500 Hz, 1000 Hz, 4000 Hz and 8000 Hz, also without statistically significant differences. In the SSW test, there was a slight decrease in the percentage of correct answers in both ears, as well as in the total percentage of hits, after sleep deprivation. In neither ear were there statistically significant differences in SSW results before and after 24 hours of sleep deprivation.
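
By way of illustration, the pre/post comparison at a single frequency can be run as a paired test; the following minimal Python sketch uses hypothetical thresholds and assumes a Wilcoxon signed-rank test (the abstract does not name the test actually applied).

import numpy as np
from scipy import stats

# Hypothetical auditory thresholds (dB HL) at 2000 Hz for the 14 subjects
pre = np.array([10, 5, 15, 10, 20, 5, 10, 15, 10, 5, 10, 15, 20, 10])
post = np.array([13, 3, 15, 11, 25, 4, 10, 17, 7, 6, 10, 19, 18, 11])

# Paired non-parametric comparison, suitable for a small sample of dB thresholds
stat, p = stats.wilcoxon(pre, post)
print(f"Wilcoxon W = {stat}, p = {p:.3f}")  # p >= 0.05 means no significant pre/post change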

The results demonstrate that there are no statistically significant changes after 24 h of sleep deprivation, either in the auditory thresholds or in the SSW test.

Central Auditory Processing (CAP), Sleep Deprivation, Staggered Spondaic Words Test (SSW), Audiological Evaluation.

P120 Missed nursing care: incidence and predictive factors - integrative literature review

Ivo CS Paiva 1,2, Isabel M Moreira 1, António FS Amaral 1; 1 Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 2 Department of Internal Medicine and Palliative Care, Portuguese Oncology Institute of Coimbra Francisco Gentil, 3000-075 Coimbra, Portugal; correspondence: Ivo CS Paiva ([email protected]).

The increasing complexity of and demand for health care, together with patients' changing needs, highlight the need for nurses to focus their interventions on people's real needs and thus to rethink their role. In this context, unfinished, delayed, or missed nursing care (MNC), a strong indicator of health care quality, can compromise quality of care and patient safety.

To identify the most common MNC, as well as its predictors and strategies to prevent its occurrence.

A systematic literature review was conducted on studies available in the EBSCOhost and B-on databases. Twenty-four articles were selected based on predefined criteria. All articles were published between 2012 and 2017.

The studies showed that nurses and patients have different perceptions of MNC. Autonomous interventions related to early mobility and walking, repositioning every 2 hours, or oral and body hygiene are more often missed than interdependent interventions [1-5]. Medication administration within 30 minutes of prescription, planning and updating of care plans, vital signs monitoring, and treatment effectiveness assessment also emerged as MNC [5-7]. Predictive factors are associated with the patient (health status/workload) [2, 5]; the nurse (interruptions by other professionals or patients' relatives) [4, 8]; the materials or equipment (late deliveries) [9]; or the hospital's health care policies (management/leadership and staffing) [10]. Nurse-patient/family communication emerged as MNC, whereas nurse-other health professional communication emerged as a predictive factor, because teamwork and its effectiveness are compromised [2, 11]. Nurses' personal interests and moral sense can also influence the occurrence of MNC. Therefore, the strategies used to reduce the incidence of MNC should be adjusted to the needs of each setting [2, 5].

Although MNC is being explored internationally, it remains an understudied topic in Portugal, which may be explained by a punitive error-reporting culture. The findings cannot be generalized due to the diversity of the studies. Thus, the phenomenon of MNC and its impact on patient/family prognosis should be explored in different settings, with the purpose of achieving excellence in health care and ensuring that people recognize the importance of health care.

1. Bruyneel K, Li B, Ausserhofer D, Lesaffre E, Dumitrescu I, Smith H, et al. Organization of Hospital Nursing, Provision of Nursing Care, and Patient Experiences with Care in Europe. Med Care Res Rev. 2015;72(6):643-664.

2. Chapman R, Rahman A, Courtney M, Chalmers C. Impact of teamwork on missed care in four Australian hospitals. J Clin Nurs. 2017;26(1-2):170-181.

3. Cho S, Kim Y, Yeon K, You S, Lee I. Effects of increasing nurse staffing on missed nursing care. Int Nurs Rev. 2015;62(2):267-274.

4. Kalisch B, Xie B, Dabney B. Patient Reported Missed Nursing Care Correlated With Adverse Events. Am J Med Qual. 2014;29(5):415-422.

5. Papastavrou E, Charalambous A, Vryonides S, Eleftheriou C, Merkouris A. To what extent are patients' needs met on oncology units? The phenomenon of care rationing. Eur J Oncol Nurs. 2016;21:48-56.

6. Ball J, Murrels T, Rafferty A, Morrow E, Griffiths P. 'Care left undone' during nursing shifts: associations with workload and perceived quality of care. BMJ Qual Saf. 2014;23:116-125.

7. Schubert M, Ausserhofer D, Desmedt M, Schwendimann R, Lessafre E, Li B, Geest S. Levels and correlates of implicit rationing of nursing care in Swiss acute care hospitals - A cross sectional study. Int J Nurs Stud. 2013;50(2):230-239.

8. Cho S, Mark B, Knafl G, Chang H, Yoon H. Relationships Between Nurse Staffing and Patients' Experiences, and the Mediating Effects of Missed Nursing Care. J Nurs Scholarsh. 2017;49(3):347-355.

9. Moreno-Monsiváis M, Moreno-Rodríguez C, Interial-Guzmán M. Missed Nursing Care in Hospitalized Patients. Aquichán. 2015;15:318-328.

10. Dehghan-Nayeri N, Ghaffari F, Shali M. Exploring Iranian nurses’ experiences of missed nursing care: a qualitative study: a threat to patient and nurses' health. Med J Islam Repub Iran. 2015;29:276.

11. Bragadóttir H, Kalisch B, Tryggvadóttir G. Correlates and predictors of missed nursing care. J Clin Nurs. 2017;26(11-12):1524-1534.

Missed Nursing Care, Delayed Nursing Care, Unfinished Nursing Care.

P121 The meaning of the family for future family nurses

João MG Frade 1,2, Carolina Henriques 1,2, Célia Jordão 1, Clarisse Louro 1.

The family must be understood as a natural context for growth, involving notions of complexity and of blood and/or affective bonds, generating love but also suffering. The systemic view considers the family as a whole in which its members interact with one another, so that an imbalance in the system can cause imbalance in the individual, and vice versa. The family nurse can be the reference professional, ensuring specialized accompaniment of the family, as a unit of care, throughout the life cycle.

To understand how future family nurses perceive "family" and "family health nursing".

A descriptive, cross-sectional study using a questionnaire consisting of questions related to sociodemographic data and open questions on the conceptual understanding of family and family health nursing. A total of 13 nurses undertaking training to develop skills in the field of family health nursing participated in the study. All formal and ethical procedures were taken into account.

All nurses (100%) reported never having attended a training course in the field of family health nursing, while recognizing the importance of knowing who the family members are (92.3%); 38.5% disagree that the presence of family members alleviates their workload, and 23.1% state that the presence of family members makes them feel that they are being evaluated. The family is mostly defined as a group of people with a common bond (n = 10). Family health nursing is seen as: caring for the family; personalizing and integrating the nursing care provided to the person and to the family; the existence of a family nurse who knows all the family members; care tailored to individuals according to the characteristics of the family; health support for the group; and a nurse who focuses mainly on the patient and on the family, in partnership with the respective family doctor.

Given the data obtained, the conceptualization of family and family health nursing should be clarified so that nurses can focus on the internal dynamics of families and their relationships, and on family structure and functioning, in such a way that the interplay of the different subsystems, of the family as a whole and of the family with its surrounding environment generates changes in intrafamilial processes and in the family's interaction with the environment.

Family, Nurses, Family nursing.

P122 Dizziness in patients with cochlear implants

Ana Rosado, Carla Silva; correspondence: Ana Rosado ([email protected]).

The cochlear implant (CI) is a surgical method widely used today for the (re)habilitation of individuals with bilateral, severe to profound sensorineural hearing loss. Given the proximity of the cochlea to the surrounding structures of the vestibular system, with which it shares the endolymphatic fluid, there may be a relationship between this form of (re)activating hearing and the vestibular dysfunctions presented after cochlear implant placement surgery.

Through a systematic review of the literature, we intend to determine the post-surgical vestibular changes in individuals submitted to CI placement, as well as to understand the procedures involved in this evaluation.

Scientific articles were searched in the B-on, PubMed, SciELO and ScienceDirect databases, yielding a total of 48 articles. After applying the inclusion criteria, 12 articles were analysed in more detail, 4 of which were selected for this systematic literature review.

Across the reviewed articles, we found data reporting a decrease in the amplitude of the cervical vestibular evoked myogenic potential (cVEMP) response wave, an increase in mean scores on the Dizziness Handicap Inventory (DHI), and a change in the classification of the caloric response (normal → hyporeflexia and/or hyporeflexia → areflexia) in the postoperative period. However, it is not possible to conclude that these vestibular changes are directly related to CI placement. No protocol evaluating vestibular function in both the pre- and post-surgical periods was found in use, nor was the anatomical relationship between the cochlea, the vestibule and the semicircular canals (SCC) taken into account.

We conclude that studies addressing vestibular evaluation within the CI placement protocol are few and offer no long-term conclusions, since follow-up was performed shortly after placement. A basic protocol is lacking that would help the health professionals involved in the process to evaluate the vestibular system, as is awareness among these professionals of the possible influence of CI insertion surgery on this system.

Audiology, Vestibular Disorders, Dizziness, Cochlear Implant, cVEMP.

P123 Practice of episiotomy during labour

Manuela Ferreira 1, Onélia Santos 2, João Duarte 1; 1 Instituto Politécnico de Viseu, 3504-510 Viseu, Portugal; 2 Centro Hospitalar Cova da Beira, 6200-251 Covilhã, Portugal; correspondence: Manuela Ferreira ([email protected]).

The World Health Organization (WHO, 1996) recommends limited use of episiotomy, since there is no credible evidence that its widespread or routine practice has a beneficial effect.

To present scientific evidence on the determinants of the practice of selective episiotomy in women with normal/eutocic delivery; to identify the prevalence of episiotomy; and to analyse the factors (sociodemographic variables, variables related to the newborn, and contextual variables of pregnancy and delivery) that influence the practice of episiotomy.

Empirical study I (part I) followed the methodology of a systematic literature review. A search for studies published between January 2008 and December 23, 2014, was made in the EBSCO, PubMed, SciELO and RCAAP databases. The studies found were evaluated against previously established inclusion criteria. Two reviewers assessed the quality of the studies for inclusion using a critical appraisal grid for prospective, randomized, controlled studies. After this critical appraisal, 4 research articles scoring between 87.5% and 95% were included.

Empirical study II (part II) was a quantitative, cross-sectional, descriptive, retrospective study, developed in the Obstetrics Service of Hospital Cova da Beira, using a non-probability convenience sampling process (n = 382). Data were collected by consulting the medical records of women aged ≥ 18 years who had a vaginal delivery of a live foetus after 37 weeks of gestation.

Evidence was found that episiotomy should not be performed routinely and should be limited to specific clinical situations. Selective episiotomy, compared with routine episiotomy, was associated with a lower risk of trauma to the posterior perineum, less need for suturing and fewer healing complications. The study of the sample of 382 women, aged 18-46 years, of whom 41.7% did not undergo episiotomy, pointed to the relevance of selective episiotomy. Within the sample, episiotomy (58.3%) was performed on a significant number of women with eutocic delivery (80.5%), suturing (95.0%), grade I lacerations (64.9%) and perineal pain (89.1%). In the group of women undergoing episiotomy (91.4%), most of the babies born presented normal weight (92.3%).

In view of these results, and given that the available scientific evidence has for several years recommended a more selective use of episiotomy, it is suggested that health professionals become more alert to this reality, so that the resistance and barriers against the selective use of episiotomy can be overcome.

Eutocic Childbirth, Selective Episiotomy, Routine episiotomy.

P124 RNAO’s Best Practice Guidelines in the nursing curriculum – implementation update

Ana V Antunes 1, Olga Valentim 1, Fátima Pereira 1, Fátima Frade 1, Cristiana Firmino 1, Joana Marques 1, Maria Nogueira 1, Luís Sousa 1,2; 1 Escola Superior de Saúde Atlântica, 2730-036 Barcarena, Portugal; 2 Hospital Curry Cabral, Centro Hospitalar Lisboa Central, 1069-166 Lisboa, Portugal; correspondence: Ana V Antunes ([email protected]).

In the last 30 years, nursing education in Portugal has gone through several changes that directly impacted the professional development model and the recognition of nurses' scope of practice. Since the Bologna Declaration, nursing students have been provided with a more practical and profession-oriented nursing training [1, 2]. As our professionals' skills become more recognized in the global health market, the need to improve education and professional development also rises. The best way to enhance the quality of practice education provided to undergraduate nursing students and to improve clinical outcomes is to enrich the academic curriculum with evidence-based nursing practices (EBNP) [3]. The Best Practice Guidelines Program (BPGP) was developed by the Registered Nurses' Association of Ontario (RNAO) to support EBNP [4].

Provide an update on the process of implementation of RNAO’s Best Practice Guidelines (BPGs) in the nursing curriculum.

The implementation process was supported by the RNAO's Toolkit for Implementing Best Practice Guidelines (BPGs) [5], a comprehensive resource manual, grounded in theory, research and experience, that provides practical processes, strategies and tools to providers, educational institutions, governments and others committed to implementing and evaluating BPGs.

The BPG selection and implementation brought together some of the suggested activities from the six steps of the manual. It resulted in the selection of three clinical guidelines (Engaging Clients Who Use Substances [6]; Prevention of Falls and Fall Injuries in the Older Adult [7]; Primary Prevention of Childhood Obesity [8]) and a Healthy Work Environments guideline (Practice Education in Nursing [9]). We considered two main areas of intervention to address the challenge of generating scientific evidence for nursing practice: the academic setting and the clinical setting (partner institutions where students undertake their clinical practice). The implementation process included three fundamental players from both settings: professors, nursing students and clinical nursing instructors. To evaluate our performance and measure improvements, we created structure, process and outcome indicators for each guideline. Data collection tools were first used in the curricular units that precede clinical teaching, and the results will be processed and analysed.

Professors, students and partner institutions were successfully engaged in the initiative. We are investing in an action plan to embed the evidence-based practice culture, through an orientation program for clinical nursing instructors. The strategy is to strengthen the relationship with providers in order to standardize evidence-based procedures and improve both nurses’ education and quality of care.

1. Hvalič-Touzery S, Hopia H, Sihvonen S, Diwan S, Sen S, Skela-Savič B. Perspectives on enhancing international practical training of students in health and social care study programs-A qualitative descriptive case study. Nurse Educ Today. 2017;48:40-47.

2. Arrigoni C, Grugnetti AM, Caruso R, Gallotti ML, Borrelli P, Puci M. Nursing students’ clinical competencies: a survey on clinical education objectives. Ann Ig. 2017;29(3):179-188.

3. Drayton-Brooks SM, Gray PA, Turner NP, Newland JA. Building clinical education training capacity in nurse practitioner programs. J Prof Nurs. 2017;33(6):422-428.

4. Athwal L, Marchuk B, Laforêt-Fliesser Y, Castanza J, Davis L, LaSalle M. Adaptation of a Best Practice Guideline to Strengthen Client-Centered Care in Public Health. Public Health Nursing. 2014 Mar 1;31(2):134-43.

5. Registered Nurses’ Association of Ontario (RNAO). Toolkit: Implementation of best practice guidelines (2nd ed.). Toronto, ON: Registered Nurses’ Association of Ontario. 2012.

6. Registered Nurses’ Association of Ontario (RNAO). Engaging Clients Who Use Substances. Toronto, ON: Registered Nurses’ Association of Ontario. 2015.

7. Registered Nurses' Association of Ontario (RNAO). Prevención de caídas y lesiones derivadas de las caídas en personas mayores (Revisado). Toronto, ON: Asociación Profesional de Enfermeras de Ontario. 2015.

8. Registered Nurses’ Association of Ontario (RNAO). Primary Prevention of Childhood Obesity (Second Edition). Toronto, ON: Registered Nurses’ Association of Ontario. 2014.

9. Registered Nurses’ Association of Ontario (RNAO). Practice Education in Nursing. Toronto, ON: Registered Nurses’ Association of Ontario. 2016.

Evidence-Based Nursing, Nursing Education, Substance-Related Disorders, Accidental Falls, Pediatric Obesity.

P125 Swimming practice and hearing disorders

Mara Rebelo, Carla Silva; correspondence: Mara Rebelo ([email protected]).

Health assessment, promotion and prevention are a crucial pillar in the face of emerging trends and threats; indeed, disease prevention is surely the way to go. This is not to say that we should neglect the treatment of disease, but rather that we must clearly invest in its prevention through our daily behaviour and the circumstances in which we live. The practice of swimming is recommended, especially for children, for its benefits in the treatment of respiratory diseases and allergic problems and in the improvement of motor coordination and postural problems. However, during swimming lessons children are exposed to numerous risk factors harmful to the middle and outer ear. Preventive measures therefore consist essentially of avoiding factors that increase the associated risks.

To analyse possible audiological changes in children aged between 3 and 10 years who practise swimming, and to sensitize educators to the risk factors associated with possible hearing loss, even if temporary.

The sample consisted of 56 students from the School Group of Benedita, Municipality of Alcobaça. All children underwent otoscopy, tympanometry and screening pure-tone audiometry, after prior authorization from their legal representatives for participation in the study.

Of the 112 otoscopies performed, 85.7% showed no alterations in the right ear and 83.9% in the left ear. Of the 56 individuals in the sample, 17.9% had audiological alterations: 20.7% among swimmers and 14.8% among non-swimmers. Of the 29 swimmers, 6.9% had a type B tympanogram (7.1% in the 3 to 5 age group and 6.7% in the 6 to 10 age group).

It seems unlikely that swimming is directly related to an increase in middle ear secretions and possible auditory alterations, given the minimal differences observed between swimmers and non-swimmers. Beyond the harmful factors, swimming can to some extent be associated with beneficial factors in the promotion of middle ear health and in the prevention of disease.

Hearing Loss, Swimming, Children, Middle ear infections, Audiological changes.

P126 The impact of physical activity on spirometric parameters in non-institutionalised elderly people

Fernanda Silva 1, João Petrica 1,3, João Serrano 1,3, Rui Paulo 1,2, André Ramalho 1,2, José P Ferreira 4, Pedro Duarte-Mendes 1,2.

Physical activity decreases as a result of the ageing process; the elderly therefore tend to spend more time in sedentary behaviours [1]. The respiratory system also undergoes progressive involution with age, resulting in anatomical and functional alterations [2]. A positive relationship between physical activity and spirometric parameters has been confirmed [3]. It is recommended that the elderly accumulate at least 30 minutes of moderate- to vigorous-intensity physical activity per day [4].

The aim of this paper is to verify the existence of differences regarding spirometric values between two groups of people: those who complied and those who did not comply with the Global Recommendations on Physical Activity for Health [4].

The current study included 36 male and female participants with a mean age of 72.28 years (SD = 6.58). The group that fulfilled the recommendations included 16 elderly individuals (53.76 ± 24.39 minutes); the group that did not fulfil the recommendations included 20 elderly individuals (15.95 ± 7.79 minutes). Physical activity was assessed by accelerometry (ActiGraph®, GT1M model, Fort Walton Beach, Florida, USA). Data were recorded for three consecutive days, with at least 600 minutes of daily recording. Spirometry tests were performed using the Cosmed® Microquark spirometer. The following parameters were analysed: Forced Vital Capacity (FVC), Forced Expiratory Volume in one second (FEV1), Peak Expiratory Flow (PEF) and the FEV1/FVC ratio. Descriptive and inferential statistics were used to analyse the data: the Shapiro-Wilk test was applied to assess normality, and the t-test and the Mann-Whitney test were used for independent samples.
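
As an illustration of this analysis step, the following minimal Python sketch, with hypothetical FEV1/FVC (%) values and using scipy (an assumption; the abstract does not name the software used), applies a Shapiro-Wilk check and then chooses between the parametric and non-parametric test for two independent groups.

import numpy as np
from scipy import stats

# Hypothetical FEV1/FVC (%) values: group that met the recommendations (n = 16)
met = np.array([78.1, 81.4, 75.2, 79.9, 83.0, 77.5, 80.2, 76.8,
                82.1, 79.0, 74.9, 81.7, 78.6, 80.9, 77.2, 79.4])
# ...and group that did not (n = 20)
not_met = np.array([73.5, 70.2, 75.8, 72.1, 69.9, 74.3, 71.6, 73.0, 70.8, 72.7,
                    68.5, 74.9, 71.2, 69.4, 73.8, 72.4, 70.1, 71.9, 74.0, 72.9])

# Shapiro-Wilk normality check in each group
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (met, not_met))

# Independent-samples t-test if both groups look normal, otherwise Mann-Whitney U
res = stats.ttest_ind(met, not_met) if normal else stats.mannwhitneyu(met, not_met)
print(type(res).__name__, round(res.statistic, 2), round(res.pvalue, 4))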

The group that fulfilled the physical activity recommendations achieved better percentage spirometric values for FEV1, PEF and FEV1/FVC. However, significant differences were only found for FEV1/FVC% (p = 0.023).

The results therefore suggest that compliance with the Global Recommendations on Physical Activity for Health is associated with better FEV1/FVC% values in non-institutionalised elderly people.

1. Matthews CE, Moore SC, Sampson J, Blair A, Xiao Q, Keadle SK, Hollenbeck A, Park Y. Mortality Benefits for Replacing Sitting Time with Different Physical Activities. Med Sci Sports Exerc. 2015;47:1833-1840.

2. Lalley PM. The aging respiratory system-Pulmonary structure, function and neural control. Respir Physiol Neurobiol. 2013;187:199-210.

3. Nawrocka A, Mynarski W. Objective Assessment of Adherence to Global Recommendations on Physical Activity for Health in Relation to Spirometric Values in Nonsmoker Women Aged 60-75 Years. J Aging and Phys Act. 2016;25:123-127.

4. WHO. Global Recommendations on Physical Activity for Health. Switzerland: World Health Organization; 2011. Available from: http://apps.who.int/iris/bitstream/10665/44399/1/9789241599979_eng.pdf

Accelerometry, Recommendations on physical activity, Elderly, Spirometry.

P127 Predictors of abandonment of exclusive breastfeeding before 6 months

Cristiana Afonso, Cristiana Lopes, Fernanda Pais, Suzi Marques, João Lima; correspondence: Cristiana Afonso ([email protected]).

The World Health Organization (WHO) recommends exclusive breastfeeding up to 6 months of age, considering its nutritional, immunological, psychological and economic benefits. However, according to the Inquérito Alimentar Nacional e de Atividade Física, about 46% of children were exclusively breastfed for less than 4 months and only 21.6% for 6 months or more.

This study aimed to identify the predictors that most influence mothers in the decision not to breastfeed exclusively until 6 months.

A review of the literature on the main determinants of exclusive breastfeeding was carried out, and a questionnaire was then compiled from the identified predictors. Mothers who had not breastfed, or had not breastfed exclusively until 6 months, were asked to choose the determinant that best fitted their situation. The questionnaire was made available through several online platforms, some of them aimed at childcare. The inclusion criterion was being a mother.

A total of 1,685 mothers were questioned, 1,644 (97.57%) of whom breastfed and 866 (51.39%) breastfed exclusively up to 6 months. The predictors most frequently identified by mothers who had not breastfed or had not breastfed exclusively until 6 months (819 mothers) were "work constraints related to reconciling the work schedule with breastfeeding" (33.1%), "milk drying up" (27.1%), "few daily periods of breastfeeding" (12.9%), "personal condition" (11.5%) and "missing or few conditions for breastfeeding at work" (11.2%).

This work allowed the identification of the predictors of non-breastfeeding, or of its non-exclusivity until 6 months, showing a strong contribution of working conditions to this problem. Knowledge of this reality may be important for developing policy measures to counter this trend.

Exclusive breastfeeding, Predictors, Inquiry.

P128 Auditory training in children and youngsters with learning disabilities

Mariana Araújo, Cristina Nazaré; correspondence: Mariana Araújo ([email protected]).

Hearing has a fundamental role in the learning process, and studies have shown that some children and youngsters with normal hearing can present auditory processing disorders with possible implications for learning. It is also known that not all learning problems are due to auditory processing disorders, and not all auditory processing disorders lead to learning problems. Studies also point out that adequate, personalized auditory training may be a viable option for the rehabilitation of auditory information processing in the central nervous system (training brain neuroplasticity), early assessment and intervention being important to minimize the associated consequences, such as possible difficulties in the learning process.

To analyse the influence of auditory training on the improvement of auditory processing disorders in children and youngsters with learning disabilities.

For this purpose, a systematic literature review was conducted, searching for scientific papers in the electronic databases B-on, PubMed, ScienceDirect and SciELO with keywords such as auditory training, auditory processing, auditory processing disorders, learning disabilities, learning difficulties, children and youngsters (in Portuguese, English or Spanish). Inclusion criteria were established for publication type and date (original articles available since 2007), sample (in accordance with our purpose) and tests used (to evaluate and train auditory processing).

After applying the search strategies, five of the 127 articles found met the pre-established inclusion criteria and were selected.

Auditory training is effective in the rehabilitation of auditory processing disorders in children and youngsters with learning disabilities. The studies showed that a specific diagnosis of the affected abilities is fundamental in order to design the most efficient training plan for each individual, together with continuous re-evaluation to adjust the training. Since the interaction between these disorders is complex, further studies in this area are still necessary to establish guidelines and to clarify the design of auditory training programs.

Auditory training, Auditory processing disorders, Learning disabilities, Children, Youngsters.

P129 Inadequate environmental sanitation diseases (IESDs) in Porto Alegre – RS/Brazil

Rita C Nugem 1, Roger S Rosa 2, Ronaldo Bordin 3, Caroline N Teixeira 4; 1 Health Services and Performance Research, Claude Bernard Lyon University, 69100 Lyon, France; 2 Department of Social Medicine, Medical School, Rio Grande do Sul Federal University, 90040-060 Porto Alegre, Brazil; 3 Administration School, Rio Grande do Sul Federal University, 96201-900 Rio Grande do Sul, Brazil; 4 Fundação Hospitalar Getúlio Vargas, 93210-020 Sapucaia do Sul, Rio Grande do Sul, Brazil; correspondence: Rita C Nugem ([email protected]).

Countries in Europe and North America managed to control and eradicate most of the infectious-parasitic diseases that occurred in the first half of the twentieth century [1]. Nevertheless, infectious and parasitic diseases are still present in certain metropolitan areas of Brazil, despite the increased prevalence of chronic diseases. This work aims to present the general aspects of the situation of diseases related to inadequate environmental sanitation (IESDs) and of the sanitation policy of Porto Alegre.

The general objective was to examine the public policy for environmental sanitation in Porto Alegre. The specific objectives were: I) to analyse the relationship between indicators of poverty and inadequate environmental sanitation and the occurrence of IESDs; and II) to present the situation of IESDs and the sanitation policy of Porto Alegre.

The method was qualitative and quantitative, with data collection and analysis of public policies. The period analysed was from 2008 to 2012. Data were obtained from the Health Information Systems and the DATASUS website of the Ministry of Health, along with a set of basic indicators from the Porto Alegre Observatory. The indicators were classified according to the specific objectives: poverty (P), environmental sanitation (S) and diseases (D). Pearson's linear correlation coefficient was used to test the associations of poverty and basic sanitation indicators with IESD indicators.
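
As an illustration of this statistical step, the following minimal Python sketch computes Pearson's correlation for two hypothetical indicator series (the real data come from DATASUS and the Porto Alegre Observatory and are not reproduced here).

import numpy as np
from scipy import stats

# Hypothetical regional indicators: poverty rate (%) and IESD
# hospitalization rate (per 10,000 inhabitants) for seven regions
poverty = np.array([4.2, 7.8, 12.1, 15.4, 9.3, 18.0, 21.5])
iesd_rate = np.array([3.1, 5.0, 8.2, 9.9, 6.4, 12.7, 14.8])

# Pearson's linear correlation between the two indicator series
r, p = stats.pearsonr(poverty, iesd_rate)
print(f"r = {r:.2f}, p = {p:.4f}")  # r near +1: IESDs concentrate in poorer regions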

The results showed that the biggest problems related to IESDs occur in the poorest regions: Restinga, Parthenon, Nordeste, Lomba do Pinheiro, Gloria, Ilhas and Extremo Sul. The highest concentration of dengue was found in the Parthenon region; of leptospirosis in the regions of Restinga, Extremo Sul, Lomba do Pinheiro, Norte and Eixo Baltazar; and of hepatitis A in the regions of Ilhas, Nordeste, Humaitá/Navegantes, Centro, Lomba do Pinheiro, Norte, Leste and Parthenon. Regarding the public policy for environmental sanitation in Porto Alegre, we concluded that some urban policies exist, but the subject needs a more systemic view directed at the most specific problems of the city. Regarding the sanitation plans, we concluded that the regions most in need of sanitation, above all a sewage collection network, have less space available for infrastructure installation, as in the Ilhas region. The basic sanitation plan (water) provides various kinds of information about areas that need universal supply infrastructure, but there is still no date for when that will be possible.

Finally, infectious and parasitic diseases are a reality in Porto Alegre. Even in the 21st century, they account for about 1,200 annual hospitalizations in public health services (SUS) and for about 750 deaths per year in the capital city.

1. Carvalho EMF, Lessa F, Gonçalves FR, Silva JAM, Lima MEF, Melo Jr SW. O processo de transição epidemiológica e iniquidade social: o caso de Pernambuco. RASPP Rev. Assoc. Saúde Pública de Piauí. 1998;1(2):107-119.

Environmental sanitation, Health, Waterborne disease, Sanitation, Public policy.

P130 Lateralization of the visual word form area in patients with alexia after stroke

Inês Rodrigues 1,2, Nádia Canário, Alexandre Castro-Caldas 3; 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Instituto de Ciências da Saúde, Universidade Católica Portuguesa, 1649-023 Lisboa, Portugal; correspondence: Inês Rodrigues ([email protected]).

Knowledge of the process by which visual information is integrated into the brain reading system promotes a better understanding of writing and reading models.

This study aimed to use functional magnetic resonance imaging (fMRI) to explore whether blood-oxygen-level-dependent (BOLD) contrast imaging patterns in the putative cortical region of the Visual Word Form Area (VWFA) differ between aphasia patients with moderate and severe alexia.

Twelve chronic stroke patients (5 with severe alexia and 7 with moderate alexia) were included. A word categorization task was used to examine responses in the VWFA and its right-hemisphere homolog. Patients performed a semantic decision task in which words were contrasted with non-verbal fonts to assess the lateralization of reading ability in the ventral occipitotemporal region.

A fixed-effects (FFX) general linear model (GLM) multi-study analysis contrasting patients with moderate alexia and patients with severe alexia (FDR, p = 0.05, corrected for multiple comparisons using a Threshold Estimator plugin with 1000 Monte Carlo simulations) was performed. Activation of the left VWFA was robust in patients with moderate alexia. Aphasia patients with severe reading deficits also activated the right VWFA homolog.
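
For the multiple-comparison step, a minimal Python sketch of false discovery rate control is shown below; it assumes the standard Benjamini-Hochberg procedure applied to hypothetical voxel-level p-values, whereas the cluster-level Monte Carlo thresholding mentioned above belongs to the neuroimaging software pipeline and is not reproduced here.

import numpy as np

def fdr_bh(pvals, q=0.05):
    # Benjamini-Hochberg: keep the largest k with p_(k) <= (k/m) * q
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        mask[order[:k + 1]] = True  # all p-values up to the k-th smallest pass
    return mask

# Hypothetical p-values from a voxel-wise GLM contrast
pvals = [0.0008, 0.009, 0.012, 0.041, 0.049, 0.20, 0.74]
print(fdr_bh(pvals, q=0.05))  # only the three smallest p-values survive FDR control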

This bilateral activation pattern, found only in patients with severe alexia, could be interpreted as the result of reduced recruitment of the left VWFA for reading tasks due to the severity of the reading deficit. This study provides new insights into reading pathways and possible neuroplasticity mechanisms in aphasia patients with alexia. Future studies could explore the predictive value of right VWFA activation for reading recovery and could aid language therapy in patients with aphasia.

Stroke, Aphasia, Alexia.

P131 Sexuality in women with oncological pathology

Filomena Paulo 1, Manuela Ferreira 2; 1 Centro Hospitalar Tondela-Viseu, 3509-504 Viseu, Portugal; 2 Instituto Politécnico de Viseu, 3504-510 Viseu, Portugal; correspondence: Filomena Paulo ([email protected]).

Caring for women with oncological pathology is a challenge for nursing today, in terms of a nursing philosophy of care whose main focus is the woman in her multidimensionality and not the disease itself.

To identify some determinant factors of physical well-being, quality of life and sexuality in women with oncological disease.

The research was guided by an integrative review (IR) of the literature. A search was carried out in the LILACS, SciELO, Google Scholar, BDENF and B-on databases for the period 2012-2016, based on previously defined inclusion criteria, and the selected studies were subsequently evaluated. This search strategy yielded 71 articles. After all the abstracts had been read, 27 articles were selected, including integrative reviews, systematic reviews of cross-sectional studies, and descriptive and exploratory studies whose contents were of interest to this review.

After analysing the articles, it was concluded that sexual health and treatments are important aspects of the quality of life of women with oncological disease, and that radical mastectomy has repercussions on body image and sexual function, since it affects self-esteem and feelings of femininity and sexuality. Factors such as hair loss, weight gain or loss, chronic fatigue, nausea, pain, stress, the feeling of not being a complete woman, decreased arousal, lack of interest and dissatisfaction are determinants of a woman's sexual activity.

Integrative care for women with oncological pathology is a challenge for health professionals. The strategies to be adopted involve including women's sexuality in the nursing care plan, without taboos or prejudice, because the evidence identifies the need for intervention in this field to effectively improve the quality of life of these women.

Sexuality, Woman, Oncology, Integrative care, Quality of life.

P132 Patient compliance to arterial hypertension treatment: integrative review

Diana Tavares 1, Célia Freitas 2, Alexandre Rodrigues 3; 1 USF Salinas – Agrupamento de Centros de Saúde do Baixo Vouga, Portugal; 2 Center for Research in Health Technologies and Services, Aveiro University, Higher School of Health, 3810-193 Aveiro, Portugal; 3 Center for Health Studies and Research of the University of Coimbra, Aveiro University, Higher School of Health, 3810-193 Aveiro, Portugal; correspondence: Diana Tavares ([email protected]).

Cardiovascular diseases are the main cause of mortality worldwide, and hypertension is an important public health problem [1]. Nonadherence to treatment influences the patient's health and can generate several complications. Health care providers are thus confronted with finding the best strategies to promote effective management of the disease, together with patients and families. The nursing consultation seems to be a privileged setting for assisting patients in healthy behaviour change, with a personalized intervention adjusted to the needs of each one [2, 3].

To analyse the factors that promote or inhibit adherence to treatment and to identify the nursing interventions that determine treatment adherence.

We conducted an integrative literature review of qualitative and quantitative studies in the electronic databases SciELO, Scopus, Medline, LILACS and B-on, using the search terms "adherence", "patient", "hypertension" and "nursing care" from DeCS and MeSH, for articles published between 2011 and 2016. The PICOD method [4, 5] and the QualSyst tool [6] were used to frame the research question and to evaluate the quality of the articles, respectively.

In this search, 372 articles were found and 10 were selected, corresponding to 2,565 hypertensive patients over 18 years old, of whom 64% were female. Despite differences in the methodologies of these studies, the results indicated poor adherence to treatment [7]. The analysis resulted in two major categories that explain patients' adherence to the treatment of hypertension: the factors that influence adherence, and nursing interventions. The most significant factors inhibiting adherence are complex drug regimens, lack of knowledge and little physical activity. The nursing intervention that most enhanced adherence was the adoption of continued educational strategies.

Patient adherence is a complex issue that places a huge burden on the health system, because it implies not only changes in patient behaviour but also other factors, such as the involvement of caregivers and families. Considering the factors described in this review, the nursing consultation, through the definition of educational strategies, seems to increase patient adherence to treatment. In future research, guidelines aimed at increasing adherence to treatment will be suggested.

1. Direção-Geral da Saúde. Doenças Cérebro-Cardiovasculares em Números 2015 [Internet]. 2016 [cited 2016 Mar 18]. Available from: https://www.dgs.pt/em-destaque/portugal-doencascerebro-cardiovasculares-em-numeros-201511.aspx

2. Organização Mundial da Saúde [OMS]. Adherencia a los tratamientos a largo plazo [Internet]. Genebra; 2004 [cited 2016 Feb 20]. p. 127-32. Available from: http://www.paho.org/hq/index.php?option=com_docman&task=doc_view&gid=18722&Itemid=

3. Organização Mundial da Saúde [OMS]. A global brief on HYPERTENSION [Internet]. World Health Day 2013. Geneva; 2013 [cited 2016 Feb 22]. Available from: http://www.who.int/iris/handle/10665/79059

4. The Joanna Briggs Institute. Reviewers’ Manual [Internet]. 2011th ed. Vol. 53, Adelaide: The Joanna Briggs Institute; 2011 [cited 2016 Oct 5]. Available from: http://joannabriggs.org/assets/docs/sumari/reviewersmanual-2011.pdf

5. Ramalho A. Manual para redacção de Estudos e Projectos de Revisão Sistemática com e sem metanálise – Estrutura, funções e utilização na investigação em enfermagem. Coimbra: Formasau - Formação e Saúde, Lda; 2005.

6. Kmet LM, Lee RC, Cook LS. Standard quality assessment criteria for evaluating primary research from a variety of fields. HTA Initiat [Internet]. 2004;13(February). Available from: http://gateway.nlm.nih.gov/MeetingAbstracts/103140675.html

7. Organização Mundial da Saúde. Adherence to long-term therapies: evidence for action. 2003.

Patient compliance, Nursing care, Hypertension.

P133 Didactic material as an intervention strategy in homecare for families of patients with mental disorders

Luísa FT Ferreira 1, Ednéia AN Cerchiari 1, Simara S Elias 1, João B Almeida 2; 1 Mato Grosso do Sul State University, 79804-970 Mato Grosso do Sul, Brazil; 2 Centro Universitário Salesiano de São Paulo, 13467-600 Americana, São Paulo, Brazil; correspondence: Luísa FT Ferreira ([email protected]).

On a daily basis, we have discussed and reflected on health education in public care as a form of interaction between health care professionals and patients, with the purpose not only of informing, but also of exchanging knowledge and experiences.

To elaborate and validate a Practical Guide of Care for family members of patients with mental disorders: guidelines and highlights.

This is a qualitative, descriptive, ongoing study with relatives of inpatients with mental disorders at a university hospital in Brazil, running from August 2014 to October 2018.

Five families, a total of 9 people, were interviewed using a semi-structured questionnaire. To create the Practical Guide, the data were analysed following the methodological steps of Paulo Freire's Popular Education process: thematic investigation, thematization and problematization. The Practical Guide of Care for Families of Patients with Mental Disorders: Guidelines and Highlights was created in 2015 and published in 2017. It contains 32 pages printed in colour on couché paper, with watercolour pictures and information arranged in short, direct texts on the home care of patients with mental disorders. The guide validation process began in January 2018 and is under way with professional judges chosen for their expertise, one from each area: physician, psychiatrist, psychologist, nurse, social worker, pharmacist and occupational therapist. For data collection, an instrument adapted from Oliveira's (2006) [1] study was used, allowing face and content evaluation. The instrument contains statements about the evaluated material and was developed on the basis of a Likert scale. The data collected will be analysed through simple statistics and organized in tables, respecting the ethical requirements of Resolution 466/2012 of the National Health Council on research with human subjects. A face and content evaluation with the patients' relatives will also take place.
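
As an illustration of the kind of simple statistics planned for the judges' ratings, the following minimal Python sketch summarizes hypothetical Likert responses (the item names and scores are illustrative only, not the instrument's actual items).

import numpy as np

# Hypothetical ratings by the 7 expert judges (1 = strongly disagree ... 4 = strongly agree)
ratings = {
    "clear language": [4, 4, 3, 4, 4, 3, 4],
    "adequate illustrations": [3, 4, 4, 4, 3, 4, 4],
    "useful home-care guidance": [4, 4, 4, 3, 4, 4, 4],
}

for item, scores in ratings.items():
    scores = np.array(scores)
    agreement = np.mean(scores >= 3) * 100  # % of judges rating 3 or 4
    print(f"{item}: mean = {scores.mean():.2f}, agreement = {agreement:.0f}%")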

We expect that the final product of this study, the validated Practical Guide, will support the adoption of increasingly effective strategies for the home treatment of patients with mental disorders.

1. Oliveira VLB, Landim FLP, Collares PM, Santos ZMSA. Modelo explicativo popular e profissional das mensagens de cartazes utilizados nas campanhas de saúde. Texto Contexto Enferm. 2007;16(2):287-293.

Teaching materials, Public health education, Mental Disorder.

P134 Family health in Leiria council: study of some determinants

Marta Serrano, Rodrigo Correia, André Branco, Luís Mousinho, Teresa Kraus 1,2; correspondence: Teresa Kraus ([email protected]).

The family is the unit mainly responsible for personal development, and any change in one member causes alterations in the entire family [1]. The binomial family-determinants has an impact on education, socialization, health care, and the beliefs and values of the family, as well as on the well-being and health of its members.

To characterize families according to their composition, family cohesion and adaptability, protective factors, health status and its determinants, and to identify points of intervention in the community and the family.

This is a quantitative, correlational and cross-sectional study. Data were collected in February and March 2017, in the council of Leiria, through a questionnaire evaluating sociodemographic data, family functioning, family protective factors and the participants' mental health, using the MHI-5 [2], FACES IV and the Inventory of Family Protective Factors [3].

The sample (N = 224) had an average age of 40.7 years; participants were mainly female (58.9%) and resided in the countryside (69.9%). Family composition ranged from the nuclear type (the most common) to couples, single-parent families, persons living alone, extended families and even a reconstituted family. Most participants consider their family functioning reasonable, are satisfied with the quality of their communication and perceive family protection; however, they consider that they have few gratifying experiences. The majority of the sample claimed a good state of family health; nevertheless, 25.0% showed signs and symptoms of severe depression and 13.4% were living with a chronic disease. As for the determinant factors, most respondents appeared to have good access to health care, unlike transportation and social services. Around 80.0% denied daily tobacco or alcohol consumption, and religious activities were the most popular among the participants.

The results of this study identify major foci of community attention in the council of Leiria: the accessibility and regular functioning of social services; the early diagnosis and treatment of the family/person with depressive symptoms, with prevention of new cases; and the knowledge and integration of care for the person living with chronic disease. Combined with the Calgary Model and the Dynamic Model, the philosophy of care expressed in the Competency for Proactive Unconditional Care guides the discovery of meaning and promotes rewarding relationships, even during apparently negative experiences [4].

1. Hanson S. Enfermagem de Cuidados de Saúde à Família: Teoria, Prática e Investigação. 2nd edition. Loures: Lusociência; 2005.

2. Ribeiro JLP. Mental Health Inventory: Um estudo de adaptação à população portuguesa. Psicologia, Saúde & Doenças. 2001;2(1):77-99.

3. Augusto C, Ferreira O, AraĂşjo B, Rodrigues V, Figueiredo M. Adaptation and validation of the Inventory of Family Protective Factors for the Portuguese culture. Revista latino-americana de enfermagem. 2014;22(6):1001-1008.

4. Kraus T, Dixe M, Rodrigues M. Dor, Sofrimento e Sentido de Vida: Desafio Para a Ciência, a Teologia e a Filosofia. In: Lehmann O, Kroeff P, editors. Finitude e Sentido da Vida: Logoterapia no embate com a tríade trágica. São Paulo: Evangraf; 2014. p. 193-237.

Family, Family health, Health determinants.

P135 Simulation as a pedagogical strategy in nursing teaching: teachers’ perspective

Correspondence: Catarina Carreira ([email protected]).

Currently, classes using simulated practice have become fundamental to the nursing degree. Teachers who teach such classes feel obliged to prepare their students for clinical situations, and there is a need to increase their training in new teaching strategies, such as simulation. Simulation is consequently important for the development of students' competences, and it is therefore essential to deepen knowledge in this area.

We intend to know the perception of teachers of the nursing degree course regarding the use of simulated practice as a pedagogical strategy.

To achieve our objective, we developed a research study using a qualitative approach, with semi-structured interviews applied to seven teachers who teach classes using simulated practice at the Escola Superior de Saúde de Leiria.

The results show that, for the teachers, simulation is an important pedagogical strategy for the creation of scenarios and a facilitator of the learning process, contributing to increased security, confidence, satisfaction and motivation; to the development of technical and non-technical competences; to the formation of professional identity; to the management of autonomy; to the development of critical thinking; and to the standardization of nursing care. Throughout the interviews, the teachers identified constraints on the use of simulation, such as economic and material resources, realism, the time available for training with simulators and the constitution of the classes, since the number of students is considered excessive; the personality of the students was also called into question. To overcome these constraints, the teachers call for the acquisition of material in sufficient quantity to match the size of the classes and the desired realism. They state that practical classes should have a greater workload, with classes split into smaller groups, thus increasing the time given to debriefing.

Our results agree with Gomes & Germano (2007) [2], Guhde (2011) [3] and Amendoeira et al. (2013) [1], who state in their studies that simulated practice allows better articulation of theoretical content with practice, facilitating the learning of content taught in theory and its subsequent reflection in the context of clinical practice, with the development of students' competences. In summary, and answering our research question, the teachers interviewed highlight the importance of simulation in the health field.

1. Amendoeira J, Godinho C, Reis A, Pinto R, Silva M, Santos J. Simulação na Educação em Enfermagem. Conceitos em Transição. Revista da UIIPS. 2013;2:212-228.

2. Gomes C, Germano R. Processo Ensino/Aprendizagem no Laboratório de Enfermagem: visão dos estudantes. Revista Gaúcha de Enfermagem. 2007;28(3):401-408.

3. Guhde JA. Nursing Students' Perceptions of the Effect on Critical Thinking, Assessment, and Learner Satisfaction in Simple Versus Complex High-Fidelity Simulation Scenarios. J Nurs Educ. 2011;50(2):73-78.

Simulation, Nursing, Training.

P136 Simulation as a pedagogical strategy in nursing teaching: students’ perspective

Correspondence: Cláudia Chambel ([email protected]).

Nowadays, students increasingly recognize the importance of simulated practice as an excellent pedagogical strategy, since it allows them to experience situations very similar to reality in an environment free of risks and penalties, which permits reflection and, if necessary, repetition in useful time, thus leaving them better prepared for practice in clinical situations.

We intend to know the perception of students of the nursing degree regarding the use of simulated practice as a pedagogical strategy.

To achieve this, we developed a research study using a qualitative approach, with semi-structured interviews applied to six students of the nursing degree at the Escola Superior de Saúde de Leiria.

The results show that the students see simulation as a pedagogical strategy that facilitates the learning process and contributes to safety, confidence, satisfaction, motivation and the development of technical and non-technical skills, through the recreation of scenarios close to reality. However, they identify some constraints on the use of simulation, such as: economic resources for the acquisition of more recent and sophisticated material; realism, because the available material does not give feedback; the time available for simulated practice; and the constitution of the classes (they consider the number of students excessive, and note that differences in personality mean some students cannot take full advantage of this strategy).

To overcome these constraints, nursing students call for the acquisition of new, up-to-date material in sufficient numbers to compensate for the heterogeneous constitution of the classes and to achieve the desired realism. They affirm that practical classes should carry a greater share of the degree's workload and should receive more attention from the teacher, either by reducing the number of students or by having practical classes taught by two teachers simultaneously.

Our findings are in line with studies stating that high-fidelity simulation facilitates students' learning and acquisition of competences and results in increased motivation, satisfaction, critical thinking and clinical decision-making. Batista, Pereira and Martins (2014) [1] also point out that, for simulated practice to reach its maximum realism, appropriate equipment, environmental conditions similar to clinical practice and a high-fidelity simulator are necessary. In summary, the students interviewed highlight the importance of simulation in the health field.

1. Batista R, Pereira M, Martins J. Simulação no Ensino de Graduação em Enfermagem: Evidências Científicas. Série Monográfica Educação e Investigação em Saúde: A simulação no ensino de Enfermagem. 2014; pp. 65-81.

P137 Partnership between nurses and security forces to reinforce literacy in the use of child safety seats

Rosa Moreira 1, Anabela Almeida 2, 1 Hospital Center Cova da Beira, 6200-251 Covilhã, Portugal; 2 Research Unit in Business, University of Beira Interior, 6201-001 Covilhã, Portugal.

Low health literacy is a recognized problem in Portugal. Health literacy is not related to education alone: it arises from a convergence of factors involving education, cultural and social factors, and health services. According to data provided by the World Health Organization and the World Bank, if awareness is not raised and global behavior does not change, road traffic injuries will increase dramatically by 2020, becoming the third leading cause of death around the world [1]. Nurses can be real agents of change, with a role to play in helping to shape the behavior of parents and other educators and in training them in the correct use of child safety seats (CSS).

The aim of this study was to evaluate whether the partnership between nurses from the Cova da Beira Hospital Center and the regional security forces produced better results in the effective use of CSS by parents or other carers during car transport, and whether there is a gap between them and parents who had never been targeted by the team of nurses and security forces.

This was a cross-sectional, descriptive-correlational study with a quantitative approach, whose participants were children and their educators from 1st cycle schools in the counties of Fundão, Covilhã and Belmonte. The sample was collected by accidental (convenience), non-random sampling. The interview and the observation occurred at the same moment, with the driver of the vehicle carrying the child being the subject of the stop operation.

The stop operations had a strong pedagogical and informative component: drivers were briefed on the findings of the observation, offering a good training opportunity in an informal context. In this study, 83% of the sample were benefiting for the first time from a stop operation promoted by PROVIDAS, and it was possible to conclude that drivers who had already been subject to supervision by PROVIDAS made fewer errors in the use of CSS than drivers who had never been audited by this team.

The results suggest a positive influence of the training and pedagogical activity of nurses and underline the importance of the partnership with security forces for the effective use of CSS. Drivers with no previous connection to PROVIDAS were found to make more mistakes than drivers who had had contact with PROVIDAS and the security forces.

1. Fundación Gonzalo Rodríguez. Manual de Buenas Prácticas: Cómo Abordar la Seguridad de los Niños como Pasajeros de Vehículos. Uruguay: Fundación Gonzalo Rodríguez; 2010.

Child safety seats, Health literacy, Partnership.

P138 Physical activity and body image in physiotherapy students

Paula C Santos 1, Rafael B Pereira 1, Sofia MR Lopes 1, Cristina C Mesquita 2, 1 School of Health, Polytechnic Institute of Porto, 4200-465 Porto, Portugal; 2 Center for Research and Rehabilitation, School of Health, Polytechnic Institute of Porto, 4200-465 Porto, Portugal, Correspondence: Cristina C Mesquita ([email protected]).

There is a decrease in the physical activity of future physiotherapists, due to readjustments after admission to college. High levels of sedentary behaviour cause diverse physical consequences, especially in terms of body image perception. This reflects a multidimensional construction of several psychosocial factors, including motivational factors and behavioural changes.

To characterize the level of physical activity and satisfaction with body image among first-year physiotherapy undergraduate students; to analyse the influence of starting an undergraduate degree on physical activity and body image perception; and to identify the main barriers to the regular practice of physical activity.

An analytical cross-sectional study was carried out on a sample of 60 students (13 males and 47 females) from the first year of the physiotherapy degree at Escola Superior de Saúde do Porto (ESS-P), excluding those who had already attended another undergraduate degree. A sample characterization questionnaire and the International Physical Activity Questionnaire (IPAQ) - Short Version were administered through the Qualtrics platform. For the anthropometric measurements and body composition, the Tanita BC-545N scale and a Seca stadiometer were used. Satisfaction with body image was assessed through the Body Shape Questionnaire. The questionnaire score and the Body Mass Index (BMI) were calculated. Data analysis was performed in the SPSS software, with a significance level of α = 0.05.
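As an illustration of this scoring step, the sketch below computes the BMI and an IPAQ-sf weekly energy expenditure estimate; the MET weights (3.3 for walking, 4.0 for moderate, 8.0 for vigorous activity) follow the published IPAQ short-form scoring protocol, and the function names are ours, not taken from the study's analysis scripts.

```python
# Minimal sketch of the scoring step described above; function names are
# illustrative, not taken from the study's analysis scripts.

def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def ipaq_met_min_per_week(walk_min: float, walk_days: int,
                          mod_min: float, mod_days: int,
                          vig_min: float, vig_days: int) -> float:
    """Weekly energy expenditure (MET-min/week) following the IPAQ
    short-form scoring protocol: walking 3.3 MET, moderate 4.0 MET,
    vigorous 8.0 MET."""
    return (3.3 * walk_min * walk_days
            + 4.0 * mod_min * mod_days
            + 8.0 * vig_min * vig_days)

# Hypothetical student: 30 min of walking on 5 days and 20 min of
# vigorous exercise on 2 days.
print(round(body_mass_index(62.0, 1.68), 1))      # 22.0
print(ipaq_met_min_per_week(30, 5, 0, 0, 20, 2))  # 815.0
```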

About 20% of the total sample were physically inactive and 56.7% were moderately active; there were differences in activity level between males and females, with more males at a "very active" level (61.5% vs 14.9%; p = 0.008). Starting an undergraduate degree led to a decrease in the regular practice of physical activity (78.3% vs 40.0% before and after, respectively; p = 0.001). The main barriers identified to the regular practice of physical exercise were inadequate schedules (71.7%), laziness (30.0%) and fatigue (20.0%). Regarding satisfaction with body image, only females were dissatisfied (30% vs 0%; p < 0.001). Starting an undergraduate degree worsened the perception of body image in 46.7% of the sample, with no gender differences.

A high percentage of students were physically inactive and dissatisfied with their body image, and this was more marked among females. Admission to an undergraduate degree was shown to negatively influence the level of physical activity and body satisfaction. Inadequate schedules are the main barrier to the practice of physical activity.

We thank all the students of the first year of the degree in Physiotherapy of ESS-Porto of the 2016/2017 school year for the readiness and willingness to collaborate in the present study.

Physical Activity, Students, Higher Education, Physiotherapy, Body Image.

P139 Physical activity and stress vulnerability in physiotherapy students

Cristina C Mesquita 1, Elsa S Rodrigues 2, Sofia MR Lopes 2, Paula CR Santos 2, 1 Center for Research and Rehabilitation, School of Health, Polytechnic Institute of Porto, 4200-465 Porto, Portugal; 2 School of Health, Polytechnic Institute of Porto, 4200-465 Porto, Portugal, Correspondence: Paula CR Santos ([email protected]).

International and national health recommendations consider the adoption of an active lifestyle to be fundamental. Given the decrease in physical activity observed in higher education students, it has become essential to raise awareness and promote healthy behaviours. Physiotherapy students are future healthcare professionals and experts in movement, with a primary role in health promotion. Physical activity has benefits for physical well-being, stress reduction and academic performance.

To characterize the physical activity level of first-year physiotherapy students and its influence on stress vulnerability, and to analyse the evolution of physical exercise practice in the transition to higher education.

Cross-sectional analytical study of 60 first-year physiotherapy students from Escola Superior de Saúde, Instituto Politécnico do Porto (ESS-PP). The level of physical activity was evaluated with the International Physical Activity Questionnaire (IPAQ) and vulnerability to stress with the Stress Vulnerability Assessment Scale (23QVS). The 23QVS is a self-assessment tool consisting of 23 questions that assesses an individual's vulnerability to stress: the higher the final score, the more vulnerable the individual, with scores above 43 indicating vulnerability to stress. Questionnaires were completed in the Qualtrics software and data were analysed in SPSS, with a significance level of α = 0.05.
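The 23QVS classification rule described above can be expressed compactly; in the sketch below only the cut-off (a total above 43 indicates vulnerability) is taken from the abstract, while the item-level details are simplified for illustration.

```python
# Minimal sketch of the 23QVS classification rule stated above.
# Only the cut-off (total > 43 => vulnerable) comes from the abstract;
# item-level scoring details are simplified for illustration.

VULNERABILITY_CUTOFF = 43

def is_vulnerable_to_stress(item_scores: list[int]) -> bool:
    """Sum the 23 item scores; higher totals mean higher vulnerability."""
    if len(item_scores) != 23:
        raise ValueError("23QVS has exactly 23 items")
    return sum(item_scores) > VULNERABILITY_CUTOFF

# Example: a respondent averaging 2 points per item scores 46 (> 43).
print(is_vulnerable_to_stress([2] * 23))  # True
```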

Forty percent (40%) of the students practiced physical exercise and 18.3% were considered insufficiently active, with significant differences between genders, males being more active (61.5% vs. 14.9%, p = 0.003). The practice of physical exercise decreased after the transition to higher education (78.3% vs. 40.0%, p < 0.001). It was verified that 40% of the individuals scored above 43 on the 23QVS, showing vulnerability to stress, with a greater proportion among females; nevertheless, no statistically significant differences were identified between genders (p = 0.074). Physical activity did not present a statistically significant relationship with stress vulnerability (p = 0.134; rs = -0.195).

More than half of the students did not practice physical exercise and about a fifth were considered insufficiently active. Males had a higher level of physical activity. A large percentage of students showed excessive vulnerability to stress. Starting higher education led to a decrease in the practice of physical exercise. No relationship was found between the level of physical activity and vulnerability to stress.

Physical Activity, Health Promotion, Stress, Academic Success.

P140 Representations of dementia experienced in the first person: a hermeneutic analysis

Carlos Laranjeira 1, Helena Quaresma 2, 1 Hospital Distrital da Figueira da Foz, 3094-001 Figueira da Foz, Portugal; 2 Escola Superior de Enfermagem de Coimbra, 3046-051 Coimbra, Portugal, Correspondence: Carlos Laranjeira ([email protected]).

The global incidence of dementia has grown exponentially in recent decades. As a chronic disease, it poses a threat to physical and social existence, curtailing or redefining the roles we assume as socially integrated individuals and leading to a profound deconstruction of the everyday world.

a) To describe the representations that people with dementia hold about the disease after diagnosis; b) to understand the adjustment process of the person with dementia, based on lived experience.

The methodological option was an empirical qualitative study of a phenomenological-interpretative nature, inspired by the hermeneutic philosophy of Paul Ricoeur. Seven people with mild dementia were interviewed, most diagnosed with Alzheimer's disease, with a mean age of 71 years. Two interviews were conducted with each participant in a natural setting (their residence), with data collection occurring between July and October 2017.

The present analysis focused on the identity of the person with dementia. Two main themes were created, taking into account the factors that may influence its (de)construction. The first theme, “life in suspense”, describes knowledge and representations about dementia. The lived experience was represented by the second theme, “mapping the transition process - living on the edge of the cliff”.

Findings from this study indicate that disease representations are useful frameworks for developing an understanding of how people with dementia try to manage the threats posed by disease as they negotiate the day-to-day process. The development of disease representations reflects an understanding that the progressive decline imposed by dementia is linked to a set of consequences that are circumscribed in the personal, relational, and transcendental dimensions.

In summary, the person with dementia faces several challenges: the first stems from the need to manage the treatment; the second arises from the need to create and assign meaning to their social roles; and the third is the need to deal with the emotional consequences of the disease process, which calls for adaptive strategies that promote adjustment. In fact, this study, in addition to revealing the lived experience of the person with dementia, has the potential to contribute to the improvement of mental health nursing care.

Mild dementia, Lived-experience, Hermeneutic, Illness representation.

P141 The institutionalized elderly person: representations of happiness and well-being

Magda Guerra, Carlos Laranjeira, Zaida Azeredo, School of Health, Jean Piaget Institute, Research in Education and Community Intervention, 3515-776 Viseu, Portugal, Correspondence: Magda Guerra ([email protected]).

Population aging in developed and developing countries is an unequivocal reality and poses multiple challenges to communities and political entities. Societies aim to prolong the lives of their citizens but also to improve their quality of life; however, the constraints on the elderly population are diverse, and institutionalization is sometimes necessary. Elderly persons then have to become familiar with a set of new situations, such as a new space, new routines and unknown people with whom they will share their lives. The often negative connotations associated with these institutions may no longer match reality, given the changes that have taken place in social policy in recent times.

We sought to understand the representations that institutionalized elders hold about their happiness and well-being.

The sample consisted of 13 elderly people institutionalized in a nursing home in Viseu, aged between 77 and 94, with 4 to 12 years of institutionalization; for some, institutionalization was their own choice, while for others it was decided by someone else (children/nephews). This is a qualitative study using semi-structured interviews. The results were analysed through content analysis, with a priori categorization.

For most of the elderly, happiness depends on a number of factors, such as being healthy, being well with oneself, being cherished at home, living with others, escaping loneliness, not going hungry, loving and being loved, having money for oneself and for others, and having fun. Most confirmed that they feel good about themselves, yet two do not feel well because of sadness and illness. Their memories of the past relate to marriage, starting a family, strength to work and conviviality with friends, whereas in the present they relate to happiness, a sense of general well-being, not being alone and living with the other institutionalized elders.

Elderly, Institutionalization, Happiness, Well-being.

P142 Prevention of ventilator-associated pneumonia: evidence in oral care

Ana Sousa 1,2,3, Cândida Ferrito 4, 1 Universidade Católica Portuguesa, 4169-005 Porto, Portugal; 2 Centro Hospitalar S. João, 4200-319 Porto, Portugal; 3 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal; 4 Escola Superior de Saúde, Instituto Politécnico de Setúbal, 2914-503 Setúbal, Portugal, Correspondence: Ana Sousa ([email protected]).

Ventilator-associated pneumonia (VAP) is the most important nosocomial infection in intensive care units (ICUs), with an estimated incidence of 50%, and is a major cause of mortality and morbidity in ICUs [1,2]. Inadequate oral care plays an important role in this setting, allowing various organisms to flourish in the oral cavity and cause infections [1]. Many VAP prevention guidelines include oral care, but they do not specify its requirements.

The aim of this study is to describe evidence-based oral care for VAP prevention in the ICU, in terms of products, frequency and technique.

Integrative review. The search was conducted on B-on, PUBMED and RCAAP between 24 and 28 December 2015, including guidelines and original articles from the last 5 years. We found 256 documents; after analysing their abstracts and methodological quality, nine documents were selected. Data were compiled in a chart in terms of grade of evidence, acceptance and applicability.

We found inconsistent results regarding the use of an antiseptic solution in oral care, though there were meta-analyses indicating a benefit of chlorhexidine, mostly in cardio-thoracic surgical patients [2-4]. We also found evidence that tooth brushing reduces oral bacterial colonization and may reduce VAP when used with chlorhexidine [5,6]. There is no consensus regarding the adequate concentration of chlorhexidine. Some studies, though, found an association between the use of chlorhexidine 2% and the incidence of Acute Respiratory Distress Syndrome [7]. Because of this potential risk, we do not recommend this concentration until more randomized controlled trials are available. We found evidence supporting VAP prevention oral care comprising suctioning, washing of teeth and gums, and rinsing with 15 mL of chlorhexidine 0.12%. This procedure should be performed at least twice a day. Secretion removal and moisturization should occur 2 to 4 times a day [1-9].

This review allowed us to describe adequate oral care in ICUs with the potential to reduce VAP. A limitation of this study is the lack of high-grade evidence for most recommendations. More randomized controlled trials are needed to establish the impact of each intervention separately.

1. Munro CL, Grap MJ. Oral health and care in the intensive care unit: state of the science. Am J Crit Care. 2004;13(1):25-33.

2. Eom JS, Lee MS, Chun HK, Choi HJ, Jung SY, Kim YS, et al. The impact of a ventilator bundle on preventing ventilator-associated pneumonia: a multicenter study. Am J Infect Control. 2014;42(1):34-37.

3. Labeau SO, Van de Vyver K, Brusselaers N, Vogelaers D, Blot SI. Prevention of ventilator-associated pneumonia with oral antiseptics: a systematic review and metaanalysis. Lancet Infect Dis. 2011;11(11):845-854.

4. Shi Z, Xie H, Wang P, Zhang Q, Wu Y, Chen E, et al. Oral hygiene care for critically ill patients to prevent ventilator-associated pneumonia. Cochrane Database Syst Rev. 2013;8:CD008367.

5. Munro CL, Grap MJ, Jones DJ, McClish DK, Sessler CN. Chlorhexidine, toothbrushing, and preventing ventilator-associated pneumonia in critically ill adults. Am J Crit Care. 2009;18(5):428-438.

6. Roberts N, Moule P. Chlorhexidine and tooth-brushing as prevention strategies in reducing ventilator-associated pneumonia rates. Nursing in Critical Care. 2011;16(6):295-302.

7. Klompas M, Speck K, Howell MD, Greene LR, Berenholtz SM. Reappraisal of routine oral care with chlorhexidine gluconate for patients receiving mechanical ventilation: systematic review and meta-analysis. JAMA Intern Med. 2014;174(5):751-761.

8. Kornusky J, Schub E. Oral Hygiene: Performing for an Intubated Patient. CINAHL. 2015; Nursing Guide.

9. Pileggi C, Bianco A, Flotta D, Nobile CG, Pavia M. Prevention of ventilator-associated pneumonia, mortality and all intensive care unit acquired infections by topically applied antimicrobial or antiseptic agents: a meta-analysis of randomized controlled trials in intensive care units. Critical Care. 2011;15(3):R155.

ICU, Oral care, Chlorhexidine, Tooth brushing, Ventilator-associated pneumonia.

P143 Assessing preferences and features for a mobile app to promote healthy behaviors in adolescence: an exploratory study

Pedro Sousa 1,2, Roberta Frontini 1, Maria A Dixe 1,2, Regina Ferreira 3,4, Maria C Figueiredo 3,4, 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 School of Health Sciences, Polytechnic Institute of Santarém, 2005-075 Santarém, Portugal; 4 Indicators Monitoring Unit in Health, Polytechnic Institute of Santarém, 2005-075 Santarém, Portugal, Correspondence: Pedro Sousa ([email protected]).

A mobile application (TeenPower) to promote healthy behaviours in adolescents is being created. To better tailor the features and digital content of the mobile app, it was important to understand the characteristics of the devices most frequently used by adolescents, as well as which contents are essential for the health professionals who regularly work with them. These data are extremely important during the conception and planning phase of the mobile app.

This study has two main aims: first, to characterize the devices frequently used by adolescents and their preferences for the mobile app; second, to understand which features are most important for health professionals who work closely with adolescents to promote healthy behaviours.

Two samples were recruited: 15 adolescents (M = 15.20; SD = 0.68) with the characteristics of the future users of the mobile app, and 11 health professionals who work closely with adolescents. Both samples answered two questionnaires created specifically for the purpose, using five-point Likert scales and open questions. The instruments comprised questions regarding the types of devices frequently used by adolescents, the content that both adolescents and health professionals consider most important for the promotion of healthy behaviours, and the reasons that would lead adolescents to use mobile apps.

All adolescents use smartphones, but only 20% of the sample frequently use lifestyle and health apps, while the majority (93.3%) use social networks. Most of the sample reported that food suggestions (93.3%) and physical activity suggestions (93.3%) should be included in the app. Adolescents also reported the reasons and features that would influence them to use a health mobile app. Health professionals (90.9% nurses and 9.1% psychologists) reported that the app should include food suggestions (90.9%) and physical activity suggestions (90.9%). All said they would advise an adolescent to use a health-related app, and 81.8% said they would feel comfortable giving advice through a mobile app.

The results of our study help us tailor and choose the most important features to include in the TeenPower app. Understanding what content may be more appealing to adolescents may also help the creation of future content for prevention programs.

The current abstract is being presented on behalf of the research group of the project TeenPower: e-Empowering teenagers to prevent obesity, co-funded by the FEDER (European Regional Development Fund), under the Portugal 2020 Program, through COMPETE 2020 (Competitiveness and Internationalization Operational Program). We acknowledge the Polytechnic Institutes of Leiria, SantarĂŠm and Castelo Branco, the Municipality of Leiria (City Hall), and also other members, institutions and students involved in the project.

Adolescents, e-Health, Preferences, Prevention, TeenPower.

P144 Epidemiological and clinical characterization of men who age with HIV/AIDS in Teresina-Piauí, Brazil

Keila MGS Fortes 1, Maria LS Fortes 2, João GO Freitas 2, Lucas S Terto 3, 1 Municipal Health Foundation, 64002-595 Piauí, Brazil; 2 Federal University of Piauí, 64049-550 Piauí, Brazil; 3 University Center UNINOVAFAPI, 64073-505 Piauí, Brazil, Correspondence: Lucas S Terto ([email protected]).

AIDS is not a "young-only disease", as many older people still believe [1]. People of more advanced age are increasingly present among those living with HIV/AIDS [2]. Many causes have been suggested: sociocultural changes, especially in sexuality; resistance to using condoms; healthcare innovations; access to antiretroviral therapy; and other clinical advances [3]. Among men aged 60 years and over, the detection rate has increased in the last decade [4], which raises the need for public policies aimed at this reality.

To investigate epidemiological and clinical characteristics of men housed in a shelter for people living with HIV/AIDS.

Descriptive study of epidemiological and clinical information on people living with HIV/AIDS housed in a shelter in Teresina-Piauí, Brazil. Data were collected from 28 records selected according to the following criteria: male, aged 50 years or older. The following variables were considered: age group, educational level, place of residence, duration of antiretroviral therapy and clinical manifestations. The data were entered into an Excel 2007 spreadsheet, analysed through percentage differences and discussed with reference to documents and articles addressing the proposed theme.

Ages ranged between 50 and 64 years, with a predominance of individuals aged 50 to 60 years. The majority presented a low educational level (79.75%) and came from cities with fewer than 50,000 inhabitants in other states of Brazil (53.57%), seeking in Teresina treatment and follow-up of their health, shelter with food, nursing, social and spiritual assistance and, in some cases, family reintegration. Most had been using antiretroviral drugs for 10 to 20 years, and 7.15% for more than 30 years. The clinical manifestations detected were lung cancer, hepatitis B and C, depression, dermatitis, tuberculosis, neuropathy and leishmaniasis.

This study draws attention to the increase in HIV detection among men aged 50 years and over, especially those aged 60 years or older, living in low-density cities in Brazil, presenting several clinical manifestations, a low educational level, which hinders access to preventive information on sexual health, and antiretroviral use for more than ten years. The study points to the need for more research on HIV infection among the elderly, to help implement more effective public policies for this group.

1. Gorinchteyn J. Sexo e AIDS Depois dos 50. 1st edition. São Paulo: Ícone; 2010.

2. Okuno MFP, Gomes AC, Meazzini L, Júnior GS, Junior DB, Belasco AGS. Qualidade de Vida de Pacientes Idosos Vivendo Com HIV/AIDS. Cad. Saúde Pública, Rio de Janeiro. 2014;30(7):1551-1559.

3. Serra A, Sardinha AHL, Pereira ANS, Lima SCVS. Percepção de Vida dos Idosos Portadores do HIV/AIDS atendidos em Centro de Referência Estadual. Saúde em Debate, Rio de Janeiro. 2013;37(97):294-304.

4. Ministério da Saúde. Secretaria de Vigilância em Saúde. Boletim Epidemiológico - AIDS e DST. Ano V, Nº 1. 2015-2016.

Old men, AIDS, Health care.

P145 Assessing digital contents for health promotion and obesity prevention in adolescence

Rita Luz 1, Pedro Sousa 1,2, Roberta Frontini 2, Andreia Silva 1, Briana Manual 1, Cláudia Ramos 1, Mónica Ruivo 1, Rúben Abreu 1, Tiago Pozo 1, Ana E Sardo 1, Francisco Rodrigues 1, Luís Fernandes 1, Correspondence: Rita Luz ([email protected]).

The TeenPower project is an e-Health multidisciplinary program to promote healthy behaviours and prevent obesity in adolescents. In this study we focus on two of the components of the platform: stress management and interpersonal relationships. It is important to address stress when designing prevention programs for adolescents, given that the literature associates higher levels of stress with obesity in youth. Interpersonal relationships have also been found to influence obesity-related behaviours. Online sessions were specifically created to address these issues in the mobile app. This digital content (videos and posters) should be not only appealing to adolescents but also scientifically valid and correct.

The main aim of this study was to assess the scientific quality and adequacy of posters and videos created for the TeenPower mobile app, regarding stress management and interpersonal relationships.

Digital resources in video and poster format were created, covering stress management and interpersonal relationships. Videos were 2 minutes long, and posters included infographics and written content with language appropriate for adolescents. These resources were developed by students of health, sports and design, in collaboration with health professionals and researchers. The sample included adolescents with the sociodemographic characteristics of the future users of the mobile app, and health professionals working in this field. A questionnaire was developed and validated for the purpose, based on the questionnaire created by Junior and colleagues.

Results were obtained regarding the concept idea, the information, the construction of scenes and characters, the dialogues, the visual and audio style, and the quality and relevance of the information. Moreover, adolescents rated the attractiveness and adequacy of the content, and health professionals rated the scientific accuracy of the information. Both samples suggested content improvements.

The digital content of the mobile app regarding stress management and interpersonal relationships is adequate and appealing for both adolescents and health professionals. The assessment of digital content is crucial to understanding its acceptability to future users. Digital contents and online sessions are extremely important, given that adolescents use new technologies on a daily basis. Furthermore, digital contents may have the potential to enhance adherence to programs, promoting healthy behaviours and preventing obesity.

Adolescents, e-Health, Obesity, Health promotion, Digital contents.

P146 Assessing digital content in the TeenPower project: development and validation of a questionnaire

Roberta Frontini 1, Pedro Sousa 1,2, Rita Luz 2, Ana Duarte 2, Beatriz Sismeiro 2, Maria Moreira 2, Romeu Machado 2.

The TeenPower project aims to develop a program to promote healthy behaviours and prevent obesity in adolescents. It is a multidisciplinary project with an important e-Health component; therefore, including valid digital content may help to maximize and optimize the impact of the program. Given that digital resources have evolved considerably over the years, there is growing concern about the acceptance of digital content by the target public. Thus, to assess the digital resources of the TeenPower project, it was necessary to develop and validate a questionnaire that could accurately assess the quality and adequacy of the digital content of the TeenPower mobile app.

To develop and validate a questionnaire to assess the quality and adequacy of the videos and posters of the TeenPower project.

Two scales were developed based on the questionnaire created by Junior and colleagues [1]: one for adolescents (12-16 years old) and one for health professionals. The questionnaire for adolescents comprised 18 items answered on a 5-point Likert scale plus 2 open questions (to assess the video content), and 11 items answered on a 5-point Likert scale plus 2 open questions (to assess the poster content). The questionnaire for health professionals comprised 17 items answered on a 5-point Likert scale plus 2 open questions (to assess the video content), and 11 items answered on a 5-point Likert scale plus 2 open questions (to assess the poster content). The sample included adolescents with the sociodemographic characteristics of the future users of the mobile app, and specialized health professionals. Exploratory factor analyses and analysis of internal consistency through Cronbach's alpha were performed.
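For readers unfamiliar with the internal-consistency step, the sketch below computes Cronbach's alpha for a Likert response matrix; the simulated data and the function are illustrative only and do not reproduce the study's dataset.

```python
# Minimal sketch of the internal-consistency step; the data are simulated,
# not the study's dataset.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulated 5-point Likert responses (n=30 respondents, 18 items,
# mirroring the adolescent video questionnaire's item count).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))          # respondent tendency
noise = rng.integers(-1, 2, size=(30, 18))       # item-level variation
responses = np.clip(base + noise, 1, 5)
print(round(cronbach_alpha(responses), 2))
```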

Data were obtained regarding the concept idea, the construction of scenes and characters, the dialogues, the visual and audio style, and the quality and relevance of the information. Data regarding the acceptance and comprehension of the content and digital form of the app were obtained from adolescents. The quality and rigour of the scientific information was validated by health professionals.

The questionnaire presented good psychometric qualities, with adequate values for internal consistency and factor analysis. Given the vast offer of health-related digital content nowadays, there is a concern to use not only content that appeals to future users but also valid and scientifically correct information. This questionnaire may be an important tool to understand the acceptability and quality of the scientific content of the videos and posters.

Adolescents, e-Health, Validation, Questionnaire, Digital content.

P147 Implementation process of “Engaging Clients Who Use Substances” guideline in a nursing school curriculum

Olga Valentim 1, Maria José Nogueira 1, Luís Sousa 1,2, Vanessa Antunes 1, Sandy Severino 1,3, António Ferreira 4, Luís Gens 4, Luís Godinho 5, 1 School of Health Sciences, Atlântica University, 2730-036 Barcarena, Portugal; 2 Hospital Center Lisbon Central, Curry Cabral Hospital, 1050-099 Lisbon, Portugal; 3 Health Center Groupings Loures-Odivelas, Regional Health Administration Lisboa e Vale do Tejo, 2685-101 Sacavém, Portugal; 4 Hospitaller Order of São João de Deus, Telhal Health House, 1600-871 Lisboa, Portugal; 5 Psychiatry Department, Garcia da Orta Hospital, 2805-267 Almada, Portugal, Correspondence: Olga Valentim ([email protected]).

Nursing research has produced knowledge that has contributed to improving health care and reducing costs. The implementation of guidelines ensures the transfer of the best evidence to clinical practice [1]. Substance-related problems can occur at any age but usually begin in adolescence [2]. The guideline Engaging Clients Who Use Substances, developed by the Registered Nurses' Association of Ontario (RNAO), provides evidence-based recommendations on the assessment of, and interventions for, people over 11 years of age who use substances, may be at risk of a substance use disorder, or have one [3].

To present the experience of implementing the RNAO's Engaging Clients Who Use Substances guideline in the curriculum of the nursing degree (CLE) of the Atlantic Health School (ESSATLA).

Implementation procedures endorsed by the RNAO were followed, involving teachers, students and nurses from several clinical practice contexts. First, an analysis and reflection were carried out considering ESSATLA's CLE curriculum, the unit sheets and the Engaging Clients Who Use Substances guideline recommendations. Afterwards, a guideline implementation plan was designed to fit the CLE, based on structure, process and outcome indicators. Training of teachers and clinical tutors was carried out, and some guideline topics were included in several units: establishing therapeutic relationships [4] and person- and family-centred care [5].

To date, the guideline implementation process has produced several outcomes: seminar meetings held with all stakeholders involved in the implementation process; a partnership training project (partnership training seminars); a workshop scheduling plan; a Portuguese translation of Engaging Clients Who Use Substances in progress (a collaboration between teachers and nursing expert stakeholders); didactic materials to support content implementation in the nursing curriculum; student evaluation tools and instruments; three students included the topic of substance use in their end-of-course monograph project; and some students in elderly-care clinical practice delivered an in-service training session on this subject.

The implementation of this guideline in the CLE curriculum has empowered students to become more confident and competent in caring for people who use substances, namely regarding screening, the assessment process and intervention in substance use disorders. It also meets the expectations of the stakeholders involved, supporting their performance based on scientific evidence.

1. Silva AG. Implementación de guías de buenas prácticas clínicas elaboradas por Registered Nurses Association of Ontario (RNAO) en el curriculum de Enfermería Universidad de Chile. MedUNAB. 2015;17(3):182-9.

2. Serviço de Intervenção nos Comportamentos Aditivos e nas Dependências (SICAD). Relatório Anual 2015: A Situação do País em Matéria de Drogas e Toxicodependências. Lisboa: SICAD. 2016.

3. Registered Nurses’ Association of Ontario (RNAO). Engaging Clients Who Use Substances. Toronto, ON: Registered Nurses’ Association of Ontario. 2015.

4. Registered Nurses’ Association of Ontario (RNAO). Establishing Therapeutic Relationships. Toronto, ON: Registered Nurses’ Association of Ontario. 2002.

5. Registered Nurses' Association of Ontario (RNAO). Person- and Family-Centred Care. Toronto, ON: Registered Nurses' Association of Ontario. 2015.

Substance-Related Disorders, Evidence-Based Nursing, Nursing Education.

P148 An overview of vitamin B in food supplements

Correspondence: Isabel M Costa ([email protected]).

Over the last decade, sales of vitamins have increased significantly worldwide. Besides the growth of self-diagnosis and self-medication by consumers, these products are also often consumed without any control or medical supervision, over extended periods of time, due to the general misperception that "natural" means "harmless". Despite their beneficial effects, excess vitamin intake is not innocuous. Although vitamin B6 is a co-factor for several enzymatic reactions involved in numerous metabolic and physiological processes, overdoses may produce neurological disturbances, including sensory neuropathy.

The aim of this study was to check food supplement (FS) labels in terms of vitamin B (vitB) dosages, compared with the recommended daily allowances (RDA) defined for these vitamins by the European Union Directive. A total of 80 FS sold in Portuguese pharmacies, supermarkets or health shops and on the internet were examined for the indicated daily intake and dosage of vitamins B1, B2, B3, B5, B6, B7 and B12. Selection criteria included: oral solid pharmaceutical forms for adults containing vitB in their composition, as stated on the label, regardless of the purpose of the FS.
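The label-screening step can be summarized in a few lines of code. In the sketch below, the RDA values are the EU labelling reference values as we understand them and the vitamin B6 UL is the EFSA value cited in this abstract; both the values and the function are illustrative and should be verified against the source documents before reuse.

```python
# Sketch of the label-vs-RDA check described above. The reference values
# below are the EU labelling RDAs as we understand them; they are included
# for illustration and should be verified against the Directive.

RDA_MG = {"B1": 1.1, "B2": 1.4, "B3": 16.0, "B5": 6.0,
          "B6": 1.4, "B7": 0.05, "B12": 0.0025}
UL_B6_MG = 25.0  # EFSA tolerable upper intake level cited in the abstract

def check_label(daily_doses_mg: dict[str, float]) -> dict[str, str]:
    """Flag each vitamin dose on a supplement label against the RDA,
    and vitamin B6 additionally against the EFSA UL."""
    flags = {}
    for vit, dose in daily_doses_mg.items():
        flag = "above RDA" if dose > RDA_MG[vit] else "within RDA"
        if vit == "B6" and dose >= UL_B6_MG:
            flag = "at/above EFSA UL"
        flags[vit] = flag
    return flags

# Hypothetical multivitamin label:
print(check_label({"B1": 1.4, "B6": 25.0, "B12": 0.001}))
# {'B1': 'above RDA', 'B6': 'at/above EFSA UL', 'B12': 'within RDA'}
```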

Results showed FS label doses above the RDA in 70.0% (vitB1), 75.0% (vitB2), 67.4% (vitB3), 51.1% (vitB5), 74.3% (vitB6), 45.7% (vitB7) and 60.3% (vitB12) of cases. Thirty-three (33) FS contained all the studied vitB, six of which had all vitamins above the RDA. Four (4) FS (5.7%) indicated a daily dose of vitB6 at or above the tolerable upper intake level defined by EFSA (UL = 25 mg/day).

The majority of FS presented vitB doses far above the defined RDA. Although reports of toxic events due to vitamins are scarce, it is crucial that the daily doses present in FS are reviewed to ensure the safety of these products. The authors also consider that FS should be subject to the same quality control as pharmaceuticals, safeguarding the health of consumers.

Vitamin B, Food Supplements, Recommended Daily Allowances.

P149 Is the International Physical Activity Questionnaire (IPAQ-sf) valid to assess physical activity in patients with COPD? Comparison with accelerometer data

Joana Cruz 1,2,3, Cristina Jácome 3,4, Alda Marques 3,5, 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Respiratory Research and Rehabilitation Laboratory, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 4 Center for Health Technologies and Information Systems Research, Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal; 5 Institute of Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal, Correspondence: Joana Cruz ([email protected]).

The International Physical Activity Questionnaire short form (IPAQ-sf) is primarily designed for physical activity (PA) surveillance and presents good psychometric properties in people aged 15-69 years. However, studies conducted in older people have shown conflicting results, suggesting that it may not be adequate for this population. Therefore, the usefulness of the IPAQ-sf for assessing PA in patients with chronic conditions such as chronic obstructive pulmonary disease (COPD), who are frequently older, remains unclear.

To conduct a preliminary evaluation of the validity and test-retest reliability of the IPAQ-sf in patients with COPD.

This exploratory cross-sectional study included 10 patients with COPD (71.6 ± 7.3 years old, 7 males, FEV1=77.2 ± 20.7% predicted). Participants completed the IPAQ-sf on two occasions separated by 1 week and wore an accelerometer (Actigraph GT3X+) for 7 consecutive days. The following statistical analyses were conducted: 1) Pearson’s correlation coefficient (r) to assess correlations between the results obtained from the IPAQ-sf (PA in METs-min/week; sitting time in min/day) and the accelerometer (PA: total moderate-to-vigorous physical activity [MVPA] per week and recommended MVPA per week, i.e., MVPA conducted in bouts of at least 10 min as internationally recommended [1]; sedentary time in min/day); 2) percentage of agreement (%agreement) and Cohen’s kappa to assess the agreement between categorical scores obtained from the two measures (i.e., ‘sufficiently’ and ‘insufficiently’ active patients); 3) Intraclass Correlation Coefficient (ICC2,1) and 95% limits of agreement (LoA) to assess test-retest reliability and agreement.
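The sketch below illustrates how the agreement statistics named above can be computed with generic scientific-Python tools rather than the study's SPSS syntax; the paired values and activity cut-offs are hypothetical, and the ICC2,1 step is omitted here (it is available in dedicated packages such as pingouin).

```python
# Illustrative computation of the agreement statistics named above,
# using hypothetical data for n = 10 patients (not the study's data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

ipaq_met_min = np.array([600, 0, 1200, 300, 2400, 150, 800, 0, 500, 1000.])
accel_mvpa = np.array([210, 40, 350, 120, 600, 60, 250, 30, 180, 300.])

r, p = pearsonr(ipaq_met_min, accel_mvpa)  # criterion validity
print(f"r = {r:.3f}, p = {p:.3f}")

# Agreement on 'sufficiently active' (cut-offs for illustration only).
active_ipaq = (ipaq_met_min >= 600).astype(int)
active_accel = (accel_mvpa >= 150).astype(int)
kappa = cohen_kappa_score(active_ipaq, active_accel)
agreement = (active_ipaq == active_accel).mean() * 100
print(f"kappa = {kappa:.3f}, %agreement = {agreement:.0f}%")

# 95% limits of agreement (Bland-Altman) for test-retest differences.
retest = np.array([480, 120, 900, 300, 1800, 0, 900, 60, 400, 1200.])
diff = ipaq_met_min - retest
loa = (diff.mean() - 1.96 * diff.std(ddof=1),
       diff.mean() + 1.96 * diff.std(ddof=1))
print(f"LoA: {loa[0]:.0f} to {loa[1]:.0f} MET-min/week")
```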

Significant correlations were found between IPAQ-sf METs-min/week and total MVPA (r=0.729, p=0.017), but not between METs-min/week and recommended MVPA (r=0.346, p=0.327) or between IPAQ-sf sitting time and accelerometer-based sedentary time (r=-0.383, p=0.308). Agreement between the IPAQ-sf and accelerometer-based data, in identifying ‘sufficiently’ and ‘insufficiently’ active patients, was low (total MVPA: kappa=-0.538, %agreement=20%; recommended MVPA: kappa=-0.087, %agreement=50%). Test-retest reliability of the IPAQ-sf was poor to moderate (PA: ICC2,1=0.439 [-0.267→0.838]; sedentary time: ICC2,1=0.511 [-0.178→0.864]) and the agreement was low (PA: LoA: -10361→4548 METs-min/week; sedentary time: LoA: -194→148 min/day).

Findings suggest that the IPAQ-sf has limited validity and reliability in the assessment of PA in patients with COPD. Further research with a larger sample is needed to support these findings.

1. Garber CE, Blissmer B, Deschenes MR, Franklin BA, Lamonte MJ, Lee IM, et al. American College of Sports Medicine position stand. Quantity and quality of exercise for developing and maintaining cardiorespiratory, musculoskeletal, and neuromotor fitness in apparently healthy adults: guidance for prescribing exercise. Med Sci Sports Exerc. 2011;43(7):1334-1359.

Accelerometry, Chronic obstructive pulmonary disease, Psychometric properties, Physical activity, Self-report measure.

P150 Concurrent validity of the Portuguese version of the brief physical activity assessment tool

Joana Cruz 1,2,3, Cristina Jácome 3,4, Nuno Morais 1,2,5, Ana Oliveira 3,6, Alda Marques 3,6, 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Respiratory Research and Rehabilitation Laboratory, School of Health Sciences, University of Aveiro, 3810-193 Aveiro, Portugal; 4 Center for Health Technologies and Information Systems Research, Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal; 5 Centre for Rapid and Sustainable Product Development, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 6 Institute of Biomedicine, University of Aveiro, 3810-193 Aveiro, Portugal.

Physical activity (PA) is recognised as an important health-enhancing behaviour and should be routinely assessed in clinical practice to identify insufficiently active people. Activity monitors, such as accelerometers, provide objective assessment of free-living PA and are the preferred assessment method in research settings. However, they are too expensive for use in resource-constrained clinical settings. Several PA questionnaires have already been validated for European Portuguese, but some take too long to complete, making them unfeasible for clinical practice. Shorter PA assessment tools are therefore needed.

To explore the relationship between the Portuguese version of a short PA questionnaire, the Brief physical activity assessment tool (Brief-PA tool), and the International Physical Activity Questionnaire short form (IPAQ-sf), which is a valid and reliable PA assessment tool already tested in the Portuguese population. A secondary aim was to explore the test-retest reliability of the Brief-PA tool.

The Brief-PA tool [1] consists of 2 questions assessing the frequency and duration of moderate and vigorous PA undertaken in a ‘usual’ week. The total score is obtained by summing the results of the two questions (range 0-8), and people scoring ≥ 4 are considered ‘sufficiently active’. Since the tool was not available in Portuguese, a linguistic adaptation was conducted using the forward- and back-translation method. Then, 86 healthy volunteers (49.5±18.1 years, age range 20-69; 53 female) completed the Brief-PA tool and the IPAQ-sf. A sub-sample (n=56, 43.1±18.1 years, 37 female) completed the Brief-PA tool again one week later. Spearman’s rank correlation coefficient (ρ) was used to assess correlations between the Brief-PA total score and the IPAQ-sf results (MET-min/week). Percentage of agreement (%agreement) and Cohen’s kappa were used to assess the agreement between the categorical scores obtained from the two measures (i.e., ‘sufficiently’ and ‘insufficiently’ active) and the test-retest reliability of the Brief-PA tool.
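The scoring rule just described is simple enough to state in code; in the sketch below, only the 0-8 range and the ≥ 4 'sufficiently active' cut-off come from the abstract, and the assumption that each item is scored 0-4 follows from the stated total range.

```python
# Minimal sketch of the Brief-PA tool scoring rule described above.
# The 0-8 range and the >= 4 cut-off come from the abstract; the 0-4
# per-item range is inferred from the stated total range.

SUFFICIENTLY_ACTIVE_CUTOFF = 4

def brief_pa_score(vigorous_item: int, moderate_item: int) -> int:
    """Total score = sum of the two items (each assumed scored 0-4)."""
    for item in (vigorous_item, moderate_item):
        if not 0 <= item <= 4:
            raise ValueError("each Brief-PA item is scored 0-4")
    return vigorous_item + moderate_item

def is_sufficiently_active(score: int) -> bool:
    return score >= SUFFICIENTLY_ACTIVE_CUTOFF

print(is_sufficiently_active(brief_pa_score(2, 3)))  # True (score 5)
```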

Significant correlations were found between the Brief-PA tool and the IPAQ-sf (ρ = 0.721, p < 0.001). The Brief-PA tool identified 34.8% sufficiently active participants while the IPAQ-sf identified 59.3%. Agreement between measures was moderate (%agreement=70.9%, kappa=0.450). Test-retest reliability of the Brief-PA tool was substantial (%agreement=89.3%, kappa=0.755).

The Brief-PA tool seems to be valid and reliable for assessing PA in the Portuguese adult population, although the agreement with the IPAQ-sf was only moderate. Further research assessing the validity of the Brief-PA tool with objective measures is needed.

1. Marshall AL, Smith BJ, Bauman AE, Kaur S. Reliability and validity of a brief physical activity assessment for use by family doctors. Br J Sports Med. 2005;39(5):294-297.

Concurrent validity, Daily living, Physical activity, Self-report measure.

P151 Effect of an exercise program on the risk of falls in community-dwelling older adults

Sara Martins, Anabela Martins, Carla Guapo, Sílvia Vaz, Correspondence: Sara Martins ([email protected]).

Falls are a problem among the elderly population: about 30% of people over 65 years currently fall every year. The European Union estimates a cost of €281 per inhabitant per year, amounting to €25 billion per year in health care [1], which translates into a significant economic impact. The World Health Organization (WHO) [2] argues that it is possible to reduce these costs through prevention and health promotion strategies. To this end, it is important to raise awareness, evaluate risk factors, and identify and implement intervention programs.

To test the effect of an exercise program on the risk of falls.

This was a 4-month prospective experimental study. The experimental group (EG) performed an exercise program while the control group (CG) maintained its usual routine. The following instruments were used to measure the variables under study: a sociodemographic questionnaire, the self-efficacy for exercise scale, the Portuguese version of the Falls Efficacy Scale (FES), the 10 m walking speed (WS) test, the Timed Up & Go test (TUG), the step test and the Hercules® Force Platform (static balance). A significance level of 5% (p ≤ 0.05) was considered for all comparisons.

After intervention, there were differences in walking speed (p < 0.001), FES (p < 0.001), static balance (p < 0.001), and self-efficacy for exercise (p = 0.004), with EG scoring higher than CG.

This exercise program, which integrated activities of daily living (ADL), muscle strengthening, balance and flexibility exercises, complemented by walking, produced improvements in static balance and walking speed. There was also a change in behaviour regarding confidence in performing ADL and in the perceived ability to learn and integrate exercise into daily life, thus contributing to a decreased risk of falls.

We thank the Penacova Health Center - ARS Centro for opening its facility for the program implementation and all its professionals for their collaboration.

1. Active ageing through preventing falls: “Falls prevention is everyone’s business”. European Stakeholders Alliance for Active Ageing through Falls Prevention. Prevention of Falls Network for Dissemination (ProFouND); 2015.

2. Global recommendations on physical activity for health. Switzerland: World Health Organization (WHO); 2010.

Prevention, Active Ageing, Fall, Fall Risk, Exercise Programs.

P152 Nutritional impact of food waste in a school cafeteria

Ana Braga 1, Ana Pedrosa 1, Fabiana Estrada 1, Matilde Silva 1, Ana Sousa 1, Cidália Pereira 1,2, João Lima 1,3, Vânia Ribeiro 1,2, 1 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal, Correspondence: Ana Braga ([email protected]).

In Portugal, it is estimated that about 31% of food waste occurs in the last stage of the food production chain and that about 13% of the food served is not consumed [1]. Food waste contributes to an inadequate nutritional intake, since part of the food served is discarded; its evaluation is therefore important [2].

To evaluate the nutritional impact of food waste in relation to nutritional requirements in a food unit of the Social Services of the Polytechnic of Leiria.

Food waste was determined by the aggregate component weighing method at the lunch and dinner periods of a randomly chosen day. The data were collected in the cafeteria and snack bar serving the largest number of meals. Only non-fractioned dishes were considered in this analysis. The nutritional impact of food waste was evaluated against the nutritional requirements of a typical customer of the food services: an average energy value of 2000 kcal, of which the two evaluated meals account for 55% of the Total Energy Value (TEV), 30% for lunch and 25% for dinner [3, 4].

A total of 288.9 kg of food was produced, of which 40.3 kg ended up as plate waste. The average amount of food served per meal was 329 g, of which an average of 50 g was plate waste. About 14% of the total food produced was therefore discarded as plate waste, which represents around 123 meals. When evaluating the nutritional impact of food waste, the average waste per person was 84 kcal, about 15.3% of the energy value of the meals. Per meal, 2.4 g of lipids, 5.6 g of protein and 9.6 g of carbohydrates were wasted, equivalent to about 13.1%, 20.4% and 14.0%, respectively, of the needs of the typical customer of the food services.
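The arithmetic behind these percentages can be reproduced directly; the sketch below uses the abstract's stated assumptions (2000 kcal/day reference; lunch 30% and dinner 25% of TEV) and assumes the reported 15.3% refers to the average of the lunch (600 kcal) and dinner (500 kcal) references.

```python
# Worked arithmetic behind the percentages reported above, under the
# abstract's assumptions (2000 kcal/day; lunch 30% and dinner 25% of TEV).

produced_kg, wasted_kg = 288.9, 40.3
waste_fraction = wasted_kg / produced_kg
print(f"plate waste: {waste_fraction:.1%}")  # ~13.9% (~14%)

daily_energy = 2000.0
# Average of the lunch (600 kcal) and dinner (500 kcal) references.
meal_energy = (0.30 * daily_energy + 0.25 * daily_energy) / 2  # 550 kcal
waste_per_person_kcal = 84.0
print(f"energy wasted per meal: {waste_per_person_kcal / meal_energy:.1%}")
# ~15.3% of the energy value of an average meal
```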

Based on these results, we observe a food waste impact of about 10% of an individual's nutritional requirements, making the reinforcement of periodic evaluation of consumption and plate waste in school meals imperative, given that food waste is a decisive factor in energy and nutritional adequacy [5]. It is also necessary to develop actions to raise awareness in the student community about the nutritional impact of food waste on the health of the population.

1. Baptista P, Campos I, Pires I, Vaz S. Do Campo ao Garfo. 2012.

2. Figueira J. Influência da satisfação com as refeições escolares no desperdício alimentar, em crianças do 4º ano de escolaridade. 2012.

3. Afonso C, Santos MCT, Morais C, Franchini B, Chilro R, Rocha A. Sistema de planeamento e avaliação de refeições escolares - SPARE. Rev Aliment Humana. 2011;17(1–3):37–46.

4. Instituto Nacional de Saúde Doutor Ricardo Jorge. Tabela da Composição de Alimentos. 2007.

5. Bergman EA, Buergel NS, Englund TF, Femrite A. The Relationship of Meal and Recess Schedules to Plate Waste in Elementary Schools. J Child Nutr Manag. 2004;28(2).

College, Dining, Lunch, School Meals, Leftovers.

P153 Food insecurity and obesity paradox: nutritional intervention strategies

Carla C Correia 1, Ana L Baltazar 2, José Camolas 1, Manuel Bicho 1, 1 Faculty of Medicine, University of Lisbon, 1649-028 Lisbon, Portugal; 2 Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal, Correspondence: Carla C Correia ([email protected]).

The economic crisis of recent years has deepened social disparities in Europe, which are reflected in people's food insecurity levels and in public health. Food insecurity occurs when the consumer's physical, social and economic access to adequate and nutritious food is scarce or non-existent. Food insecurity is associated with chronic diseases, such as obesity, type 2 diabetes, dyslipidaemia and hypertension, and with poor health status, due to unbalanced food habits and sedentary lifestyles. People in this low socioeconomic position need social and nutritional intervention to improve their habits and their health in general.

To analyse and discuss the existing strategies for intervening in the "food insecurity versus obesity" paradox.

A narrative review of the state of the art was performed according to PRISMA standards, complemented by snowball searching, including scientific articles, official documents and books on the European population from 2007-2017.

Each person's access to a healthy and nutritious diet should be a right guaranteed by every country. Strategies to deal with the impact of food insecurity on health status should be multidisciplinary, addressing economic, psychological, social and physiological issues, and should involve the health, social, education, agriculture and economic sectors. Prices are an important determinant of people's choices. Control of food marketing and support for agriculture and local markets are strategies to facilitate access to healthy food. It is important to implement monitoring programs in primary health care and schools, to develop nutrition and physical activity projects at a local level, to alert professionals to food insecurity issues and their relation to obesity, and to intervene in a timely way in pregnancy and family planning appointments, as means of preventing diseases related to food insecurity.

Chronic diseases impose high costs on health systems and raise questions about their sustainability. An adequate and timely intervention should include food education and the promotion of healthy lifestyles, so the integration of nutritionists into food assistance programs is urgent.

Food insecurity, Chronic diseases, Social disparities, Nutritional intervention.

P154 Numerical methodology to support a medical device development

Filipa Carneiro 1, Lourenço Bastos 1, Ângelo Marques 1, Rita Marques 1, Jordana Gonçalves 1, Andreia Vilela 1, André Maia 2, Sara Cortez 2, Anabela Salgueiro-Oliveira 3, Pedro Parreira 3, Bruno Silva 1, 1 Innovation in Polymer Engineering, University of Minho, 4800-058 Guimarães, Portugal; 2 Muroplás – Indústria de Plásticos, S.A., 4745-334 Trofa, Portugal; 3 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal, Correspondence: Filipa Carneiro ([email protected]).

The use of numerical simulation as part of the product development process makes virtual prototypes available that can be tested quickly and cheaply. These computational tools facilitate and improve design optimization and materials selection during the development process, so that predefined product requirements for use in health contexts are met [1].

To develop and validate a numerical methodology to support an innovative syringe, predicting its mechanical and flow behaviour during syringe loading and patient administration, as well as the injection molding process of its constituent components. This methodology aims to optimize the geometries of the syringe’s components, thereby creating an iterative process for product development based on numerical simulations.

This numerical methodology was implemented using dedicated software for fluid dynamics, mechanical behaviour and the injection process. To study fluid-structure interaction (FSI) during syringe loading and the administration of medicines and washing solutions to the patient, the output of the fluid dynamics simulations served as input for the structural simulations. Material properties were determined experimentally. The FSI numerical models were validated by comparison with experimental tests on a single-chamber syringe. The validated models were then applied to new innovative syringe concepts, whose injection molding processes were also evaluated numerically.
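
A minimal, self-contained sketch of the iterative concept-evaluation loop described here is given below. The solver calls are replaced by toy cost functions, and all names, geometries and thresholds are illustrative assumptions; in the real workflow each step would invoke the CFD, structural (FSI) and injection-molding simulations.

```python
def fluid_load(geometry):
    """Stand-in for the CFD step: returns a wall-pressure proxy (toy model)."""
    return 1.0 / geometry["bore_mm"]

def structural_stress(geometry, load):
    """Stand-in for the structural step, fed by the CFD output (one-way FSI)."""
    return load * geometry["length_mm"] / geometry["wall_mm"]

def mouldable(geometry):
    """Stand-in for the injection-molding check (illustrative minimum wall)."""
    return geometry["wall_mm"] >= 0.8

def acceptable(geometry, max_stress=25.0):
    """All predefined requirements must hold for a concept to be accepted."""
    return structural_stress(geometry, fluid_load(geometry)) <= max_stress and mouldable(geometry)

# Iterate on the concept geometry until it meets every requirement.
concept = {"bore_mm": 8.0, "length_mm": 60.0, "wall_mm": 0.6}
while not acceptable(concept):
    concept["wall_mm"] = round(concept["wall_mm"] + 0.1, 1)
print("accepted concept:", concept)
```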

The numerical simulations were validated on the simple case of a single-chamber syringe, by comparing the numerical solution with experimental results. An iterative process for product development based on numerical simulations was developed and applied, yielding an optimized innovative product design that fulfils all predefined specifications and requirements.

This iterative process based on numerical simulations is a powerful product development tool that yields fast and accurate results without strictly requiring physical prototypes. It consists of consecutive construction and evaluation of new concepts until an optimized solution that fulfils all predefined specifications and requirements is obtained. Prior validation of the numerical methodology against a reference model is an essential step to guarantee the reliability of the numerical models applied in the development of medical devices.

Work funded by the FEDER fund, Operational Programme for Competitiveness and Internationalisation (COMPETE 2020), project POCI-01-0247-FEDER-017604.

1. Oliveira RF, Teixeira SFCF, Silva LF, Teixeira JCF, Antunes H. Development of new spacer device geometry: a CFD study (Part I). Comput Methods Biomech Biomed Engin. 2012;15(8):825-833.

Numerical simulation, Fluid-structure interaction, Injection process, Syringe.

P155 The impact of the care provided by informal caregivers of patients with mental illness on caregiver burden

Catarina Tomás 1,2, Ana Querido 1,2, Marina Cordeiro 1,2, Daniel Carvalho 3, João Gomes 3.

Psychiatric disease is one of the most incapacitating conditions, creating a need for continuous care and generating burden in family members [1, 2]. Recent research has identified tasks performed by family caregivers, such as preparing meals, helping with activities of daily living and household maintenance, that contribute to increased burden.

To understand the correlation between the care provided and burden in informal caregivers of patients with mental illness, and to analyse the impact of the care provided on caregivers’ burden.

This is a cross-sectional correlational study. Data were collected in 2015 from a sample of 113 caregivers, using a face-to-face interview comprising sociodemographic questions, the type of care provided and the Zarit Burden Interview (Portuguese version by Sequeira [3]). Ethical procedures in accordance with the Declaration of Helsinki were followed.

The sample was mostly composed of females (70.8%), aged between 20 and 81 years (mean = 49.85; SD = 14.25), married (74.3%), and wives (15.9%) or mothers (17.7%) of the patients. Depressive disorders (36.3%) were the most common mental health problems of their ill relatives. Most caregivers categorized themselves as primary (39.8%) or secondary (42.5%) caregivers, most frequently providing total care (mean = 26.58; SD = 31.20). The majority presented no burden (62.8%), with a mean score of 38.31 (SD = 22.54). Total burden was correlated with the provision of total care (r = 0.340; p < 0.001) and support (r = 0.216; p = 0.022). This kind of care provision was also correlated with the impact of giving care (r = 0.355; p < 0.001), interpersonal relations (r = 0.360; p < 0.001) and perceptions of self-efficacy (r = 0.275; p = 0.003). Total care provided explained 11.5% of burden (F = 14.330; p < 0.001) and support provided explained 6% (F = 11.542; p < 0.001). Additionally, care with dressing and footwear explained 25.3% of burden (F = 37.288; p < 0.001), preparing meals explained 5% (F = 23.724; p < 0.001) and support in meeting the patient’s professional demands explained 4.8%. All burden factors were influenced by the total care provided (p < 0.005).
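
A minimal sketch of the statistics behind these figures (a Pearson correlation, and the share of burden variance explained by a single predictor, i.e. the R² of a simple linear regression) is shown below. The data are synthetic stand-ins; in the study the variables came from the Zarit Burden Interview and the care-provision items.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
total_care = rng.normal(26.6, 31.2, size=113).clip(min=0)        # illustrative scores
burden = 30 + 0.25 * total_care + rng.normal(0, 20, size=113)    # illustrative burden

r, p = stats.pearsonr(total_care, burden)        # correlation, as reported above
res = stats.linregress(total_care, burden)       # simple regression for R^2
print(f"r = {r:.3f}, p = {p:.4f}, variance explained = {res.rvalue**2:.1%}")
```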

A medium level of burden was found in this sample of caregivers of patients with mental illness. Considering themselves secondary caregivers, they provide care regularly or occasionally depending on their relative’s needs; nevertheless, they frequently provide total care. A positive and significant correlation was found between burden and care provided, with this care having an impact on the caregiver’s burden. Interventions for caregivers of patients with mental illness should address this relation, providing support and developing skills to improve the care provided and prevent caregiver burnout.

1. Albuquerque E, Cintra A, Bandeira M. Sobrecarga de familiares de pacientes psiquiåtricos: comparação entre diferentes tipos de cuidadores. J Bras Psiquiatr. 2010;59(4):308-316.

2. Eloia S, Oliveira E, Eloia S, Lomeo R, Parente J. Sobrecarga do cuidador familiar de pessoas com transtorno mental: uma revisĂŁo integrativa. SaĂşde Debate. 2014;38(103):996-1007.

3. Sequeira C. Adaptação e validação da Escala de Sobrecarga do Cuidador de Zarit. Revista Referência. 2010;2(12):9-16.

Family caregivers, Care provided, Mental disorders, Burden.

P156 Levels of physical activity and sedentary behaviour during school time in elementary school children

Mariana Lima 1, Ana Soares 1, Andreia Santos 1, Fernando Martins 1,2, Rui Mendes 1,3. 1 Coimbra Education School, Polytechnic Institute of Coimbra, 3030-329 Coimbra, Portugal; 2 Telecommunications Institute, University of Beira Interior, 6200-161 Covilhã, Portugal; 3 Center for Research on Sport and Physical Activity, University of Coimbra, 3040-248 Coimbra, Portugal. Correspondence: Mariana Lima ([email protected]).

Zimmo et al. [1] concluded that, regarding the physical activity (PA) of children in elementary schools (ES), only 39% of children reached the recommended levels of moderate and vigorous PA (30 or more minutes daily), and showed that children spend most of their school time in sedentary behaviour (SB).

The aims of this study were: I) to describe the PA (light, moderate and vigorous levels) and sedentary behaviour (SB) of ES boys and girls in relation to the time spent at each PA level; II) to determine the average time children spend at each PA level during four weekdays (9:00 a.m. to 5:30 p.m., Monday to Thursday); and III) to compare the MVPA of children during formal physical education sessions with that of the other weekdays.

Forty (40) children aged 7.9 ± 0.6 years (30 girls and 10 boys) volunteered and were authorized by their parents to participate in the study. PA was assessed using a triaxial accelerometer (ActiGraph® wGT3X-BT) during up to 7.5 daily hours of the school period. A cut-off point of 818 counts per 5 s was used to classify moderate-to-vigorous physical activity (MVPA).
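
A minimal sketch of this epoch-based classification is shown below: each 5-s epoch is compared against the 818-count cut-off and MVPA minutes are accumulated. The counts array is synthetic; real data would come from the ActiGraph 5-s epoch export.

```python
import numpy as np

EPOCH_S = 5
MVPA_CUTOFF = 818            # counts per 5-s epoch, as stated above

rng = np.random.default_rng(0)
school_seconds = int(7.5 * 3600)                  # 7.5 h of monitored school time
counts = rng.gamma(shape=0.8, scale=400.0, size=school_seconds // EPOCH_S)

mvpa_minutes = (counts >= MVPA_CUTOFF).sum() * EPOCH_S / 60
print(f"MVPA: {mvpa_minutes:.1f} min; meets 30-min guideline: {mvpa_minutes >= 30}")
```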

The average duration of MVPA was 36.51 ± 13.50 min per day. Only 35% of the participating children reached the recommended school-based MVPA of 30 min or more per day. Children spent on average 71.74 ± 6.37% of their school time in SB. Girls’ MVPA was lower (31.9 ± 8.9 min/day) than boys’ (51.3 ± 16.1 min/day; ES = 1.76, p = 0.004). The percentage of MVPA on the day of the physical education lesson (11.4 ± 3.8%) was higher than on the other weekdays (7.5 ± 2.6%).

We objectively assessed PA during school hours among elementary school children. This study found that children spend the majority of their school time in SB and that many are not sufficiently physically active at school. The low participation of girls in MVPA and the shorter time spent in MVPA on weekdays without physical education lessons are relevant findings for reflecting on, and implementing, strategies to increase physical activity during school time, which corresponds to one third of children’s daily life.

Research partially supported by QREN, Mais Centro - Programa Operacional Regional do Centro, FEDER (CENTRO-07-CT62-FEDER-005012; ID: 64765).

1. Zimmo L, Farooq A, Almudahka F, Ibrahim I, Al-Kuwari M. School-time physical activity among Arab elementary school children in Qatar. BMC Pediatrics. 2017;17(1):76.

Physical activity, Motor development, School-time, Accelerometer, Physical education.

P157 Labor pain relief: sterile water injection vs finger ischemic compression technique on the lumbosacral region

Ana Moulaz ([email protected]).

Nowadays the pain of giving birth tends to be overvalued; accordingly, in 2016 caesarean sections accounted for 56% of deliveries in Brazil and 33.1% in Portugal. To assist women giving birth, analgesia has been a strong ally in enabling delivery without pain, and non-pharmacological techniques for pain control are available in obstetrics. One of these is sterile water injection over the Michaelis triangle, which causes immediate pain relief in the lumbosacral region. Physiologically, sterile water does not act as a local anaesthetic and does not inhibit the fibres that report visceral pain; instead, it stimulates the A-delta fibres associated with somatic pain, which override the visceral pain carried by the C fibres, modulating afferent pain patterns. In this way it silences the C fibres and releases endorphins. Based on this mechanism, an experimental study applying the finger ischemic compression technique to the same region was conducted in Brazil in 2014, to characterize that technique as a pain control tool.

To compare the sterile water injection technique with the finger ischemic compression experiment in the lumbosacral region.

A comparative study of the two techniques through a systematic literature review, based on the Clinical Practice Guideline on Normal Childbirth Care of the Basque Government for sterile water injection, and on the finger ischemic compression technique (Nursing Residency Program in Obstetrics, Brazil, 2015). The study compares the effect and the duration of analgesia of the two techniques.

The analysis of 292 studies showed that sterile water injection into the Michaelis triangle decreased lumbar pain during delivery by approximately 60%, with the effect lasting up to 2 hours. In the finger ischemic compression experiment, pain level was reduced by 66%, with the effect lasting 4 hours. In both cases, however, patients reported intense burning during application.

This study contributes to pain control in obstetrics, as both methods lead to a significant reduction in pain level. However, finger ischemic compression provided a longer duration of analgesia than sterile water injection. Sterile water injection is already recommended as an effective method of pain control in obstetrics; further research on the finger ischemic compression experiment is needed to consolidate that technique as a method of pain management.

Labor pain relief, Obstetrics, Giving birth, Non-pharmacological techniques, Analgesia.

P158 Microencapsulation of phytosterols and/or other bioactive ingredients for minimizing cardiovascular risk

Pedro Vieira 1, Isabel Andrade 2, Rui Cruz 1. Correspondence: Pedro Vieira ([email protected]).

Several bioactive ingredients, such as phytosterols, resveratrol, curcumin and catechin, have shown efficacy in the prevention of chronic diseases, namely cardiovascular diseases, one of the leading causes of death worldwide. Despite their potential, many of these bioactive ingredients are unstable and sensitive to environmental factors (e.g. light and oxygen) and have low water solubility due to their lipophilicity. Microencapsulation of bioactive compounds emerges as a strategy to improve their stability, protecting them from adverse conditions and promoting their bioavailability. It comprises a set of techniques that coat these ingredients with a protective film, guaranteeing their protection and modulating their release at the target cells. Many microencapsulation techniques and materials can be used; the choice depends on the intended application, the particle size, the release mechanism and the physicochemical properties of both the active material to be encapsulated and the encapsulating agent.

To review the available research in this field.

A search was conducted in the Google Scholar, PubMed and Elsevier databases for publications in Portuguese, English or Spanish from the last 10 years, using the keywords: microencapsulation; bioactive compounds; phytosterols; cardiovascular risk.

The search retrieved 10 studies. The findings demonstrated not only the potential of these compounds to reduce cardiovascular risk through action on risk factors, but also the increase in their activity resulting from the higher bioavailability achieved through microencapsulation. The research also revealed a wide range of usable materials; the choice of method must take into account several factors, such as cost, the required equipment and the quality of the microcapsules formed. The microencapsulation process has numerous advantages, namely increased gastrointestinal absorption of the encapsulated bioactive compounds; in addition, converting those compounds into powder form increases their stability and facilitates their handling.

Microencapsulation avoids some undesirable characteristics of these compounds, proving to be an alternative for increasing their bioavailability. Future studies should explore newer encapsulating methods and materials.

Microencapsulation, Bioactive compounds, Phytosterols, Cardiovascular risk.

P159 A weighted decision-making approach for the selection of a new medical device concept

Marta Gomes 1, Ângelo Marques 1, Ricardo Freitas 1, Anabela Salgueiro-Oliveira 2, Pedro Parreira 2, Alberta Coelho 3, Sara Cortez 3, Bruno Silva 1, Filipa Carneiro 1. 1 Innovation in Polymer Engineering, University of Minho, 4800-058 Guimarães, Portugal; 2 Health Sciences Research Unit: Nursing, Nursing School of Coimbra, 3046-851 Coimbra, Portugal; 3 Muroplás – Indústria de Plásticos, S.A., 4745-334 Trofa, Portugal.

The development of an innovative syringe produces, at an early stage, several preliminary concepts that must be compared and analysed so that the most promising one can be selected. The selection decision is an important task that should be structured and well founded. Decision-making methods can be applied to support the selection process, leading to more informed and better decisions.

To apply a weighted decision matrix in order to select the most promising concept for an innovative syringe.

To select the most suitable concept for the device under development, the specifications it must meet were defined, considering the functional requirements of the product and the requirements of the production process. Because not all specifications carry the same importance in concept selection, they were analysed through a weighted decision matrix. Criteria corresponding to the identified specifications were defined and weighted according to their importance relative to each of the other criteria, and these weights were summed for each criterion. As experts from different areas were involved, each expert group produced its own weighting based on its experience; the resulting matrices were then merged into a single weight matrix reflecting the opinion of all groups. This yielded a final weighted matrix with all criteria and their weights for concept selection. Each syringe concept was evaluated against the defined criteria and its weighted sum was calculated; the concept with the highest weighted sum over all defined criteria was selected.
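
A minimal sketch of such a weighted decision matrix is shown below. The criteria, weights and scores are illustrative assumptions, not the study’s actual data; the selected concept is simply the one with the highest weighted sum.

```python
# Merged criteria weights from all expert groups (illustrative values).
criteria_weights = {
    "cost": 0.30,
    "number_of_operations": 0.25,
    "possibility_of_error": 0.25,
    "ease_of_moulding": 0.20,
}

# Each concept scored from 1 (poor) to 5 (excellent) on every criterion.
concepts = {
    "concept_A": {"cost": 4, "number_of_operations": 3, "possibility_of_error": 5, "ease_of_moulding": 3},
    "concept_B": {"cost": 3, "number_of_operations": 5, "possibility_of_error": 4, "ease_of_moulding": 4},
    "concept_C": {"cost": 5, "number_of_operations": 2, "possibility_of_error": 3, "ease_of_moulding": 5},
}

def weighted_sum(scores, weights):
    """Weighted performance of one concept across all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(concepts, key=lambda n: weighted_sum(concepts[n], criteria_weights), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_sum(concepts[name], criteria_weights):.2f}")
print("selected:", ranked[0])
```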

Analysis of the proposed specification weighting table, considering the defined relative weights, showed that the most relevant specifications are the cost, the number of operations to be performed and the possibility of error. On this basis, the concept with the best weighted performance was selected for further detailed development.

The weighted decision matrix proved to be a very effective tool to assist the development of new products for use in health, particularly where there are many concepts and many criteria of varying importance to be considered.

Weighted decision matrix, Medical device, Product design.

P160 Vestibular symptoms in sensorineural hearing loss

Maria Araújo 1, Luís Rama 2. 1 Audiology Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Research Center for Sport and Physical Activity, Faculty of Sport Sciences and Physical Education, University of Coimbra, 3040-256 Coimbra, Portugal. Correspondence: Maria Araújo ([email protected]).

The aetiology of hearing loss is directly related to the embryonic and physiological interactions of the anatomical structures of the auditory and vestibular systems. Because of the proximity of the two systems’ structures, a peripheral pathology can involve hearing and balance simultaneously.

To characterize and evaluate the vestibular symptoms in individuals with bilateral severe and profound hearing loss.

Data were gathered using two questionnaires: one for anamnesis and the Dizziness Handicap Inventory (DHI). The sample comprised 28 adults with severe and profound sensorineural hearing loss, aged between 19 and 64 years (15 females and 13 males). The aetiology was idiopathic in 42.9% of the sample, while meningitis and measles each accounted for 10.7%.

Seventy-five percent of participants had tinnitus, which was bilateral in 33% of cases. Regarding vertigo, 53.6% reported at least one episode, described as rotatory vertigo (66.7%) and lasting from minutes (33.3%) to days (26.7%). On the DHI, the functional subscale captured the greatest difficulties in performing daily tasks as a consequence of vertigo and/or imbalance (5.7 ± 8.5), followed by the physical subscale (4.8 ± 6.8).

The study showed that most of the sample had already experienced symptoms such as tinnitus, vertigo and/or imbalance.

Hearing Loss, Vestibular symptoms, Tinnitus, Vertigo.

P161 Men’s prenatal experience in the transition to fatherhood

Catarina Silva 1, Cristina Martins 2, Cândida Pinto 3. 1 Agrupamento de Centros de Saúde do Alto Ave, 4810-503 Guimarães, Portugal; 2 Nursing School, University of Minho, 4700-057 Braga, Portugal; 3 Nursing School of Porto, 4200-072 Porto, Portugal. Correspondence: Catarina Silva ([email protected]).

Pregnancy is a demanding period in terms of psychological reorganization in the transition to fatherhood [1,2]. Men’s involvement in this period is associated with their own psychological well-being, as well as that of the whole household [3]. This is a transition with implications for the couple, the father-child relationship and child development [4]. Contemporary fatherhood emphasizes men’s involvement and greater affective contact with their children, in addition to their traditional role as financial provider [5].

This study sought to understand men’s experiences of the transition to fatherhood during the prenatal period, aiming to contribute to comprehensive family care and improved family health gains.

A qualitative, exploratory, descriptive, cross-sectional and retrospective study was conducted with 10 men experiencing their partners’ first pregnancy, in the last trimester, in a common-law marriage and with a pregnancy free of maternal-foetal pathology. Data were collected through semi-structured interviews and analysed using the content analysis technique, with semantic categorization and an inductive approach.

Three themes emerged from the content analysis: “experiencing the transition”, “developing a father identity” and “(de)constructing bridges to the transition”. The results revealed a male experience of pregnancy of considerable psychological and emotional depth. Men accept and try to engage actively in the pregnancy process and experience a panoply of positive and negative, sometimes ambivalent, feelings and emotions. The prenatal period triggers the development of paternal identity: expectant fathers re-evaluate their personal values, reflect on their own experiences of being fathered and reshape their view of the world and of themselves. Despite their proactive role in the fatherhood journey, they encounter obstacles rather than bridges in their transition. Feeling peripheral in prenatal care services makes it harder for them to embrace fatherhood and may compromise their negotiation of the transition process.

This study provided insight into the complex nature of this developmental transition and exposed fragilities in how prenatal care services approach expectant fathers. This reality challenges healthcare professionals to think critically about how the transition to fatherhood can be facilitated by practices that promote smoother transitions and benefit the family as a whole.

1. Condon J, Boyce P, Corkindale C. The First-Time Fathers Study: a prospective study of the mental health and wellbeing of men during the transition to parenthood. Aust N Z J Psychiatry. 2004;38(1-2):56-64.

2. Genesoni L, Tallandini M. Men's Psychological Transition to Fatherhood: An Analysis of the Literature, 1989-2008. Birth. 2009;36(4):305-318.

3. Plantin L, Olukoya A, Ny P. Positive Health Outcomes of Fathers' Involvement in Pregnancy and Childbirth Paternal Support: A Scope Study Literature Review. Fathering: A Journal of Theory, Research, and Practice about Men as Fathers. 2011;9(1):87-102.

4. Bawadi H, Qandil A, Al-Hamdan Z, Mahallawi H. The role of fathers during pregnancy: A qualitative exploration of Arabic fathers’ beliefs. Midwifery. 2016;32:75-80.

5. McGill B. Navigating New Norms of Involved Fatherhood. J Fam Issues. 2014;35(8):1089-1106.

Fathers, Parenting, Pregnancy, Qualitative research.

P162 Diffuse large B-cell lymphoma: which treatment options are available?

Mariana Carvalho 1, Fernando Mendes 2,3,4, Salomé Pires 2,4, Ricardo Santo 2,7, Nicole Eicher 8, Ricardo Teixo 2,4, Rui Cruz 1. 1 Pharmacy Department, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 2 Biophysics Unit, Faculty of Medicine of University of Coimbra, 3004-504 Coimbra, Portugal; 3 Department of Biomedical Laboratory Sciences, Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854 Coimbra, Portugal; 4 Center of Investigation in Environment, Genetics and Oncobiology, Faculty of Medicine of University of Coimbra, 3004-504 Coimbra, Portugal; 5 Immunology Institute, Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal; 6 Applied Molecular Biology and Clinical University of Hematology, Faculty of Medicine of University of Coimbra, 3004-504 Coimbra, Portugal; 7 Faculty of Sciences and Technologies, University of Coimbra, 3004-504 Coimbra, Portugal; 8 Biomedical Laboratory Sciences Department, University of Applied Sciences, 6711 Graz, Austria.

Diffuse large B-cell lymphoma (DLBCL) is one of the most common and aggressive subtypes of non-Hodgkin’s lymphoma (NHL). The current standard therapy for DLBCL is the combination of rituximab, cyclophosphamide, doxorubicin, vincristine and prednisone (R-CHOP). R-CHOP-14 and R-CHOP-21 are two variants of the regimen that differ in cycle length: R-CHOP-14 achieves better results with higher dose intensity in advanced stages, whereas R-CHOP-21 shows better efficacy and better overall survival in elderly patients. These treatments have increased patient survival, although treatment still fails in about 40% of patients. New approaches for relapsed DLBCL are outlined here, including the role of new drugs, individualized treatments and dosage regimens.

To review the treatments available for DLBCL and developments in novel treatments, particularly R-CHOP treatment variants in relapsed DLBCL and in patients over 80 years of age.

A search was conducted in the Google Scholar and PubMed databases for English-language publications from the last 5 years, using the keywords: diffuse large B-cell lymphoma (DLBCL) treatment; rituximab, cyclophosphamide, doxorubicin hydrochloride, vincristine sulphate, prednisone (R-CHOP); cyclophosphamide, doxorubicin hydrochloride, vincristine sulphate, prednisone (CHOP).

The treatment of this pathology has advanced over time, starting with CHOP and later incorporating rituximab, with lower relapse rates and improved survival. Currently, about 80% to 90% of patients with early-stage DLBCL remain disease-free after treatment with R-CHOP and consolidation radiotherapy. In elderly patients (> 80 years), the addition of rituximab to a reduced-dose CHOP regimen provides a good compromise between toxicity and overall survival benefit. The survival of patients with DLBCL aged over 66 years has improved substantially since the introduction of rituximab. However, finding the optimal dose for older patients without associated toxicity remains an important focus for further research.

DLBCL is a heterogeneous disease, both clinically and biologically. Nevertheless, DLBCL therapy results have improved significantly over the past decades with the introduction of new specific antitumour therapeutic strategies.

Diffuse large B-cell lymphoma, Rituximab, Cyclophosphamide, Doxorubicin, Vincristine, Novel treatments.

P163 Mistreatment of the elderly in a family context

Maria FP Ribeiro ([email protected]), School of Health of Vale do Sousa, North Polytechnic Institute of Health, 4760-409 Vila Nova de Famalicão, Portugal.

The growing number of elderly people who suffer abuse in their everyday lives, the implications of these events for the practice of health professionals, and the need for new «competencies» all attest to the importance of this study’s subject. This highly sensitive issue demands multifactorial attention across the social, educational, clinical and ethical domains, involving concerted action by all health professionals whose mission involves contact with older people.

To identify signs of elder abuse and the typology of abuse in a family context.

An exploratory, empirical study with a sample of four hundred (400) elderly people enrolled in an ACES in the north of the country, aimed at identifying signs of abuse. A questionnaire was used to collect information on indicators of elder abuse [1]. Data were processed and analysed using descriptive and inferential statistics, including factor analysis.

The analysis allowed us to order the indicators of abuse by type as follows: neglect (64.8%), emotional/psychological abuse (26.6%), economic abuse (21.6%) and physical abuse (7.1%).

The results of this study suggest the need for early screening for signs and/or symptoms indicative of risk factors that may lead to the establishment of maltreatment. The role of health professionals proves to be of prime importance in combating this phenomenon.

1. Carney MT, Kahan FS, Paris BEC. Questions to Elicit Elder Abuse (QEEA); 2003. Translated and adapted into Portuguese by Alves JF, Sousa M; 2005.

Abuse, Elderly, Mistreatment, Prevention.

P164 Guidelines and training: a role to play for learning health organizations? The HAIs example

Sandra Oliveira 1,2, Sofia Ferreirinha 3, Carla Cordeiro 3, António Lopes 4. 1 Polytechnic Institute of Santarém, 2001-902 Santarém, Portugal; 2 Center for Health Studies and Research, University of Coimbra, 3004-504 Coimbra, Portugal; 3 District Hospital of Santarém, 2005-177 Santarém, Portugal; 4 Hospital Center of the Médio Tejo, 2304-909 Tomar, Portugal. Correspondence: Sandra Oliveira ([email protected]).

International projections estimate that, by 2050, 390,000 people will die annually in Europe as a direct consequence of healthcare-associated infections (HAIs) [1]. In Portugal, an estimated 5 in 100 patients may acquire an HAI during hospitalization [2]. Practice-guideline-based research has been published and, despite evidence that good practice strategies are sufficient to reduce HAI rates, hospitals struggle to comply [3]. Investigation of organizational solutions that may contribute to reducing HAI rates is much needed.

This exploratory study has the following objectives: (1) to determine whether the legal standards are known by health professionals in acute services of Portuguese hospitals, and to identify training actions or courses as well as the subjects most commonly taught in formal training; (2) to identify the strategies implemented in the hospitals; (3) to analyse whether there are differences between knowledge and implementation of practices in hospitals; (4) to study the suggestions made by health professionals.

Through a combined quantitative and qualitative approach, using quantitative data analysis and content analysis procedures, this study explores the perspectives of the different health professional groups. The study design involved a literature review, analysis of the legal standards and informal interviews with field experts, with the aim of producing a list of topics to assess the level of implementation of the legal norms. A focus group was used to encourage participants to exchange experiences and perspectives; these interactions allowed the collection of more detailed information and in-depth exploration of participants’ opinions [4, 5]. The focus group also served as a pre-test of the questionnaire. A convenience sample of four acute services of Portuguese hospitals was selected. The questionnaire was distributed in the hospitals after approval by the administration boards and ethics committees of the institutions involved. Participation was voluntary and open to all health professionals of the medical and surgical services.

The results suggest that, although the legal norms are known, there are differences between health professional groups. Health professionals recognize and value the existence of training, mainly under the responsibility of the health institutions, but do not consider it effective.

This research highlights the importance of disseminating knowledge and training in healthcare organizations, notably by identifying the need for new training approaches as well as new training areas.

1. Direção Geral de Saúde 2016.

2. OPSS Acesso aos cuidados de saĂşde. Um direito em risco?. RelatĂłrio de Primavera ObservatĂłrio PortuguĂŞs dos Sistemas de SaĂşde (OPSS). Lisboa; 2015.

3. Zingg W, Holmes A, Dettenkofer M, Goetting T, Sicca F, Clack L, et al. Hospital organization, management, and structure for prevention of health-care-associated infection: a systematic review and expert consensus. Lancet Infect Dis. 2015;15(2):212-224.

4. Kitzinger J. Introduction focus groups in qualitative research. In: Mays N, Pope C., editors. Health care. London: Blackwell; 1996. p. 36–45.

5. Morgan DL. Focus groups as qualitative research. 2nd Edition. Thousand Oaks, CA: Sage; 1994.

Guidelines, HAIs, Training, Health organizations.

P165 Barriers, obstacles, difficulties or challenges in the development of health partnerships in community intervention projects: a systematic review

Odete Alves 1, Paula C Santos 2,3, Lídia Fernandes 4, Paulo Moreira 5,6. 1 Abel Salazar Institute of Biomedical Sciences (ICBAS), University of Porto, 4050-313 Porto, Portugal; 2 Physical Therapy Department, Health School, Polytechnic Institute of Porto, 4200-465 Porto, Portugal; 3 Research Center in Physical Activity, Health and Leisure, Faculty of Sport, University of Porto, 4200-450 Porto, Portugal; 4 School of Health, Polytechnic Institute of Viana do Castelo, 4900-314 Viana do Castelo, Portugal; 5 Center of Administration and Public Policies of the University of Lisbon, 1300-663 Lisbon, Portugal; 6 University Atlântica, 2730-036 Oeiras, Portugal. Correspondence: Odete Alves ([email protected]).

Engaging communities in authentic partnerships is increasingly accepted as best practice in community intervention projects, despite the many barriers or challenges to doing so.

The purpose of this study is to identify barriers, obstacles, difficulties or challenges in the development of health partnerships in community intervention projects across several countries.

We conducted a systematic review using the following data sources: PubMed, B-on, Medline and EBSCOhost, searching articles published from September 2006 to January 2016. A standard form was used to extract data, with the search keywords: Health Partnerships AND Community Health AND Primary Health Care. Articles were selected according to inclusion and exclusion criteria. Results were grouped using the six categories of the Wilder Research Centre: environment; membership; process and structure; purpose; communication; and resources, which includes leadership and power.

The search yielded 844 articles, which were screened for references to at least one of the keywords: barriers OR obstacles OR difficulties OR challenges. According to the inclusion criteria, fifty-six articles were retained. Of these, forty-four addressed factors relating to environment, including factors related to community, geography, culture, religious faith and homophily, and politics. Regarding the characteristics of members that influence partnership development, the relationship between partners is key; this was discussed in fifty-three articles. Factors relating to the collaboration process were found in forty-six articles, while factors related to structural elements were mentioned in forty. Thirty-one articles identified factors relating to objectives, vision and mission. Communication was also widely discussed, appearing in thirty-four articles. Factors relating to resources were given great importance in the literature, appearing in forty-eight articles, and leadership and power were addressed in twenty-three. The literature highlighted that factors such as relationships, commitment, communication, funding and structure are key to the long-term sustainability of partnerships; this topic appeared in fifteen articles.

The systematic literature review identified a set of barriers, obstacles, difficulties and challenges for the development of health partnerships in community intervention projects. For each category, we present the related factors, which can positively or negatively influence the development of such collaborations.

We are very grateful to Dr. Alcino Maciel Barbosa for his insightful comments on an earlier draft of this project and to Caroline Esteves for her contributions in this paper.

Barriers, Obstacle, Difficulties, Challenges, Partnerships.

P166 Parental perception of child body image: retrospective analysis of two studies

Graça Aparício 1,2, Patrícia Nascimento 3, Madalena Cunha 1,2, João Duarte 1,2. 1 Escola Superior de Saúde de Viseu, Instituto Politécnico de Viseu, 3500-843 Viseu, Portugal; 2 Centro de Estudos em Educação, Tecnologias e Saúde, 3504-510 Viseu, Portugal; 3 Centro de Saúde de Seia, 6270-468 Seia, Portugal. Correspondence: Graça Aparício ([email protected]).

Childhood obesity is a major problem in Portugal. Studies show that parents are concerned about their children’s overweight, but their perception of their children’s nutritional status is not always accurate and is sometimes distorted.

The general objective was to explore parental perception of children’s body image in two studies, study A [1] and study B [2].

A cross-sectional, retrospective study using two samples, from study A (792 children) and study B (1,424 children), totalling 2,216 pre-school children with a mean age of 4.51 years (SD = 0.97), living in the regions of Viseu and Dão (study A) and Viseu, Lamego, Vila Real, Évora and Leiria (study B). The original authors performed the children’s anthropometric evaluation and nutritional classification based on the NCHS reference (CDC, 2000). A sociodemographic characterization questionnaire for children and parents was used, together with the Parental Perception of Children’s Body Image Assessment [3].

In study A, overweight prevalence was 31.3% (including 12.4% obesity) and underweight 2.7%. In study B, overweight was 34.3% (including 17.4% obesity) and underweight 5.5%, with significant differences between studies (chi-square = 21.355; p < 0.001). In study B, parents were significantly more concerned about their children’s nutritional status (UMW = 498,564.000; p < 0.001) and a higher percentage of parents selected the images representing pre-obesity (27.5%) and obesity (0.6%), compared with study A, where images in the normal-weight and low-weight groups were selected more often (56.3% and 20.4%, respectively). A significant difference in mean parental perception of the child’s body image was found between studies (UMW = 528,960.500; p = 0.037), showing a perception closer to the higher BMI values, i.e., parents presented a less distorted perception of their children’s body image when the children had higher BMI values.
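
A minimal sketch of the two between-study comparisons reported here (a chi-square test on nutritional-status counts and a Mann-Whitney U test on parental perception scores) is given below. The numbers are synthetic stand-ins roughly matching the sample sizes, not the study data.

```python
import numpy as np
from scipy import stats

# Children per nutritional-status category (underweight, normal, overweight);
# rows are study A (n = 792) and study B (n = 1,424) -- illustrative counts.
table = np.array([[21, 523, 248],
                  [78, 857, 489]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p = {p:.4f}")

# Parental body-image perception scores, compared between the two studies.
rng = np.random.default_rng(1)
perception_a = rng.normal(3.2, 0.8, 792)      # illustrative image-scale scores
perception_b = rng.normal(3.3, 0.8, 1424)
u, p = stats.mannwhitneyu(perception_a, perception_b)
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```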

The results indicate greater accuracy of parental perception of children’s body image, closer to the children’s real nutritional status, in the more recent study. This may be the first step towards parents’ recognition of their children’s overweight, which is critical to prompting family action and, consequently, to preventing and treating childhood obesity.

1. Aparício G, Cunha M, Duarte J, Pereira A. Olhar dos pais sobre o estado nutricional das crianças prÊ-escolares. Millenium 2011;(40):99-115.

2. AparĂ­cio G, Cunha M, Duarte J, Pereira A, Bonito J, Albuquerque C. Nutritional status in preschool children: Current trends and concerns. AtenciĂłn Primaria 2013;(45)(Espec cong 1):94-200.

Parents, Weight perception, Body image, Pediatric obesity.

P167 Informal caregivers of mental health patients: burden and care provided

Families and informal caregivers play an important role in providing care for mental health patients [1]. Family caregivers of these patients are usually overloaded with caring activities [2], making them more vulnerable to psychological disturbance and to burden from the care provided [3].

To characterize the care provided, assess the burden experienced by informal caregivers of mental health patients and identify its determinants.

A cross-sectional correlational study with a non-probabilistic sample of 113 Portuguese relatives and caregivers of mental health patients. Data were collected in the first semester of 2015. Caregivers were interviewed about sociodemographics, the type of care provided (wholly compensatory, partially compensatory, supervision), the time spent in self-care daily activities and its intensity (0-7), and completed the Portuguese version of the Zarit Burden Interview [4]. Ethical procedures in accordance with the Declaration of Helsinki were followed.

Caregivers were mainly female (n = 80), with a median age of 51 years, married (n = 84), with 12 years of education (n = 33) and mothers of the patients (n = 20). The patients mostly had depressive disorders (36.3%). Caregivers had cared for their relatives for 0 to 54 years (median = 3.25; mean = 7.86; SD = 10.78). Most (42.5%) provided care occasionally; nevertheless, they provided wholly compensatory care in most areas of the patients’ self-care daily activities (mean = 26.58; SD = 31.20). Preparing and providing meals (mean = 3.54; SD = 3.29), organizing medication (mean = 3.57; SD = 3.16) and administering it (mean = 3.30; SD = 3.29) were the most frequent types of self-care support provided. The intensity of wholly compensatory care was higher among female (t = -2.950; p = 0.004) and older (r = 0.215; p = 0.022) relatives, those who took full responsibility for caring (p < 0.001), and those living with the patient (t = -2.762; p = 0.007). Most caregivers revealed no burden (62.8%; mean = 38.31; SD = 22.54). Burden was higher among females (t = -2.869; p = 0.005), older caregivers (r = 0.259; p = 0.006) and primary or secondary caregivers (p < 0.001). The impact of giving care, interpersonal relations and perceptions of self-efficacy were also higher among older, less educated female relatives, primary caregivers and caregivers who perceived their relative’s illness as more severe.

The caregivers surveyed were mainly secondary caregivers and provided total care in all areas. Their burden was low, and care was provided especially in preparing meals and in organizing and administering medication. In this sample, care provided and burden were higher among older, less educated female caregivers with a higher perception of the severity of their relative’s illness. Intervention among these caregivers is needed to promote knowledge, improve skills and provide support, which may reduce the psychological consequences of the assistance their relatives require.

1. Pakenham K. Caregiving tasks in caring for an adult with mental illness and associations with adjustment outcomes. Int J Behav Med. 2012;19(2):186-198.

2. Martins S, Bandeira M, Nascimento E. Sobrecarga de familiares de pacientes psiquiĂĄtricos atendidos na rede pĂşblica. Rev Psiq ClĂ­n. 2007;34(6):270-277.

3. Cabral L, Duarte J, Ferreira M, Santos C. Anxiety, stress and depression in family caregivers of the mentally ill. AtenciĂłn Primaria. 2014;46(5):176-179.

4. Sequeira C. Adaptação e validação da Escala de Sobrecarga do Cuidador de Zarit. Revista Referência. 2010;2(12):9-16.

Informal caregivers, Mental health, Burden, Family care.

P168 Your PEL - promote and empower for literacy in health in young people: from investigation to action

Hélia Dias 1, José Amendoeira 1, Ana Spínola 1, Maria C Figueiredo 1, Celeste Godinho 1, Clara André 1, Filipe Madeira 1, Manuela Ferreira 2, José C Quaresma 3, Mónica Ferreira 4, Teresa Simões 5, Rosário Martins 6, António Duarte 1, Madalena Ferreira 1, Marta Pintor 1. 1 Escola Superior de Saúde de Santarém, Instituto Politécnico de Santarém, 2005-075 Santarém, Portugal; 2 Escola Superior de Saúde de Viseu, Instituto Politécnico de Viseu, 3500-843 Viseu, Portugal; 3 Escola Superior de Saúde de Leiria, Instituto Politécnico de Leiria, 2411-901 Leiria, Portugal; 4 Agrupamento de Escolas da Chamusca, 2140-052 Chamusca, Portugal; 5 Agrupamento de Escolas da Golegã, Azinhaga e Pombalinho, 2154-909 Golegã, Portugal; 6 Unidade de Cuidados na Comunidade Chamusca Golegã, Agrupamento de Centros de Saúde Lezíria, 2140-078 Chamusca, Portugal. Correspondence: Hélia Dias ([email protected]).

The “Your PEL - Promote and empower for literacy in health in young people” project focuses on a health approach supported by new technologies, covering three areas: feeding, harmful consumption and sexuality. It is based on scientific evidence in health promotion, according to which the need to refocus action on results implies the development of appropriate interventions [1, 2]. It is a multi-regional project framed within the national smart specialization strategy, in a partnership between IPSantarém (ESSS and ESGT), IPLeiria (ESSL), IPViseu (ESSV), the Agrupamento de Escolas da Chamusca, the Agrupamento de Escolas da Golegã and ACES Lezíria – UCC Chamusca/Golegã. Students participate in the construction of the project, which is framed within the curricular programme and valued for the mobilization of knowledge and the acquisition of skills in a real context.

To develop a tool for evaluating the impact of school health education programmes in the areas of feeding, harmful consumption and sexuality, for ages 12 to 15, and to monitor health determinants and the effectiveness of the strategies developed.

An action-research study divided into three phases: I) construction of the data collection tool and the web communication platform; II) assessment of the students’ intervention needs, and development and implementation of the intervention programme using the web platform; and III) evaluation of the impact of the programme. The valorisation of the knowledge generated by the project is based on a plan of actions that goes beyond the diffusion and dissemination of results, involving the partner institutions and adapted to the project’s essence.

The project’s strategic impact is situated at two levels. One, evaluated after execution, corresponds to the expected results: the development of a tool for measuring the programme’s impact and the creation of an intervention programme supported by the web platform. The other, over a longer time frame, will be evaluated through the future health gains of the populations.

The project’s relevance and originality rest on scientific evidence showing monitoring to be an essential component of health promotion programmes [3, 4], valuing innovation and the grounding of action on results, and including interventions better suited to the young population in a school environment. The scientific and technological knowledge generated and disseminated by the project will contribute to regional and national valorisation, in a logic of knowledge translation.

1. Amendoeira J, Carreira T, Cruz O, Dias H, Santiago MC. Programas de educação sexual em meio escolar: revisão sistemática da literatura. Revista da UIIPS. 2013;1(4):198-211.

2. AndrĂŠ C, Amendoeira J. Intervention programs for the prevention of smoking in children and adolescents: A systematic literature review. AtenciĂłn Primaria. 2013;45(Especial C):23.

3. Matos M, Simões C, Camacho I, Reis M. A Saúde dos Adolescentes Portugueses em tempos de recessão. HBSC; 2015. Available at: www.aventurasocial.com

4. MinistÊrio da Saúde. Plano Nacional de Saúde - Orientaçþes estratÊgicas para 2012-2016. Lisboa, Portugal: DGS; 2012.

Health promotion, Empowerment, Health literacy, Young population.

P169 Effects of education on functional health: mobility and musculoskeletal back pain in the elderly

Gustavo Desouzart 1,2,3, Cristina Farias 3, Sandra Gagulic 1,2,3. 1 Research in Education and Community Intervention, Piaget Institute, 1950-157 Lisbon, Portugal; 2 Piaget Institute, 1950-157 Lisbon, Portugal; 3 Health School of Viseu, Piaget Institute, 3515-776 Viseu, Portugal. Correspondence: Gustavo Desouzart ([email protected]).

The bio-psycho-social changes that seniors undergo make it important to promote a better quality of life alongside the extension of life expectancy. In this context, a new concept of aging has emerged: “active aging” [1,2].

The aim of the study was to evaluate the effects of a functional health education program on the functional capacity of a group of elderly people.

This is an experimental study with a sample of 20 elderly people aged 67-91 years (mean 80.70 ± 5.99) who attended day centres and were enrolled in the “Atividade Sénior” program. This program, developed under the social responsibility of the Viseu city council, is very important in the community for promoting physical activity in this population. Participants were randomly assigned to experimental (n = 10) and control (n = 10) groups. All individuals maintained the physical activity training of the “Atividade Sénior” program; in addition, the experimental group received training with aerobic, flexibility and strength components, associated with stimulation. The exercise program lasted 12 weeks, 3 times a week, with each session lasting 30 minutes. Participants were assessed pre- and post-intervention with the Timed Up and Go test (TUG) for functional capacity [3] and the Visual Analogue Scale (VAS) for pain level [4], using a body chart to identify the site of pain. The study was submitted to the International Ethics Committee in accordance with World Health Organization (WHO) guidelines.

In the study population, 80% of the elderly reported some type of back pain; of these, 50% indicated that the pain was chronic and 81.2% that its main location was the low back. At baseline, participants had a mean VAS pain score of 5.70 (experimental group 7.10; control group 4.30). After 12 weeks, participants showed a reduction in pain level (to 3.78): the experimental group improved significantly to 3.38 (p = 0.034), while the control group decreased to 3.70 (p = 0.529). Regarding functionality (TUG), the experimental group improved from a mean initial time of 21.10 ± 10.06 to a final time of 16.32 ± 7.80 (p = 0.043), whereas the control group showed no significant change (p = 0.436).
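
A minimal sketch of the pre/post comparison per group is shown below, using a Wilcoxon signed-rank test, a common paired choice for n = 10 (the abstract does not state which test was used). The VAS scores are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre_exp = rng.uniform(5, 9, 10)               # experimental group, baseline VAS
post_exp = pre_exp - rng.uniform(2, 5, 10)    # clear improvement after 12 weeks
pre_ctl = rng.uniform(3, 6, 10)               # control group, baseline VAS
post_ctl = pre_ctl - rng.uniform(-1, 2, 10)   # small, inconsistent change

for name, pre, post in [("experimental", pre_exp, post_exp),
                        ("control", pre_ctl, post_ctl)]:
    stat, p = stats.wilcoxon(pre, post)       # paired non-parametric test
    print(f"{name}: mean VAS {pre.mean():.2f} -> {post.mean():.2f}, p = {p:.3f}")
```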

The implemented program demonstrated that physical exercise in general produces global improvements in the elderly population of day centres, and that the specific implementation of a functional mobility physiotherapy program yielded significant results, drawing attention to this theme.

The authors would like to thank those responsible for the senior activity program of the Municipal Council of Viseu, as well as the participants, local promoters and day centres.

Australian New Zealand Clinical Trials Registry (ANZCTR) registration number: ACTRN12617001170314.

1. Desouzart G, Matos R, Melo F, Filgueiras E. Effects of sleeping position on back pain in physically active seniors: A controlled pilot study. Work. 2016;53(2).

2. World Health Organization. Active ageing: a policy framework. Geneva, Switzerland: World Health Organization; 2002.

3. Steffen TM, Hacker TA, Mollinger L. Age- and gender-related test performance in community-dwelling elderly people: Six-Minute Walk Test, Berg Balance Scale, Timed Up & Go Test, and gait speeds. Phys Ther. 2002;82(2):128-137.

4. Ferreira-Valente MA, Pais-Ribeiro JL, Jensen MP. Validity of four pain intensity rating scales. Pain. 2011;152(10):2399-2404.

Functional health, Senior activity, Back pain.

P170 The Practice Environment Scale of the Nursing Work Index (PES-NWI): validation to primary health care

Eva Menino 1, Maria A Dixe 2,3, Clarisse Louro 2, Francisco Stefanie 4. 1 Escola Superior de Saúde da Cruz Vermelha Portuguesa, 1300-906 Lisbon, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Center for Innovative Care and Health Technology, Instituto Politécnico de Leiria, 2411-901 Leiria, Portugal; 4 Posto de Saúde da Junta de Freguesia de Penha de França, 1170-070 Lisboa, Portugal. Correspondence: Eva Menino ([email protected]).

The environment of nurses’ professional practice and the adequacy of resources are structural factors related to the results and quality of care. The PES scale, validated for the Portuguese population, is adequate for evaluating the conditions of nursing practice; however, it was validated in Portugal in hospital settings and not in primary health care. This study proposes its content validation for primary health care and its psychometric validation, with the author’s prior authorization for this validation.

Validation of the Practice Environment Scale of the Nursing Work Index (PES-NWI) for Primary Health Care.

The original scale was submitted to a panel of 5 experts. Items with agreement levels above 75% were retained; after analysis and incorporation of the suggested changes, the scale was submitted to a second round with the same panel. The new version was then used to determine its psychometric characteristics and revalidate it.

The PES version adapted for primary health care presents a Cronbach’s alpha of 0.905, indicating very good internal consistency, with reasonable item-total correlations (0.315-0.685). In the principal component analysis, only factor loadings above 0.30 were considered, which required regrouping the items into factors different from those of the original scale. The KMO value of 0.797 indicates a good correlation between the variables, and the Bartlett test (1407.494; p < 0.001) shows that the variables are significantly correlated, supporting the validity of the adapted scale. Between the original scale and the one obtained, the factors cover the same areas, but the items composing them changed. These changes were expected and reflect the differences between primary and hospital health care. There are differences in the items portraying nurses’ functions, in line with evidence that the community context is generally more favourable for nurses to perform autonomous functions.
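
A minimal sketch of the psychometric checks reported here (Cronbach’s alpha, corrected item-total correlations, and first-component loadings thresholded at 0.30) is shown below. The response matrix is synthetic; the study used the adapted PES-NWI items.

```python
import numpy as np

rng = np.random.default_rng(3)
n_resp, n_items = 200, 10
latent = rng.normal(size=(n_resp, 1))
items = latent + rng.normal(scale=0.9, size=(n_resp, n_items))  # correlated items

def cronbach_alpha(x):
    """Classical internal-consistency coefficient."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")

# Corrected item-total correlation: each item against the sum of the others.
for j in range(n_items):
    rest = items.sum(axis=1) - items[:, j]
    print(f"item {j + 1}: item-total r = {np.corrcoef(items[:, j], rest)[0, 1]:.3f}")

# Loadings on the first principal component, keeping only those above 0.30.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(items, rowvar=False))
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
print("items with |loading| > 0.30 on PC1:", np.where(np.abs(loadings) > 0.30)[0] + 1)
```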

It is essential to use valid instruments to evaluate the characteristics of the nursing practice environment in primary health care, since it is the first “door” of access to health services, in which nurses play a central role.

Public Health Nursing, Continuous quality management, Validation studies.

P171 Elaboration of an IAP prevention clinical practice guideline using the ADAPTE methodology

Ana Sousa 1,2,3, Cândida Ferrito 4, José A Paiva 2.

Intubation-associated pneumonia (IAP) is the most frequent healthcare-associated infection in Intensive Care Units (ICU), causing increased length of stay, multiple health and economic costs, and antibiotic resistance [1-4]. The impact of this infection motivated this study.

To elaborate a Clinical Practice Guideline (CPG).

Using the ADAPTE methodology, we performed the following sequence of steps: set-up (definition of the study area, objectives and research questions); adaptation (search for guidelines and other relevant documents; quality selection and assessment of currency, acceptability and applicability; and elaboration of recommendations); and finalization (production of the final document, implementation, and collection of statistical feedback from users on its contributions and final result). We evaluated the document’s quality using the AGREE II instrument, and its clarity, content and applicability by presenting the CPG draft and administering a questionnaire to all its users in three ICUs. Results were processed using the Statistical Package for the Social Sciences.

After assessing the grade of evidence, applicability and acceptability, we included eight recommendations in the CPG: avoid endotracheal intubation; assess and reduce sedation daily; attempt ventilator weaning daily; change the ventilator circuit only when visibly soiled; elevate the head of the bed to 30-45º; promote early exercise and patient mobilization; maintain endotracheal tube cuff pressure between 20-30 cm H2O; and provide oral hygiene care with chlorhexidine 0.12% or 0.20% [5-9]. On the AGREE II, the CPG obtained a rating of 7 in all domains, corresponding to high quality. The questionnaire obtained 82 responses, a response rate of 45.6%. All health care professionals stated that the CPG’s objective is clear and relevant and agreed with the content and adequacy of the recommendations. Regarding applicability, 89% of respondents considered it applicable.

There is a lack of systematization and adequacy in the elaboration of CPGs. This methodology allowed us to find the most recent recommendations on IAP prevention and to adapt them successfully to a specific scenario, according to the quality appraisal and the users' evaluation. A limitation of the study is that the evidence supporting some of the recommendations included in the CPG is moderate, and there is a shortage of experimental studies assessing the impact of implementing each individual recommendation. In the next phase

1. Agbaht K, DĂ­az E, MuĂąoz E, Lisboa T, GĂłmez F, Depuydt PO, et al. Bacteremia in patients with ventilator-associated pneumonia is associated with increased mortality: a study comparing bacteremic vs. nonbacteremic ventilator-associated pneumonia. Crit Care Med. 2007;35:2064-2070.

2. American Thoracic Society. Guidelines for the management of adults with hospital-acquired, ventilator-associated, and healthcare-associated pneumonia. Am J Respir Crit Care Med. 2005;171(4):388-416.

3. Tablan OC, Anderson LJ, Besser R, Bridges C, Hajjeh R. Guidelines for preventing health-care-associated pneumonia, 2003: recommendations of CDC and the Healthcare Infection Control Practices Advisory Committee. MMWR Recomm Rep. 2004;53:1-36.

4. Tejerina E, Frutos-Vivar F, Restrepo MI, Anzueto A, Abroug F, Palizas F, et al. Incidence, risk factors, and outcome of ventilator-associated pneumonia. J Crit Care. 2006;21:56-65.

5. Klompas M, Branson R, Eichenwald EC, Greene LR, Howell MD, Lee G, et al. Strategies to prevent ventilator-associated pneumonia in acute care hospitals: 2014 update. Infect Control Hosp Epidemiol. 2014;35 Suppl 2:S133-54.

6. Shi Z, Xie H, Wang P, Zhang Q, Wu Y, Chen E, et al. Oral hygiene care for critically ill patients to prevent ventilator-associated pneumonia. Cochrane Database Syst Rev. 2013;8:Cd008367.

7. Bo L, Li J, Tao T, Bai Y, Ye X, Hotchkiss RS, Kollef MH, Crooks N, Deng X. Probiotics for preventing ventilator-associated pneumonia. Cochrane Database Syst Rev. 2014;(10).

8. Paiva JA, et al. "Feixe de Intervenções" de Prevenção de Pneumonia Associada à Intubação. Norma 021/2015. Lisboa: Departamento da Qualidade na Saúde, Direção-Geral da Saúde; 2015.

9. National Guideline Clearinghouse. Prevention of ventilator-associated pneumonia: health care protocol. 2011.

ADAPTE, Clinical Practice Guideline, Health Care Associated Infection, Intubation-associated pneumonia, ICU.

P172 The factorial analysis of a quality of life scale for people addicted to drugs in methadone programs

Paulo Seabra 1, José Amendoeira 2, Luís Sá 1, Olga Valentim 3, Manuel Capelas 1; 1 Interdisciplinary Research Health Center, Portuguese Catholic University, Health Sciences Institute, 1649-023 Lisbon, Portugal; 2 Interdisciplinary Research Health Center, Polytechnic Institute of Santarém, Health School, 2005-075 Santarém, Portugal; 3 Portuguese Catholic University, Health Sciences Institute, 1649-023 Lisbon, Portugal. Correspondence: Paulo Seabra ([email protected]).

The scale for evaluating the quality of life (QoL) of drug users in a methadone substitution program was developed with 21 items in two subscales: "family and economic situation" (11 items) and "personal satisfaction" (10 items) [1]. Regarding reliability, the validation study for the Portuguese population in 2005 (n=236) obtained a Cronbach's alpha of 0.88, and a more recent study in 2011 (n=308) obtained 0.93 [1].

To determine the scale's factorial structure and its psychometric properties.

Methodological study. Participants: 180 drug users, with a mean age of 41 years (SD=7.58; range 24-69), mostly men (73.3%), single (55.6%) and with children (52.8%), recruited from 3 outpatient drug units. Data analysis: the correlation matrix of the items was evaluated through exploratory and confirmatory factor analysis. Dimensionality was estimated from the factor loadings and the internal consistency.

In the reliability analysis with 21 items, we found a Cronbach's alpha of 0.89, all communalities > 0.40, KMO=0.88 (p<0.001) and 58.42% of variance explained by 5 factors. When item 18 was removed (item-total correlation 0.16), all remaining items assumed an item-total correlation > 0.20; alpha increased (α=0.90) and KMO remained at 0.88, with the Bartlett test stable; communalities remained above 0.40; the total variance explained by the 5 factors increased to 60.4%, but the 5 factors diverged from the theoretical matrix and 10 items loaded on more than one factor. Through confirmatory analysis (excluding item 18), forcing the 2 factors of the original scale, we verified that 4 items had communalities < 0.30; the total variance explained after rotation fell to 43.11% and 4 items loaded on more than one factor. We then explored a 3-factor solution: KMO remained at 0.88, the Bartlett test remained within criteria, and the total variance explained after rotation was 49.6%. Although in this structure 3 items load on more than one factor, its retention is justified by the underlying theoretical model. Alpha is higher than the initial value (α=0.90), and the extraction of any further item would weaken the scale's consistency. The most stable structure comprised 3 renamed factors: 1 - personal satisfaction and self-care (8 items; 38.5% of explained variance); 2 - social and family situation (8 items; 7.4%); 3 - socio-professional and economic situation (4 items; 6.5%). The fidelity of the scale is reinforced by the internal consistency of its subscales (factor 1 α=0.85; factor 2 α=0.79; factor 3 α=0.72) and by the correlations between them (0.51-0.67; p<0.01).
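The explained-variance percentages discussed above derive from the eigenvalues of the item correlation matrix. A small numpy sketch (with randomly generated stand-in data, not the study's responses) illustrates the computation:

```python
import numpy as np

def explained_variance(items: np.ndarray, n_factors: int) -> float:
    """Proportion of total variance captured by the first n_factors
    principal components of the item correlation matrix."""
    R = np.corrcoef(items, rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    return eigenvalues[:n_factors].sum() / eigenvalues.sum()

rng = np.random.default_rng(1)
data = rng.normal(size=(180, 20))  # hypothetical 20-item responses
print(explained_variance(data, 3))
```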

The scale shows good internal consistency, a factorial structure supported by the theoretical matrix, and good discriminant capacity, as indicated by the differences found in some variables.

1. Murcho N, Pereira P. A qualidade de vida dos doentes toxicodependentes em programas de substituição com metadona no Algarve: Um estudo comparativo da sua situação em 2003 e 2008. Rev Investig em Enferm. 2011;(23):57–64.

Quality of Life, Substance related disorders, Assessment, Nursing, Methadone.

P173 Occupational sedentary lifestyle and overweight among workers of a higher education institution – Coimbra

Sónia Fialho 1, Anabela Martins 2, João Almeida 3; 1 Dietetics and Nutrition Department, Coimbra Health School, 3046-854 Coimbra, Portugal; 2 Physiotherapy Department, Coimbra Health School, 3046-854 Coimbra, Portugal; 3 Environmental Health Department, Coimbra Health School, 3046-854 Coimbra, Portugal. Correspondence: Sónia Fialho ([email protected]).

In recent decades, with changes in work processes and the innovation inherent to new technologies, there has been an increase in sedentary occupational lifestyles, in which the worker remains seated for long periods of the working day. Sedentary behaviour is associated with an increased risk of developing chronic diseases such as obesity, type 2 diabetes and cardiovascular diseases, which are the main causes of mortality and morbidity in Portugal.

This study aims to evaluate the relationship between sedentary work and overweight in teaching and non-teaching workers.

The Occupational Sitting and Physical Activity Questionnaire (OSPAQ) was applied, and the percentage of activity in each domain (sitting, standing, walking) was then calculated relative to the number of hours worked per day. Data on age, gender and body mass index were collected between December 2017 and January 2018 from a sample of 58 adult men and women in full-time employment at the time of the study. To study the correlation between the percentage of occupational sitting per working day and overweight, the authors analysed the data with SPSS Statistics.
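As a sketch of the two computations described here (OSPAQ domain percentages converted to hours per workday, then correlated with BMI), using invented values rather than the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# hypothetical records: OSPAQ sitting percentage, daily working hours, BMI
sitting_pct = np.array([70.0, 55.0, 80.0, 40.0, 65.0])
work_hours = np.array([8.0, 7.0, 8.0, 6.0, 8.0])
bmi = np.array([26.1, 23.4, 27.8, 22.0, 25.5])

sitting_hours = sitting_pct / 100 * work_hours  # hours spent sitting per workday
r, p = pearsonr(sitting_hours, bmi)
print(f"r={r:.3f}, p={p:.3f}")
```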

In the present study, 39 of the individuals were female and 19 male, aged between 31 and 62 years. In the OSPAQ analysis, 44 individuals (75.8%) spent more than 50% of their working day in the sitting position. Regarding BMI, and according to the World Health Organization classification, 32 individuals (55.2%) presented a BMI ≥ 25. Pearson correlation revealed no association between sitting time and BMI ≥ 25 (p > 0.05).

With this study it was possible to verify that there are individuals with a sedentary lifestyle associated with their working day. No association between occupational sitting and BMI ≥ 25 was found in this study, although other studies demonstrate a significant association between these two parameters. A further study, including other criteria, is in progress, involving anthropometric measures such as body fat and waist circumference, as well as physical activity assessments.

1. Yang L, Hipp JA, Lee JA, Tabak RG, Dodson EA, Marx CM, Brownson RC. Work-related correlates of occupational sitting in a diverse sample of employees in Midwest metropolitan cities. Preventive Medicine Reports. 2017;6:197-202.

2. Chau J, Van der Ploeg H, Dunn S, Kurko J, Bauman A. Validity of the occupational sitting and physical activity questionnaire. Medicine & Science in Sports & Exercise. 2012;44(1):118-125.

3. Lin T, Courtney TK, Lombardi DA, Verma SK. Association between sedentary work and BMI in a U.S. national longitudinal survey. American Journal of Preventive Medicine. 2015;49(6):117-123.

4. Mummery WK, Schofield GM, Steele R, Eakin EG, Brown WJ. Occupational sitting time and overweight and obesity in Australian workers. American Journal of Preventive Medicine. 2005;29:91-97.

Health promotion, Sedentary behaviour, Body mass index, Workplace.

P174 Oral health assessment among elderly stroke patients

Nélio Veiga 1,2, Ricardo Figueiredo 1, António Coelho 1, André Almeida 1, Gonçalo Lopes 1, Salvatore Bellantone 1; 1 Health Sciences Institute, Portuguese Catholic University, 3504-505 Viseu, Portugal; 2 Center for Interdisciplinary Health Research, Portuguese Catholic University, 3504-505 Viseu, Portugal. Correspondence: Nélio Veiga ([email protected]).

Oral hygiene can become a very difficult task for patients who have suffered a stroke, due to the motor and cognitive complications and lack of coordination that usually accompany the post-stroke period. Many of these patients require support from caregivers to properly clean the oral cavity as well as their prostheses.

To characterize oral health among elderly people who have suffered a stroke.

A cross-sectional observational study was carried out with institutionalized elderly individuals aged between 60 and 98 years. Data collection was carried out in two nursing homes in the city of Viseu, Portugal: Fundação Dona Mariana Seixas and Viscondessa House of São Caetano. Due to the health limitations of the study participants, in terms of speech and cognitive problems, the final sample consisted of 30 participants. Data were collected through the application of a questionnaire.

Of the final sample, 20 elderly participants had had at least one stroke episode, representing 67% of the sample. Of these 20, 5 reported cognitive or speech sequelae, while the remaining 15 reported motor dysfunction as the main sequela. Of the 30 institutionalized elderly, 19 were completely edentulous, corresponding to 63% of the sample; the remaining 37% reported multiple tooth losses, due mainly to dental caries and periodontal problems. These problems may be associated with the participants' deficient oral hygiene: 18 stated that they brush their teeth or dentures only once a day, while 40% said they did not take care of their own oral hygiene.

Within the limitations of this study, it is possible to conclude that stroke is frequent among the Portuguese population and that patients who have suffered a stroke have poorer oral hygiene.

Stroke, Oral hygiene, Elderly, Oral health, Dysfunction.

P175 The daily life of people with human immunodeficiency virus in an island space: what trends?

Gilberta Sousa 1, Maria A Lopes 2, Vitória Mourão 3; 1 Universidade da Madeira, 9000-034 Funchal, Madeira, Portugal; 2 Escola Superior de Enfermagem de Lisboa, 1600-190 Lisboa, Portugal; 3 Universidade de Lisboa, 1649-004 Lisboa, Portugal. Correspondence: Gilberta Sousa ([email protected]).

Human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) infection continues to haunt the lives of millions of people and, despite the progress made in its treatment, an estimated 36.7 million people worldwide were living with HIV in 2017 [1]. In 2016 and up to June 30, 2017, 1,030 new cases of HIV infection were reported in Portugal, corresponding mostly (99.7%) to individuals aged ≥ 15 years. Portugal has had among the highest rates of new HIV and AIDS cases in the European Union [2]. It is not possible to stop the HIV epidemic with medical interventions alone. It is vital to address, in everyday life, the underlying social issues that prevent people from accessing interventions for the prevention, diagnosis and treatment of infection, including unequal human rights, stigma and discrimination. When a person is stigmatized or unable to access services as a result of discrimination, the health of the entire community is threatened and epidemic HIV transmission continues to expand rather than contract [3].

To understand how people with HIV/AIDS live their everyday lives in an island space.

Qualitative study using grounded theory [4]. In-depth interviews were carried out with a convenience sample of seropositive adults of any sexual orientation who wished to talk about their experience. Data analysis included initial coding, grouping of codes, identification of categories/subcategories and memo writing. Ethical requirements were fulfilled and the assent of an ethics committee was obtained.

Participants were between the ages of 25 and 67 and had primary or secondary education. The analysis of the data gave rise to three categories: "living with fear", "surviving" and "facing fear". The data are discussed in the light of transitions theory [5], regarding how everyday life is lived in an island space.

We hope that the findings help in understanding the daily lives of people with HIV/AIDS, because in order to overcome this transition they have to reconfigure their way of living, especially when living on an island. The findings show how they face and fight stigma in daily life, which situations are most demanding, and which strategies they use. The strategies used and suggested will provide input to the health system and to nursing professionals for the design of new programs that enable each patient to respond to their individual needs with the resources each one has.

1. UNAIDS. Right to Health - My health, my right. 2017. Available at: http://www.unaids.org/sites/default/files/media_asset/RighttoHealthReport_Full_web%2020%20Nov.pdf

2. PORTUGAL. MinistÊrio da Saúde. Instituto Nacional de Saúde Doutor Ricardo Jorge, IP, Infeção VIH/SIDA: a situação em Portugal a 31 de dezembro de 2016/Departamento de Doenças Infeciosas do INSA. Unidade de Referência e Vigilância Epidemiológica; Programa Nacional para a Infeção VIH/SIDA. Direção-Geral da Saúde. - Lisboa: Instituto Nacional de Saúde Doutor Ricardo Jorge, IP, - (Documento VIH/SIDA; 148). 2017.

3. PEPFAR. President’s Emergency Plan for AIDS Relief. U.S. Department of State Office of the U.S. Global AIDS Coordinator and Health Diplomacy 2017. https://www.pepfar.gov/documents/organization/267809.pdf

4. Charmaz K. Constructing Grounded Theory. 2nd edition. London: Sage Publications Limited; 2014.

5. Meleis AI, Sawyer LM, Im EO, Hilfinger Messias DK, Schumacher K. Experiencing Transitions: An Emerging Middle-Range Theory. ANS Adv Nurs Sci. 2000;23(1):12-28.

People with HIV/AIDS, Theory of transitions, Everyday life, Stigma, Nursing.

P176 Validation of the nursing diagnosis of impaired walking in elderly

Cristina Marques-Vieira 1,2, Luís Sousa 3,4, Débora Costa 5, Cláudia Mendes 5, Lisete Sousa 5,6, Sílvia Caldeira 1,2; 1 Lisbon School of Nursing, Institute of Health Sciences, Portuguese Catholic University, 1649-023 Lisbon, Portugal; 2 Interdisciplinary Research Center for Health, Portuguese Catholic University, 1649-023 Lisbon, Portugal; 3 Curry Cabral Hospital, Central Lisbon Hospital Center, 1069-166 Lisbon, Portugal; 4 Atlântica School of Health, 2730-036 Barcarena, Portugal; 5 Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal; 6 Statistics and Applications Center, University of Lisbon, 1749-016 Lisbon, Portugal. Correspondence: Cristina Marques-Vieira ([email protected]).

Increased longevity is accompanied by activity restriction in the elderly, causing changes in the execution of daily activities and, consequently, in quality of life [1]. Walking is an activity that requires a variety of skills and can be highly complex, particularly for elderly people [2]. The nursing diagnosis impaired walking has been part of NANDA International since 1998 and requires further validation to improve the clinical evidence [3].

To validate the nursing diagnosis impaired walking in a sample of elderly people.

Observational, cross-sectional and quantitative study. In the first research phase, a systematic literature review, several defining characteristics and related factors of the diagnosis impaired walking were listed [2]. The translation and linguistic and cultural adaptation of the nursing diagnosis were then conducted, and finally the clinical validation of the diagnosis was performed using Richard Fehring's clinical validation model [4], in a sample of elderly people and with the collaboration of registered nurses and rehabilitation nurses, who collected the data and filled in the questionnaires comprising demographic data, the defining characteristics, the related factors and the Falls Efficacy Scale International [5]. The study was approved by the ethics committee of SESARAM, E.P.E. (Madeira Island Healthcare System).
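Fehring-style clinical validation rests on per-characteristic ratios. The sketch below is a simplified, hypothetical reading in which each defining characteristic is scored by the proportion of diagnosed patients in whom it was observed; the 0.80 (major) and 0.50 (minor) cut-offs are conventions commonly cited for Fehring's models, and the data are randomly generated:

```python
import numpy as np

def characteristic_scores(observed: np.ndarray) -> np.ndarray:
    """observed: (n_patients, n_characteristics) binary matrix for patients
    carrying the diagnosis. Returns each characteristic's presence ratio;
    >= 0.80 is commonly read as 'major', 0.50-0.79 as 'minor'."""
    return observed.mean(axis=0)

rng = np.random.default_rng(2)
obs = rng.integers(0, 2, size=(81, 17))  # hypothetical: 81 diagnosed elderly, 17 characteristics
print(np.sort(characteristic_scores(obs))[::-1][:5])  # five most sensitive
```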

In the systematic literature review, 17 defining characteristics and 34 etiological factors of impaired walking were identified. A European Portuguese version was obtained and validated in a sample of 126 elderly people, with an average age of 73.86 years, mostly female, with primary school education, retired, widowed and with a history of falls. The prevalence of impaired walking was 64.3% according to the experts' opinion and 67.5% according to the elderly. All defining characteristics and related factors were validated. Nine defining characteristics were the most sensitive (e.g., impaired ability to regulate gait speed), as were four related factors (fear of falling, physical deconditioning, medication and female gender).

This study justifies the need to review the defining characteristics and related factors of impaired walking. The identification of the most sensitive defining characteristics facilitates nurses’ clinical reasoning and interventions towards effective nursing outcomes.

1. Marques-Vieira CM, Sousa LM, Carias JF, Caldeira SM. Nursing diagnosis “impaired walking” in elderly patients: integrative literature review. Rev Gaucha Enferm. 2015;36(1):104-111.

2. Marques-Vieira CM, Sousa LM, Sousa LM, Berenger SM. The nursing diagnosis impaired walking in elderly: systematic literature review. Texto & Contexto Enferm. 2016;25(3):e3350015.

3. Herdman HT, Kamitsuru S, editors. Nursing Diagnoses: Definitions & Classification 2018-2020. Oxford: Wiley-Blackwell; 2017.

4. Fehring RJ. Methods to validate nursing diagnoses. Heart Lung. 1987;16(6):625-629.

5. Marques-Vieira CM, Sousa LM, Severino S, Sousa L, Caldeira S. Cross-cultural validation of the falls efficacy scale international in elderly: systematic literature review. J Clin Gerontol Geriatr. 2016;7(3):72-76.

Nursing, Nursing Diagnosis, Walking, Gait, Validation studies.

P177 Validation of the nursing diagnosis risk for falls in elderly

Falls and their consequences are critical for elderly well-being and quality of life, for caregivers, and for health care providers [1]. The nursing diagnosis risk for falls has been listed in NANDA International since 2000 [2]. This diagnosis seems particularly important in planning effective nursing care for the community-dwelling elderly.

To validate the nursing diagnosis risk for falls in a sample of elderly.

Observational, cross-sectional and quantitative study conducted in three phases. The first phase corresponded to a systematic literature review to identify the risk factors of risk for falls [3]. The second phase consisted of the translation and linguistic and cultural adaptation of the nursing diagnosis into European Portuguese. The third was the clinical validation of the diagnosis using Richard Fehring's clinical validation model [4], in a sample of elderly people and with the collaboration of registered nurses and rehabilitation nurses, who collected the data and filled in the questionnaires comprising demographic data, the risk factors and the Falls Efficacy Scale International [5]. The study was approved by the ethics committee of SESARAM, E.P.E. (Madeira Island Healthcare System).

A total of 50 risk factors of risk for falls were identified in the systematic literature review. A European Portuguese version was obtained and submitted to clinical validation in a sample of 126 elderly people, with an average age of 73.86 years, mostly female, with primary school education, retired, widowed and with a history of falls. The prevalence of risk for falls was 68.3% in the experts' opinion and 63.5% in the opinion of the elderly. All risk factors were validated. The most sensitive risk factors were history of falls, comorbidities, female gender, polymedication, difficulty with gait, and drugs.

This study found the main risk factors for falls in a sample of community-dwelling elderly. The identification of the most sensitive risk factors may support nurses’ clinical reasoning and interventions for effective fall prevention.

1. Lusardi MM, Fritz S, Middleton A, Allison L, Wingood M, Phillips E, Criss M, Verma S, Osborne J, Chui KK. Determining risk of falls in community dwelling older adults: a systematic review and meta-analysis using posttest probability. J Geriatr Phys Ther. 2017;40(1):1-36.

2. Herdman HT, Kamitsuru S, editors. Nursing Diagnoses: Definitions & Classification 2018-2020. Oxford: Wiley-Blackwell; 2017.

3. Sousa LM, Marques-Vieira CM, Caldevilla MN, Henriques CM, Severino SS, Caldeira SM. Risk for falls among community-dwelling older people: systematic literature review. Rev Gaucha Enferm. 2017;37(4):e55030.

4. Fehring RJ. Methods to validate nursing diagnoses. Heart Lung. 1987;16(6 Pt 1):625-629.

Nursing, Nursing Diagnosis, Risk for falls, Fear of falling, Validation studies.

P178 Teachers and professors’ mental health: prevalence of self-reported psychological symptoms

The work of teaching professionals is recognized as demanding, involving dynamic interactions with students, parents, colleagues and school authorities [1]. Teaching has been ranked as one of the most stressful professions, and this applies to all professional teaching roles. Several research reports have consistently documented the physical and psychological symptoms experienced by teaching professionals. Physical complaints and psychosomatic symptoms such as lower back pain, headache, voice disorders and anxiety are frequently faced by teaching professionals, both in secondary and higher education, especially women [1-5]. The presence of psychopathological symptoms in teachers is related to their rating of children's mental health behaviours [6], and such symptoms are also determinants of professional burnout. Therefore, the identification of psychological symptoms among teachers is relevant in secondary and higher education.

To identify the prevalence of self-reported psychological symptoms; to characterize the symptoms by dimension; and to determine the differences in psychological symptoms between high school teachers and higher education professors.

Cross-sectional correlational study, with a non-probabilistic sample of 96 Portuguese teaching professionals. Data were collected using an online questionnaire composed of sociodemographic questions and the Portuguese version of the Brief Symptom Inventory (BSI): 53 items covering nine symptom dimensions (Somatization, Obsession-Compulsion, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation and Psychoticism) and three indices of distress: Global Severity Index (GSI), Positive Symptom Distress Index (PSDI), and Positive Symptom Total (PST). Ethical procedures were taken into account.
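The three BSI distress indices follow simple scoring rules: the GSI is the mean of the 53 item scores, the PST is the count of items rated above zero, and the PSDI is the mean score over those positive items. A minimal sketch with a hypothetical answer pattern:

```python
import numpy as np

def bsi_indices(item_scores: np.ndarray):
    """item_scores: the 53 BSI item responses (0-4) of one respondent."""
    gsi = item_scores.mean()                        # Global Severity Index
    pst = int((item_scores > 0).sum())              # Positive Symptom Total
    psdi = item_scores.sum() / pst if pst else 0.0  # Positive Symptom Distress Index
    return gsi, pst, psdi

example = np.array([0] * 34 + [1] * 12 + [2] * 7)  # hypothetical 53-item pattern
print(bsi_indices(example))  # -> (0.4905..., 19, 1.3684...)
```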

Teaching professionals were mostly women (70.8%), aged between 30 and 62 years (Mean=44.8; SD=7.86), 43.8% with a bachelor's degree, 27.1% diagnosed with a mental disease, and 41.1% acquainted with mental health patients. Professors (37.5%) were from different fields, including health, engineering, arts, communication, social sciences, marketing and sports. High school teachers (62.5%) were mainly from sociology, philosophy and mathematics. The highest-scoring dimension was Obsession-Compulsion among high school teachers (Mean=1.03; SD=0.75). Globally, high school teachers revealed more symptoms of distress than higher education professors. Significant differences between groups were found in Somatization, Obsession-Compulsion, Interpersonal Sensitivity, Depression, Anxiety, Phobic Anxiety, Psychoticism, GSI and PST (p < 0.05). Across the 53 BSI items, the PST was low (Mean=19.09; SD=12.70).

The prevalence of symptoms was high in this sample of teaching professionals, although they experienced psychological distress at low intensity. Differences between high school teachers and higher education professors were highlighted in this study. There is therefore a need for intervention among high school teachers, both to minimize the impact of their symptoms on the detection of mental disorders in students and to prevent professional absenteeism and burnout.

1. Au DWH, Tsang HWH, Lee JLC, Leung CHT, Lo JYT, Ngai SPC, et al. Psychosomatic and physical responses to a multi-component stress management program among teaching professionals: A randomized study of cognitive behavioral intervention (CB) with complementary and alternative medicine (CAM) approach. Behav Res Ther. 2016;80:10–16.

2. Chan AHS, Chong EYL. Subjective Health Complaints of Teachers From Primary and Secondary Schools in Hong Kong. Int J Occup Saf Ergon. 2010;16(1):23–39.

3. Ferreira RC, Silveira AP da, Sá MAB de, Feres S de BL, Souza JGS, Martins AME de BL. Transtorno mental e estressores no trabalho entre professores universitários da área da saúde. Trab Educ e Saúde. 2015;13(suppl 1):135–155.

4. Seibt R, Spitzer S, Druschke D, Scheuch K, Hinz A. Predictors of mental health in female teachers. Int J Occup Med Environ Health. 2013;26(6):856-869.

5. Zamri EN, Moy FM, Hoe VCW. Association of psychological distress and work psychosocial factors with self-reported musculoskeletal pain among secondary school teachers in Malaysia. PLoS One. 2017;12(2):e0172195.

6. Kokkinos CM, Kargiotidis A. Rating students’ problem behaviour: the role of teachers’ individual characteristics. Educ Psychol. 2016;36(8):1516–32.

Mental health, Psychological symptoms, Teachers, Teaching professionals.

P179 Generating high vegetable liking among young children to promote healthy eating: results from an intervention at a kindergarten school

Cátia Braga-Pontes 1,2, Ana Pinto Moura 3, Luís Cunha 4; 1 Center for Innovative Care and Health Technology, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 2 School of Health Sciences, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; 3 Sciences and Technology Department, Open University of Portugal, 4200-055 Porto, Portugal; 4 Department of Geosciences, Environment and Spatial Planning, Faculty of Sciences, University of Porto, 4485-661 Vila do Conde, Portugal. Correspondence: Luís Cunha ([email protected]).

Fruit and vegetables have always played a prominent role in dietary recommendations because of their high concentration of vitamins, minerals and phytochemicals, and because they are a great source of fibre [1,2]. Recent data show that, in general, the population should at least double its consumption of fruit and vegetables in order to reach the 400 g/day recommended by the World Health Organization (WHO) [3]. In Portugal, data from the National Food Survey in 2017 showed that 68.9% of children do not consume more than 400 g/day of fruit and vegetables, with a notably lower consumption of vegetables compared to fruit [4].

The purpose of this study was to evaluate the impact of a food intervention, combining exposure with a tangible reward, on food neophobia and on the liking and intake of different vegetables, in a kindergarten environment in the south of Portugal.

Children (n=82) aged 2 to 5 years, from different classes, were randomly assigned by class to the intervention (n=68) or control group (n=16), and the intervention lasted nine weeks. Children's food neophobia [5] and eating behaviour [6,7] were evaluated by parents at the beginning of the intervention; mothers' food neophobia [8] was also measured. Each week, children attended an educational session about the vegetable they would eat at lunch (carrot, bell pepper, broccoli, tomato, cucumber, purple cabbage, spinach, arugula and beet), and were rewarded with a sticker when they ate the vegetable. Intake and liking were recorded at baseline sessions and after each exposure, using ASTM's pictorial scales for children [9]. Children in the control group underwent the same procedure during the subsequent nine weeks, after the intervention group.

Children from both groups presented high levels of liking for the different vegetables, modulated by the children's personality traits and eating behaviour.

Exposure to the different vegetables with a playful approach yields high liking scores for a range of vegetables, indicating that such an approach has good potential to overcome vegetable avoidance by young children.

Trial registration: NCT03513081

1. Rodriguez-Casado A. The Health Potential of Fruits and Vegetables Phytochemicals: Notable Examples. Critical reviews in food science and nutrition. 2016;56(7):1097-107.

2. Slavin JL, Lloyd B. Health benefits of fruits and vegetables. Advances in Nutrition. 2012;3(4):506-16.

3. Pem D, Jeewon R. Fruit and Vegetable Intake: Benefits and Progress of Nutrition Education Interventions- Narrative Review Article. Iranian Journal of Public Health. 2015;44(10):1309-21.

4. Lopes C, Oliveira A, Severo M, AlarcĂŁo V, Guiomar S, Mota J, et al. InquĂŠrito Alimentar Nacional e de Atividade FĂ­sica. Porto: Universidade do Porto; 2017.

5. Pliner P. Development of measures of food neophobia in children. Appetite. 1994;23(2):147-163.

6. Wardle J, Guthrie C A, Sanderson S, Rapoport L. Development of the Children's Eating Behaviour Questionnaire. J Child Psychol Psychiatry 2001;42(7):963-970.

7. Viana V, Sinde S. O Comportamento Alimentar em Crianças: estudo de validação de um questionario numa amostra portuguesa (CEBQ). Anålise Psicológica 2008;1(XXVI):111-120.

8. Pliner P, Hobden K. Development of a scale to measure the trait of food neophobia in humans. Appetite 1992;19(2):105-120.

9. ASTM. E 2299-03. Standard Guide for Sensory Evaluation of Products by Children. United States, ASTM International; 2008.

Children, Food Neophobia, Playful intervention, Vegetables.

P180 The limitations of medicinal package leaflets

Ana P Izidoro 1, Patrícia Correia 2; 1 Farmácia Rainha, 5140-067 Carrazeda de Ansiães, Bragança, Portugal; 2 Escola Superior de Saúde do Porto, 4200-072 Porto, Portugal. Correspondence: Ana P Izidoro ([email protected]).

It is known that solving most health problems often requires a pharmacological approach. For this reason, patients must be adequately informed about their health status and about the medicines they are using. One of the most important sources of medicines information available to patients is the information leaflet. It provides a set of understandable information and should contribute to the appropriate and safe use of medicines, complementing the information given by health professionals. However, the content tends to be quite complex or too technical, and the text is very dense, with a reduced font size, making it intimidating and difficult to read.

The main objectives of this study are: to identify if users read the information leaflet and whether reading frequency is associated with the importance they attribute to it; and to identify the limitations attributed by the respondents to information leaflets and possible relationships with the socio-demographic characteristics of the population.

This was a cross-sectional, descriptive and inferential observational study, which took place between March and November 2017, based on the application of a questionnaire in the form of an interview to 320 Bragança residents. Data were collected in particular on the frequency of reading information leaflets, the importance attributed to the leaflet as a whole and to each section, and the limitations identified.

Regarding the possible relationships between the data obtained, there is sufficient statistical evidence to affirm an association between the frequency of reading the information leaflet and the importance attributed to it, but the same cannot be affirmed for the association between the limitations attributed to information leaflets and the socio-demographic characteristics of the respondents.

In sum, these associations, together with the fact that most respondents indicated the use of very technical language as the main limitation in reading information leaflets, may mean that information leaflets are not being developed in a way that promotes reading by the youngest and the oldest, the groups with the greatest reading difficulties.

Information leaflet, Illiteracy, Information, Limitations, Adherence to therapy.

P181 Eating habits and perception of body image in higher education students

Helena Catarino 1,2, Clementina Gordo 1. Correspondence: Helena Catarino ([email protected]).

The transition to higher education is one of the risk factors in the adoption and maintenance of healthy lifestyles [1] and, at this stage, many students develop concerns related to eating habits, their body [2] and body image.

This correlational study had two main objectives, namely, to characterize eating habits and the perception of body image among higher education students; and to relate eating habits to the students’ perception of body image.

We used a questionnaire to collect socio-demographic and anthropometric data, an eating habits scale [3] and the Body Shape Questionnaire [4]. The non-probabilistic convenience sample consisted of 386 students, with a mean age of 21.94 years (SD = 5.30), of whom 82.9% were female.

According to the WHO Body Mass Index classification, 72.3% of the students presented normal weight, 14% pre-obesity and 6.2% moderate thinness. Regarding concerns related to body image, the majority (54.7%) presented no distortion of body image; the remaining 45.3% presented distortion of body image, and among these, 16.6% presented severe distortion. Regarding eating habits, the results show that the sample had adequate eating habits. The majority of respondents ate 5 (36.3%) or 4 (32.6%) meals per day, and for the four dimensions of the eating habits scale (quality, quantity, variety and food adequacy), the scores varied between a minimum of 2 and a maximum of 5; the highest mean value was found for food variety (M = 3.62, SD = 0.53) and the lowest for food quality (M = 3.40, SD = 0.56). The correlations between the dimensions of the eating habits scale (quality, quantity, variety and food adequacy) and students' perceptions of body image were statistically non-significant (respectively, r = 0.099, r = 0.011, r = -0.046, r = -0.038; p > 0.05).
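The WHO cut-offs behind this classification can be written as a small classifier; the example values are hypothetical:

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify BMI using the WHO cut-offs referenced in the abstract."""
    bmi = weight_kg / height_m ** 2
    if bmi < 16.0:
        return "severe thinness"
    if bmi < 17.0:
        return "moderate thinness"
    if bmi < 18.5:
        return "mild thinness"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "pre-obesity"
    return "obesity"

print(bmi_category(58.0, 1.65))  # hypothetical student -> 'normal weight'
```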

Although the results show that these students have adequate eating habits, it is important to continue to study the food culture of the populations, to identify individuals at risk, to intervene from the point of view of promoting the acquisition of healthy eating habits, and to study the factors underlying the distortion of body image and its influence on students' physical and mental well-being.

1. Soares A, Pereira M, Canavarro J. Saúde e qualidade de vida na transição para o ensino superior. Revista Psicologia, Saúde & Doenças. 2014;15(2):356-379.

2. Tekin C, Bozkir C, Nese Karakas N, Gunes G. The relation between the body perceptions and eating habits of the students in Inonu University. Annals of Medical Research. 2017;24(1):1-9.

3. Marques A, Luzio F, Martins J, Vaquinhas M. Håbitos alimentares: validação de uma escala para a população portuguesa. Escola Anna Nery Revista de Enfermagem. 2011;15(2):402-409.

4. Pimenta F, Leal I, Maroco J, Rosa B. Validação do Body Shape Questionnaire (BSQ) numa amostra de mulheres de meia-idade. Atlas do 9º Congresso Nacional de Psicologia da Saúde. Lisboa: Placebo Editora; 2011.

Eating habits, Body image, Students.

P182 Psychosocial impact of assistive technologies for mobility on the participation of wheelchair users

Anabela Martins 1, João Pinheiro 2, Patrícia Francisco 1, Inês Domingues 2; 1 Physiotherapy Department, Coimbra Health School, 3046-854 Coimbra, Portugal; 2 Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal. Correspondence: Anabela Martins ([email protected]).

Recognizing that assistive technology (AT) for mobility plays an important role in its users' participation, it is useful for rehabilitation professionals and services to assess whether the perceived psychosocial impact of such devices, and of the associated services, systems and policies, contributes to enhancing lifelong capacity and performance [1-3]. To build comprehensive rehabilitation services, information on a person's experience in every aspect of his/her life is essential, as is the role of AT in functioning and, particularly, in participation.

To study the psychosocial impact of manual and powered wheelchairs on the participation of their users.

From May to December 2017, sixty AT users (30 powered and 30 manual wheelchairs), aged 28-66 years (mean 46.63 ± 10.66), 53.3% female, with mixed diagnoses, were interviewed using the Psychosocial Impact of Assistive Devices Scale (P-PIADS), the Activities and Participation Profile related to Mobility (PAPM), and questions on demographics, clinical status, AT use and training.

The participation profiles revealed that 6.7% of the participants presented no restrictions, 20.0% mild, 35.0% moderate and 38.3% severe restrictions in social participation. All subscales (competence 1.50, adaptability 1.45, self-esteem 1.15) and the P-PIADS total (1.38) were positive and moderately correlated with the activities and participation profile. Age, type of wheelchair and AT training did not statistically influence participation; however, the number of years using a wheelchair did. In general, training experiences were described as occurring in a clinical setting and as aimed at enhancing the capacity to handle the new device, with less attention to users' performance in the community and at home, to reducing barriers and to promoting a facilitating environment.

These results encourage the authors to keep studying the impact of AT for mobility on participation, namely manual and powered wheelchairs, to develop robust evidence for rehabilitation, contributing to the efforts of the Global Cooperation on Assistive Technology initiative and supporting Rehabilitation 2030: A Call for Action [4].

2. Lofqvist C, Pettersson C, Iwarsson S, Brandt A. Mobility and mobility-related participation outcomes of powered wheelchair and scooter interventions after 4-months and 1-year use. Disabil Rehabil Assist Technol. 2012;7(3):211-218.

4. Gimigliano F, Negrini S. The World Health Organization “Rehabilitation 2030: a call for action”. Eur J Phys Rehabil Med 2017;53:155-68.

Assistive Technologies, Wheelchair, Psychosocial impact, Social participation.

P183 Unconventional therapies in nursing - innovating the practice

Maria I Santos ([email protected]), Higher School of Health, Polytechnic Institute of Santarém, 2005-075 Santarém, Portugal.

The use of unconventional therapies by nurses was transmitted to us, as nursing students, in various clinical teaching contexts. The disciplinary and professional coherence of these therapies, as well as their therapeutic efficacy, legitimizes, from the perspective of nurses, the innovation they bring to the practice of care.

To understand the process of integrating non-conventional therapies into nursing practice, in the following dimensions: identification of the therapies in use; the meanings assigned; and the strategies used, as evaluated by nurses and patients.

Grounded theory was used, in the constructivist perspective of Kathy Charmaz. The main data collection techniques were the intensive interview and participant observation. Participants were 15 nurses from 9 public hospitals, at district and central levels, from the north, centre and south of the country, and a team of 10 nurses and 17 users of a pain service at an oncology hospital.

From the results we highlight that nurses innovate their practice of care through the use of non-conventional therapies of an environmental, manipulative, mental-cognitive, energetic and relational nature. The physical, social and normative environments condition the practice of these therapies; the modes of action show the importance nurses attach to ethical aspects, and the (re)combination of several techniques results in innovative and individualized care. Nurses evaluate these therapies as very effective in various health/illness situations, emphasizing their contribution to well-being.

Conclusions and implications of the study: nurses identify a high conceptual and professional coherence of non-conventional therapies with nursing, considering that these therapies innovate and considerably widen the repertoire of practice. The innovation of nursing practices through the integration of these therapies contributes to the well-being of the people cared for and enables them to better manage their health, which is relevant in an aging society.

Unconventional therapies, Nursing, Innovation practices, Well-being.

P184 Translation and cultural adaptation of English Modified DABS (EMDABS) Scale for Portuguese language

Pedro JC Luz 1,2 ([email protected]); 1 Hospital Professor Doutor Fernando Fonseca, 2720-276 Lisbon, Portugal; 2 Lisbon Nursing School, 1600-190 Lisbon, Portugal.

Aggressive episodes by clients are recurrent in mental health settings, making it imperative to adapt staff interventions to stabilize the client's psychopathological state and consequently reduce the risk of injury. The de-escalation technique prevents the escalation of aggressiveness and allows a relationship of trust to be established by demonstrating understanding of the client's problems and concerns [1]. The same authors performed the psychometric validation of the English modified De-Escalating Aggressive Behaviour Scale (EMDABS). This scale aims to evaluate the education/training and de-escalation skills of health professionals, based on seven items: "valuing the client; reducing fear; inquiring about client's queries and anxiety; providing guidance to the client; working out possible agreements; remaining calm; risky". The scale revealed a high level of consistency across the seven de-escalation items (α=0.901).

This study aims to translate and culturally adapt the EMDABS scale to the Portuguese language, in order to improve nursing de-escalation techniques.

The translation and cultural adaptation of this instrument followed a methodological sequence: translation of the scale into Portuguese; focus group discussion; back-translation into English; and pre-testing [2]. The study was authorized by the hospital ethics committee, and the focus group participants gave free and informed consent.

The first translation was carried out by two teachers, born in Portugal and fluent in English, with degrees in Modern Languages and Literatures. Conceptual rather than literal translation was privileged, preserving the simple and accessible language that characterizes the original instrument. The translation was evaluated by a focus group of 6 professionals with experience in the field, all Portuguese speakers fluent in English, who assessed the translation obtained and suggested the necessary changes. The translation was then back-translated into English by a clinical psychologist from South Africa, a native English speaker fluent in Portuguese, with experience in the field as well as in the translation of evaluation instruments; this professional had had no previous contact with earlier versions of the instrument. The pre-test was applied to a group of ten nurses to evaluate the adequacy of the structure and items of the scale, as well as to identify any difficulties in filling it in.

In the pre-test, the instrument showed good internal consistency and the participants did not report difficulties in filling it in. The next step is to determine the psychometric properties of the scale.

1. Mavandadi V, Bieling P, Madsen V. Effective ingredients of verbal de-escalation: validating an English modified version of the De-Escalating Aggressive Behaviour Scale. J Psychiatr Ment Health Nurs. 2016;23(6-7):357-68.

2. Waltz C, Strickland O, Lenz E. Measurement in Nursing and Health Research (4th. Ed.). New York: Springer Publishing Company; 2010.

Aggressive behavior, Cultural adaptation, De-escalation, Mental illness, Nursing interventions.

P185 Supporting informal caregivers of people dependent in self-care

Maria A Dixe 1,2, Ana CS Cabecinhas 2, Ana JCF Santos 2, Marina G Silva 2, Maura R Domingues 2, Ana Querido 1,2. Correspondence: Maria A Dixe ([email protected]).

Informal caregivers (ICs) are crucial in providing care to dependent patients, who are frequently aged and suffering from chronic diseases. Besides helping dependent people carry out daily activities, ICs also provide support and assistance in dealing with self-care difficulties.

This quantitative exploratory descriptive study aims to: identify the self-care needs in which the person is dependent, and the degree of dependency; identify the socio-demographic characteristics of ICs and the support they receive to care for a dependent person; identify the type and quality of information provided to ICs and the health professional who delivered it; and assess ICs' perception of their abilities to care for dependent patients.

An intentional sample of thirty-three dyads of dependent patients (dependent in at least one self-care daily activity) and their caregivers participated in this study. Participants were referred by the health care team of a hospital service at the time of discharge. Dependent people's clinical and sociodemographic data were collected from clinical files. ICs answered a structured interview covering sociodemographic and professional data, the type of support in caring for the dependent person, and the ability of ICs to care for the dependent person and to promote self-care. The study was approved by the ethics committee of the hospital where it was conducted (no. 24/2017).

The majority of dependent people were female (60.6%), with a mean age of 81.6 (±11.3) years. Regarding self-care activities, the majority were dependent and did not take part in preparing medication (66.7%), preparing food for ingestion (51.5%), or obtaining objects for the bath (51.5%). Caregivers were mostly women, with a mean age of 61.4 (±12.1) years. Regarding family ties, it is mostly a son/daughter (39.4%) or a spouse (33.3%) who takes care of the family member, on average for 63.9 (±93) months. Nurses were the main providers of information; nonetheless, most of the caregivers were not given information about auxiliary equipment (79.3%) or about available economic support (71.0%). ICs felt least competent in caring for a dependent person with respect to transfers. Dressing was the area in which the dependent person presented the highest degree of dependence. Regarding feeding, ICs reported the identification of signs of malnutrition and dehydration as the main difficulty.

Supporting families in acquiring practical problem-solving skills for everyday life, and in managing resources and the emotional needs of the dependent person, should be promoted by health professionals from the beginning of hospitalization, so that caregivers can take good care of the family member and of themselves.

Capacity, Informal caregiver, Self-care, Dependent-person, help2care project.

P186 Absenteeism among nurses in primary health care

Rosa M Freire 1, Rosário Vieira 2, Elisabete Borges 1; 1 Escola Superior de Enfermagem do Porto, 4200-072 Porto, Portugal; 2 Agrupamento de Centros de Saúde Tâmega II - Vale do Sousa Sul, 4560-682 Porto, Portugal.

The Primary Health Care reform and the introduction of Health Centre Groupings brought changes to the management standards in force until then. The new care model reorganized and changed work contexts and human resources, and the new organizational paradigm requires professionals to adapt to this reality. In any situation of change, people react with resistance of greater or lesser intensity, and changes in organizations can create resistance that may manifest itself as absenteeism. In health organizations, the absenteeism of nurses constitutes a complex problem that needs to be valued and monitored by nursing managers [1].

To identify the presence and causes of absenteeism and analyse the relationship between absenteeism and demographic and professional variables.

A quantitative, exploratory and cross-sectional study was conducted. A sociodemographic/professional questionnaire and questions related to absenteeism were used for data collection. The sample comprised 109 nurses working at primary health care units in northern Portugal: 77.1% were female, 54.1% were aged over 36 years, 72.5% were married, 82.6% worked fixed schedules, 51.4% had around 14 years of job experience (ranging between 7 and 33 years), and 81.7% considered their job stressful.

Of all nurses, 66% admitted having been absent from service in the last year: 28.4% due to their own illness and 16.5% due to a relative's illness. Statistically significant associations were found between absenteeism and having children (p = 0.007) and place of work (p = 0.045). We also found that nurses with children, and those who work in Family Health Units, are absent from work most frequently.
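The abstract does not name the test behind these p-values; assuming a chi-square test of independence on a 2×2 contingency table (the counts below are invented purely for illustration), the computation would look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical 2x2 table: having children (rows) x absent in the last year (columns)
table = np.array([[45, 15],   # with children: absent / not absent
                  [27, 22]])  # without children: absent / not absent
statistic, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={statistic:.2f}, p={p_value:.4f}, dof={dof}")
```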

We believe that the results can induce the development, implementation and evaluation of preventive measures, aiming at the promotion and protection of nurses' health. In this regard, they can be crucial for the management of human resources and for strategic planning of the ACeS, in particular, in grounded decision-making.

1. Sancinetti TR, Soares AVN, Lima AFC, Santos NC, Melleiro MM, Fugulin FMT, et al. Nursing staff absenteeism rates as a personnel management indicator. Rev Esc Enferm USP. 2011;45(4):1007-1012.

Nurse, Absenteeism, Primary health care, Management in nursing.

P187 Assessment and intervention in a family with a care dependent person and mental illness: a case study

Inês Esteves, Isabel Bica; Health School, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal. Correspondence: Isabel Bica ([email protected]).

Mental illness can have profound repercussions on family dynamics, since it is a serious illness and generates great stress. The assessment of, and intervention in, a family with psychosocial problems requires the nursing team to use models that support the design of care, both for data collection and for the planning of interventions in a domiciliary context.

To analyse the care needs in a family with mental illness using the Dynamic Model of Family Assessment and Intervention (MDAIF).

The case study focused on the process of family intervention developed with a family in a domiciliary context in Primary Health Care, according to MDAIF. The following data collection instruments were applied: genogram, ecomap, FACES II, Graffar and Family Apgar Scales.

The family consists of a middle-aged couple, childless, lower middle class (grade IV, Graffar, 1956). The woman is the main care provider for her husband, who has a mental illness. They have an extensive network of community support, namely Social Solidarity Institutions, recreational associations, the extended family and the Community Care Unit (health team). As a couple, they show good cohesion and adaptability in facing everyday problems (FACES II scale) and perceive their family as highly functional (score 10, Family Apgar scale), despite perceived difficulties in communication and emotional expression. After the interventions of the nursing team in the different dimensions of the MDAIF, gains were obtained in knowledge and capacity for income management, in the heating of the dwelling, in the management of and knowledge about the mental illness, in communication and marital satisfaction, and in the caregiver role. The caregiver was thus trained in knowledge and skills for the care of a dependent person with mental pathology, in order to maximize his health, as well as in the importance of satisfying her own needs.

Valuing caregiver perspectives is essential for adjusting the formal responses of the nursing team. Empowering family caregivers, enabling them to express themselves and be active elements in the healthcare process, and articulating formal and informal responses constitute challenges for health professionals. The support provided by the health team to the caregiver and the dependent patient proved effective not only in improving family functioning but also in reducing the caregiver's stress, burnout and isolation.

Caregiver, Family, Nursing, Primary health care.

P188 Elderly people, physical therapy services and human resources: current and future challenges

Carla Leão 1,2 ([email protected]); 1 Escola Superior de Saúde, Universidade Atlântica, 2730-036 Barcarena, Portugal; 2 Instituto Português de Relações Internacionais, 1000-155 Lisboa, Portugal.

Population aging in Portugal, with the consequent increase in the number of elderly people, is evident and ongoing. Undeniably, elderly people are the main users of health services, and specifically of physiotherapy services. The economic and financial crisis that affected Portugal imposed restrictions on the health system and specifically on the National Health Service (NHS) [1]. Considering this scenario, our guiding question was: in Portugal, are the physiotherapy services and human resources included in the health system at the NUTS III regional level, and essentially in the NHS, proportional to the number of elderly people, and do they respond to their current and future needs? [NUTS III - Nomenclature of territorial units for statistics (NUTS), with level III corresponding to the territorial division constituted by 30 regions, according to the 2013 division.]

We aim to understand the current and future numbers of elderly people at the regional level (NUTS III); to determine the current number of physiotherapy services and human resources at the same regional level; to project future physiotherapy services and human resources, considering health policy trends related to services and human resources; and to assess whether these services and human resources respond to the needs of the current and future elderly population.

For this purpose, we performed a statistical analysis at the NUTS III regional level, considering as indicators the number of elderly people and the physiotherapy services and human resources. We used data from the National Statistics Institute, the Health Regulatory Body, the Portuguese Physiotherapists Union and the Portuguese Association of Physiotherapists.

The results showed that, at NUTS III regional level, physiotherapy services and human resources are scarce and are not proportional to the number of elderly people. Given the growing number of elderly people and the restrictive policy trend, we expect that, if current policies continue, this lack of proportionality will persist.

We therefore conclude that current and future physiotherapy services and human resources do not respond to the needs of the current and future elderly population. Physiotherapy services and human resources need to be adapted to the current and future demographic reality in order to achieve better levels of healthy life expectancy.

1. Rodrigues TF, Martins MRO (coordinators). Envelhecimento e saúde. Prioridades políticas num Portugal em mudança. Lisboa: CEPESE/Instituto Hidrográfico; 2014.

Portugal, Elderly people, Physical therapy services, Physical therapy human resources.

P189 Ultrasound evaluation of subcutaneous tissue and its relation with fat mass in women with cellulite

Alexandra André, João P Figueiredo, Óscar Tavares, Catarina Bulamarqui, Catarina Frade, Escola Superior de Tecnologia da Saúde de Coimbra, Instituto Politécnico de Coimbra, 3046-854 Coimbra, Portugal.

Ultrasound (US) is an easily accessible imaging modality that allows the evaluation of various structures. Despite the technique's potential, few studies have applied US to the analysis of subcutaneous tissue. US allows the evaluation of altered tissue such as gynoid lipodystrophy (LDG), a very common alteration in women. The better the characterisation, location, and definition of the affected areas and the grading of LDG severity, the greater the success of treatment. In this study, US and densitometry were performed to evaluate the thickness of the subcutaneous tissue and body composition, namely lean mass and fat mass.

The aim was, using two different imaging modalities, to evaluate the subcutaneous tissue of normal-weight and overweight women, as well as its possible relation with the percentage of fat in women with cellulite.

The study was performed on 25 individuals aged between 18 and 60 years, after applying inclusion and exclusion criteria. Images were acquired at the midpoint between the greater trochanter and the medial aspect of the gluteal region, and on the proximal third of the thigh, particularly its proximal and lateral portion, between the greater trochanter and the knee joint.

Spearman's correlation tests were used to assess the associations between bone mineral density, fat mass, and subcutaneous tissue thickness, with significance set at p ≤ 0.005. With the results obtained, we verified that there is no correlation between bone mineral density, fat mass, and the subcutaneous tissue thickness of the gluteal and thigh regions. There were no significant differences in the variances between groups.
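
For readers who want to reproduce this kind of test, here is a minimal sketch of a Spearman correlation in Python, assuming SciPy is available; the two arrays are invented placeholders, not the study's measurements.

```python
# Illustrative only: invented values, not the study's measurements.
from scipy.stats import spearmanr

fat_mass_pct = [28.1, 31.4, 25.9, 35.2, 29.8, 33.0]         # % fat (densitometry)
tissue_thickness_mm = [14.2, 18.5, 12.1, 16.9, 15.3, 19.7]  # US thickness (mm)

rho, p_value = spearmanr(fat_mass_pct, tissue_thickness_mm)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
# A correlation would only be accepted if p fell at or below the
# study's threshold (p <= 0.005).
```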

Ultrasonography is sensitive enough to evaluate subcutaneous tissue, as well as any deformations within it. We concluded that the percentage of fat measured by bone densitometry does not correlate with subcutaneous tissue thickness: the percentage of fat was higher in overweight women, but subcutaneous tissue thickness varied from woman to woman. The only relationship observed was between bone mineral density and the subcutaneous tissue thickness of the right gluteus, suggesting that the greater the subcutaneous tissue thickness, the greater the bone mineral density; although structurally different, both tissues respond to stimuli of atrophy and hypertrophy.

Subcutaneous tissue, Ultrasound, DEXA.


Frequently asked questions

What's the difference between a bibliography and a reference list?

Though the terms are sometimes used interchangeably, there is a difference in meaning:

  • A reference list only includes sources cited in the text – every entry corresponds to an in-text citation.
  • A bibliography also includes other sources which were consulted during the research but not cited.

Frequently asked questions: Knowledge Base

Methodology refers to the overarching strategy and rationale of your research. Developing your methodology involves studying the research methods used in your field and the theories or principles that underpin them, in order to choose the approach that best matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. interviews, experiments, surveys, statistical tests).

In a dissertation or scientific paper, the methodology chapter or methods section comes after the introduction and before the results, discussion and conclusion.

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research , you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a dissertation, thesis, research paper, or proposal.

The literature review usually comes near the beginning of your dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

Harvard referencing uses an author–date system. Sources are cited by the author’s last name and the publication year in brackets. Each Harvard in-text citation corresponds to an entry in the alphabetised reference list at the end of the paper.

Vancouver referencing uses a numerical system. Sources are cited by a number in parentheses or superscript. Each number corresponds to a full reference at the end of the paper.

Harvard style:

  • In-text citation: Each referencing style has different rules (Pears and Shields, 2019).
  • Reference list: Pears, R. and Shields, G. (2019) Cite them right: The essential referencing guide. 11th edn. London: MacMillan.

Vancouver style:

  • In-text citation: Each referencing style has different rules (1).
  • Reference list: 1. Pears R, Shields G. Cite them right: The essential referencing guide. 11th ed. London: MacMillan; 2019.

A Harvard in-text citation should appear in brackets every time you quote, paraphrase, or refer to information from a source.

The citation can appear immediately after the quotation or paraphrase, or at the end of the sentence. If you’re quoting, place the citation outside of the quotation marks but before any other punctuation like a comma or full stop.

In Harvard referencing, up to three author names are included in an in-text citation or reference list entry. When there are four or more authors, include only the first, followed by 'et al.' (a small code sketch of these rules follows the examples below).

  • 1 author – in-text citation: (Smith, 2014); reference list: Smith, T. (2014) …
  • 2 authors – in-text citation: (Smith and Jones, 2014); reference list: Smith, T. and Jones, F. (2014) …
  • 3 authors – in-text citation: (Smith, Jones and Davies, 2014); reference list: Smith, T., Jones, F. and Davies, S. (2014) …
  • 4+ authors – in-text citation: (Smith et al., 2014); reference list: Smith, T. et al. (2014) …
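
Because these author-count rules are mechanical, they can be expressed in a few lines of code. The helper below is our own hypothetical illustration (not a Scribbr tool) of the Harvard in-text citation rules above.

```python
def harvard_in_text(surnames: list[str], year: int) -> str:
    """Build a Harvard in-text citation following the author-count rules above."""
    if len(surnames) >= 4:
        names = f"{surnames[0]} et al."  # 4+ authors: first author plus 'et al.'
    elif len(surnames) == 1:
        names = surnames[0]
    else:  # 2 or 3 authors: list all, with 'and' before the last
        names = ", ".join(surnames[:-1]) + " and " + surnames[-1]
    return f"({names}, {year})"

print(harvard_in_text(["Smith"], 2014))                            # (Smith, 2014)
print(harvard_in_text(["Smith", "Jones", "Davies"], 2014))         # (Smith, Jones and Davies, 2014)
print(harvard_in_text(["Smith", "Jones", "Davies", "Lee"], 2014))  # (Smith et al., 2014)
```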

A bibliography should always contain every source you cited in your text. Sometimes a bibliography also contains other sources that you used in your research, but did not cite in the text.

MHRA doesn’t specify a rule about this, so check with your supervisor to find out exactly what should be included in your bibliography.

Footnote numbers should appear in superscript (e.g. 11 ). You can use the ‘Insert footnote’ button in Word to do this automatically; it’s in the ‘References’ tab at the top.

Footnotes always appear after the quote or paraphrase they relate to. MHRA generally recommends placing footnote numbers at the end of the sentence, immediately after any closing punctuation, like this. 12

In situations where this might be awkward or misleading, such as a long sentence containing multiple quotations, footnotes can also be placed at the end of a clause mid-sentence, like this; 13 note that they still come after any punctuation.

When a source has two or three authors, name all of them in your MHRA references . When there are four or more, use only the first name, followed by ‘and others’:

  • 1 author – footnote: David Smith; bibliography: Smith, David
  • 2 authors – footnote: David Smith and Hugh Jones; bibliography: Smith, David, and Hugh Jones
  • 3 authors – footnote: David Smith, Hugh Jones and Emily Wright; bibliography: Smith, David, Hugh Jones and Emily Wright
  • 4+ authors – footnote: David Smith and others; bibliography: Smith, David, and others

Note that in the bibliography, only the author listed first has their name inverted. The names of additional authors and those of translators or editors are written normally.

A citation should appear wherever you use information or ideas from a source, whether by quoting or paraphrasing its content.

In Vancouver style , you have some flexibility about where the citation number appears in the sentence – usually directly after mentioning the author’s name is best, but simply placing it at the end of the sentence is an acceptable alternative, as long as it’s clear what it relates to.

In Vancouver style , when you refer to a source with multiple authors in your text, you should only name the first author followed by ‘et al.’. This applies even when there are only two authors.

In your reference list, include up to six authors. For sources with seven or more authors, list the first six followed by ‘et al.’.

The words 'dissertation' and 'thesis' both refer to a large written research project undertaken to complete a degree, but they are used differently depending on the country:

  • In the UK, you write a dissertation at the end of a bachelor’s or master’s degree, and you write a thesis to complete a PhD.
  • In the US, it’s the other way around: you may write a thesis at the end of a bachelor’s or master’s degree, and you write a dissertation to complete a PhD.

The main difference is in terms of scale – a dissertation is usually much longer than the other essays you complete during your degree.

Another key difference is that you are given much more independence when working on a dissertation. You choose your own dissertation topic , and you have to conduct the research and write the dissertation yourself (with some assistance from your supervisor).

Dissertation word counts vary widely across different fields, institutions, and levels of education:

  • An undergraduate dissertation is typically 8,000–15,000 words
  • A master’s dissertation is typically 12,000–50,000 words
  • A PhD thesis is typically book-length: 70,000–100,000 words

However, none of these are strict guidelines – your word count may be lower or higher than the numbers stated here. Always check the guidelines provided by your university to determine how long your own dissertation should be.

At the bachelor’s and master’s levels, the dissertation is usually the main focus of your final year. You might work on it (alongside other classes) for the entirety of the final year, or for the last six months. This includes formulating an idea, doing the research, and writing up.

A PhD thesis takes longer, as the thesis is the main focus of the degree. A PhD thesis may be formulated and worked on for the entire four years of the degree programme. The writing process alone can take around 18 months.

References should be included in your text whenever you use words, ideas, or information from a source. A source can be anything from a book or journal article to a website or YouTube video.

If you don’t acknowledge your sources, you can get in trouble for plagiarism .

Your university should tell you which referencing style to follow. If you’re unsure, check with a supervisor. Commonly used styles include:

  • Harvard referencing, the most commonly used style in UK universities.
  • MHRA, used in humanities subjects.
  • APA, used in the social sciences.
  • Vancouver, used in biomedicine.
  • OSCOLA, used in law.

Your university may have its own referencing style guide.

If you are allowed to choose which style to follow, we recommend Harvard referencing, as it is a straightforward and widely used style.

To avoid plagiarism , always include a reference when you use words, ideas or information from a source. This shows that you are not trying to pass the work of others off as your own.

You must also properly quote or paraphrase the source. If you’re not sure whether you’ve done this correctly, you can use the Scribbr Plagiarism Checker to find and correct any mistakes.

In Harvard style , when you quote directly from a source that includes page numbers, your in-text citation must include a page number. For example: (Smith, 2014, p. 33).

You can also include page numbers to point the reader towards a passage that you paraphrased . If you refer to the general ideas or findings of the source as a whole, you don’t need to include a page number.

When you want to use a quote but can't access the original source, you can cite it indirectly. In the in-text citation, first mention the source you want to refer to, and then the source in which you found it. For example: (Smith, 2014, cited in Jones, 2019).

It’s advisable to avoid indirect citations wherever possible, because they suggest you don’t have full knowledge of the sources you’re citing. Only use an indirect citation if you can’t reasonably gain access to the original source.

In Harvard style referencing , to distinguish between two sources by the same author that were published in the same year, you add a different letter after the year for each source:

  • (Smith, 2019a)
  • (Smith, 2019b)

Add ‘a’ to the first one you cite, ‘b’ to the second, and so on. Do the same in your bibliography or reference list .

To create a hanging indent for your bibliography or reference list :

  • Highlight all the entries
  • Click on the arrow in the bottom-right corner of the ‘Paragraph’ tab in the top menu.
  • In the pop-up window, under ‘Special’ in the ‘Indentation’ section, use the drop-down menu to select ‘Hanging’.
  • Then close the window with ‘OK’.

It’s important to assess the reliability of information found online. Look for sources from established publications and institutions with expertise (e.g. peer-reviewed journals and government agencies).

The CRAAP test (currency, relevance, authority, accuracy, purpose) can aid you in assessing sources, as can our list of credible sources . You should generally avoid citing websites like Wikipedia that can be edited by anyone – instead, look for the original source of the information in the “References” section.

You can generally omit page numbers in your in-text citations of online sources which don’t have them. But when you quote or paraphrase a specific passage from a particularly long online source, it’s useful to find an alternate location marker.

For text-based sources, you can use paragraph numbers (e.g. ‘para. 4’) or headings (e.g. ‘under “Methodology”’). With video or audio sources, use a timestamp (e.g. ‘10:15’).

In the acknowledgements of your thesis or dissertation, you should first thank those who helped you academically or professionally, such as your supervisor, funders, and other academics.

Then you can include personal thanks to friends, family members, or anyone else who supported you during the process.

Yes, it’s important to thank your supervisor(s) in the acknowledgements section of your thesis or dissertation .

Even if you feel your supervisor did not contribute greatly to the final product, you should still acknowledge them, if only with a very brief thank you. If you do not include your supervisor, it may be seen as a snub.

The acknowledgements are generally included at the very beginning of your thesis or dissertation, directly after the title page and before the abstract .

In a thesis or dissertation, the acknowledgements should usually be no longer than one page. There is no minimum length.

You may acknowledge God in your thesis or dissertation acknowledgements , but be sure to follow academic convention by also thanking the relevant members of academia, as well as family, colleagues, and friends who helped you.

All level 1 and 2 headings should be included in your table of contents . That means the titles of your chapters and the main sections within them.

The contents should also include all appendices and the lists of tables and figures, if applicable, as well as your reference list .

Do not include the acknowledgements or abstract   in the table of contents.

To automatically insert a table of contents in Microsoft Word, follow these steps:

  • Apply heading styles throughout the document.
  • In the references section in the ribbon, locate the Table of Contents group.
  • Click the arrow next to the Table of Contents icon and select Custom Table of Contents.
  • Select which levels of headings you would like to include in the table of contents.

Make sure to update your table of contents if you move text or change headings. To update, simply right click and select Update Field.

The table of contents in a thesis or dissertation always goes between your abstract and your introduction.

An abbreviation is a shortened version of an existing word, such as Dr for Doctor. In contrast, an acronym uses the first letter of each word to create a wholly new word, such as UNESCO (an acronym for the United Nations Educational, Scientific and Cultural Organization).

Your dissertation sometimes contains a list of abbreviations .

As a rule of thumb, write the explanation in full the first time you use an acronym or abbreviation. You can then proceed with the shortened version. However, if the abbreviation is very common (like UK or PC), then you can just use the abbreviated version straight away.

Be sure to add each abbreviation in your list of abbreviations !

If you only used a few abbreviations in your thesis or dissertation, you don’t necessarily need to include a list of abbreviations .

If your abbreviations are numerous, or if you think they won’t be known to your audience, it’s never a bad idea to add one. They can also improve readability, minimising confusion about abbreviations unfamiliar to your reader.

A list of abbreviations is a list of all the abbreviations you used in your thesis or dissertation. It should appear at the beginning of your document, immediately after your table of contents . It should always be in alphabetical order.

Fishbone diagrams have a few different names that are used interchangeably, including herringbone diagram, cause-and-effect diagram, and Ishikawa diagram.

These are all ways to refer to the same thing – a problem-solving approach that uses a fish-shaped diagram to model possible root causes of problems and troubleshoot solutions.

Fishbone diagrams (also called herringbone diagrams, cause-and-effect diagrams, and Ishikawa diagrams) are most popular in fields of quality management. They are also commonly used in nursing and healthcare, or as a brainstorming technique for students.

Some synonyms and near synonyms of among include:

  • In the company of
  • In the middle of
  • Surrounded by

Some synonyms and near synonyms of between  include:

  • In the space separating
  • In the time separating

In spite of is a preposition used to mean 'regardless of', 'notwithstanding', or 'even though'.

It's always used in a subordinate clause to contrast with the information given in the main clause of a sentence (e.g., 'Amy continued to watch TV, in spite of the time').

Despite is a preposition used to mean 'regardless of', 'notwithstanding', or 'even though'.

It's used in a subordinate clause to contrast with information given in the main clause of a sentence (e.g., 'Despite the stress, Joe loves his job').

‘Log in’ is a phrasal verb meaning ‘connect to an electronic device, system, or app’. The preposition ‘to’ is often used directly after the verb; ‘in’ and ‘to’ should be written as two separate words (e.g., ‘ log in to the app to update privacy settings’).

‘Log into’ is sometimes used instead of ‘log in to’, but this is generally considered incorrect (as is ‘login to’).

Some synonyms and near synonyms of ensure include:

  • Make certain

Some synonyms and near synonyms of assure include:

  • Reassure
  • Convince
  • Persuade

Rest assured is an expression meaning 'you can be certain' (e.g., 'Rest assured, I will find your cat'). 'Assured' is the adjectival form of the verb assure, meaning 'convince' or 'persuade'.

Some synonyms and near synonyms for council include:

  • Assembly
  • Committee
  • Board

There are numerous synonyms and near synonyms for the two meanings of counsel:

  • As a verb: direct, guide, instruct
  • As a noun: direction, guidance, instruction

AI writing tools can be used to perform a variety of tasks.

Generative AI writing tools (like ChatGPT ) generate text based on human inputs and can be used for interactive learning, to provide feedback, or to generate research questions or outlines.

These tools can also be used to paraphrase or summarise text or to identify grammar and punctuation mistakes. You can also use Scribbr's free paraphrasing tool, summarising tool, and grammar checker, which are designed specifically for these purposes.

Using AI writing tools (like ChatGPT ) to write your essay is usually considered plagiarism and may result in penalisation, unless it is allowed by your university. Text generated by AI tools is based on existing texts and therefore cannot provide unique insights. Furthermore, these outputs sometimes contain factual inaccuracies or grammar mistakes.

However, AI writing tools can be used effectively as a source of feedback and inspiration for your writing (e.g., to generate research questions ). Other AI tools, like grammar checkers, can help identify and eliminate grammar and punctuation mistakes to enhance your writing.

The Scribbr Knowledge Base is a collection of free resources to help you succeed in academic research, writing, and citation. Every week, we publish helpful step-by-step guides, clear examples, simple templates, engaging videos, and more.

The Knowledge Base is for students at all levels. Whether you’re writing your first essay, working on your bachelor’s or master’s dissertation, or getting to grips with your PhD research, we’ve got you covered.

As well as the Knowledge Base, Scribbr provides many other tools and services to support you in academic writing and citation:

  • Create your citations and manage your reference list with our free Reference Generators in APA and MLA style.
  • Scan your paper for in-text citation errors and inconsistencies with our innovative APA Citation Checker .
  • Avoid accidental plagiarism with our reliable Plagiarism Checker .
  • Polish your writing and get feedback on structure and clarity with our Proofreading & Editing services .

Yes! We’re happy for educators to use our content, and we’ve even adapted some of our articles into ready-made lecture slides .

You are free to display, distribute, and adapt Scribbr materials in your classes or upload them in private learning environments like Blackboard. We only ask that you credit Scribbr for any content you use.

We’re always striving to improve the Knowledge Base. If you have an idea for a topic we should cover, or you notice a mistake in any of our articles, let us know by emailing [email protected] .

The consequences of plagiarism vary depending on the type of plagiarism and the context in which it occurs. For example, submitting a whole paper by someone else will have the most severe consequences, while accidental citation errors are considered less serious.

If you’re a student, then you might fail the course, be suspended or expelled, or be obligated to attend a workshop on plagiarism. It depends on whether it’s your first offence or you’ve done it before.

As an academic or professional, plagiarising seriously damages your reputation. You might also lose your research funding or your job, and you could even face legal consequences for copyright infringement.

Paraphrasing without crediting the original author is a form of plagiarism , because you’re presenting someone else’s ideas as if they were your own.

However, paraphrasing is not plagiarism if you correctly reference the source. This means including an in-text citation and a full reference, formatted according to your required citation style (e.g., Harvard, Vancouver).

As well as referencing your source, make sure that any paraphrased text is completely rewritten in your own words.

Accidental plagiarism is one of the most common examples of plagiarism . Perhaps you forgot to cite a source, or paraphrased something a bit too closely. Maybe you can’t remember where you got an idea from, and aren’t totally sure if it’s original or not.

These all count as plagiarism, even though you didn't do it on purpose. When in doubt, make sure you're citing your sources. Also consider running your work through a plagiarism checker prior to submission; such tools use advanced database software to scan for matches between your text and existing texts.

Scribbr’s Plagiarism Checker takes less than 10 minutes and can help you turn in your paper with confidence.

The accuracy depends on the plagiarism checker you use. Per our in-depth research , Scribbr is the most accurate plagiarism checker. Many free plagiarism checkers fail to detect all plagiarism or falsely flag text as plagiarism.

Plagiarism checkers work by using advanced database software to scan for matches between your text and existing texts. Their accuracy is determined by two factors: the algorithm (which recognises the plagiarism) and the size of the database (with which your document is compared).

To avoid plagiarism when summarising an article or other source, follow these two rules:

  • Write the summary entirely in your own words by   paraphrasing the author’s ideas.
  • Reference the source with an in-text citation and a full reference so your reader can easily find the original text.

Plagiarism can be detected by your professor or readers if the tone, formatting, or style of your text is different in different parts of your paper, or if they’re familiar with the plagiarised source.

Many universities also use   plagiarism detection software like Turnitin’s, which compares your text to a large database of other sources, flagging any similarities that come up.

It can be easier than you think to commit plagiarism by accident. Consider using a   plagiarism checker prior to submitting your essay to ensure you haven’t missed any citations.

Some examples of plagiarism include:

  • Copying and pasting a Wikipedia article into the body of an assignment
  • Quoting a source without including a citation
  • Not paraphrasing a source properly (e.g. maintaining wording too close to the original)
  • Forgetting to cite the source of an idea

The most surefire way to   avoid plagiarism is to always cite your sources . When in doubt, cite!

Global plagiarism means taking an entire work written by someone else and passing it off as your own. This can include getting someone else to write an essay or assignment for you, or submitting a text you found online as your own work.

Global plagiarism is one of the most serious types of plagiarism because it involves deliberately and directly lying about the authorship of a work. It can have severe consequences for students and professionals alike.

Verbatim plagiarism means copying text from a source and pasting it directly into your own document without giving proper credit.

If the structure and the majority of the words are the same as in the original source, then you are committing verbatim plagiarism. This is the case even if you delete a few words or replace them with synonyms.

If you want to use an author’s exact words, you need to quote the original source by putting the copied text in quotation marks and including an   in-text citation .

Patchwork plagiarism , also called mosaic plagiarism, means copying phrases, passages, or ideas from various existing sources and combining them to create a new text. This includes slightly rephrasing some of the content, while keeping many of the same words and the same structure as the original.

While this type of plagiarism is more insidious than simply copying and pasting directly from a source, plagiarism checkers like Turnitin’s can still easily detect it.

To avoid plagiarism in any form, remember to reference your sources .

Yes, reusing your own work without citation is considered self-plagiarism . This can range from resubmitting an entire assignment to reusing passages or data from something you’ve handed in previously.

Self-plagiarism often has the same consequences as other types of plagiarism . If you want to reuse content you wrote in the past, make sure to check your university’s policy or consult your professor.

If you are reusing content or data you used in a previous assignment, make sure to cite yourself. You can cite yourself the same way you would cite any other source: simply follow the directions for the citation style you are using.

Keep in mind that reusing prior content can be considered self-plagiarism , so make sure you ask your instructor or consult your university’s handbook prior to doing so.

Most institutions have an internal database of previously submitted student assignments. Turnitin can check for self-plagiarism by comparing your paper against this database. If you’ve reused parts of an assignment you already submitted, it will flag any similarities as potential plagiarism.

Online plagiarism checkers don’t have access to your institution’s database, so they can’t detect self-plagiarism of unpublished work. If you’re worried about accidentally self-plagiarising, you can use Scribbr’s Self-Plagiarism Checker to upload your unpublished documents and check them for similarities.

Plagiarism has serious consequences and can be illegal in certain scenarios.

While most of the time plagiarism in an undergraduate setting is not illegal, plagiarism or self-plagiarism in a professional academic setting can lead to legal action, including copyright infringement and fraud. Many scholarly journals do not allow you to submit the same work to more than one journal, and if you do not credit a coauthor, you could be legally defrauding them.

Even if you aren’t breaking the law, plagiarism can seriously impact your academic career. While the exact consequences of plagiarism vary by institution and severity, common consequences include a lower grade, automatically failing a course, academic suspension or probation, and even expulsion.

Self-plagiarism means recycling work that you’ve previously published or submitted as an assignment. It’s considered academic dishonesty to present something as brand new when you’ve already gotten credit and perhaps feedback for it in the past.

If you want to refer to ideas or data from previous work, be sure to cite yourself.

Academic integrity means being honest, ethical, and thorough in your academic work. To maintain academic integrity, you should avoid misleading your readers about any part of your research and refrain from offences like plagiarism and contract cheating, which are examples of academic misconduct.

Academic dishonesty refers to deceitful or misleading behavior in an academic setting. Academic dishonesty can occur intentionally or unintentionally, and it varies in severity.

It can encompass paying for a pre-written essay, cheating on an exam, or committing plagiarism . It can also include helping others cheat, copying a friend’s homework answers, or even pretending to be sick to miss an exam.

Academic dishonesty doesn’t just occur in a classroom setting, but also in research and other academic-adjacent fields.

Consequences of academic dishonesty depend on the severity of the offence and your institution’s policy. They can range from a warning for a first offence to a failing grade in a course to expulsion from your university.

For those in certain fields, such as nursing, engineering, or lab sciences, not learning fundamentals properly can directly impact the health and safety of others. For those working in academia or research, academic dishonesty impacts your professional reputation, leading others to doubt your future work.

Academic dishonesty can be intentional or unintentional, ranging from something as simple as claiming to have read something you didn’t to copying your neighbour’s answers on an exam.

You can commit academic dishonesty with the best of intentions, such as helping a friend cheat on a paper. Severe academic dishonesty can include buying a pre-written essay or the answers to a multiple-choice test, or falsifying a medical emergency to avoid taking a final exam.

Plagiarism means presenting someone else’s work as your own without giving proper credit to the original author. In academic writing, plagiarism involves using words, ideas, or information from a source without including a citation .

Plagiarism can have serious consequences , even when it’s done accidentally. To avoid plagiarism, it’s important to keep track of your sources and cite them correctly.

Common knowledge does not need to be cited. However, you should be extra careful when deciding what counts as common knowledge.

Common knowledge encompasses information that the average educated reader would accept as true without needing the extra validation of a source or citation.

Common knowledge should be widely known, undisputed, and easily verified. When in doubt, always cite your sources.

Most online plagiarism checkers only have access to public databases, so their software doesn't allow you to compare two of your own documents for plagiarism.

However, in addition to our Plagiarism Checker, Scribbr also offers a Self-Plagiarism Checker. This is an add-on tool that lets you compare your paper with unpublished or private documents. This way you can rest assured that you haven't unintentionally plagiarised or self-plagiarised.


The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organise your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organisation to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
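
To make "how likely a pattern could have arisen by chance" concrete, here is a minimal sketch of a two-group permutation test, one common form of hypothesis test; all numbers are invented for illustration.

```python
import random
from statistics import fmean

# Invented example data: outcome scores for a treatment and a control group.
treatment = [12.1, 14.3, 11.8, 15.0, 13.6]
control = [10.2, 11.1, 12.4, 9.8, 10.9]

observed = fmean(treatment) - fmean(control)

pooled = treatment + control
n_treat = len(treatment)
n_perms = 10_000
extreme = 0
for _ in range(n_perms):
    random.shuffle(pooled)                  # relabel the groups at random
    diff = fmean(pooled[:n_treat]) - fmean(pooled[n_treat:])
    if abs(diff) >= abs(observed):          # at least as extreme as observed?
        extreme += 1

p_value = extreme / n_perms                 # chance of seeing this pattern by luck
print(f"observed difference = {observed:.2f}, p ≈ {p_value:.4f}")
```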

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analysed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analysed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualise your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analysed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

An observational study could be a good fit for your research if your research question is based on things you observe. If you have ethical, logistical, or practical concerns that make an experimental design challenging, consider an observational study. Remember that in an observational study, it is critical that there be no interference or manipulation of the research subjects. Since it’s not an experiment, there are no control or treatment groups either.

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

Exploratory research explores the main aspects of a new or barely researched question.

Explanatory research explains the causes and effects of an already widely researched question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analysing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Triangulation can help:

  • Reduce bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labour-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference between this and a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
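
A minimal sketch of random assignment using only the Python standard library; the participant IDs are placeholders.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 placeholder IDs
random.shuffle(participants)                        # randomise the order

half = len(participants) // 2
control_group = participants[:half]    # after shuffling, each participant has
treatment_group = participants[half:]  # an equal chance of landing in either group

print("Control:", control_group)
print("Treatment:", treatment_group)
```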

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
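
Combining each level of one variable with each level of the other is a Cartesian product, so the conditions are easy to enumerate in code; the factors below are invented examples.

```python
from itertools import product

# Invented 2 x 3 factorial design.
caffeine = ["placebo", "200 mg"]   # independent variable 1
sleep = ["4 h", "6 h", "8 h"]      # independent variable 2

conditions = list(product(caffeine, sleep))  # every combination of levels
for condition in conditions:
    print(condition)
print(len(conditions), "conditions")  # 2 levels x 3 levels = 6 conditions
```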

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling , and quota sampling .

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from county to city to neighbourhood) to create a sample that’s less expensive and time-consuming to collect data from.
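
A minimal sketch of this hierarchical idea with nested random draws; the frame below (counties, cities, neighbourhoods) is an invented placeholder.

```python
import random

# Invented hierarchical sampling frame: county -> cities -> neighbourhoods.
frame = {
    "County A": {"City A1": ["N1", "N2", "N3"], "City A2": ["N4", "N5"]},
    "County B": {"City B1": ["N6", "N7"], "City B2": ["N8", "N9", "N10"]},
}

county = random.choice(list(frame))                 # stage 1: sample a county
city = random.choice(list(frame[county]))           # stage 2: a city within it
neighbourhood = random.choice(frame[county][city])  # stage 3: a neighbourhood
print(county, ">", city, ">", neighbourhood)
```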

Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data are then collected from as large a percentage as possible of this random subset.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity. However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.
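
Given such a list, simple random sampling takes a single standard-library call; the population below is a placeholder.

```python
import random

population = list(range(1, 1001))        # placeholder frame of 1,000 members
sample = random.sample(population, 100)  # each member has an equal chance
print(sample[:10])
```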

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling: single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample (a minimal single-stage example is sketched in code after the list below).

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
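
A minimal sketch of the single-stage case, with invented school clusters:

```python
import random

# Invented clusters: each school is a cluster of students.
schools = {
    "School A": ["a1", "a2", "a3"],
    "School B": ["b1", "b2"],
    "School C": ["c1", "c2", "c3", "c4"],
    "School D": ["d1", "d2"],
}

chosen = random.sample(list(schools), k=2)  # randomly select clusters
sample = [student for school in chosen for student in schools[school]]
print(chosen, sample)  # single-stage: every unit in the chosen clusters
```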

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

In multistage sampling , you can use probability or non-probability sampling methods.

For a probability sample, you have to use probability sampling at every stage. You can mix it up by using simple random sampling, systematic sampling, or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method .

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 × 5 = 15 subgroups.
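
The subgroup bookkeeping is easy to mirror in code. Here is a minimal sketch that crosses the two characteristics and then samples within each stratum, using invented participants.

```python
import random
from itertools import product

locations = ["urban", "rural", "suburban"]
marital = ["single", "divorced", "widowed", "married", "partnered"]

strata = list(product(locations, marital))
print(len(strata), "strata")  # 3 x 5 = 15 mutually exclusive subgroups

# Invented frame: 300 participants, each tagged with one stratum.
people = [(f"P{i}", random.choice(strata)) for i in range(300)]

# Randomly sample up to 2 participants from every stratum.
for stratum in strata:
    members = [pid for pid, s in people if s == stratum]
    print(stratum, random.sample(members, k=min(2, len(members))))
```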

There are three key steps in systematic sampling (a minimal code sketch follows the list):

  • Define and list your population, ensuring that it is not ordered in a cyclical or periodic way.
  • Decide on your sample size and calculate your interval, k, by dividing the population size by your target sample size.
  • Choose every kth member of the population as your sample.
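
A sketch of these three steps, with a placeholder population list:

```python
import random

population = [f"P{i:03d}" for i in range(1, 501)]  # placeholder list of 500 members
sample_size = 50
k = len(population) // sample_size                 # step 2: interval k = 500 / 50 = 10

start = random.randrange(k)    # random start within the first interval
sample = population[start::k]  # step 3: every kth member
print(len(sample), sample[:5])
```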

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

A sampling error is the difference between a population parameter and a sample statistic .

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction, and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

Attrition bias is a threat to internal validity . In experiments, differential rates of attrition between treatment and control groups can skew results.

This bias can affect the relationship between your independent and dependent variables . It can make variables appear to be correlated when they are not, or vice versa.

The external validity of a study is the extent to which you can generalise your findings to different groups of people, situations, and measures.

The two types of external validity are population validity (whether you can generalise to other groups of people) and ecological validity (whether you can generalise to other situations and settings).

There are seven threats to external validity: selection bias, history, the experimenter effect, the Hawthorne effect, the testing effect, aptitude–treatment interaction, and the situation effect.

Attrition bias can skew your sample so that your final sample differs significantly from your original sample. Your sample is biased because some groups from your population are underrepresented.

With a biased final sample, you may not be able to generalise your findings to the original population that you sampled from, so your external validity is compromised.

Construct validity is about how well a test measures the concept it was designed to evaluate. It's one of four types of measurement validity, which include construct validity, face validity, content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity: The extent to which your measure is unrelated or negatively related to measures of distinct constructs

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity, and criterion validity to achieve construct validity.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
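As a sketch of what such a check can look like in practice, the following Python snippet simulates scores on a new measure, an established test of a related construct, and a test of a distinct construct, then computes the two correlations. All data and names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
new_measure = rng.normal(size=200)                             # scores on your new test
related_test = 0.8 * new_measure + 0.6 * rng.normal(size=200)  # test of a related construct
distinct_test = rng.normal(size=200)                           # test of a distinct construct

convergent_r = np.corrcoef(new_measure, related_test)[0, 1]    # should be strongly positive
discriminant_r = np.corrcoef(new_measure, distinct_test)[0, 1] # should be close to zero
```

A high convergent correlation alongside a near-zero discriminant correlation would support construct validity; the thresholds applied in real research depend on your field’s conventions.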

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalisation : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalisation: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it ‘depends’ on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called ‘independent’ because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation)

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

On graphs, the explanatory variable is conventionally placed on the x -axis, while the response variable is placed on the y -axis.

  • If both your variables are quantitative, use a scatterplot or a line graph (as in the sketch below).
  • If your explanatory variable is categorical, use a bar graph.
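Here is a minimal matplotlib sketch of this convention for two quantitative variables; the data are invented for the example:

```python
import matplotlib.pyplot as plt

hours_studied = [1, 2, 3, 4, 5, 6]       # explanatory variable on the x-axis
exam_scores = [52, 58, 61, 70, 74, 79]   # response variable on the y-axis

plt.scatter(hours_studied, exam_scores)  # both variables are quantitative: scatterplot
plt.xlabel("Hours studied (explanatory)")
plt.ylabel("Exam score (response)")
plt.show()
```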

The term ‘explanatory variable’ is sometimes preferred over ‘independent variable’ because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so ‘explanatory variables’ is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

There are 4 main types of extraneous variables:

  • Demand characteristics: Environmental cues that encourage participants to conform to researchers’ expectations
  • Experimenter effects: Unintentional actions by researchers that influence study outcomes
  • Situational variables: Environmental variables that alter participants’ behaviours
  • Participant variables: Any characteristic or aspect of a participant’s background that could affect study results

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

‘Controlling for a variable’ means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs. That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
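A minimal sketch of this approach, assuming simulated data and using the statsmodels formula interface, might look like this (the variable names and effect sizes are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(20, 60, n)                             # control variable
exercise = rng.uniform(0, 10, n)                         # independent variable
health = 2 * exercise - 0.5 * age + rng.normal(0, 3, n)  # dependent variable

data = pd.DataFrame({"health": health, "exercise": exercise, "age": age})
model = smf.ols("health ~ exercise + age", data=data).fit()  # age entered as a control
print(model.params["exercise"])  # exercise effect, isolated from the effect of age
```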

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

In statistics, ordinal and nominal variables are both considered categorical variables .

Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalisation .

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control, and randomisation.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomisation , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

You can include more than one independent or dependent variable in a study, but doing so requires multiple research questions.

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No – a variable cannot be both independent and dependent at the same time. The value of a dependent variable depends on an independent variable, so a variable must be either the cause or the effect, not both.

You want to find out how blood sugar levels are affected by drinking diet cola and regular cola, so you conduct an experiment .

  • The type of cola – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of cola.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g., the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g., water volume or weight).

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable:

  • It’s caused by the independent variable
  • It influences the dependent variable
  • When it’s statistically accounted for, the relationship between the independent and dependent variables weakens compared to when it isn’t considered

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods ).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyse your data quickly and efficiently
  • Your research question depends on strong parity between participants, with environmental conditions held constant

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualise your initial thoughts and hypotheses
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favourably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias in research can also occur in observations if the participants know they’re being observed. They might alter their behaviour accordingly.

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups . Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with ‘yes’ or ‘no’ (questions that start with ‘why’ or ‘how’ are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions. The Pearson product-moment correlation coefficient (Pearson’s r) is commonly used to assess a linear relationship between two quantitative variables.
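For example, Pearson’s r can be computed with scipy; the data here are hypothetical:

```python
from scipy.stats import pearsonr

hours = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [50, 55, 61, 60, 70, 74, 73, 82]

r, p_value = pearsonr(hours, scores)  # r: strength and direction; p: significance
```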

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

  • Longitudinal study: Repeated observations of the same sample over an extended period; follows changes in participants over time.
  • Cross-sectional study: Observations of different groups (a ‘cross-section’ of the population) at a single point in time; provides a snapshot of society at a given point.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyse behaviour over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data are available for analysis; other times your research question may only require a cross-sectional study to answer it.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.
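As a small illustration of how individual Likert-type responses are combined into an overall scale score, consider this sketch (the responses are invented):

```python
# Each row: one participant's answers to five Likert-type items
# (1 = strongly disagree ... 5 = strongly agree)
responses = [
    [4, 5, 3, 4, 4],
    [2, 1, 2, 3, 2],
    [5, 5, 4, 5, 4],
]

scale_scores = [sum(items) for items in responses]  # combined score per participant
print(scale_scores)  # [20, 10, 23]
```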

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

A true experiment (aka a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment
  • Random assignment of participants to ensure the groups are equivalent

Depending on your study topic, there are various other methods of controlling variables .

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

Naturalistic observation is a qualitative research method where you record the behaviours of your research subjects in real-world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as ‘people watching’ with a purpose.

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

You can use several tactics to minimise observer bias .

  • Use masking (blinding) to hide the purpose of your study from all observers.
  • Triangulate your data with different data collection methods or sources.
  • Use multiple observers and ensure inter-rater reliability.
  • Train your observers to make sure data is consistently recorded between them.
  • Standardise your observation procedures to make sure they are structured and clear.

The observer-expectancy effect occurs when researchers influence the results of their own study through interactions with participants.

Researchers’ own beliefs and expectations about the study results may unintentionally influence participants through demand characteristics .

Observer bias occurs when a researcher’s expectations, opinions, or prejudices influence what they perceive or record in a study. It usually affects studies when observers are aware of the research aims or hypotheses. This type of research bias is also called detection bias or ascertainment bias .

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimise or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyse, detect, modify, or remove ‘dirty’ data to make your dataset ‘clean’. Data cleaning is also called data cleansing or data scrubbing.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimise the amount of data cleaning you’ll need to do.

After data collection, you can use data standardisation and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
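A minimal pandas sketch of these steps, using an invented dataset with an impossible value, inconsistent labels, a duplicate record, and a missing value:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "weight_kg": [71.2, 70.8, -5.0, 71.2, None],  # -5.0 cannot be a true weight
    "group":     ["A", "a", "B", "A", "B"],       # inconsistent labels for one group
})

df["group"] = df["group"].str.upper()              # standardise the group labels
df.loc[df["weight_kg"] < 0, "weight_kg"] = np.nan  # treat impossible values as missing
df = df.drop_duplicates()                          # remove duplicate records
df = df.dropna(subset=["weight_kg"])               # deal with remaining missing values
```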

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
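For instance, a simple lottery-style random assignment can be sketched in Python as follows (the sample size and group split are hypothetical):

```python
import random

random.seed(7)
sample = list(range(1, 41))    # a unique number for every member of the sample
random.shuffle(sample)         # the lottery: a random ordering
control_group = sample[:20]    # first half assigned to the control group
treatment_group = sample[20:]  # second half assigned to the experimental group
```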

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

Blinding is important to reduce bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behaviour in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analysing the data.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered highly credible sources due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor decides either to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • The peer review process then occurs: the reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The two main types of social desirability bias are:

  • Self-deceptive enhancement (self-deception): The tendency to see oneself in a favourable light without realising it.
  • Impression management (other-deception): The tendency to inflate one’s abilities or achievements in order to make a good impression on other people.

Demand characteristics are aspects of experiments that may give away the research objective to participants. Social desirability bias occurs when participants automatically try to respond in ways that make them seem likeable in a study, even if it means misrepresenting how they truly feel.

Participants may use demand characteristics to infer social norms or experimenter expectancies and act in socially desirable ways, so you should try to control for demand characteristics wherever possible.

Response bias refers to conditions or factors that take place during the process of responding to surveys, affecting the responses. One type of response bias is social desirability bias .

When your population is large in size, geographically dispersed, or difficult to contact, it’s necessary to use a sampling method .

This allows you to gather information from a smaller part of the population, i.e. the sample, and make accurate statements by using statistical analysis. A few sampling methods include simple random sampling , convenience sampling , and snowball sampling .

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection , using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves selecting whoever happens to be available, which means that not everyone has an equal chance of being selected, depending on the place, time, or day you are collecting your data.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
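To make the contrast concrete, here is a sketch of stratified sampling with proportional allocation; quota sampling would instead recruit conveniently within each subgroup until the quotas are met. The population and proportions are invented:

```python
import random

random.seed(3)
strata = {                                    # hypothetical population, grouped by subgroup
    "undergraduate": [f"u{i}" for i in range(300)],
    "postgraduate":  [f"p{i}" for i in range(100)],
}

sample = []
for members in strata.values():
    n = round(len(members) * 0.10)            # proportional allocation: 10% of each stratum
    sample.extend(random.sample(members, n))  # random draw within each subgroup
```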

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extra-marital affairs)

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones. 

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalisations – often the goal of quantitative research . As such, a snowball sample is not representative of the target population, and is usually a better fit for qualitative research .

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalysing the existing data in the same manner.
  • Replicating (or repeating) the research entails reconducting the entire analysis, including the collection of new data.
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

In short, construct validity has convergent and discriminant subtypes, which together help determine whether a test measures the intended construct.

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Construct validity refers to how well a test measures the concept (or construct) it was designed to measure. Assessing construct validity is especially important when you’re researching concepts that can’t be quantified and/or are intangible, like introversion. To ensure construct validity your test should be based on known indicators of introversion ( operationalisation ).

On the other hand, content validity assesses how well the test represents all aspects of the construct. If some aspects are missing or irrelevant parts are included, the test has low content validity.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analysing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test corresponds to a concrete external outcome, assessed either predictively (in the future) or concurrently (in the present).

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Attrition refers to participants leaving a study. It always happens to some extent – for example, in randomised control trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.

Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:

  • Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time.
  • Predictive validity is a validation strategy where the criterion variables are measured after the scores of the test.

Validity tells you how accurately a method measures what it was designed to measure. There are 4 main types of validity:

  • Construct validity: Does the test measure the construct it was designed to measure?
  • Face validity: Does the test appear to be suitable for its objectives?
  • Content validity: Does the test cover all relevant parts of the construct it aims to measure?
  • Criterion validity: Do the results accurately measure the concrete outcome they are designed to measure?

Convergent validity shows how much a measure of one construct aligns with other measures of the same or related constructs .

On the other hand, concurrent validity is about how a measure matches up to some known criterion or gold standard, which can be another measure.

Although both types of validity are established by calculating the association or correlation between a test score and another variable , they represent distinct validation methods.

The purpose of theory-testing mode is to find evidence in order to disprove, refine, or support a theory. As such, generalisability is not the aim of theory-testing mode.

Due to this, the priority of researchers in theory-testing mode is to eliminate alternative causes for relationships between variables . In other words, they prioritise internal validity over external validity , including ecological validity .

Inclusion and exclusion criteria are typically presented and discussed in the methodology section of your thesis or dissertation .

Inclusion and exclusion criteria are predominantly used in non-probability sampling . In purposive sampling and snowball sampling , restrictions apply as to who can be included in the sample .

Scope of research is determined at the beginning of your research process , prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.

Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation . A scope is needed for all types of research: quantitative , qualitative , and mixed methods .

To define your scope of research, consider the following:

  • Budget constraints or any specifics of grant funding
  • Your proposed timeline and duration
  • Specifics about your population of study, your proposed sample size , and the research methodology you’ll pursue
  • Any inclusion and exclusion criteria
  • Any anticipated control , extraneous , or confounding variables that could bias your research if not accounted for properly.

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Quantitative observations involve measuring or counting something and expressing the result in numerical form, while qualitative observations involve describing something in non-numerical terms, such as its appearance, texture, or colour.

The Scribbr Reference Generator is developed using the open-source Citation Style Language (CSL) project and Frank Bennett’s citeproc-js . It’s the same technology used by dozens of other popular citation tools, including Mendeley and Zotero.

You can find all the citation styles and locales used in the Scribbr Reference Generator in our publicly accessible repository on Github .

To paraphrase effectively, don’t just take the original sentence and swap out some of the words for synonyms. Instead, try:

  • Reformulating the sentence (e.g., change active to passive , or start from a different point)
  • Combining information from multiple sentences into one
  • Leaving out information from the original that isn’t relevant to your point
  • Using synonyms where they don’t distort the meaning

The main point is to ensure you don’t just copy the structure of the original text, but instead reformulate the idea in your own words.

Plagiarism means using someone else’s words or ideas and passing them off as your own. Paraphrasing means putting someone else’s ideas into your own words.

So when does paraphrasing count as plagiarism?

  • Paraphrasing is plagiarism if you don’t properly credit the original author.
  • Paraphrasing is plagiarism if your text is too close to the original wording (even if you cite the source). If you directly copy a sentence or phrase, you should quote it instead.
  • Paraphrasing  is not plagiarism if you put the author’s ideas completely into your own words and properly reference the source .

To present information from other sources in academic writing , it’s best to paraphrase in most cases. This shows that you’ve understood the ideas you’re discussing and incorporates them into your text smoothly.

It’s appropriate to quote when:

  • Changing the phrasing would distort the meaning of the original text
  • You want to discuss the author’s language choices (e.g., in literary analysis )
  • You’re presenting a precise definition
  • You’re looking in depth at a specific claim

A quote is an exact copy of someone else’s words, usually enclosed in quotation marks and credited to the original author or speaker.

Every time you quote a source , you must include a correctly formatted in-text citation . This looks slightly different depending on the citation style .

For example, a direct quote in APA is cited like this: ‘This is a quote’ (Streefkerk, 2020, p. 5).

Every in-text citation should also correspond to a full reference at the end of your paper.

In scientific subjects, the information itself is more important than how it was expressed, so quoting should generally be kept to a minimum. In the arts and humanities, however, well-chosen quotes are often essential to a good paper.

In social sciences, it varies. If your research is mainly quantitative , you won’t include many quotes, but if it’s more qualitative , you may need to quote from the data you collected .

As a general guideline, quotes should take up no more than 5–10% of your paper. If in doubt, check with your instructor or supervisor how much quoting is appropriate in your field.

If you’re quoting from a text that paraphrases or summarises other sources and cites them in parentheses , APA  recommends retaining the citations as part of the quote:

  • Smith states that ‘the literature on this topic (Jones, 2015; Sill, 2019; Paulson, 2020) shows no clear consensus’ (Smith, 2019, p. 4).

Footnote or endnote numbers that appear within quoted text should be omitted.

If you want to cite an indirect source (one you’ve only seen quoted in another source), either locate the original source or use the phrase ‘as cited in’ in your citation.

A block quote is a long quote formatted as a separate ‘block’ of text. Instead of using quotation marks , you place the quote on a new line, and indent the entire quote to mark it apart from your own words.

APA uses block quotes for quotes that are 40 words or longer.

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Common examples of primary sources include interview transcripts , photographs, novels, paintings, films, historical documents, and official statistics.

Anything you directly analyse or use as first-hand evidence can be a primary source, including qualitative or quantitative data that you collected yourself.

Common examples of secondary sources include academic books, journal articles , reviews, essays , and textbooks.

Anything that summarises, evaluates, or interprets primary sources can be a secondary source. If a source gives you an overview of background information or presents another researcher’s ideas on your topic, it is probably a secondary source.

To determine if a source is primary or secondary, ask yourself:

  • Was the source created by someone directly involved in the events you’re studying (primary), or by another researcher (secondary)?
  • Does the source provide original information (primary), or does it summarise information from other sources (secondary)?
  • Are you directly analysing the source itself (primary), or only using it for background information (secondary)?

Some types of sources are nearly always primary: works of art and literature, raw statistical data, official documents and records, and personal communications (e.g. letters, interviews ). If you use one of these in your research, it is probably a primary source.

Primary sources are often considered the most credible in terms of providing evidence for your argument, as they give you direct evidence of what you are researching. However, it’s up to you to ensure the information they provide is reliable and accurate.

Always make sure to properly cite your sources to avoid plagiarism .

A fictional movie is usually a primary source. A documentary can be either primary or secondary depending on the context.

If you are directly analysing some aspect of the movie itself – for example, the cinematography, narrative techniques, or social context – the movie is a primary source.

If you use the movie for background information or analysis about your topic – for example, to learn about a historical event or a scientific discovery – the movie is a secondary source.

Whether it’s primary or secondary, always properly cite the movie in the citation style you are using. Learn how to create an MLA movie citation or an APA movie citation.

Articles in newspapers and magazines can be primary or secondary depending on the focus of your research.

In historical studies, old articles are used as primary sources that give direct evidence about the time period. In social and communication studies, articles are used as primary sources to analyse language and social relations (for example, by conducting content analysis or discourse analysis).

If you are not analysing the article itself, but only using it for background information or facts about your topic, then the article is a secondary source.

In academic writing, there are three main situations where quoting is the best choice:

  • To analyse the author’s language (e.g., in a literary analysis essay)
  • To give evidence from primary sources
  • To accurately present a precise definition or argument

Don’t overuse quotes; your own voice should be dominant. If you just want to provide information from a source, it’s usually better to paraphrase or summarise.

Your list of tables and figures should go directly after your table of contents in your thesis or dissertation.

Lists of figures and tables are often not required, and they aren’t particularly common. They specifically aren’t required for APA Style, though you should be careful to follow their other guidelines for figures and tables.

If you have many figures and tables in your thesis or dissertation, including one may help you stay organised. Your educational institution may also require them, so be sure to check their guidelines.

Copyright information can usually be found wherever the table or figure was published. For example, for a diagram in a journal article, look on the journal’s website or the database where you found the article. Images found on sites like Flickr are listed with clear copyright information.

If you find that permission is required to reproduce the material, be sure to contact the author or publisher and ask for it.

A list of figures and tables compiles all of the figures and tables that you used in your thesis or dissertation and displays them with the page number where they can be found.

APA doesn’t require you to include a list of tables or a list of figures. However, it is advisable to do so if your text is long enough to feature a table of contents and it includes a lot of tables and/or figures.

A list of tables and list of figures appear (in that order) after your table of contents, and are presented in a similar way.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. Your glossary only needs to include terms that your reader may not be familiar with, and is intended to enhance their understanding of your work.

Definitional terms often fall into the category of common knowledge, meaning that they don’t necessarily have to be cited. This guidance can apply to your thesis or dissertation glossary as well.

However, if you’d prefer to cite your sources, you can follow guidance for citing dictionary entries in MLA or APA style for your glossary.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. In contrast, an index is a list of the contents of your work organised by page number.

Glossaries are not mandatory, but if you use a lot of technical or field-specific terms, it may improve readability to add one to your thesis or dissertation. Your educational institution may also require them, so be sure to check their specific guidelines.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. In contrast, dictionaries are more general collections of words.

The title page of your thesis or dissertation should include your name, department, institution, degree program, and submission date.

The title page of your thesis or dissertation goes first, before all other content or lists that you may choose to include.

Usually, no title page is needed in an MLA paper. A header is generally included at the top of the first page instead. The exceptions are when:

  • Your instructor requires one, or
  • Your paper is a group project

In those cases, you should use a title page instead of a header, listing the same information but on a separate page.

When you mention different chapters within your text, most citation styles consider it best to use Roman numerals. However, the most important thing is to remain consistent whenever using numbers in your dissertation.

A thesis or dissertation outline is one of the most critical first steps in your writing process. It helps you to lay out and organise your ideas and can provide you with a roadmap for deciding what kind of research you’d like to undertake.

Generally, an outline contains information on the different sections included in your thesis or dissertation, such as:

  • Your anticipated title
  • Your abstract
  • Your chapters (sometimes subdivided into further topics like literature review, research methods, avenues for future research, etc.)

While a theoretical framework describes the theoretical underpinnings of your work based on existing research, a conceptual framework allows you to draw your own conclusions, mapping out the variables you may use in your study and the interplay between them.

A literature review and a theoretical framework are not the same thing and cannot be used interchangeably. While a theoretical framework describes the theoretical underpinnings of your work, a literature review critically evaluates existing research relating to your topic. You’ll likely need both in your dissertation .

A theoretical framework can sometimes be integrated into a literature review chapter, but it can also be included as its own chapter or section in your dissertation. As a rule of thumb, if your research involves dealing with a lot of complex theories, it’s a good idea to include a separate theoretical framework chapter.

An abstract is a concise summary of an academic text (such as a journal article or dissertation). It serves two main purposes:

  • To help potential readers determine the relevance of your paper for their own research.
  • To communicate your key findings to those who don’t have time to read the whole paper.

Abstracts are often indexed along with keywords on academic databases, so they make your work more easily findable. Since the abstract is the first thing any reader sees, it’s important that it clearly and accurately summarises the contents of your paper.

The abstract is the very last thing you write. You should only write it after your research is complete, so that you can accurately summarize the entirety of your thesis or paper.

Avoid citing sources in your abstract. There are two reasons for this:

  • The abstract should focus on your original research, not on the work of others.
  • The abstract should be self-contained and fully understandable without reference to other sources.

There are some circumstances where you might need to mention other sources in an abstract: for example, if your research responds directly to another study or focuses on the work of a single theorist. In general, though, don’t include citations unless absolutely necessary.

The abstract appears on its own page, after the title page and acknowledgements but before the table of contents.

Results are usually written in the past tense, because they are describing the outcome of completed actions.

The results chapter or section simply and objectively reports what you found, without speculating on why you found these results. The discussion interprets the meaning of the results, puts them in context, and explains why they matter.

In qualitative research, results and discussion are sometimes combined. But in quantitative research, it’s considered important to separate the objective results from your interpretation of them.

Formulating a main research question can be a difficult task. Overall, your question should contribute to solving the problem that you have defined in your problem statement.

However, it should also fulfill criteria in three main areas:

  • Researchability
  • Feasibility and specificity
  • Relevance and originality

The best way to remember the difference between a research plan and a research proposal is that they have fundamentally different audiences. A research plan helps you, the researcher, organize your thoughts. On the other hand, a dissertation proposal or research proposal aims to convince others (e.g., a supervisor, a funding body, or a dissertation committee) that your research topic is relevant and worthy of being conducted.

A noun is a word that represents a person, thing, concept, or place (e.g., ‘John’, ‘house’, ‘affinity’, ‘river’). Most sentences contain at least one noun or pronoun.

Nouns are often, but not always, preceded by an article (‘the’, ‘a’, or ‘an’) and/or another determiner, such as a possessive or demonstrative.

There are many ways to categorize nouns into various types, and the same noun can fall into multiple categories or even change types depending on context.

Some of the main types of nouns are:

  • Common nouns and proper nouns
  • Countable and uncountable nouns
  • Concrete and abstract nouns
  • Collective nouns
  • Possessive nouns
  • Attributive nouns
  • Appositive nouns
  • Generic nouns

Pronouns are words like ‘I’, ‘she’, and ‘they’ that are used in a similar way to nouns. They stand in for a noun that has already been mentioned or refer to yourself and other people.

Pronouns can function just like nouns as the head of a noun phrase and as the subject or object of a verb. However, pronouns change their forms (e.g., from ‘I’ to ‘me’) depending on the grammatical context they’re used in, whereas nouns usually don’t.

Common nouns are words for types of things, people, and places, such as ‘dog’, ‘professor’, and ‘city’. They are not capitalised and are typically used in combination with articles and other determiners.

Proper nouns are words for specific things, people, and places, such as ‘Max’, ‘Dr Prakash’, and ‘London’. They are always capitalised and usually aren’t combined with articles and other determiners.

A proper adjective is an adjective that was derived from a proper noun and is therefore capitalised.

Proper adjectives include words for nationalities, languages, and ethnicities (e.g., ‘Japanese’, ‘Inuit’, ‘French’) and words derived from people’s names (e.g., ‘Bayesian’, ‘Orwellian’).

The names of seasons (e.g., ‘spring’) are treated as common nouns in English and therefore not capitalised. People often assume they are proper nouns, but this is an error.

The names of days and months, however, are capitalised since they’re treated as proper nouns in English (e.g., ‘Wednesday’, ‘January’).

No, as a general rule, academic concepts, disciplines, theories, models, etc. are treated as common nouns, not proper nouns, and are therefore not capitalised. For example, ‘five-factor model of personality’ or ‘analytic philosophy’.

However, proper nouns that appear within the name of an academic concept (such as the name of the inventor) are capitalised as usual. For example, ‘Darwin’s theory of evolution’ or ‘Student’s t table’.

Collective nouns are most commonly treated as singular (e.g., ‘the herd is grazing’), but usage differs between US and UK English:

  • In US English, it’s standard to treat all collective nouns as singular, even when they are plural in appearance (e.g., ‘The Rolling Stones is …’). Using the plural form is usually seen as incorrect.
  • In UK English, collective nouns can be treated as singular or plural depending on context. It’s quite common to use the plural form, especially when the noun looks plural (e.g., ‘The Rolling Stones are …’).

The plural of “crisis” is “crises”. It’s a loanword from Latin and retains its original Latin plural noun form (similar to “analyses” and “bases”). It’s wrong to write “crisises”.

For example, you might write “Several crises destabilized the regime.”

Normally, the plural of “fish” is the same as the singular: “fish”. It’s one of a group of irregular plural nouns in English that are identical to the corresponding singular nouns (e.g., “moose”, “sheep”). For example, you might write “The fish scatter as the shark approaches.”

If you’re referring to several species of fish, though, the regular plural “fishes” is often used instead. For example, “The aquarium contains many different fishes, including trout and carp.”

The correct plural of “octopus” is “octopuses”.

People often write “octopi” instead because they assume that the plural noun is formed in the same way as Latin loanwords such as “fungus/fungi”. But “octopus” actually comes from Greek, where its original plural is “octopodes”. In English, it instead has the regular plural form “octopuses”.

For example, you might write “There are four octopuses in the aquarium.”

The plural of “moose” is the same as the singular: “moose”. It’s one of a group of plural nouns in English that are identical to the corresponding singular nouns. So it’s wrong to write “mooses”.

For example, you might write “There are several moose in the forest.”

Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify others’ behaviour and external factors (e.g., difficult circumstances) to justify the same behaviour in themselves.

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen either because people are not willing or not able to participate.

In research, demand characteristics are cues that might indicate the aim of a study to participants. These cues can lead to participants changing their behaviors or responses based on what they think the research is about.

Demand characteristics are common problems in psychology experiments and other social science studies because they can bias your research findings.

Demand characteristics are a type of extraneous variable that can affect the outcomes of the study. They can invalidate studies by providing an alternative explanation for the results.

These cues may nudge participants to consciously or unconsciously change their responses, and they pose a threat to both internal and external validity. You can’t be sure that your independent variable manipulation worked, or that your findings can be applied to other people or settings.

You can control demand characteristics by taking a few precautions in your research design and materials.

Use these measures:

  • Deception: Hide the purpose of the study from participants
  • Between-groups design: Give each participant only one independent variable treatment
  • Double-blind design: Conceal the assignment of groups from participants and yourself
  • Implicit measures: Use indirect or hidden measurements for your variables

Some attrition is normal and to be expected in research. However, the type of attrition is important because systematic research bias can distort your findings. Attrition bias can lead to inaccurate results because it affects internal and/or external validity.

To avoid attrition bias, apply some of these measures to reduce participant dropout (attrition) by making it easy and appealing for participants to stay:

  • Provide compensation (e.g., cash or gift cards) for attending every session
  • Minimise the number of follow-ups as much as possible
  • Make all follow-ups brief, flexible, and convenient for participants
  • Send participants routine reminders to schedule follow-ups
  • Recruit more participants than you need for your sample (oversample)
  • Maintain detailed contact information so you can get in touch with participants even if they move

If you have a small amount of attrition bias, you can use a few statistical methods to try to make up for this research bias.

Multiple imputation involves using simulations to replace the missing data with likely values. Alternatively, you can use sample weighting to make up for the uneven balance of participants in your sample.
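
For illustration, here is a minimal sketch of the multiple imputation approach in Python with scikit-learn; the scores, the dropout pattern, and the number of imputations are all hypothetical rather than taken from any particular study:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical outcome scores across three follow-up sessions;
# NaN marks participants who dropped out before a session.
scores = np.array([
    [7.0, 6.5, np.nan],
    [5.0, np.nan, 4.0],
    [8.0, 7.5, 7.0],
    [6.0, 5.5, np.nan],
])

# Draw several plausible completed datasets; analysing each one and
# pooling the results is what makes the imputation "multiple".
imputations = [
    IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(scores)
    for seed in range(5)
]
print(imputations[0])
```

Sample weighting, by contrast, keeps the incomplete cases out of the analysis but reweights the remaining participants so that the sample composition matches the population again.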

Placebos are used in medical research for new medication or therapies, called clinical trials. In these trials some people are given a placebo, while others are given the new medication being tested.

The purpose is to determine how effective the new medication is: if it benefits people beyond a predefined threshold as compared to the placebo, it’s considered effective.

Although there is no definite answer to what causes the placebo effect , researchers propose a number of explanations such as the power of suggestion, doctor-patient interaction, classical conditioning, etc.

Belief bias and confirmation bias are both types of cognitive bias that impact our judgment and decision-making.

Confirmation bias relates to how we perceive and judge evidence. We tend to seek out and prefer information that supports our preexisting beliefs, ignoring any information that contradicts those beliefs.

Belief bias describes the tendency to judge an argument based on how plausible the conclusion seems to us, rather than how much evidence is provided to support it during the course of the argument.

Positivity bias is a phenomenon that occurs when a person judges individual members of a group positively, even when they have negative impressions or judgments of the group as a whole. Positivity bias is closely related to optimism bias, or the expectation that things will work out well, even if rationality suggests that problems are inevitable in life.

Perception bias is a problem because it prevents us from seeing situations or people objectively. Rather, our expectations, beliefs, or emotions interfere with how we interpret reality. This, in turn, can cause us to misjudge ourselves or others. For example, our prejudices can interfere with whether we perceive people’s faces as friendly or unfriendly.

There are many ways to categorize adjectives into various types. An adjective can fall into one or more of these categories depending on how it is used.

Some of the main types of adjectives are:

  • Attributive adjectives
  • Predicative adjectives
  • Comparative adjectives
  • Superlative adjectives
  • Coordinate adjectives
  • Appositive adjectives
  • Compound adjectives
  • Participial adjectives
  • Proper adjectives
  • Denominal adjectives
  • Nominal adjectives

Cardinal numbers (e.g., one, two, three) can be placed before a noun to indicate quantity (e.g., one apple). While these are sometimes referred to as ‘numeral adjectives’, they are more accurately categorised as determiners or quantifiers.

Proper adjectives are adjectives formed from a proper noun (i.e., the name of a specific person, place, or thing) that are used to indicate origin. Like proper nouns, proper adjectives are always capitalised (e.g., Newtonian, Marxian, African).

The cost of proofreading depends on the type and length of text, the turnaround time, and the level of services required. Most proofreading companies charge per word or page, while freelancers sometimes charge an hourly rate.

For proofreading alone, which involves only basic corrections of typos and formatting mistakes, you might pay as little as £0.01 per word, but in many cases, your text will also require some level of editing, which costs slightly more.

It’s often possible to purchase combined proofreading and editing services and calculate the price in advance based on your requirements.

Then and than are two commonly confused words. In the context of ‘better than’, you use ‘than’ with an ‘a’.

  • Julie is better than Jesse.
  • I’d rather spend my time with you than with him.
  • I understand Eoghan’s point of view better than Claudia’s.

Use to and used to are commonly confused words. In the case of ‘used to do’, the latter (with ‘d’) is correct, since you’re describing an action or state in the past.

  • I used to do laundry once a week.
  • They used to do each other’s hair.
  • We used to do the dishes every day.

There are numerous synonyms and near synonyms for the various meanings of “favour”:

  • For the verb (to support): advocate, approve of, endorse, support
  • For the noun (approval): adoration, appreciation, praise, respect

There are numerous synonyms and near synonyms for the two meanings of “favoured”:

  • For the verb (supported): advocated, approved of, endorsed, supported
  • For the adjective (preferred): adored, appreciated, praised, preferred

No one (two words) is an indefinite pronoun meaning ‘nobody’. People sometimes mistakenly write ‘noone’, but this is incorrect and should be avoided. ‘No-one’, with a hyphen, is also acceptable in UK English.

Nobody and no one are both indefinite pronouns meaning ‘no person’. They can be used interchangeably (e.g., ‘nobody is home’ means the same as ‘no one is home’).

Some synonyms and near synonyms of every time include:

  • Each time
  • Whenever
  • Without exception

‘Everytime’ is sometimes used to mean ‘each time’ or ‘whenever’. However, this is incorrect and should be avoided. The correct phrase is every time (two words).

Yes, the conjunction because is a compound word, but one with a long history. It originates in Middle English from the preposition “bi” (“by”) and the noun “cause”. Over time, the open compound “bi cause” became the closed compound “because”, which we use today.

Though it’s spelled this way now, the verb “be” is not one of the words that makes up “because”.

Yes, today is a compound word, but a very old one. It wasn’t originally formed from the preposition “to” and the noun “day”; rather, it originates from their Old English equivalents, “tō” and “dæġe”.

In the past, it was sometimes written as a hyphenated compound: “to-day”. But the hyphen is no longer included; it’s always “today” now (“to day” is also wrong).

IEEE citation format is defined by the Institute of Electrical and Electronics Engineers and used in their publications.

It’s also a widely used citation style for students in technical fields like electrical and electronic engineering, computer science, telecommunications, and computer engineering.

An IEEE in-text citation consists of a number in brackets at the relevant point in the text, which points the reader to the right entry in the numbered reference list at the end of the paper. For example, ‘Smith [1] states that …’

A location marker such as a page number is also included within the brackets when needed: ‘Smith [1, p. 13] argues …’

The IEEE reference page consists of a list of references numbered in the order they were cited in the text. The title ‘References’ appears in bold at the top, either left-aligned or centered.

The numbers appear in square brackets on the left-hand side of the page. The reference entries are indented consistently to separate them from the numbers. Entries are single-spaced, with a normal paragraph break between them.

If you cite the same source more than once in your writing, use the same number for all of the IEEE in-text citations for that source, and only include it on the IEEE reference page once. The source is numbered based on the first time you cite it.

For example, the fourth source you cite in your paper is numbered [4]. If you cite it again later, you still cite it as [4]. You can cite different parts of the source each time by adding page numbers [4, p. 15].

A verb is a word that indicates a physical action (e.g., ‘drive’), a mental action (e.g., ‘think’) or a state of being (e.g., ‘exist’). Every sentence contains a verb.

Verbs are almost always used along with a noun or pronoun to describe what the noun or pronoun is doing.

There are many ways to categorize verbs into various types. A verb can fall into one or more of these categories depending on how it is used.

Some of the main types of verbs are:

  • Regular verbs
  • Irregular verbs
  • Transitive verbs
  • Intransitive verbs
  • Dynamic verbs
  • Stative verbs
  • Linking verbs
  • Auxiliary verbs
  • Modal verbs
  • Phrasal verbs

Regular verbs are verbs whose simple past and past participle are formed by adding the suffix ‘-ed’ (e.g., ‘walked’).

Irregular verbs are verbs that form their simple past and past participles in some way other than by adding the suffix ‘-ed’ (e.g., ‘sat’).

The indefinite articles a and an are used to refer to a general or unspecified version of a noun (e.g., a house). Which indefinite article you use depends on the pronunciation of the word that follows it.

  • A is used for words that begin with a consonant sound (e.g., a bear).
  • An is used for words that begin with a vowel sound (e.g., an eagle).

Indefinite articles can only be used with singular countable nouns. Like definite articles, they are a type of determiner.

Editing and proofreading are different steps in the process of revising a text.

Editing comes first, and can involve major changes to content, structure and language. The first stages of editing are often done by authors themselves, while a professional editor makes the final improvements to grammar and style (for example, by improving sentence structure and word choice ).

Proofreading is the final stage of checking a text before it is published or shared. It focuses on correcting minor errors and inconsistencies (for example, in punctuation and capitalization ). Proofreaders often also check for formatting issues, especially in print publishing.

Whether you’re publishing a blog, submitting a research paper, or even just writing an important email, there are a few techniques you can use to make sure it’s error-free:

  • Take a break: Set your work aside for at least a few hours so that you can look at it with fresh eyes.
  • Proofread a printout: Staring at a screen for too long can cause fatigue – sit down with a pen and paper to check the final version.
  • Use digital shortcuts: Take note of any recurring mistakes (for example, misspelling a particular word, switching between US and UK English, or inconsistently capitalizing a term), and use Find and Replace to fix them throughout the document.

If you want to be confident that an important text is error-free, it might be worth choosing a professional proofreading service instead.

There are many different routes to becoming a professional proofreader or editor. The necessary qualifications depend on the field – to be an academic or scientific proofreader, for example, you will need at least a university degree in a relevant subject.

For most proofreading jobs, experience and demonstrated skills are more important than specific qualifications. Often your skills will be tested as part of the application process.

To learn practical proofreading skills, you can choose to take a course with a professional organisation such as the Society for Editors and Proofreaders . Alternatively, you can apply to companies that offer specialised on-the-job training programmes, such as the Scribbr Academy .

Though they’re pronounced the same, there’s a big difference in meaning between its and it’s.

  • ‘The cat ate its food’.
  • ‘It’s almost Christmas’.

Its and it’s are often confused, but its (without apostrophe) is the possessive form of ‘it’ (e.g., its tail, its argument, its wing). You use ‘its’ instead of ‘his’ and ‘her’ for neuter, inanimate nouns.

Then and than are two commonly confused words with different meanings and grammatical roles.

  • Then (pronounced with a short ‘e’ sound) refers to time. It’s often an adverb, but it can also be used as a noun meaning ‘that time’ and as an adjective referring to a previous status.
  • Than (pronounced with a short ‘a’ sound) is used for comparisons. Grammatically, it usually functions as a conjunction, but sometimes it’s a preposition.

Examples: then in a sentence
  • Mix the dry ingredients first, and then add the wet ingredients.
  • I was then working as a teacher.

Examples: than in a sentence
  • Max is a better saxophonist than you.
  • I usually like coaching a team more than I like playing soccer myself.

Use to and used to are commonly confused words. In the case of ‘used to be’, the latter (with ‘d’) is correct, since you’re describing an action or state in the past.

  • I used to be the new coworker.
  • There used to be 4 cookies left.
  • We used to walk to school every day.

A grammar checker is a tool designed to automatically check your text for spelling errors, grammatical issues, punctuation mistakes, and problems with sentence structure. You can check out our analysis of the best free grammar checkers to learn more.

A paraphrasing tool edits your text more actively, changing things whether they were grammatically incorrect or not. It can paraphrase your sentences to make them more concise and readable or for other purposes. You can check out our analysis of the best free paraphrasing tools to learn more.

Some tools available online combine both functions. Others, such as QuillBot , have separate grammar checker and paraphrasing tools. Be aware of what exactly the tool you’re using does to avoid introducing unwanted changes.

Good grammar is the key to expressing yourself clearly and fluently, especially in professional communication and academic writing. Word processors, browsers, and email programs typically have built-in grammar checkers, but they’re quite limited in the kinds of problems they can fix.

If you want to go beyond detecting basic spelling errors, there are many online grammar checkers with more advanced functionality. They can often detect issues with punctuation, word choice, and sentence structure that more basic tools would miss.

Not all of these tools are reliable, though. You can check out our research into the best free grammar checkers to explore the options.

Our research indicates that the best free grammar checker available online is the QuillBot grammar checker.

We tested 10 of the most popular checkers with the same sample text (containing 20 grammatical errors) and found that QuillBot easily outperformed the competition, scoring 18 out of 20, a drastic improvement over the second-place score of 13 out of 20.

It even appeared to outperform the premium versions of other grammar checkers, despite being entirely free.

A teacher’s aide is a person who assists in teaching classes but is not a qualified teacher. Aide is a noun meaning ‘assistant’, so it will always refer to a person.

‘Teacher’s aid’ is incorrect.

A visual aid is an instructional device (e.g., a photo, a chart) that appeals to vision to help you understand written or spoken information. Aid is often placed after an attributive noun or adjective (like ‘visual’) that describes the type of help provided.

‘Visual aide’ is incorrect.

A job aid is an instructional tool (e.g., a checklist, a cheat sheet) that helps you work efficiently. Aid is a noun meaning ‘assistance’. It’s often placed after an adjective or attributive noun (like ‘job’) that describes the specific type of help provided.

‘Job aide’ is incorrect.

There are numerous synonyms for the various meanings of truly:

  • Candidly, honestly, openly, truthfully
  • Completely, really, totally
  • Accurately, correctly, exactly, precisely

Yours truly is a phrase used at the end of a formal letter or email. It can also be used (typically in a humorous way) as a pronoun to refer to oneself (e.g., ‘The dinner was cooked by yours truly’). The latter usage should be avoided in formal writing.

It’s formed by combining the second-person possessive pronoun ‘yours’ with the adverb ‘truly’.

A pathetic fallacy can be a short phrase or a whole sentence and is often used in novels and poetry. Pathetic fallacies serve multiple purposes, such as:

  • Conveying the emotional state of the characters or the narrator
  • Creating an atmosphere or setting the mood of a scene
  • Foreshadowing events to come
  • Giving texture and vividness to a piece of writing
  • Communicating emotion to the reader in a subtle way, by describing the external world
  • Bringing inanimate objects to life so that they seem more relatable

AMA citation format is a citation style designed by the American Medical Association. It’s frequently used in the field of medicine.

You may be told to use AMA style for your student papers. You will also have to follow this style if you’re submitting a paper to a journal published by the AMA.

An AMA in-text citation consists of the number of the relevant reference on your AMA reference page, written in superscript (e.g., ¹) at the point in the text where the source is used.

It may also include the page number or range of the relevant material in the source, e.g., the part you quoted: 2(p46). Multiple sources can be cited at one point, presented as a range or list with no spaces: 3,5–9.

An AMA reference usually includes the author’s last name and initials, the title of the source, information about the publisher or the publication it’s contained in, and the publication date. The specific details included, and the formatting, depend on the source type.

References in AMA style are presented in numerical order (numbered by the order in which they were first cited in the text) on your reference page. A source that’s cited repeatedly in the text still only appears once on the reference page.

An AMA in-text citation just consists of the number of the relevant entry on your AMA reference page, written in superscript at the point in the text where the source is referred to.

You don’t need to mention the author of the source in your sentence, but you can do so if you want. It’s not an official part of the citation, but it can be useful as part of a signal phrase introducing the source.

On your AMA reference page, author names are written with the last name first, followed by the initial(s) of their first name and middle name if mentioned.

There’s a space between the last name and the initials, but no space or punctuation between the initials themselves. The names of multiple authors are separated by commas, and the whole list ends in a period, e.g., ‘Andreessen F, Smith PW, Gonzalez E’.

The names of up to six authors should be listed for each source on your AMA reference page, separated by commas. For a source with seven or more authors, you should list the first three followed by ‘et al’: ‘Isidore, Gilbert, Gunvor, et al’.

In the text, mentioning author names is optional (as they aren’t an official part of AMA in-text citations). If you do mention them, though, you should use the first author’s name followed by ‘et al’ when there are three or more: ‘Isidore et al argue that …’

Note that according to AMA’s rather minimalistic punctuation guidelines, there’s no period after ‘et al’ unless it appears at the end of a sentence. This is different from most other styles, where there is normally a period.

Yes, you should normally include an access date in an AMA website citation (or when citing any source with a URL). This is because webpages can change their content over time, so it’s useful for the reader to know when you accessed the page.

When a publication or update date is provided on the page, you should include it in addition to the access date. The access date appears second in this case, e.g., ‘Published June 19, 2021. Accessed August 29, 2022.’

Don’t include an access date when citing a source with a DOI (such as in an AMA journal article citation ).

Some variables have fixed levels. For example, gender and ethnicity are always nominal level data because they cannot be ranked.

However, for other variables, you can choose the level of measurement. For example, income is a variable that can be recorded on an ordinal or a ratio scale:

  • At an ordinal level, you could create 5 income groupings and code the incomes that fall within them from 1–5.
  • At a ratio level, you would record exact numbers for income.

If you have a choice, the ratio level is always preferable because you can analyse data in more ways. The higher the level of measurement, the more precise your data is.
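
To make the contrast concrete, here is a minimal sketch in Python with pandas; the income values and the five group boundaries are hypothetical:

```python
import pandas as pd

# Ratio level: exact incomes (hypothetical values)
income = pd.Series([18_000, 27_500, 41_000, 63_000, 120_000])

# Ordinal level: the same data collapsed into 5 coded groupings
bins = [0, 20_000, 40_000, 60_000, 80_000, float("inf")]
income_group = pd.cut(income, bins=bins, labels=[1, 2, 3, 4, 5])

print(income.mean())          # a mean is meaningful at the ratio level
print(income_group.tolist())  # ordinal codes can be ranked, not averaged
```

Once the exact values have been collapsed into ordinal codes, the original amounts cannot be recovered, which is why recording at the highest possible level keeps more analysis options open.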

The level at which you measure a variable determines how you can analyse your data.

Depending on the level of measurement, you can perform different descriptive statistics to get an overall summary of your data and inferential statistics to see if your results support or refute your hypothesis.

Levels of measurement tell you how precisely variables are recorded. There are 4 levels of measurement, which can be ranked from low to high:

  • Nominal: the data can only be categorised.
  • Ordinal: the data can be categorised and ranked.
  • Interval: the data can be categorised, ranked, and evenly spaced.
  • Ratio: the data can be categorised, ranked, and evenly spaced, and has a natural zero.

Statistical analysis is the main method for analyzing quantitative research data . It uses probabilities and models to test predictions about a population from sample data.

The null hypothesis is often abbreviated as H₀. When the null hypothesis is written using mathematical symbols, it always includes an equality symbol (usually =, but sometimes ≥ or ≤).

The alternative hypothesis is often abbreviated as Hₐ or H₁. When the alternative hypothesis is written using mathematical symbols, it always includes an inequality symbol (usually ≠, but sometimes < or >).
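
As a worked illustration of that notation, consider testing a population mean μ against a reference value μ₀ (these symbols are assumed for the example, not taken from the text above):

```latex
% Two-tailed test of a population mean:
H_0 : \mu = \mu_0 \qquad H_a : \mu \neq \mu_0

% One-tailed (right-tailed) variant:
H_0 : \mu \le \mu_0 \qquad H_a : \mu > \mu_0
```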

As the degrees of freedom increase, Student’s t distribution becomes less leptokurtic, meaning that the probability of extreme values decreases. The distribution becomes more and more similar to a standard normal distribution.

When there are only one or two degrees of freedom, the chi-square distribution is shaped like a backwards ‘J’. When there are three or more degrees of freedom, the distribution is shaped like a right-skewed hump. As the degrees of freedom increase, the hump becomes less right-skewed and the peak of the hump moves to the right. The distribution becomes more and more similar to a normal distribution.
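
One quick way to see the convergence of the t distribution toward the normal is to compare 97.5th-percentile critical values as the degrees of freedom grow; here is a minimal sketch in Python with SciPy (the df values are arbitrary):

```python
from scipy import stats

# Student's t critical values shrink toward the normal's 1.96 as df grows
for df in (2, 10, 30, 1000):
    print(f"t, df={df}: {stats.t.ppf(0.975, df):.3f}")

print(f"standard normal: {stats.norm.ppf(0.975):.3f}")  # 1.960
```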

‘Looking forward in hearing from you’ is an incorrect version of the phrase looking forward to hearing from you. The phrasal verb ‘looking forward to’ always needs the preposition ‘to’, not ‘in’.

  • I am looking forward in hearing from you.
  • I am looking forward to hearing from you.

Some synonyms and near synonyms for the expression looking forward to hearing from you include:

  • Eagerly awaiting your response
  • Hoping to hear from you soon
  • It would be great to hear back from you
  • Thanks in advance for your reply

People sometimes mistakenly write ‘looking forward to hear from you’, but this is incorrect. The correct phrase is looking forward to hearing from you.

The phrasal verb ‘look forward to’ is always followed by a direct object, the thing you’re looking forward to. As the direct object has to be a noun phrase, it should be the gerund ‘hearing’, not the verb ‘hear’.

  • I’m looking forward to hear from you soon.
  • I’m looking forward to hearing from you soon.

Traditionally, the sign-off Yours sincerely is used in an email message or letter when you are writing to someone you have interacted with before, not a complete stranger.

Yours faithfully is used instead when you are writing to someone you have had no previous correspondence with, especially if you greeted them as ‘ Dear Sir or Madam ’.

Just checking in is a standard phrase used to start an email (or other message) that’s intended to ask someone for a response or follow-up action in a friendly, informal way. However, it’s a cliché opening that can come across as passive-aggressive, so we recommend avoiding it in favor of a more direct opening like “We previously discussed …”

In a more personal context, you might encounter “just checking in” as part of a longer phrase such as “I’m just checking in to see how you’re doing”. In this case, it’s not asking the other person to do anything but rather asking about their well-being (emotional or physical) in a friendly way.

“Earliest convenience” is part of the phrase at your earliest convenience, meaning “as soon as you can”.

It’s typically used to end an email in a formal context by asking the recipient to do something when it’s convenient for them to do so.

ASAP is an abbreviation of the phrase “as soon as possible”. 

It’s typically used to indicate a sense of urgency in highly informal contexts (e.g., “Let me know ASAP if you need me to drive you to the airport”).

“ASAP” should be avoided in more formal correspondence. Instead, use an alternative like at your earliest convenience .

Some synonyms and near synonyms of the verb compose (meaning “to make up”) are:

  • Constitute
  • Form
  • Make up

People increasingly use “comprise” as a synonym of “compose.” However, this is normally still seen as a mistake, and we recommend avoiding it in your academic writing . “Comprise” traditionally means “to be made up of,” not “to make up.”

Some synonyms and near synonyms of the verb comprise are:

  • Be composed of
  • Be made up of

People increasingly use “comprise” interchangeably with “compose,” meaning that they consider words like “compose,” “constitute,” and “form” to be synonymous with “comprise.” However, this is still normally regarded as an error, and we advise against using these words interchangeably in academic writing .

A fallacy is a mistaken belief, particularly one based on unsound arguments or one that lacks the evidence to support it. Common types of fallacy that may compromise the quality of your research are:

  • Correlation/causation fallacy: Claiming that two events that occur together have a cause-and-effect relationship even though this can’t be proven
  • Ecological fallacy: Making inferences about the nature of individuals based on aggregate data for the group
  • The sunk cost fallacy: Following through on a project or decision because we have already invested time, effort, or money into it, even if the current costs outweigh the benefits
  • The base-rate fallacy: Ignoring base-rate or statistically significant information, such as sample size or the relative frequency of an event, in favor of less relevant information (e.g., pertaining to a single case or a small number of cases)
  • The planning fallacy: Underestimating the time needed to complete a future task, even when we know that similar tasks in the past have taken longer than planned

The planning fallacy refers to people’s tendency to underestimate the resources needed to complete a future task, despite knowing that previous tasks have also taken longer than planned.

For example, people generally tend to underestimate the cost and time needed for construction projects. The planning fallacy occurs due to people’s tendency to overestimate the chances that positive events, such as a shortened timeline, will happen to them. This phenomenon is called optimism bias or positivity bias.

Although both the red herring fallacy and the straw man fallacy are logical fallacies (reasoning errors), they denote different attempts to “win” an argument. More specifically:

  • A red herring fallacy refers to an attempt to change the subject and divert attention from the original issue. In other words, a seemingly solid but ultimately irrelevant argument is introduced into the discussion, either on purpose or by mistake.
  • A straw man argument involves the deliberate distortion of another person’s argument. By oversimplifying or exaggerating it, the other party creates an easy-to-refute argument and then attacks it.

The red herring fallacy is a problem because it is flawed reasoning. It is a distraction device that causes people to become sidetracked from the main issue and draw wrong conclusions.

Although a red herring may have some kernel of truth, it is used as a distraction to keep our eyes on a different matter. As a result, it can cause us to accept and spread misleading information.

The sunk cost fallacy and escalation of commitment (or commitment bias) are two closely related terms. However, there is a slight difference between them:

  • Escalation of commitment (aka commitment bias) is the tendency to be consistent with what we have already done or said we will do in the past, especially if we did so in public. In other words, it is an attempt to save face and appear consistent.
  • Sunk cost fallacy is the tendency to stick with a decision or a plan even when it’s failing. Because we have already invested valuable time, money, or energy, quitting feels like these resources were wasted.

In other words, escalating commitment is a manifestation of the sunk cost fallacy: an irrational escalation of commitment frequently occurs when people refuse to accept that the resources they’ve already invested cannot be recovered. Instead, they insist on more spending to justify the initial investment (and the incurred losses).

When you are faced with a straw man argument , the best way to respond is to draw attention to the fallacy and ask your discussion partner to show how your original statement and their distorted version are the same. Since these are different, your partner will either have to admit that their argument is invalid or try to justify it by using more flawed reasoning, which you can then attack.

The straw man argument is a problem because it occurs when we fail to take an opposing point of view seriously. Instead, we intentionally misrepresent our opponent’s ideas and avoid genuinely engaging with them. Due to this, resorting to straw man fallacy lowers the standard of constructive debate.

A straw man argument is a distorted (and weaker) version of another person’s argument that can easily be refuted (e.g., when a teacher proposes that the class spend more time on math exercises, a parent complains that the teacher doesn’t care about reading and writing).

This is a straw man argument because it misrepresents the teacher’s position, which didn’t mention anything about cutting down on reading and writing. The straw man argument is also known as the straw man fallacy .

A slippery slope argument is not always a fallacy.

  • When someone claims adopting a certain policy or taking a certain action will automatically lead to a series of other policies or actions also being taken, this is a slippery slope argument.
  • If they don’t show a causal connection between the advocated policy and the consequent policies, then they commit a slippery slope fallacy .

There are a number of ways you can deal with slippery slope arguments, especially when you suspect they are fallacious:

  • Slippery slope arguments take advantage of the gray area between an initial action or decision and the possible next steps that might lead to the undesirable outcome. You can point out these missing steps and ask your partner to indicate what evidence exists to support the claimed relationship between two or more events.
  • Ask yourself if each link in the chain of events or action is valid. Every proposition has to be true for the overall argument to work, so even if one link is irrational or not supported by evidence, then the argument collapses.
  • Sometimes people commit a slippery slope fallacy unintentionally. In these instances, use an example that demonstrates the problem with slippery slope arguments in general (e.g., by using statements to reach a conclusion that is not necessarily relevant to the initial statement). By attacking the concept of slippery slope arguments you can show that they are often fallacious.

People sometimes confuse cognitive bias and logical fallacies because they both relate to flawed thinking. However, they are not the same:

  • Cognitive bias is the tendency to make decisions or take action in an illogical way because of our values, memory, socialization, and other personal attributes. In other words, it refers to a fixed pattern of thinking rooted in the way our brain works.
  • Logical fallacies relate to how we make claims and construct our arguments in the moment. They are statements that sound convincing at first but can be disproven through logical reasoning.

In other words, cognitive bias refers to an ongoing predisposition, while logical fallacy refers to mistakes of reasoning that occur in the moment.

An appeal to ignorance (ignorance here meaning lack of evidence) is a type of informal logical fallacy .

It asserts that something must be true because it hasn’t been proven false—or that something must be false because it has not yet been proven true.

For example, “unicorns exist because there is no evidence that they don’t.” The appeal to ignorance is also called the burden of proof fallacy .

An ad hominem (Latin for “to the person”) is a type of informal logical fallacy . Instead of arguing against a person’s position, an ad hominem argument attacks the person’s character or actions in an effort to discredit them.

This rhetorical strategy is fallacious because a person’s character, motive, education, or other personal trait is logically irrelevant to whether their argument is true or false.

Name-calling is common in ad hominem fallacy (e.g., “environmental activists are ineffective because they’re all lazy tree-huggers”).

Ad hominem is a persuasive technique where someone tries to undermine the opponent’s argument by personally attacking them.

In this way, one can redirect the discussion away from the main topic and to the opponent’s personality without engaging with their viewpoint. When the opponent’s personality is irrelevant to the discussion, we call it an ad hominem fallacy .

Ad hominem tu quoque (‘you too’) is an attempt to rebut a claim by attacking its proponent on the grounds that they uphold a double standard or that they don’t practice what they preach. For example, someone tells you that you should drive slowly, otherwise you’ll get a speeding ticket one of these days, and you reply, ‘But you used to get them all the time!’

Argumentum ad hominem means “argument to the person” in Latin and it is commonly referred to as ad hominem argument or personal attack. Ad hominem arguments are used in debates to refute an argument by attacking the character of the person making it, instead of the logic or premise of the argument itself.

The opposite of the hasty generalization fallacy is called slothful induction fallacy or appeal to coincidence .

It is the tendency to deny a conclusion even though there is sufficient evidence that supports it. Slothful induction occurs due to our natural tendency to dismiss events or facts that do not align with our personal biases and expectations. For example, a researcher may try to explain away unexpected results by claiming they are just a coincidence.

To avoid a hasty generalization fallacy we need to ensure that the conclusions drawn are well-supported by the appropriate evidence. More specifically:

  • In statistics, if we want to draw inferences about an entire population, we need to make sure that the sample is random and representative of the population. We can achieve that by using a probability sampling method, like simple random sampling or stratified sampling (see the sketch after this list).
  • In academic writing, use precise language and measured phrases. Try to avoid making absolute claims, and cite specific instances and examples without applying the findings to a larger group.
  • As readers, we need to ask ourselves ‘does the writer demonstrate sufficient knowledge of the situation or phenomenon that would allow them to make a generalization?’
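
Here is a minimal sketch of the two probability sampling methods mentioned in the first point, in Python with pandas; the sampling frame, the ‘region’ stratum, and the sample sizes are all hypothetical:

```python
import pandas as pd

# Hypothetical sampling frame with one stratifying variable
frame = pd.DataFrame({
    "id": range(1_000),
    "region": ["north"] * 600 + ["south"] * 400,
})

# Simple random sampling: every unit has an equal chance of selection
srs = frame.sample(n=100, random_state=42)

# Stratified sampling: draw 10% from each region separately, so the
# sample's regional make-up matches the population's exactly
stratified = frame.groupby("region").sample(frac=0.1, random_state=42)
print(stratified["region"].value_counts())  # north: 60, south: 40
```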

The hasty generalization fallacy and the anecdotal evidence fallacy are similar in that they both result in conclusions drawn from insufficient evidence. However, there is a difference between the two:

  • The hasty generalization fallacy involves genuinely considering an example or case (i.e., the evidence comes first and then an incorrect conclusion is drawn from this).
  • The anecdotal evidence fallacy (also known as “cherry-picking” ) is knowing in advance what conclusion we want to support, and then selecting the story (or a few stories) that support it. By overemphasizing anecdotal evidence that fits well with the point we are trying to make, we overlook evidence that would undermine our argument.

Although many sources use circular reasoning fallacy and begging the question interchangeably, others point out that there is a subtle difference between the two:

  • Begging the question fallacy occurs when you assume that an argument is true in order to justify a conclusion. If something begs the question, what you are actually asking is, “Is the premise of that argument actually true?” For example, the statement “Snakes make great pets. That’s why we should get a snake” begs the question “are snakes really great pets?”
  • Circular reasoning fallacy on the other hand, occurs when the evidence used to support a claim is just a repetition of the claim itself.  For example, “People have free will because they can choose what to do.”

In other words, we could say begging the question is a form of circular reasoning.

Circular reasoning fallacy uses circular reasoning to support an argument. More specifically, the evidence used to support a claim is just a repetition of the claim itself. For example: “The President of the United States is a good leader (claim), because they are the leader of this country (supporting evidence)”.

An example of a non sequitur is the following statement:

“Giving up nuclear weapons weakened the United States’ military. Giving up nuclear weapons also weakened China. For this reason, it is wrong to try to outlaw firearms in the United States today.”

Clearly there is a step missing in this line of reasoning and the conclusion does not follow from the premise, resulting in a non sequitur fallacy .

The difference between the post hoc fallacy and the non sequitur fallacy is that post hoc fallacy infers a causal connection between two events where none exists, whereas the non sequitur fallacy infers a conclusion that lacks a logical connection to the premise.

In other words, a post hoc fallacy occurs when there is a lack of a cause-and-effect relationship, while a non sequitur fallacy occurs when there is a lack of logical connection.

An example of post hoc fallacy is the following line of reasoning:

“Yesterday I had ice cream, and today I have a terrible stomachache. I’m sure the ice cream caused this.”

Although it is possible that the ice cream had something to do with the stomachache, there is no proof to justify the conclusion other than the order of events. Therefore, this line of reasoning is fallacious.

Post hoc fallacy and hasty generalisation fallacy are similar in that they both involve jumping to conclusions. However, there is a difference between the two:

  • Post hoc fallacy is assuming a cause and effect relationship between two events, simply because one happened after the other.
  • Hasty generalisation fallacy is drawing a general conclusion from a small sample or little evidence.

In other words, post hoc fallacy involves a leap to a causal claim; hasty generalisation fallacy involves a leap to a general proposition.

The fallacy of composition is similar to and can be confused with the hasty generalization fallacy . However, there is a difference between the two:

  • The fallacy of composition involves drawing an inference about the characteristics of a whole or group based on the characteristics of its individual members.
  • The hasty generalization fallacy involves drawing an inference about a population or class of things on the basis of few atypical instances or a small sample of that population or thing.

In other words, the fallacy of composition is using an unwarranted assumption that we can infer something about a whole based on the characteristics of its parts, while the hasty generalization fallacy is using insufficient evidence to draw a conclusion.

The opposite of the fallacy of composition is the fallacy of division. In the fallacy of division, the assumption is that a characteristic which applies to a whole or a group must necessarily apply to the parts or individual members. For example, “Australians travel a lot. Gary is Australian, so he must travel a lot.”

The base rate fallacy can be avoided by following these steps:

  • Avoid making an important decision in haste. When we are under pressure, we are more likely to resort to cognitive shortcuts like the availability heuristic and the representativeness heuristic. Due to this, we are more likely to factor in only current and vivid information and ignore the actual probability of something happening (i.e., the base rate).
  • Take a long-term view on the decision or question at hand. Look for relevant statistical data, which can reveal long-term trends and give you the full picture.
  • Talk to experts, such as professionals with experience in the relevant field. They are more aware of the probabilities related to specific decisions.

Suppose there is a population consisting of 90% psychologists and 10% engineers. Given that you know someone enjoyed physics at school, you may conclude that they are an engineer rather than a psychologist, even though you know that this person comes from a population consisting of far more psychologists than engineers.

When we ignore the rate of occurrence of some trait in a population (the base-rate information), we commit the base rate fallacy.
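
To see how much the base rate matters here, we can run the numbers with Bayes’ theorem. Only the 90/10 split comes from the example above; the two likelihoods below are invented purely for illustration. A minimal Python sketch:

    # Toy Bayes calculation for the psychologists/engineers example.
    # The two likelihoods are assumptions chosen for illustration.
    p_engineer = 0.10       # base rate: 10% of the population
    p_psychologist = 0.90   # base rate: 90% of the population

    p_physics_given_engineer = 0.70      # assumed
    p_physics_given_psychologist = 0.30  # assumed

    # Total probability that a random person enjoyed physics at school
    p_physics = (p_physics_given_engineer * p_engineer
                 + p_physics_given_psychologist * p_psychologist)

    # Posterior probability that the person is an engineer
    p_engineer_given_physics = p_physics_given_engineer * p_engineer / p_physics
    print(round(p_engineer_given_physics, 2))  # 0.21

Even with evidence that points towards “engineer”, the low base rate keeps the probability at roughly 21%, so “psychologist” remains the better bet.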

The cost-benefit fallacy is a common error that occurs when allocating resources in project management. It is the fallacy of assuming that cost-benefit estimates are more or less accurate, when in fact they are highly inaccurate and biased. This means that cost-benefit analyses can be useful, but only after the cost-benefit fallacy has been acknowledged and corrected for. The cost-benefit fallacy is a type of base rate fallacy.

In advertising, the fallacy of equivocation is often used to create a pun. For example, a billboard company might advertise their billboards using a line like: “Looking for a sign? This is it!” The word sign has a literal meaning as billboard and a figurative one as a sign from God, the universe, etc.

Equivocation is a fallacy because it is a form of argumentation that is both misleading and logically unsound. When the meaning of a word or phrase shifts in the course of an argument, it causes confusion and means that the conclusion (which may well be true in itself) does not follow logically from the premise.

The fallacy of equivocation is an informal logical fallacy, meaning that the error lies in the content of the argument rather than in its structure.

Fallacies of relevance are a group of fallacies that occur in arguments when the premises are logically irrelevant to the conclusion. Although at first there seems to be a connection between the premise and the conclusion, in reality fallacies of relevance use unrelated forms of appeal.

For example, the genetic fallacy makes an appeal to the source or origin of the claim in an attempt to assert or refute something.

The ad hominem fallacy and the genetic fallacy are closely related in that they are both fallacies of relevance. In other words, they both involve arguments that use evidence or examples that are not logically related to the argument at hand. However, there is a difference between the two:

  • In the ad hominem fallacy , the goal is to discredit the argument by discrediting the person currently making the argument.
  • In the genetic fallacy , the goal is to discredit the argument by discrediting the history or origin (i.e., genesis) of an argument.

False dilemma fallacy is also known as false dichotomy, false binary, and “either-or” fallacy. It is the fallacy of presenting only two choices, outcomes, or sides to an argument as the only possibilities, when more are available.

The false dilemma fallacy works in two ways:

  • By presenting only two options as if these were the only ones available
  • By presenting two options as mutually exclusive (i.e., only one option can be selected or can be true at a time)

In both cases, by using the false dilemma fallacy, one conceals alternative choices and doesn’t allow others to consider the full range of options. This is usually achieved through an “either-or” construction and polarised, divisive language (“you are either a friend or an enemy”).

The best way to avoid a false dilemma fallacy is to pause and reflect on two points:

  • Are the options presented truly the only ones available? It could be that another option has been deliberately omitted.
  • Are the options mentioned mutually exclusive? Perhaps all of the available options can be selected (or be true) at the same time, which shows that they aren’t mutually exclusive. Proving this is called “escaping between the horns of the dilemma.”

Begging the question fallacy is an argument in which you assume what you are trying to prove. In other words, your position and the justification of that position are the same, only slightly rephrased.

For example: “All freshmen should attend college orientation, because all college students should go to such an orientation.”

The complex question fallacy and begging the question fallacy are similar in that they are both based on assumptions. However, there is a difference between them:

  • A complex question fallacy occurs when someone asks a question that presupposes the answer to another question that has not been established or accepted by the other person. For example, asking someone “Have you stopped cheating on tests?”, unless it has previously been established that the person is indeed cheating on tests, is a fallacy.
  • Begging the question fallacy occurs when we assume the very thing as a premise that we’re trying to prove in our conclusion. In other words, the conclusion is used to support the premises, and the premises prove the validity of the conclusion. For example: “God exists because the Bible says so, and the Bible is true because it is the word of God.”

In other words, begging the question is about drawing a conclusion based on an assumption, while a complex question involves asking a question that presupposes the answer to a prior question.

“No true Scotsman” arguments aren’t always fallacious. When there is a generally accepted definition of who or what constitutes a group, it’s reasonable to use statements in the form of “no true Scotsman”.

For example, the statement that “no true pacifist would volunteer for military service” is not fallacious, since a pacifist is, by definition, someone who opposes war or violence as a means of settling disputes.

No true Scotsman arguments are fallacious because instead of logically refuting the counterexample, they simply assert that it doesn’t count. In other words, the counterexample is rejected for psychological, but not logical, reasons.

The appeal to purity or no true Scotsman fallacy is an attempt to defend a generalisation about a group from a counterexample by shifting the definition of the group in the middle of the argument. In this way, one can exclude the counterexample as not being “true”, “genuine”, or “pure” enough to be considered as part of the group in question.

To identify an appeal to authority fallacy , you can ask yourself the following questions:

  • Is the authority cited really a qualified expert in this particular area under discussion? For example, someone who has formal education or years of experience can be an expert.
  • Do experts disagree on this particular subject? If that is the case, then for almost any claim supported by one expert there will be a counterclaim that is supported by another expert. If there is no consensus, an appeal to authority is fallacious.
  • Is the authority in question biased? If you suspect that an expert’s prejudice and bias could have influenced their views, then the expert is not reliable and an argument citing this expert will be fallacious.

Appeal to authority is a fallacy when those who use it do not provide any justification to support their argument. Instead, they cite someone famous who agrees with their viewpoint but is not qualified to make reliable claims on the subject.

The appeal to authority fallacy is often convincing because of the effect authority figures have on us. When someone cites a famous person, a well-known scientist, a politician, etc., people tend to be distracted and often fail to critically examine whether the authority figure is indeed an expert in the area under discussion.

The ad populum fallacy is common in politics. One example is the following viewpoint: “The majority of our countrymen think we should have military operations overseas; therefore, it’s the right thing to do.”

This line of reasoning is fallacious, because popular acceptance of a belief or position does not amount to a justification of that belief. In other words, following the prevailing opinion without examining the underlying reasons is irrational.

The ad populum fallacy plays on our innate desire to fit in (known as “bandwagon effect”). If many people believe something, our common sense tells us that it must be true and we tend to accept it. However, in logic, the popularity of a proposition cannot serve as evidence of its truthfulness.

The ad populum (or appeal to popularity) fallacy and the appeal to authority fallacy are similar in that they both conflate the validity of a belief with its acceptance by a specific group. However, there is a key difference between the two:

  • An ad populum fallacy tries to persuade others by claiming that something is true or right because a lot of people think so.
  • An appeal to authority fallacy tries to persuade by claiming that a group of experts believes something is true or right, so it must be so.

To identify a false cause fallacy , you need to carefully analyse the argument:

  • When someone claims that one event directly causes another, ask if there is sufficient evidence to establish a cause-and-effect relationship. 
  • Ask if the claim is based merely on the chronological order or co-occurrence of the two events. 
  • Consider alternative possible explanations (are there other factors at play that could influence the outcome?).

By carefully analysing the reasoning, considering alternative explanations, and examining the evidence provided, you can identify a false cause fallacy and discern whether a causal claim is valid or flawed.

False cause fallacy examples include: 

  • Believing that wearing your lucky jersey will help your team win 
  • Thinking that washing your car causes it to rain
  • Claiming that playing video games causes violent behavior 

In each of these examples, we falsely assume that one event causes another without any proof.

The planning fallacy and procrastination are not the same thing. Although they both relate to time and task management, they describe different challenges:

  • The planning fallacy describes our inability to correctly estimate how long a future task will take, mainly due to optimism bias and a strong focus on the best-case scenario.
  • Procrastination refers to postponing a task, usually by focusing on less urgent or more enjoyable activities. This is due to psychological reasons, like fear of failure.

In other words, the planning fallacy refers to inaccurate predictions about the time we need to finish a task, while procrastination is a deliberate delay due to psychological factors.

A real-life example of the planning fallacy is the construction of the Sydney Opera House in Australia. When construction began in the late 1950s, it was initially estimated that it would be completed in four years at a cost of around $7 million.

Because the government wanted the construction to start before political opposition would stop it and while public opinion was still favorable, a number of design issues had not been carefully studied in advance. Due to this, several problems appeared immediately after the project commenced.

The construction process eventually stretched over 14 years, with the Opera House being completed in 1973 at a cost of over $100 million, significantly exceeding the initial estimates.

An example of appeal to pity fallacy is the following appeal by a student to their professor:

“Professor, please consider raising my grade. I had a terrible semester: my car broke down, my laptop got stolen, and my cat got sick.”

While these circumstances may be unfortunate, they are not directly related to the student’s academic performance.

While both the appeal to pity fallacy and the red herring fallacy can serve as a distraction from the original discussion topic, they are distinct fallacies. More specifically:

  • Appeal to pity fallacy attempts to evoke feelings of sympathy, pity, or guilt in an audience, so that they accept the speaker’s conclusion as truthful.
  • Red herring fallacy attempts to introduce an irrelevant piece of information that diverts the audience’s attention to a different topic.

Both fallacies can be used as a tool of deception. However, they operate differently and serve distinct purposes in arguments.

Argumentum ad misericordiam (Latin for “argument from pity or misery”) is another name for appeal to pity fallacy . It occurs when someone evokes sympathy or guilt in an attempt to gain support for their claim, without providing any logical reasons to support the claim itself. Appeal to pity is a deceptive tactic of argumentation, playing on people’s emotions to sway their opinion.

Yes, it’s quite common to start a sentence with a preposition, and there’s no reason not to do so.

For example, the sentence “To many, she was a hero” is perfectly grammatical. It could also be rephrased as “She was a hero to many”, but there’s no particular reason to do so. Both versions are fine.

Some people argue that you shouldn’t end a sentence with a preposition, but that “rule” can also be ignored, since it’s not supported by serious language authorities.

Yes, it’s fine to end a sentence with a preposition. The “rule” against doing so is overwhelmingly rejected by modern style guides and language authorities and is based on the rules of Latin grammar, not English.

Trying to avoid ending a sentence with a preposition often results in very unnatural phrasings. For example, turning “He knows what he’s talking about” into “He knows about what he’s talking” or “He knows that about which he’s talking” is definitely not an improvement.

No, ChatGPT is not a credible source of factual information and can’t be cited for this purpose in academic writing. While it tries to provide accurate answers, it often gets things wrong because its responses are based on patterns, not facts and data.

Specifically, the CRAAP test for evaluating sources includes five criteria: currency, relevance, authority, accuracy, and purpose. ChatGPT fails to meet at least three of them:

  • Currency: The dataset that ChatGPT was trained on only extends to 2021, so it lacks knowledge of more recent developments.
  • Authority: It’s just a language model and is not considered a trustworthy source of factual information.
  • Accuracy: It bases its responses on patterns rather than evidence and is unable to cite its sources.

So you shouldn’t cite ChatGPT as a trustworthy source for a factual claim. You might still cite ChatGPT for other reasons – for example, if you’re writing a paper about AI language models, ChatGPT responses are a relevant primary source.

ChatGPT is an AI language model that was trained on a large body of text from a variety of sources (e.g., Wikipedia, books, news articles, scientific journals). The dataset only went up to 2021, meaning that it lacks information on more recent events.

It’s also important to understand that ChatGPT doesn’t access a database of facts to answer your questions. Instead, its responses are based on patterns that it saw in the training data.

So ChatGPT is not always trustworthy. It can usually answer general knowledge questions accurately, but it can easily give misleading answers on more specialist topics.

Another consequence of this way of generating responses is that ChatGPT usually can’t cite its sources accurately. It doesn’t really know what source it’s basing any specific claim on. It’s best to check any information you get from it against a credible source.

No, it is not possible to cite your sources with ChatGPT. You can ask it to create citations, but it isn’t designed for this task and tends to make up sources that don’t exist or present information in the wrong format. ChatGPT also cannot add citations to direct quotes in your text.

Instead, use a tool designed for this purpose, like the Scribbr Citation Generator.

But you can use ChatGPT for assignments in other ways – for example, to provide inspiration, feedback, and general writing advice.

GPT  stands for “generative pre-trained transformer”, which is a type of large language model: a neural network trained on a very large amount of text to produce convincing, human-like language outputs. The Chat part of the name just means “chat”: ChatGPT is a chatbot that you interact with by typing in text.

The technology behind ChatGPT is GPT-3.5 (in the free version) or GPT-4 (in the premium version). These are the names for the specific versions of the GPT model. GPT-4 is currently the most advanced model that OpenAI has created. It’s also the model used in Bing’s chatbot feature.

ChatGPT was created by OpenAI, an AI research company. It started as a nonprofit company in 2015 but became for-profit in 2019. Its CEO is Sam Altman, who also co-founded the company. OpenAI released ChatGPT as a free “research preview” in November 2022. Currently, it’s still available for free, although a more advanced premium version is available if you pay for it.

OpenAI is also known for developing DALL-E, an AI image generator that runs on similar technology to ChatGPT.

ChatGPT is owned by OpenAI, the company that developed and released it. OpenAI is a company dedicated to AI research. It started as a nonprofit company in 2015 but transitioned to for-profit in 2019. Its current CEO is Sam Altman, who also co-founded the company.

In terms of who owns the content generated by ChatGPT, OpenAI states that it will not claim copyright on this content , and the terms of use state that “you can use Content for any purpose, including commercial purposes such as sale or publication”. This means that you effectively own any content you generate with ChatGPT and can use it for your own purposes.

Be cautious about how you use ChatGPT content in an academic context. University policies on AI writing are still developing, so even if you “own” the content, you’re often not allowed to submit it as your own work according to your university or to publish it in a journal.

ChatGPT is a chatbot based on a large language model (LLM). These models are trained on huge datasets consisting of hundreds of billions of words of text, based on which the model learns to effectively predict natural responses to the prompts you enter.

ChatGPT was also refined through a process called reinforcement learning from human feedback (RLHF), which involves “rewarding” the model for providing useful answers and discouraging inappropriate answers – encouraging it to make fewer mistakes.

Essentially, ChatGPT’s answers are based on predicting the most likely responses to your inputs based on its training data, with a reward system on top of this to incentivise it to give you the most helpful answers possible. It’s a bit like an incredibly advanced version of predictive text. This is also one of ChatGPT’s limitations: because its answers are based on probabilities, they’re not always trustworthy.
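
To make the “predictive text” analogy concrete, here is a deliberately tiny toy model in Python: a bigram counter that predicts the next word purely from counts in a miniature training text. This is nothing like a real neural network; it only illustrates the general idea of predicting likely continuations:

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count which word follows which in the "training data".
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        # Return the most frequent continuation seen in training.
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' (seen twice, vs 'mat' once)

A large language model does something conceptually related at an enormously larger scale, using a neural network rather than raw counts, which is why its outputs are so much more fluent – but still probabilistic.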

OpenAI may store ChatGPT conversations for the purposes of future training. Additionally, these conversations may be monitored by human AI trainers.

Users can choose not to have their chat history saved. Unsaved chats are not used to train future models and are permanently deleted from ChatGPT’s system after 30 days.

The official ChatGPT app is currently only available on iOS devices. If you don’t have an iOS device, only use the official OpenAI website to access the tool. This helps to eliminate the potential risk of downloading fraudulent or malicious software.

ChatGPT conversations are generally used to train future models and to resolve issues/bugs. These chats may be monitored by human AI trainers.

However, users can opt out of having their conversations used for training. In these instances, chats are monitored only for potential abuse.

Yes, using ChatGPT as a conversation partner is a great way to practice a language in an interactive way.

Try using a prompt like this one:

“Please be my Spanish conversation partner. Only speak to me in Spanish. Keep your answers short (maximum 50 words). Ask me questions. Let’s start the conversation with the following topic: [conversation topic].”

Yes, there are a variety of ways to use ChatGPT for language learning, including treating it as a conversation partner, asking it for translations, and using it to generate a curriculum or practice exercises.

AI detectors aim to identify the presence of AI-generated text (e.g., from ChatGPT) in a piece of writing, but they can’t do so with complete accuracy. In our comparison of the best AI detectors, we found that the 10 tools we tested had an average accuracy of 60%. The best free tool had 68% accuracy, the best premium tool 84%.

Because of how AI detectors work, they can never guarantee 100% accuracy, and there is always at least a small risk of false positives (human text being marked as AI-generated). Therefore, these tools should not be relied upon to provide absolute proof that a text is or isn’t AI-generated. Rather, they can provide a good indication in combination with other evidence.

Tools called AI detectors are designed to label text as AI-generated or human. AI detectors work by looking for specific characteristics in the text, such as a low level of randomness in word choice and sentence length. These characteristics are typical of AI writing, allowing the detector to make a good guess at when text is AI-generated.

But these tools can’t guarantee 100% accuracy. Check out our comparison of the best AI detectors to learn more.

You can also manually watch for clues that a text is AI-generated – for example, a very different style from the writer’s usual voice or a generic, overly polite tone.
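
As a rough sketch of the “variation in sentence length” signal mentioned above, here is a toy heuristic in Python. This only illustrates the general idea; it is not how any particular commercial detector is implemented:

    import re
    import statistics

    def sentence_length_variation(text):
        # Split on sentence-ending punctuation (a crude approximation).
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        # Human writing tends to mix short and long sentences, so a very
        # low standard deviation *may* hint at machine-generated text.
        return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    print(sentence_length_variation(
        "This is short. This sentence, by contrast, rambles on for quite "
        "a while. Then short again."
    ))

Real detectors combine many such signals (and trained models on top of them), which is exactly why their verdicts are probabilistic rather than definitive.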

Our research into the best summary generators (aka summarisers or summarising tools) found that the best summariser available in 2023 is the one offered by QuillBot.

While many summarisers just pick out some sentences from the text, QuillBot generates original summaries that are creative, clear, accurate, and concise. It can summarise texts of up to 1,200 words for free, or up to 6,000 with a premium subscription.


Deep learning requires a large dataset (e.g., images or text) to learn from. The more diverse and representative the data, the better the model will learn to recognise objects or make predictions. Only when the training data is sufficiently varied can the model make accurate predictions or recognise objects from new data.

Deep learning models can be biased in their predictions if the training data consist of biased information. For example, if a deep learning model used for screening job applicants has been trained with a dataset consisting primarily of white male applicants, it will consistently favour this specific population over others.
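
A deliberately oversimplified sketch of how skewed training data carries into predictions: the “model” below just memorises the most common outcome per group. The numbers are invented; real deep learning models are far more sophisticated but can inherit exactly this kind of skew:

    from collections import Counter

    # Toy "training set" of past hiring decisions, heavily skewed.
    training = ([("male", "hired")] * 90
                + [("female", "hired")] * 2
                + [("female", "rejected")] * 8)

    # The majority outcome observed for each group becomes the "prediction".
    outcomes = {}
    for group, outcome in training:
        outcomes.setdefault(group, Counter())[outcome] += 1

    predictions = {g: c.most_common(1)[0][0] for g, c in outcomes.items()}
    print(predictions)  # {'male': 'hired', 'female': 'rejected'}

Because the model only ever saw a skewed sample, it reproduces that skew; more diverse and representative training data is the main remedy.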

A good ChatGPT prompt (i.e., one that will get you the kinds of responses you want):

  • Gives the tool a role to explain what type of answer you expect from it
  • Is precisely formulated and gives enough context
  • Is free from bias
  • Has been tested and improved by experimenting with the tool
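
For example, a prompt that follows these principles might look like this (an invented illustration):

“You are an experienced academic writing tutor. In no more than five bullet points, give feedback on the clarity and structure of the following paragraph. Do not comment on the ideas themselves: [paragraph].”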

ChatGPT prompts are the textual inputs (e.g., questions, instructions) that you enter into ChatGPT to get responses.

ChatGPT predicts an appropriate response to the prompt you entered. In general, a more specific and carefully worded prompt will get you better responses.

Yes, ChatGPT is currently available for free. You have to sign up for a free account to use the tool, and you should be aware that your data may be collected to train future versions of the model.

To sign up and use the tool for free, go to this page and click “Sign up”. You can do so with your email or with a Google account.

A premium version of the tool called ChatGPT Plus is available as a monthly subscription. It currently costs £16 and gets you access to features like GPT-4 (a more advanced version of the language model). But it’s optional: you can use the tool completely free if you’re not interested in the extra features.

You can access ChatGPT by signing up for a free account:

  • Follow this link to the ChatGPT website.
  • Click on “Sign up” and fill in the necessary details (or use your Google account). It’s free to sign up and use the tool.
  • Type a prompt into the chat box to get started!

A ChatGPT app is also available for iOS, and an Android app is planned for the future. The app works similarly to the website, and you log in with the same account for both.

According to OpenAI’s terms of use, users have the right to reproduce text generated by ChatGPT during conversations.

However, publishing ChatGPT outputs may have legal implications, such as copyright infringement.

Users should be aware of such issues and use ChatGPT outputs as a source of inspiration instead.

According to OpenAI’s terms of use, users have the right to use outputs from their own ChatGPT conversations for any purpose (including commercial publication).

However, users should be aware of the potential legal implications of publishing ChatGPT outputs. ChatGPT responses are not always unique: different users may receive the same response.

Furthermore, ChatGPT outputs may contain copyrighted material. Users may be liable if they reproduce such material.

ChatGPT can sometimes reproduce biases from its training data, since it draws on the text it has “seen” to create plausible responses to your prompts.

For example, users have shown that it sometimes makes sexist assumptions such as that a doctor mentioned in a prompt must be a man rather than a woman. Some have also pointed out political bias in terms of which political figures the tool is willing to write positively or negatively about and which requests it refuses.

The tool is unlikely to be consistently biased toward a particular perspective or against a particular group. Rather, its responses are based on its training data and on the way you phrase your ChatGPT prompts. It’s sensitive to phrasing, so asking it the same question in different ways will result in quite different answers.

Information extraction refers to the process of starting from unstructured sources (e.g., text documents written in ordinary English) and automatically extracting structured information (i.e., data in a clearly defined format that’s easily understood by computers). It’s an important concept in natural language processing (NLP).

For example, you might think of using news articles full of celebrity gossip to automatically create a database of the relationships between the celebrities mentioned (e.g., married, dating, divorced, feuding). You would end up with data in a structured format, something like MarriageBetween(celebrity1, celebrity2, date).

The challenge involves developing systems that can “understand” the text well enough to extract this kind of data from it.
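
As a minimal sketch of the idea, the hand-written pattern below pulls that kind of structured record out of free text. Real information extraction systems rely on trained NLP models rather than a single regular expression; the names and pattern here are invented for illustration:

    import re

    text = ("Celebrity A married Celebrity B in 2018. "
            "Meanwhile, Celebrity C married Celebrity D in 2021.")

    # Naive pattern: "<name> married <name> in <year>"
    pattern = r"(Celebrity [A-Z]) married (Celebrity [A-Z]) in (\d{4})"

    # Each match becomes a structured record: MarriageBetween(x, y, date)
    records = [("MarriageBetween",) + m for m in re.findall(pattern, text)]
    print(records)
    # [('MarriageBetween', 'Celebrity A', 'Celebrity B', '2018'),
    #  ('MarriageBetween', 'Celebrity C', 'Celebrity D', '2021')]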

Knowledge representation and reasoning (KRR) is the study of how to represent information about the world in a form that can be used by a computer system to solve and reason about complex problems. It is an important field of artificial intelligence (AI) research.

An example of a KRR application is a semantic network, a way of grouping words or concepts by how closely related they are and formally defining the relationships between them so that a machine can “understand” language in something like the way people do.

A related concept is information extraction, concerned with how to get structured information from unstructured sources.
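
A very small sketch of a semantic network in Python (invented example relations; real KRR systems use much richer formalisms, such as description logics):

    # Each edge states a relation between two concepts.
    network = {
        ("canary", "is_a"): "bird",
        ("bird", "is_a"): "animal",
        ("bird", "can"): "fly",
    }

    def is_a(concept, category):
        # Follow "is_a" links upward to answer "is X a kind of Y?"
        while concept is not None:
            if concept == category:
                return True
            concept = network.get((concept, "is_a"))
        return False

    print(is_a("canary", "animal"))  # True

Grouping concepts by formally defined relations like this is what lets a machine answer questions that were never stated explicitly in the data.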

Yes, you can use ChatGPT to summarise text. This can help you understand complex information more easily, summarise the central argument of your own paper, or clarify your research question.

You can also use Scribbr’s free text summariser, which is designed specifically for this purpose.

Yes, you can use ChatGPT to paraphrase text to help you express your ideas more clearly, explore different ways of phrasing your arguments, and avoid repetition.

However, it’s not specifically designed for this purpose. We recommend using a specialised tool like Scribbr’s free paraphrasing tool, which will provide a smoother user experience.

Yes, you can use ChatGPT to help write your college essay by having it generate feedback on certain aspects of your work (consistency of tone, clarity of structure, etc.).

However, ChatGPT is not able to adequately judge qualities like vulnerability and authenticity. For this reason, it’s important to also ask for feedback from people who have experience with college essays and who know you well. Alternatively, you can get advice using Scribbr’s essay editing service.

No, having ChatGPT write your college essay can negatively impact your application in numerous ways. ChatGPT outputs are unoriginal and lack personal insight.

Furthermore, passing off AI-generated text as your own work is considered academically dishonest. AI detectors may be used to detect this offense, and it’s highly unlikely that any university will accept you if you are caught submitting an AI-generated admission essay.

However, you can use ChatGPT to help write your college essay during the preparation and revision stages (e.g., for brainstorming ideas and generating feedback).

ChatGPT and other AI writing tools can have unethical uses. These include:

  • Reproducing biases and false information
  • Using ChatGPT to cheat in academic contexts
  • Violating the privacy of others by inputting personal information

However, when used correctly, AI writing tools can be helpful resources for improving your academic writing and research skills. Some ways to use ChatGPT ethically include:

  • Following your institution’s guidelines
  • Critically evaluating outputs
  • Being transparent about how you used the tool


Our APA experts default to APA 7 for editing and formatting. For the Citation Editing Service, you can choose between APA 6 and APA 7.

Yes, if your document is longer than 20,000 words, you will get a sample of approximately 2,000 words. This sample edit gives you a first impression of the editor’s editing style and a chance to ask questions and give feedback.

How does the sample edit work?

You will receive the sample edit within 24 hours after placing your order. You then have 24 hours to let us know if you’re happy with the sample or if there’s something you would like the editor to do differently.

Read more about how the sample edit works

Yes, you can upload your document in sections.

We try our best to ensure that the same editor checks all the different sections of your document. When you upload a new file, our system recognizes you as a returning customer, and we immediately contact the editor who helped you before.

However, we cannot guarantee that the same editor will be available. Your chances are higher if

  • You send us your text as soon as possible and
  • You can be flexible about the deadline.

Please note that the shorter your deadline is, the lower the chance that your previous editor is available.

If your previous editor isn’t available, then we will inform you immediately and look for another qualified editor. Fear not! Every Scribbr editor follows the Scribbr Improvement Model and will deliver high-quality work.

Yes, our editors also work during the weekends and holidays.

Because we have many editors available, we can check your document 24 hours per day and 7 days per week, all year round.

If you choose a 72 hour deadline and upload your document on a Thursday evening, you’ll have your thesis back by Sunday evening!

Yes! Our editors are all native speakers, and they have lots of experience editing texts written by ESL students. They will make sure your grammar is perfect and point out any sentences that are difficult to understand. They’ll also notice your most common mistakes, and give you personal feedback to improve your writing in English.

Every Scribbr order comes with our award-winning Proofreading & Editing service, which combines two important stages of the revision process.

For a more comprehensive edit, you can add a Structure Check or Clarity Check to your order. With these building blocks, you can customize the kind of feedback you receive.

You might be familiar with a different set of editing terms. To help you understand what you can expect at Scribbr, we created this table:

Types of editing – Available at Scribbr?

  • Proofreading: This is the “proofreading” in Scribbr’s standard service. It can only be selected in combination with editing.
  • Copy editing: This is the “editing” in Scribbr’s standard service. It can only be selected in combination with proofreading.
  • Line editing: Select the Structure Check and Clarity Check to receive a comprehensive edit equivalent to a line edit.
  • Developmental editing: This kind of editing involves heavy rewriting and restructuring. Our editors cannot help with this.


When you place an order, you can specify your field of study and we’ll match you with an editor who has familiarity with this area.

However, our editors are language specialists, not academic experts in your field. Your editor’s job is not to comment on the content of your dissertation, but to improve your language and help you express your ideas as clearly and fluently as possible.

This means that your editor will understand your text well enough to give feedback on its clarity, logic and structure, but not on the accuracy or originality of its content.

Good academic writing should be understandable to a non-expert reader, and we believe that academic editing is a discipline in itself. The research, ideas and arguments are all yours – we’re here to make sure they shine!

After your document has been edited, you will receive an email with a link to download the document.

The editor has made changes to your document using ‘Track Changes’ in Word. This means that you only have to accept or reject the changes made in the text one by one.

It is also possible to accept all changes at once. However, we strongly advise you not to do so for the following reasons:

  • You can learn a lot by looking at the mistakes you made.
  • The editors don’t only change the text – they also place comments when sentences or sometimes even entire paragraphs are unclear. You should read through these comments and take into account your editor’s tips and suggestions.
  • With a final read-through, you can make sure you’re 100% happy with your text before you submit!

You choose the turnaround time when ordering. We can return your dissertation within 24 hours, 3 days or 1 week. These timescales include weekends and holidays. As soon as you’ve paid, the deadline is set, and we guarantee to meet it! We’ll notify you by text and email when your editor has completed the job.

Very large orders might not be possible to complete in 24 hours. On average, our editors can complete around 13,000 words in a day while maintaining our high quality standards. If your order is longer than this and urgent, contact us to discuss possibilities.

Always leave yourself enough time to check through the document and accept the changes before your submission deadline.

Scribbr specialises in editing study-related documents. We check:

  • Graduation projects
  • Dissertations
  • Admissions essays
  • College essays
  • Application essays
  • Personal statements
  • Process reports
  • Reflections
  • Internship reports
  • Academic papers
  • Research proposals
  • Prospectuses


The fastest turnaround time is 24 hours.

You can upload your document at any time and choose between four deadlines.

At Scribbr, we promise to make every customer 100% happy with the service we offer. Our philosophy: Your complaint is always justified – no denial, no doubts.

Our customer support team is here to find the solution that helps you the most, whether that’s a free new edit or a refund for the service.

Yes, in the order process you can indicate your preference for American, British, or Australian English.

If you don’t choose one, your editor will follow the style of English you currently use. If your editor has any questions about this, we will contact you.


TAMPEREEN AMMATTIKORKEAKOULU – UNIVERSITY OF APPLIED SCIENCES, BUSINESS SCHOOL

FINAL THESIS REPORT

DEVELOPING MASTER SCHEDULE TEMPLATE FOR CAPITAL PROJECTS

Case Metso Power Finland

Susanna Koivisto

Degree Programme in International Business
May 2010

Kai Hintsanen

TAMPERE 2010

Author: Susanna Koivisto
Title of Thesis: Developing Master Schedule Template for Capital Projects
Degree Programme: International Business
Month and Year: May 2010
Supervisor: Kai Hintsanen
Number of Pages: 35

The purpose of this Final Thesis was to develop the scheduling template used in creating executive project schedules in the case company. The objective of the development work was to create a functional and coherent schedule template based on the case company’s Work Breakdown Structure. In this way, the schedule is also connected to the global management system implemented in the case company.

Scheduling is tightly linked to other project management areas. To gain a deeper understanding of scheduling, the other areas were also considered; project management is therefore treated as a whole and scheduling as a part of it. Project management areas such as the project life cycle, planning and scheduling, risk and opportunity management, cost management, project control and closeout were investigated further.

Based on the theory, the current working methods in the case company are introduced. The final chapters concentrate on the development work itself. The current schedule template was examined and the features of the new schedule template are introduced. The flow of the development work is also described, and suggestions for future development are listed. The working methods of the case company and the development work itself are confidential and therefore not included in the public version of the Final Thesis.

Keywords: project management, schedule management, schedule development, schedule template, work breakdown structure

Table of Contents

Abstract
Abbreviations
1. Introduction
  1.1 Background
  1.2 Research Objectives
  1.3 Research Methods
  1.4 Structure of the Research
2. Project Management
  2.1 Project Organisation
  2.2 Project Phases
  2.3 Planning
    2.3.1 Project Scope
    2.3.2 Work Breakdown Structure (WBS)
    2.3.3 Activity Definition
    2.3.4 Developing Networks
  2.4 Scheduling
    2.4.1 Activity Duration Estimation
    2.4.2 Activity Resource Estimation
    2.4.3 Gantt Chart
    2.4.4 Computer Software Programmes
  2.5 Risk and Opportunity Management
  2.6 Cost Management
  2.7 Project Evaluation and Control
    2.7.1 Reviews
    2.7.2 Tracking Gantt
    2.7.3 Milestone Analysis
    2.7.4 S-curve Analysis
  2.8 Project Closeout and Termination
3. Company Overview
  3.1 Metso Corporation
  3.2 Metso Power
4. Conclusions
Bibliography

Abbreviations

WBS – Work Breakdown Structure: a tool for breaking down the project scope into smaller, more manageable pieces of work to meet the project objectives; also a graphic description of the project scope.

ERP – Enterprise Resource Planning: a computer-based system to manage resources, finances, materials and human resources.

DOR – Division of Responsibility: a document used by the case company for dividing project responsibilities by person or organisation.

PEM – Project Execution Model: a tool for monitoring project progress in the case company.

R&O Register – Risk and Opportunity Register: a document for controlling risks and opportunities during projects in the case company.

1. Introduction

The main focus of this thesis is to develop the project schedule template for the case company. Scheduling is only one piece of a bigger picture, which is project management. To gain a deeper understanding of scheduling, the other parts involved in project management also have to be looked at and taken into consideration. Therefore, in this thesis project management is treated as a whole and scheduling as a part of it.

Scheduling improvement is currently one of the top-priority development issues in the case company, Metso’s Power business line. This includes developing the scheduling tool, the Master Schedule Template for Capital Projects, and the scheduling methods. Metso Power is an international company with locations in Finland, Sweden, Brazil and the USA. The Master Schedule Template is currently implemented in Finland, but the development work was performed with the Metso global management system in mind. The development ideas and the Master Schedule Template have been introduced to all the company locations.

1.1 Background

In 2009, a Master of Science thesis on scheduling in a multiproject environment was written for Metso Power. The thesis analysed the current scheduling methods in all the Metso Power global locations in order to find a global framework for scheduling and suggestions for best practices. As a result, the scheduling process was developed to a more detailed level where roles and responsibilities were defined. Suggestions for the functionality and framework of the schedule tool were made. The structure recommendation for the scheduling tool was based on the Metso Power global WBS. This thesis continues from these suggestions to carry out the development work on the scheduling tool, the Master Schedule Template.

1.2 Research Objectives

The purpose of this final thesis is to develop the Master Schedule Template for the Capital Projects business unit in the case company, Metso Power Finland. The aim is to develop the Master Schedule Template into a functional and coherent scheduling tool for projects. The schedule is developed away from separate departmental discipline schedules towards a schedule where the chain of activities can be cross-checked based on the company WBS logic. In this way, the project schedule can follow the same logic as risk and opportunity, cost and scope management.

1.3 Research Methods

The research was mainly conducted using the qualitative research method. Theoretical data and ideas were gathered from secondary literary sources dealing with project management. The current situation in the case company, Metso, was established through internal material and observation of daily routines and ongoing projects.

In the actual case study development work, the action research method was used. The input from the departmental disciplines in the Capital Projects business unit was vital. Based on their knowledge and experience, information was gathered through discussions and put together to find the best solutions. The reason for collecting input from the departmental disciplines was that people from these different disciplines form the project team. It was important to have their input and knowledge, since the team members are the end users of the project schedules created from the Master Schedule Template.

1.4 Structure of the Research

The final thesis consists of four parts. The first part deals with the theoretical literature discussing project management. The development work in the thesis focuses on schedule development, but since scheduling is only one part of the project, other important parts are also studied, as these different parts together form the project as a whole and greatly affect each other. The second and third parts cover an overview of the case company and how project management is currently executed. The last part goes deeper into the development work behind the Master Schedule Template. Problems with the current scheduling tool are introduced and solutions to these problems are presented. The development process is introduced, as well as the methods of implementation. In the end, the findings are analysed, conclusions drawn and suggestions listed for the future.

2. Project Management

Lewis (2002) defines the word project as a “multitask job that has performance, time, cost, and scope requirements and that is done only one time”. He continues that a project has a definite starting point as well as an ending point, and a temporary team that will be disbanded after the project ends. The PMBOK Guide (2004, 5) further defines that a project is “undertaken to create a unique product, service, or result.”

There are four project constraints: time, budget, scope and performance requirements. All of these constraints are dependent on each other and have to be in balance for the project to succeed. Only three of the constraints can be assigned values; the remaining one has to be determined by the project team. For example, the customer or project sponsor can define a certain timeframe, scope and performance level for the project. From there, the project manager or the project team can determine the costs. Being realistic at this stage is very important, since committing to too tight a schedule or budget might result in disaster later on. (Lewis 2002, 7-8)

At the macro level there are two types of organisations: project-based and non-project-based. In project-based organisations everything is focused around projects. Each project has its own profit and loss statement, and the organisation’s profit is the sum of the profits of all the projects. (Kerzner 2006, 20) There are two categories of project-based organisations: organisations that get their revenue primarily from performing projects under contract for other organisations, and organisations that have adopted management by projects. In the latter, the organisation’s management systems are designed specifically to facilitate project management. (PMI 2004, 27) In non-project-based organisations, projects are performed to support the product or functional lines (Kerzner 2006, 20). Non-project-based organisations often lack management systems that facilitate project management effectively and efficiently (PMI 2004, 27).

2.1 Project Organisation

A project organisation is an organisation created for the purpose of executing a project. The number of people in the project organisation may vary over the different phases of the project. Projects often vary in size and character, and therefore the composition and emphasis of the project organisation also vary between projects. (Pelin 2008, 65)

In organisations with multiple projects, a management team is created to make the essential project decisions, define the project and decide on the project manager. The management team consists of senior managers who regularly review the current situation of all the ongoing projects. At these reviews, any conflicts between projects, for example over resources or finances, are identified and resolved objectively. (Pelin 2008, 66)

The project manager holds the main responsibility for the planning, execution and control of the project. In smaller projects, the project manager is the main resource for the project. In multi-year projects, the best solution is to create a project organisation where the essential resources are subordinate to the project manager. The key to the project manager’s success is creating the right project team. (Pelin 2008, 66-69)

To create an effective project team, a great deal of effort goes into finding the right people and developing them into a functional and collectively performing team. The ideal situation in creating a project team is one where people themselves express an interest in taking part and are awarded a place in the team. Unfortunately, in reality, people in many organisations are often chosen simply because they are available. However the team is built, it is a challenge for the project manager to build these different individuals into an effective and united project team. (Pinto 2007, 183)

The project manager needs to approach the people he or she would like to have in the project team. Sometimes personnel have the authority to assign their own time to projects, but most of the time these people are under the authority of a departmental head. The latter case can mean that the project manager has to negotiate with the departmental manager over the use of their staff. The final step is to assemble the project team and check that all the necessary skills have been acquired. (Pinto 2007, 183)

One of the key factors in a successful project is a mutually understood and clear project mission. All project members need to understand the project objectives and how they can contribute to achieving them. Enthusiasm and a positive attitude are strengthened when the project team is encouraged to believe that the goals are attainable by working together towards them. (Pinto 2007, 186-187)

All project team members need a reason for their contribution to the project. Projects often compete with team members’ other duties, and managers need to make resources and sources of organisational reward available in order for the team members to devote time and energy to furthering the project’s goals. A sense of interdependency is vital among team members. It is important to know not only how a team member’s own contribution affects the project, but also how that work fits into the overall scheme and into the work of team members in other departments. (Pinto 2007, 186-187)

Participation from the project team in the planning process is extremely important, especially for the people who will be involved in performing the detailed activities, since they usually have the best knowledge about these activities. Commitment comes through participation, so taking part in the planning stage is of consequence. (Gido & Clements 2007)

2.2 Project Phases

Projects are divided into phases from the beginning of the project to the end in order to gain better management control. Many organisations define specific phases that together form the project’s life cycle and use this life cycle for all of their projects. Project phase descriptions can be extremely detailed or, at the other extreme, very general. Detailed descriptions can include charts, forms and checklists to create control and structure. (PMI 2004, 19-20)

There is no single way of defining a project life cycle. The different project phases generally define what work is to be performed and when the deliverables are generated. Phases are usually sequential, and the amount of work and resources required is low in the initial phase, peaks during the intermediate phases and drops dramatically in the final phase. The level of uncertainty is also highest at the beginning, when the risk of failing to achieve the project objectives is greatest. (PMI 2004, 20-21)

In large projects in particular, the project phases are often divided into subphases for reasons of complexity, level of risk and financial constraints. Each of these subphases consists of deliverables related to the primary phase deliverable. Phase deliverables are measurable, verifiable pieces of work, such as a detailed design document, a specification or a working prototype. The deliverables can correspond to the project management process or to the end product or a component of it. (PMI 2004, 22)

Throughout the project, the project manager needs to demonstrate to executive management that the project has clear objectives and that the work is carried out as planned. A system of phase gates between the different phases of a project offers review points at which to evaluate project status and progress. Each phase gate, if opened, allows the work to continue into the next phase. The decision to open a phase gate is made after reviewing the current progress and possible slippages, the current risks, the budget and the available resources. Occasionally it is necessary to make recommendations or revisions to current plans before proceeding to the next phase, or even to cancel the work. (Young 2007, 26-28)

2.3 Planning

“Failing to plan is planning to fail.” (Kerzner 2006, 396) Planning a project means establishing a predefined plan of action in an environment that is characterized by estimation and uncertainty. Project planning must be systematic, flexible, disciplined and a continuous process throughout the duration of the entire project. Good planning reduces uncertainty, improves efficiency and provides tools to control and monitor the project. Planning answers the questions of what must be done and how. (Kerzner 2006, 396-398)

2.3.1 Project Scope

The project scope includes the work that is required to complete the project successfully and to meet the requirements for the deliverables set at the onset of the project. (Gido & Clements 2003, 6) The scope, so to speak, sets the boundaries for the project: what is included and what is not included in the project. (PMI 2004, 103) The project scope also contains constraints and limitations as well as project goals. (Pinto 2007, 147)

2.3.2 Work Breakdown Structure

The Work Breakdown Structure (WBS) is a graphic description of the project scope, broken down hierarchically into smaller, more manageable pieces of work that together meet the project objectives. (PMI 2004, 112) The lowest-level components of the WBS are work packages. The WBS answers the question “What has to be done in the project?” (Haugan 2002, 13)

The work can be broken down into different levels. According to Lewis (2002, 47) three to five levels in a WBS are typically sufficient, but the amount of levels can vary greatly depending on the size and complexity of the project. Not all of the paths have to end up on the same level: sometimes breaking the work down to three levels is sufficient, while another work package needs up to five levels to reach the accuracy desired. Breaking the work down into too many levels and too much detail may lead to unproductive management and inefficient use of resources, and finding the right level can prove to be difficult. The project team needs to find the appropriate balance between too little and too much. (PMI 2004, 112)

Figure 1: Sample WBS (modified from Lewis 2002)


The deliverables and subprojects are summed up from the work of the work packages supporting them. When deliverables or subprojects are divided into work packages, the deliverables or subprojects do not have a duration of their own, do not have assigned costs and do not spend any resources. All the resources and costs of a deliverable or subproject come from the work packages supporting it. (Pinto 2007, 157-158)

The main reason for the WBS structure is to identify and ensure that all relevant work packages are included in order to successfully carry out the project. “The 100 percent rule” states that the sum of the work at the next WBS level must be 100 percent of the work represented at the previous level. This means that the work represented by the work packages in each deliverable or subproject must add up to 100 percent of the work it takes to complete the deliverable or subproject. The purpose of this rule is to raise the question of whether any work is missing from the WBS. (Haugan 2002, 18)
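To make the rule concrete, the following minimal Python sketch checks a small hypothetical WBS (the structure and work figures are invented for illustration, not taken from the thesis material):

    # Minimal sketch of the 100 percent rule: the work of the children of a
    # WBS element must add up to the work of the element itself. The WBS and
    # its work figures below are hypothetical.

    wbs = {
        "1":     {"work": 100, "children": ["1.1", "1.2"]},
        "1.1":   {"work": 60,  "children": ["1.1.1", "1.1.2"]},
        "1.1.1": {"work": 35,  "children": []},
        "1.1.2": {"work": 25,  "children": []},
        "1.2":   {"work": 40,  "children": []},
    }

    def check_100_percent_rule(wbs):
        # Return (code, expected, actual) for every violating parent element.
        violations = []
        for code, element in wbs.items():
            if element["children"]:
                child_work = sum(wbs[c]["work"] for c in element["children"])
                if child_work != element["work"]:
                    violations.append((code, element["work"], child_work))
        return violations

    for code, expected, actual in check_100_percent_rule(wbs):
        print(f"WBS element {code}: children sum to {actual}, expected {expected}")

With the figures above the check passes silently; changing any single work figure immediately flags the corresponding parent element.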

The WBS should be created before the schedule. A WBS does not contain the sequence of the work packages; sequencing is done later in the scheduling process. The WBS shows the scope of the project in a graphic form, allowing resource allocation as well as time and cost estimates. (Lewis 2002, 49) Lewis also writes that it is misleading to develop a schedule before all work packages have been identified and agreed on by the project team.

Projects are often unique, but a previous WBS can be used as a template for a new, similar project. Many large organizations have similar project life cycles with similar deliverables required from the different phases of the project, and thus have a standard WBS template which is used in new projects. (PMI 2004, 113)

All the different components in the WBS are assigned a unique identifying numeric code. (Pinto 2007, 157) The numbering can follow any desired method or logic, but it has to be consistent throughout the entire WBS. This numeric code shows where each component fits in the project's overall hierarchy and distinguishes the components from each other. The WBS code helps with scheduling, tracking, assigning and communicating throughout the project. (Haugan 2002)
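As an illustration of one possible consistent numbering logic, the sketch below derives hierarchical codes of the common 1, 1.1, 1.1.1 style from a nested structure (the element names are examples only):

    # Hypothetical sketch of one consistent numbering logic: hierarchical
    # codes of the 1, 1.1, 1.1.1 style derived from a nested structure.

    def assign_codes(elements, prefix=""):
        # Walk nested (name, children) pairs and yield (code, name) so that
        # every component carries its place in the overall hierarchy.
        for i, (name, children) in enumerate(elements, start=1):
            code = f"{prefix}.{i}" if prefix else str(i)
            yield code, name
            yield from assign_codes(children, code)

    project = [
        ("Engineering", [
            ("Basic Engineering", []),
            ("Detail Engineering", []),
        ]),
        ("Manufacturing", []),
    ]

    for code, name in assign_codes(project):
        print(code, name)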

Also the cost accounting function can use the WBS codes to allocate costs more precisely to their budget costs. (Pinto 2007)

The WBS can have a number of different structures or categorisations due to different situations and the cultures of the project stakeholders. Commonly the following basic structures, or a combination of several of these structures, are used. (Pelin 2008, 95)

The WBS can be divided into the different subprojects of the project, so that the project consists of several different subprojects on level 2: Project A, Project B, Project C, etc. The WBS can also be divided into the successive phases of the project, such as for example preliminary planning, execution and implementation. A third option is to itemise the WBS into the different systems delivered in the project, for example the air system and the fuel system; a system can cut across the project components horizontally. (Pelin 2008, 95)

The WBS can further be broken down into the physical parts of the project: a power plant, for example, consists of different products such as the boiler, fuel handling and flue gas cleaning. In a large project the WBS can be first broken down into geographically separate parts and then further divided into different parts of the building, equipment, etc. Finally, the WBS can be divided into the organisational units in the project, such as engineering, manufacturing, erection and commissioning. (Pelin 2008, 95)

2.3.3 Activity Definition

In the activity definition process the WBS work package deliverables are further broken

down into smaller schedule activities that can be scheduled and monitored during the project

duration. (PMI 2004, 127-128) In large multiyear projects with thousands of people, the top-level activities are usually created by a core group working on the initial planning. Other team members will then further develop these levels and break them down into lower-level activities.

(Lewis 2002, 49) The activity definition process answers the question “How will the project

be accomplished?” (Haugan 2002, 13)

The short-duration activities have a definite start and finish time, have costs assigned and

spend resources. (Pinto 2007, 156) A single person or a discipline within the organization is

responsible for the work described in the activity. (Haugan 2002, 36) These activities are not

a part of the actual WBS structure but the structure offers a framework for defining these

activities for the project. (Haugan 2002, 4)

Activity lists from similar projects in the past or a standard list can be used as a template for

new projects. The template can also include further information on resource skills and the

requisite hours of effort, reference to risks and possible other characteristic information

needed in activity definition. (PMI 2004, 128)

Rolling Wave Planning is a form of gradual planning where the work that is performed in

the near future is planned on a detailed low level of the WBS and work far in the future is

planned on a more general WBS level higher up. As the project progresses, work is planned

in more detail for the next one or two reporting periods. This means that schedule activities

can appear in different detail levels throughout the life cycle of the project. (PMI 2004, 128)

2.3.4 Developing Networks

The WBS does not show the sequence of activities; a network diagram can be prepared once all the activities are known. The two most commonly used methods for

creating activity networks are the Activity-on-Node (AON) and the Activity-on-Arrow

(AOA) logic. (Pinto 2007, 284) The AOA logic was commonly used several decades ago, but nowadays, because of computer-based scheduling programs, the AON logic has become the preferred method. (Pinto 2007, 285)

With these two methods the activities can be placed in their logical precedential order.

According to Gido & Clements, in order to find the precedential order for each individual

activity you should ask the following questions:

1. Which activities have to be finished immediately before the start of this activity?

2. Which activities can be performed at the same time with this activity?

3. Which activities cannot start before this activity has finished?

By answering these questions you are able to place each activity in its right place in the

network diagram portraying the interrelationship and sequence between the activities needed

to accomplish the project. (Gido & Clements 2003, 116) If a WBS has been developed for

the project, there should be activities in the network diagram for each work package. (Gido

& Clements 2003, 116)

In the Activity-on-Node (AON) logic each activity is written within a box, and each activity node contains a unique activity number. The node can also include the following information: activity descriptor, activity duration, early start time, early finish time, late start time, late finish time and activity float. (Pinto 2007, 285) Activity float, or slack, is the time that an activity can be delayed from its early start without delaying the finish of the whole project. (Pinto 2007, 284) The more information is included in the node, the easier calculations such as identifying the critical path, activity float and total project duration become. (Pinto 2007)

Activities have relationships and they are linked in a precedential order to display which

activities are to be finished before starting another activity. Arrows linking the boxes show

the direction of the precedential order. (Gido & Clements 2003, 110-111)

Some of the activities are to be done in serial order, where a preceding activity has to be finished before the subsequent activity can start. For example, when designing a product

the activity “Detail Engineering” can start only after activity “Basic Engineering” is

finished. (Gido & Clements 2003, 111)
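The early and late times, float and critical path mentioned above can be illustrated with a short forward and backward pass over a small AON network. The durations below are hypothetical, and the activity dictionary is assumed to be listed in a valid precedence (topological) order:

    # Minimal AON sketch (hypothetical durations): a forward and a backward
    # pass compute early/late start and finish times, activity float and the
    # critical path. The dict is assumed to be in topological order.

    activities = {                      # activity -> (duration, predecessors)
        "Basic Engineering":  (3, []),
        "Layout Design":      (2, ["Basic Engineering"]),
        "Detail Engineering": (4, ["Basic Engineering"]),
        "Manufacturing":      (5, ["Layout Design", "Detail Engineering"]),
    }

    # Forward pass: early start (ES) and early finish (EF).
    es, ef = {}, {}
    for act, (dur, preds) in activities.items():
        es[act] = max((ef[p] for p in preds), default=0)
        ef[act] = es[act] + dur
    project_duration = max(ef.values())

    # Backward pass: late finish (LF) and late start (LS).
    ls, lf = {}, {}
    for act in reversed(list(activities)):
        succs = [a for a, (_, preds) in activities.items() if act in preds]
        lf[act] = min((ls[s] for s in succs), default=project_duration)
        ls[act] = lf[act] - activities[act][0]

    # Float (slack): how long an activity can slip without delaying the project.
    for act in activities:
        slack = ls[act] - es[act]
        label = "critical" if slack == 0 else f"float {slack}"
        print(f"{act}: ES={es[act]} EF={ef[act]} LS={ls[act]} LF={lf[act]} ({label})")

With these numbers "Layout Design" carries two days of float, while "Basic Engineering", "Detail Engineering" and "Manufacturing" form the critical path of a 12-day project.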

Figure 5: Activity-on-Node logic, activities performed serially

Some activities can be done at the same time. For example, after the activity “Basic Engineering” has been finished, the activities “Detail Engineering” and “Layout Design” can be performed concurrently, and “Manufacturing” can start when both of them are finished. When activities are performed concurrently, there must be sufficient resources to perform all simultaneous activities. (Gido & Clements 2003, 112)

Figure 6: Activity-on-Node logic, activities performed concurrently

In the Activity-on-Arrow logic, activities are written on the arrows instead of in nodes like in the Activity-on-Node logic. Each activity is represented by an arrow, where the tail of the arrow signifies the start of the activity and the arrowhead represents the end of the activity. The length of the arrow does not indicate the duration of the activity nor implicate anything about its importance. (Gido & Clements 2003, 111) The activities are linked by events: activities finish in these events and start from them. The events are drawn as circles, and each event has a unique number. (Gido & Clements 2003, 112) For example, the event number 2 can signify both the end of activity “Basic Engineering” and the start of activity “Detail Engineering”.

Figure 7: Activity-on-Arrow logic, activities performed serially

Activities going into an event must be finished before activities going out of the event can start. (Gido & Clements 2003, 112) For example, the activities “Layout Design” and “Detail Engineering” both have to be finished before the activity “Manufacturing” can start.

Figure 8: Activity-on-Arrow logic, activities performed concurrently

2.4 Scheduling

In the planning section the activities were defined and their sequencing decided to reach the project objectives, and the plan was portrayed in graphical form by the network. Now the scheduling of the project can begin. Scheduling answers the questions of when and by whom the work will be performed. (Haugan 2002, 13)

2.4.1 Activity Duration Estimation

The scheduling process starts with estimating how long each activity will take to complete. (Gido & Clements 2003, 144) The activity duration estimate signifies the total elapsed time, which means the time for the work itself plus any additional waiting time. (Gido & Clements 2003, 144) Duration estimates are based on the assumption that the activities will be completed with normal methods, during normal working hours and normal business days. (Pinto 2007)

The ideal situation would be to have the person responsible for the work estimate the time for it; this creates commitment to the work and avoids bias. However, in large multiyear projects in particular, involving hundreds of people, this would not be possible. In such projects the organisation or subcontractor designated as responsible for an activity makes the duration estimations for all the activities it is responsible for. (Gido & Clements 2003, 144)


The activity duration estimation is always directly linked to the available resources in the project, and the estimation must always be based on the resources that are expected to be used on the performance of the activity. The estimate should be as realistic as possible, neither too pessimistic nor too optimistic. People sometimes perform to expectation; hence, if the duration estimation is too pessimistic and set to 10 days, the activity may take the whole 10 days even if it could have been done in a shorter time. The activity estimation should not include a lot of extra time for things that could go wrong. (Gido & Clements 2003, 144-145)

Duration estimation is always somewhat uncertain. Past work and experience can be used as a guide for estimation, although what worked in the past might not work right now due to, for example, different external factors. (Pinto 2007, 292) Duration estimations for some tasks will be spot on, some tasks will be delayed for one reason or another, and some activities will be performed faster than expected. Over the duration of the whole project these delays and accelerations sometimes tend to cancel each other out. For example, one activity can take two weeks longer to complete, but if the two activities preceding it each took one week less than expected, the deviations cancel each other out. (Gido & Clements 2003)
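A small simulation can illustrate this cancellation effect. In the hypothetical sketch below, ten serial activities are each estimated at 10 days but actually take anywhere between 8 and 12 days; over many simulated projects the total stays close to the 100-day plan even though individual activities slip or finish early:

    import random

    # Hypothetical sketch: ten serial activities, each estimated at 10 days,
    # each actually taking between 8 and 12 days. Over many simulated
    # projects the individual slips and gains largely cancel out.

    random.seed(1)                      # reproducible illustration
    estimates = [10] * 10
    totals = []
    for _ in range(1000):
        totals.append(sum(random.uniform(8, 12) for _ in estimates))

    print(f"Planned total: {sum(estimates)} days")
    print(f"Simulated mean: {sum(totals) / len(totals):.1f} days "
          f"(min {min(totals):.1f}, max {max(totals):.1f})")

Even though any single activity can deviate by 20 percent, the simulated totals deviate far less from the plan, which is the cancellation the text describes.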

The entire project also requires a start and a completion time. These times can also be dates; usually the completion time is a date that is stated in the contract. (Gido & Clements 2003, 146) Creating the project schedule can begin from the completion date, when the project is due to end, and be worked backwards from there until the start date can be defined. Alternatively, the project schedule creation can begin from the start date and be built forwards from there until the completion date is defined. Often in practice both the completion date and the start date are defined in the contract, and the project schedule is created either from the beginning or from the end but is restrained by both the start and completion dates.

2.4.2 Activity Resource Estimation

In activity resource estimation the appropriate resources, whether material, equipment, facilities or personnel, are defined for performing the activities in a work package. The budget of the project often dictates how many resources are at disposal. (PMI 2004, 135) Resource estimation is closely knit with the cost estimation and budgeting processes. (PMI 2004, 135)

The available resources for the use of a project are often limited. Several different activities may require the use of the same resources at the same time and are therefore competing for these resources. If there are not enough resources, some activities may have to be rescheduled until the necessary resources are available. (Gido & Clements 2003)

2.4.3 Gantt Chart

The Gantt chart was developed in 1917 by Henry Gantt to create a network linking the individual activities into the schedule baseline. It is also a very handy tracking tool, as the difference between planned and actual performance is easy to see. (Pinto 2007) The Gantt chart combines both the planning and scheduling functions of a project. In the Gantt chart, activities are listed on the left-hand side, and a time scale with a bar displaying the duration of each task horizontally is on the right-hand side. Estimated start and finish dates are ordered by baseline calendar dates, giving a view of the status of the project at any given date during the project. (Pinto 2007)

Figure 9: Example of a basic Gantt chart
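The layout described above, activities listed on the left and duration bars on a time scale to the right, can be sketched in a few lines. The activity names reuse the earlier engineering example; the start days and durations are hypothetical:

    # Minimal sketch of the Gantt layout: activity names on the left-hand
    # side, duration bars on a time scale on the right. Start days and
    # durations are hypothetical.

    schedule = [                        # (activity, start day, duration)
        ("Basic Engineering",  0, 3),
        ("Layout Design",      3, 2),
        ("Detail Engineering", 3, 4),
        ("Manufacturing",      7, 5),
    ]

    width = max(len(name) for name, _, _ in schedule)
    for name, start, duration in schedule:
        print(f"{name:<{width}} |" + " " * start + "#" * duration)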

Besides its benefits, the Gantt chart also has limitations. The chart does not show

interdependencies between the activities. (Kerzner 2006, 525) Without these relationships it

is difficult to see how a change in one activity will affect the rest of the activities. It is clear

that a change in the beginning can affect the rest of the project but it is not clear which

individual activities this change may affect.

2.4.4 Computer Software Programmes

Today there are many available computer software programmes to plan and control projects.

The programs vary slightly as to how they function and what features they offer. Gido &

Clements (2003, 409-413) list the following features among the most important:

Planning

This feature allows the definition of all the activities to be performed during the project. For each activity the user can specify the basic functions: a name or description, start date, finish date and duration. In addition, the precedential relationships between activities can be established and resources assigned.

Graphics

For large projects consisting of several thousands of activities it would be difficult and prone

to errors to manually draw up and update Gantt charts and network diagrams. The software

can generate a variety of charts and networks quickly and easily based on the given data.

Modifications to the plan can easily be entered to the data and the software will

automatically adjust these changes into the graphics.

Scheduling

This feature provides support for scheduling based on the planning. The software can create Gantt charts and network diagrams from the planned activities and their precedential relationships. After the relationships have been entered, any changes to the activities will be reflected in the entire schedule automatically. Users can also schedule recurring activities, perform scheduling from the project start or finish date, schedule lag, set priorities for activities and give constraints to activities, such as scheduling activities to start as late or as soon as possible or specifying must-start-by, must-finish-by, no-earlier-than or no-later-than dates.

Project monitoring and tracking

For the project manager it is important to know during the project how activities are actually

being performed compared to the baseline plan. The software allows the user to set a

baseline from the planned schedule and compare actual progress or cost to the baseline

schedule. Most available software allows tracking of progress, start and finish dates,

completed tasks, actual cost spent and used resources. There are several different report

formats provided for these monitoring and tracking features.

Handling multiple projects and subprojects

This feature allows the user to handle multiple projects at the same time in separate files with connecting links between these files, or to divide large projects into smaller subprojects. It is also possible to store multiple projects in the same file and handle several projects simultaneously. Gantt charts and network diagrams can be created from several projects.

Importing and exporting data

The software allows the user to import information from other applications, such as spreadsheets, word processors or database applications. This saves time and removes the possibility of errors from retyping the information into the project management software. Data transferring also works in reverse: data can be exported from the project management software into other applications.

Calendars

This function offers the possibility to define different working days and hours for different

resources or groups of resources. The project has a set base calendar with standard working

hours and holidays. This calendar can be changed for each resource or resource group.

Working hours, working days, nonworking days, vacation days, different shifts such as part-

time or night time can be entered.
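A minimal sketch of such a calendar function is shown below, assuming a Monday-to-Friday base calendar, one base nonworking day and a per-resource vacation set (all dates are invented for illustration):

    import datetime

    # Hypothetical calendar sketch: a Monday-to-Friday base calendar with one
    # base nonworking day, plus per-resource vacation days.

    base_nonworking = {datetime.date(2010, 4, 2)}
    vacations = {
        "Designer": {datetime.date(2010, 3, 22), datetime.date(2010, 3, 23)},
    }

    def is_working_day(day, resource):
        if day.weekday() >= 5:                      # Saturday or Sunday
            return False
        if day in base_nonworking:
            return False
        return day not in vacations.get(resource, set())

    def finish_date(start, working_days, resource):
        # Walk forward from the start date until the required number of
        # working days has been spent for this resource.
        day, remaining = start, working_days
        while remaining > 0:
            if is_working_day(day, resource):
                remaining -= 1
            if remaining > 0:
                day += datetime.timedelta(days=1)
        return day

    # Five working days starting Friday 19.3.2010, skipping the weekend and
    # the Designer's two vacation days, finish on Monday 29.3.2010.
    print(finish_date(datetime.date(2010, 3, 19), 5, "Designer"))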


Budgeting and cost control

Costs can be assigned to each activity and resource. The employee, subcontractor and

material costs such as hourly rates, overtime rates, one-time-only rates or ongoing costs can

be defined. Accounting and material codes can also be specified for each resource. This

information is used to calculate and track the budgeted and actual costs of the project.

Actual individual resource, group resource and subcontractor costs as well as actual costs of

the entire project can be compared to the planned budget at any time during the project.

Resource management

A list of resources can be added where details concerning each resource or resource group

can be maintained and updated. Resources have an identifying name, standard and overtime

rates and an invoicing method. Each resource can have a personalised calendar and

constraints on when the resource is available. Resources can be assigned to several activities at the same time, each with a certain percentage of the resource's input. The software highlights over-allocation and helps to correct and level resources.
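Over-allocation detection of this kind can be sketched as follows; the assignments, with their day ranges and percentages of capacity, are hypothetical:

    from collections import defaultdict

    # Hypothetical over-allocation sketch: assignments give a resource, a day
    # range and a percentage of the resource's capacity; days where the total
    # exceeds 100 percent are highlighted.

    assignments = [                     # (resource, first day, last day, percent)
        ("Engineer A", 1, 5, 60),
        ("Engineer A", 3, 7, 60),
        ("Welder B",   1, 4, 100),
    ]

    load = defaultdict(lambda: defaultdict(int))
    for resource, first, last, percent in assignments:
        for day in range(first, last + 1):
            load[resource][day] += percent

    for resource in load:
        for day, percent in sorted(load[resource].items()):
            if percent > 100:
                print(f"{resource} over-allocated on day {day}: {percent} %")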

Report generation

Reports can be generated from the entire project or a part of the project. For partial reports

the user can set a date range, select activities that are completed or ongoing, activities that

start or finish in a certain time frame, or choose to report the milestones of a project.

What-if analysis

When activities are linked together with precedential relationships, different manipulations can be performed. Since the software adapts changes in one activity to the entire project, the user can explore the effects of various scenarios. For example, if an activity is changed to occur later, the software will automatically calculate how this change will affect the rest of the project. This way the project manager can better control the risks involved with the project costs, schedule and resources.
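A what-if recalculation of this sort can be sketched by rerunning a forward pass after changing one activity's duration; the network reuses the hypothetical engineering example from earlier:

    # Hypothetical what-if sketch: change one activity's duration and let a
    # forward pass recalculate the finish of the whole project. The dict is
    # assumed to be in topological (precedence) order.

    activities = {                      # activity -> (duration, predecessors)
        "Basic Engineering":  (3, []),
        "Layout Design":      (2, ["Basic Engineering"]),
        "Detail Engineering": (4, ["Basic Engineering"]),
        "Manufacturing":      (5, ["Layout Design", "Detail Engineering"]),
    }

    def project_finish(network):
        # Forward pass over the network; returns the early finish of the project.
        early_finish = {}
        for act, (duration, predecessors) in network.items():
            start = max((early_finish[p] for p in predecessors), default=0)
            early_finish[act] = start + duration
        return max(early_finish.values())

    print("Baseline finish: day", project_finish(activities))

    # What if "Detail Engineering" slips from 4 to 6 days?
    scenario = dict(activities)
    scenario["Detail Engineering"] = (6, ["Basic Engineering"])
    print("Scenario finish: day", project_finish(scenario))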

Figure 10: Example of a Gantt chart view in scheduling software

2.5 Risk and Opportunity Management

The environment around projects is filled with uncertainty. Risks can occur in any aspect of the project: the budget, resources, customer requirements, outside factors beyond the control of the organisation and so on. (Pinto 2007, 221) PMI (2004, 237) describes project risks as possible events that can affect the objectives of the project either negatively or positively. PMI continues that one or more of these objectives, such as for example scope, cost or time, may be impacted, and risks may have one or more causes. Risk factors occur throughout the project life cycle, and risks are assessed based on an analysis of the likelihood of the event occurring as well as the consequences it may have for the project's main objectives. Risk management is to recognise, analyse and react to these risks. (Pinto 2007, 222) PMI (2004, 238) describes the risk management process with the following steps:

• Risk Management Planning – deciding on a plan how to identify, plan and manage

risks of the project.

• Risk Identification – identify and document possible risks that are likely to affect the

success of the project.

• Qualitative Risk Analysis – prioritising identified risks by how likely they are to occur and how they would impact the project (a minimal scoring sketch follows this list).

• Quantitative Risk Analysis – analysing with numbers how the identified risks would

threaten the project objectives.

• Risk Response Planning – developing precautions and minimising the impact of

likely risks.

• Risk Monitoring and Control – executing, evaluating and documenting identified

risks and risk response plans throughout the entire project.
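The scoring sketch referred to in the qualitative risk analysis step might look as follows; the risks and their 1-5 likelihood and impact scores are invented, and the product of the two is used as a simple priority score:

    # Hypothetical sketch of qualitative risk analysis: each risk is scored
    # by likelihood and impact (1-5 each) and prioritised by their product.

    risks = [                           # (risk, likelihood, impact)
        ("Key supplier delay",        4, 3),
        ("Design change by customer", 2, 5),
        ("Currency fluctuation",      3, 2),
    ]

    for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2],
                                           reverse=True):
        print(f"{name}: priority score {likelihood * impact}")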

Projects encounter different kinds of risks with impact on different areas of the project. Risks commonly fall under certain classifications. Pinto (2007, 223-224) classifies risks under the following five clusters:

• Financial risk

• Technical risk

• Commercial risk

• Execution risk

• Contractual or legal risk

2.6 Cost Management

Cost management is composed of planning, estimating, budgeting and controlling the costs of the project. Cost management is mainly interested in the costs of the resources that are needed to complete the scheduled activities. This should be done without forgetting the life-cycle costs, which are the costs of using, maintaining and supporting the project end product, service or result. Decisions made to reduce the costs of the project can increase costs for the customer; for example, limiting reviews during the project phase can bring additional operational costs to the customer. (PMI 2004, 157)


Estimating the costs of the project includes evaluating how much it will cost to perform each work package in the WBS structure. There can be several different alternatives as to how much a work package is expected to cost. Additional work during the design or engineering phase can reduce costs in the operational phase and save total costs in the long run. Part of the estimation process is to find these possibilities and consider whether the savings in the end will cover the costs of the additional input. (PMI 2004, 161)

Project costs are often estimated during the development of a project proposal for a customer. Depending on the required level of detail, the proposal includes either the total bottom-line costs or a detailed breakdown of the various costs. Costs include labour, materials, subcontractors and consultants, equipment and facilities, as well as travelling costs. In addition, contingency costs can be included. These are to take care of any unexpected situations that have been overlooked, such as changes in the cost of labour, especially in multiyear projects or when producing a new product. (Gido & Clements 2003, 254-255)

Estimation should be as realistic as possible. If too many contingency costs are estimated to cover almost anything that could go wrong, there is a risk of overpricing the project and losing to a competing contractor. On the other hand, if the estimation is too optimistic and unexpected costs arise, the profits of the project may be lower than expected, or the contractor may face the embarrassment of having to go to the customer to request additional funds. (Gido & Clements 2003, 255-256)

Gido & Clements (2003, 254) clearly state that it is vital during the project, from the beginning to the end, to regularly monitor the actual costs and the progress of work to ensure that everything stays within the budget. They continue that it is crucial to recognize any variance or inefficiencies in costs early in order to take action before the situation spirals out of control.

PMI (2004, 171) includes the following in cost control (a minimal variance check is sketched after the list):

• monitoring cost performance to find any variance from baseline

• managing and documenting changes to budget when they occur


• making sure potential changes do not exceed the authorized funding in the total

budget for the project

• preventing inappropriate or unapproved changes going into the reported costs
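The variance check referred to above can be sketched as a comparison of actual work package costs against the baseline (all cost figures are hypothetical):

    # Hypothetical sketch of cost variance monitoring: compare the actual cost
    # of each work package to its baseline and flag any variance.

    baseline = {"Engineering": 120000, "Manufacturing": 300000, "Erection": 80000}
    actual   = {"Engineering": 135000, "Manufacturing": 290000, "Erection": 80000}

    for package, planned in baseline.items():
        variance = actual[package] - planned
        if variance:
            direction = "over" if variance > 0 else "under"
            print(f"{package}: {abs(variance)} {direction} the baseline")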

2.7 Project Evaluation and Control

During project implementation it is of utmost importance to monitor and control the project. Since projects have one or more constraints (time, budget, scope or performance) set by the customer or the project sponsor, these constraints require particular monitoring. Once a baseline is set for the schedule and budget, the ongoing current status can be compared and evaluated against these original estimations. Over the duration of the project, cumulative work or budget can be broken down by time. (Pinto 2007, 410-412)

2.7.1 Reviews

Project performance reviews are held periodically during the running of the project to assess and compare cost performance, schedule activities, planned budget and milestones. Actual performance is analysed and compared to the planned or expected performance. A trend analysis can also be done, in which project performance over time is analysed to determine whether performance is weakening or improving. (PMI 2004, 176)

Kerzner (2006, 238) mentions three types of reviews: project team, executive management and customer review meetings. Meetings can be held at a variety of intervals, such as weekly, every other week, monthly, quarterly and so on. Most project teams hold regular meetings to keep the project manager and the project team informed of current issues and the project status. Executive management most often requires monthly status review meetings. Customer reviews are often the most critical and require preparation in advance. (Kerzner 2006, 238-239)

In complex projects, review gates are held to close a certain phase of the project. The review gates are usually scheduled as milestones in the project schedule. The gates are determined based on the deliverables and activities that need to be completed. These periodical evaluations have to be carried out in order to proceed to the next phase of the project and are often a requirement in the contract. (Pinto 2007, 415)


2.7.2 Tracking Gantt

The tracking Gantt is a form of Gantt chart where project schedule performance can be evaluated at a given date during the project. The tracking Gantt chart offers a visual graph for detecting positive or negative deviation of the current situation from the originally planned baseline. (Pinto 2007, 416-417)

The tracking Gantt chart is easy to interpret and can be updated quickly to give real-time control of the project. The chart does show when activities are ahead of or behind the schedule, but as a drawback it does not offer information about the underlying cause of this kind of activity slippage. (Pinto 2007, 417) Projections into the future can also be difficult with the tracking Gantt chart. When an ongoing activity is behind schedule on a given date, it is difficult to tell whether the activity is not going to be completed before the finish date or whether it is just momentarily late and can still be completed in time.

2.7.3 Milestone Analysis

Milestones are events or dates in the project where significant deliverables are completed. The deliverables can be one single task or a combination of several different tasks. Milestones give the project team an indication of the current status of the project and, especially in multiyear projects, provide a good picture of the overall progress. (Pinto 2007)

2.7.4 S-curve Analysis

The classic S-curve graphically displays the actual accumulated amount of cost or work against time. The analysis is done for both the actual cost or work and the planned cost or work, and any variation between actual and planned can potentially signify a problem. Simplicity is the biggest advantage of the S-curve analysis: it offers real-time information on the project status in a timely manner. (Pinto 2007, 412-413)

Simplicity can also be considered the biggest downfall of the S-curve, as the information it provides is not always easily interpreted. The S-curve provides an easy way to identify positive or negative variance but does not give any indication as to the cause of this variance. (Pinto 2007, 413)
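An S-curve comparison of this kind reduces to accumulating the planned and actual cost per period and inspecting the difference; the per-period figures below are invented for illustration:

    from itertools import accumulate

    # Hypothetical S-curve sketch: cumulative planned and actual cost per
    # reporting period; the project is still running, so the actual series
    # is shorter than the plan.

    planned_per_period = [5, 10, 20, 30, 20, 10, 5]
    actual_per_period  = [4, 8, 15, 25, 30]

    planned_cum = list(accumulate(planned_per_period))
    actual_cum  = list(accumulate(actual_per_period))

    for period, actual in enumerate(actual_cum, start=1):
        variance = actual - planned_cum[period - 1]
        print(f"Period {period}: planned {planned_cum[period - 1]}, "
              f"actual {actual}, variance {variance:+}")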

2.8 Project Closeout and Termination

The final stage of a project is termination. Projects are one-off undertakings with a definite ending, where the termination is planned from the beginning. The termination is a series of events in which project acceptance is handed over to the customer or project sponsor and the various project documents and records are finalised, revised and completed. (Pinto 2007, 445)

Pinto (2007, 445-446) lists four different forms of project termination:

• Termination by extinction – the project can be concluded successfully or unsuccessfully. In successful termination by extinction the project has been handed over to the customer and all termination activities are conducted: the final budget is audited and the team members are disbanded.

• Termination by addition – the project has been institutionalised as a part of the

parent organisation. The project team has in a way been promoted to a formal part of

the organisation’s structure.

• Termination by integration – the project resources, with the project team included, are reintegrated within the existing structure of the organisation to perform other duties or to wait for new project assignments. There is a chance that the project team members have no desire to go back to their old functional department duties, and the risk of losing key organisational members is significant.

• Termination by starvation – the project can be starved for a number of different reasons. Due to budget cuts, some projects may be kept on the books waiting for better economic times to be reactivated. Some projects may be kept on file for political reasons, where the organisation has no real intent for the project to succeed or ever finish. Starving a project may even be a conscious decision to neglect the project and slowly decrease its budget, eventually making it unviable.

Even though project termination can occur for a variety of reasons, the termination activities should be included already in the planning phase. The termination activities can begin after the project execution phase is completed and the results are accepted by the


customer. When the project is completed, the project organisation must verify that the deliverables specified in the contract have been supplied to the customer or the project sponsor. These deliverables can include documents such as training and instruction manuals, drawings, reports or as-built documentation, as well as equipment, software and data. The documentation is to be properly organised and filed appropriately for future reference. (Gido & Clements 2003, 84)

All payments have to be received and paid by the project organisation. Once the final payments are made, the project's final budget can be audited and closed. Evaluations of performance can be held during the termination process. The evaluations should be held both internally within the project organisation and between the project organisation and the customer or project sponsor. The purpose is to provide valuable information on performance, to find out whether the anticipated benefits were achieved and to receive suggestions for future projects. (Gido & Clements 2003, 86)

In some projects, termination is required before the project is completed and earlier than originally planned. Early termination can be caused by a number of reasons, such as circumstances where the costs of the project exceed its benefits, customer dissatisfaction, or the expected results of the project being found unrealistic or otherwise unattainable. (Gido & Clements 2003, 91)


3. Company Overview

The development work was conducted for the case company Metso, in the Power business unit's Capital Projects business line. In the following chapter the company is introduced and the organisation presented.

3.1 Metso Corporation

Metso Corporation is a worldwide supplier of technology and services for the pulp and paper, mining, construction, power generation, oil and gas, and recycling industries. The customers are typically industrial companies such as paper, mining and energy companies. Multiyear project deliveries are typical in the pulp and paper, mining and power generation industries, whereas deliveries to the construction and oil and gas industries are mostly smaller package solutions and individual equipment components. The services business totals over 40 percent of net sales. (Metso Corporation 2010)

Figure 11: Net sales by customer industry in 2009 (Metso Corporation 2010)

Metso traditionally receives orders from Western Europe, North America, Japan, Australia and New Zealand. In 2009, however, 48 percent of received orders came from emerging markets such as Eastern Europe, South and Central America, the Middle East and Africa, and Asia-Pacific (excluding Japan, Australia and New Zealand). The focus of investment is now more clearly on these emerging markets. (Metso Corporation 2010)

Figure 12: Orders received by market area in 2009 (Metso Corporation 2010)

Metso employs more than 27,000 professionals in over 100 countries, with sales, engineering, procurement, production, services business and other operations in over 300 units and over 50 countries. (Metso Corporation 2010)

3.2 Metso Power

Metso Corporation consists of three segments: Energy and Environmental Technology, Paper and Fiber Technology, and Mining and Construction Technology. Metso Power is a part of Metso's Energy and Environmental Technology segment along with the Automation and Recycling business lines. (Metso 2010a) Metso Power specialises in designing and delivering chemical recovery systems for the pulp and paper industry as well as energy production. Products include fluidized bed boilers, recovery boilers, oil and gas boilers, evaporation units, environmental systems and maintenance services. (Metso Power Intranet 2010)

Metso Power is the world's leading chemical recovery supplier, having designed and delivered the largest recovery boilers in the world and some 400 evaporation units. Continuous research aims at processes that increase production efficiency combined with reduced emissions and low fouling and corrosion characteristics. Metso Power's main operations are in Finland, Sweden, the United States and Brazil, and it operates as a project organisation where new orders are executed as projects. (Metso Power Intranet 2010)

Figure 13: Jämsänkosken Voima power plant (Metso 2010b)


4. Conclusions

The purpose of this thesis was to develop the currently used Master Schedule Template in

the case company. The first objective of the development work was to develop the schedule

template into a functional tool for creating coherent project schedules. The second objective

was to create a chain of activities that are easy to cross-check based on the company WBS

logic. Both objectives were met during the development work and the result was a schedule

template that has been implemented in new projects.

The theoretical study involved project management as a whole since scheduling is closely

related to all project management areas. The main focus, however, was on planning and

scheduling. Special attention was also given to Work Breakdown Structure because it was

the key factor in the schedule template development process. Scheduling is a challenging

area in project management and success in scheduling is directly linked to the success of the

project. The literature offered different ways and methods on how to plan and schedule a

project but the overall logic was similar. It was clear that there is a strong link between the

WBS and scheduling. The WBS can have a number of different structures or categorisations

depending on the needs of the company. The WBS final level work packages are divided

into specific scheduling activities. These scheduling activities form the project schedule.

The case company project management analysis and the development work are confidential

and not included in the public version of the Final Thesis.


Bibliography

Gido, Jack and Clements, James P. 2003. Successful Project Management. 2nd ed. Mason (Ohio): Thomson South-Western.

Haugan, Gregory T. 2002. Effective Work Breakdown Structures. Vienna (Virginia): Management Concepts.

Kerzner, Harold 2006. Project Management: A Systems Approach to Planning, Scheduling, and Controlling. 9th ed. Hoboken (New Jersey): John Wiley & Sons.

Lewis, James P. 2002. Fundamentals of Project Management: Developing Core Competencies to Help Outperform the Competition. 2nd ed. New York (NY): American Management Association.

Metso Corporation 2010. Metso Annual Report 2009. Lönnberg Painot Oy.

Metso 2010a. Metso in brief. [online]. [referred to 17.3.2010] Available: http://www.metso.com/corporation/about_eng.nsf/WebWID/WTB-041026-2256F-55957?OpenDocument

Metso 2010b. Sensodec 6S: Averts unexpected downtime at JaVo. [online]. [referred to 17.3.2010] Available: http://www.metso.com/Automation/magazinebank.nsf/Resource/autom_1_2004_p10-p12/$File/autom_1_2004_p10-p12.pdf [Published in print: Automation No 1, 2004]

Metso Power Intranet 2010. About us. [online]. [referred to 17.3.2010] Available: http://power.metso.com/

Pelin, Risto 2008. Projektihallinnan käsikirja. 5th ed. Jyväskylä: Gummerus Kirjapaino Oy.

Pinto, Jeffrey K. 2007. Project Management: Achieving Competitive Advantage. Upper Saddle River (New Jersey): Pearson/Prentice Hall.

Project Management Institute (PMI) 2004. A Guide to the Project Management Body of Knowledge: PMBOK Guide. 3rd ed. Newtown Square (Pennsylvania): PMI.

Young, Trevor L. 2007. The Handbook of Project Management: A Practical Guide to Effective Policies, Techniques and Processes. Rev. 2nd ed. London: Kogan Page Ltd.
