<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1007/978-94-017-8959-2_5</article-id>
      <title-group>
        <article-title>Cognitive Security and Resilience: A Social Ecological Model of Disinformation and other Harms with Applications to COVID-19 Vaccine Information Behaviors</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Shawn Janzen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Caroline Orr, Ph.D.</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sara-Jayne Terp</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland</institution>
          ,
          <addr-line>College Park, Maryland</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>College of Information Studies (iSchool), University of Maryland</institution>
          ,
          <addr-line>College Park, Maryland</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>2</volume>
      <issue>2021</issue>
      <fpage>8</fpage>
      <lpage>13</lpage>
      <abstract>
        <p>Access to and discovery of credible information is the product of numerous, interacting factors including individual characteristics and behaviors as well as features of the information environment, social, cultural, and institutional norms, policies and regulations, and more. To date, most research on information disorder has focused either on the individual or on the information environment (or on the technology that allows an individual to access the information environment), but there is a lack of systematic, theory-driven research on the dynamic relationship between the individual and their environment. In this study, we propose a novel application of Bronfenbrenner's social ecological model to the study of cognitive security and resilience in the context of information disorder. First, we describe the refitting of the model from public health and human development to cognitive security. Using extant literature in the field, we identify the key factors at each level of influence - including individual-level factors such as attitudes/beliefs, knowledge/experience, and demographic characteristics, as well as higher-level factors at the interpersonal-, organizational/institutional-, community-, and policy/culture-levels - that shape susceptibility and resilience to information disorder. We also consider the dynamic interactions between individuals, groups, societies, and characteristics of the technological environment, including how algorithms and artificial intelligence interact with individual behaviors, policies, and organizational decision-making to shape access to and discoverability of credible information. Finally, we describe an application of the model to a use case involving COVID-19-related information behaviors. To our knowledge, this is the first time Bronfenbrenner's social ecological model has been applied in full as a conceptual foundation for the study of cognitive security and resilience.
Our findings provide important new insight into the social, cultural, and structural factors that shape information behaviors and access to credible information, as well as the impact of information disorder. The results can be used to identify vulnerabilities and targets for future information-related initiatives and interventions (such as fact-checking and journalism initiatives) and to inform evaluations of such initiatives, as well as to better understand variation in susceptibility and resilience to information disorder. Further, this study lays an important conceptual foundation for future research to expand on this use case and refine the application of the social ecological model to the information domain.</p>
      </abstract>
      <kwd-group>
        <kwd>cognitive security</kwd>
        <kwd>social ecological model</kwd>
        <kwd>misinformation</kwd>
        <kwd>information disorder</kwd>
        <kwd>information behavior</kwd>
        <kwd>COVID-19</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The United States of America (US) presidential elections in 2016 and 2020, the Brexit referendum
in 2016, and now the ongoing coronavirus pandemic, have thrust the issue of information
disorder into the global spotlight, leading to greater awareness of the challenge and a surge of
new efforts to address it. First conceptualized by Wardle and Derakhshan, information disorder
describes the creation and/or sharing of false or misleading information, whether deliberately
or unwittingly, with or without the intent to cause harm [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The concept encompasses
misinformation — that is, the unintentional sharing of false information — as well as disinformation,
or information that is deliberately false or misleading [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. It also includes a third category of
misleading content, malinformation, which describes genuine information that is shared with
the intent to cause harm.
      </p>
      <p>Although the spread of false and misleading content is not a new phenomenon, the internet,
and particularly social media, have given rise to a fundamental change in how people
communicate and how information is disseminated and accessed, which has contributed to the viral
spread of mis- and disinformation as well as targeted propaganda and influence campaigns.
The “three Vs” of volume, velocity, and variety from big data also contribute to the reach of
disinformation, to cognitive overload in processing it, and to the ability of disinformation creators
to rapidly test and adjust messaging, channels, and media to maximize impact. The scale of the
problem and its far-reaching effects have created an urgent need to develop effective strategies
to counter information disorder and facilitate better access to and engagement with credible
information, but these efforts have been hampered by fundamental challenges including
inconsistent terminology, a lack of integration of research from different disciplines, and underuse
of theory. To more effectively address the challenges posed by information disorder, it is first
necessary to develop a better understanding of the problem and identify the most promising
avenues to counter it.</p>
      <p>The causes of and contributors to information disorder are complex and multifaceted, and
the existing literature on information disorder spans numerous fields including communication,
media studies, public health, psychology, computer and information science, and security studies.
Studies have examined the characteristics that make individuals susceptible to information
disorder, as well as the characteristics of the information itself, the networks in which it spreads,
and the platforms and technologies that enable individuals to form networks and engage with
information. However, there is a lack of foundational, theory-based research examining how
these agents, processes, and environments interact with each other and respond to change.</p>
      <p>
        In this paper, we propose a novel framework for conceptualizing cognitive security and
resilience in the context of information disorder and information-based harms. We take a
sociotechnical systems view of mis- and disinformation, drawing from information security tools
and processes, as well as cognitive security’s twin definitions of adversarial machine learning
that affects machine beliefs, and social engineering at large scale, which we term adversarial
cognition [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. In this paper, we define cognitive security as the ability to detect, characterize,
and counter misinformation, disinformation, and other information-based harms and forms of
malign influence among people. Resilience, as part of cognitive security, includes the structural
context that protects humans from exposure to disinformation in the first place, as well as the
ability to identify it, limit its spread, and mitigate its effects once exposed. Throughout this
paper, we use the term “information-based harms” to refer to misinformation, disinformation,
conspiracy theories, and a variety of other types of potentially harmful information.
      </p>
      <p>The framework we are proposing builds on existing work that has applied fundamental
concepts from the field of public health to the study of information and information disorder.
For example, the spread of rumors and other falsehoods on social media is often compared
to the spread of contagious viruses, which is why widely-shared posts are said to have “gone
viral.” However, there are limits to this epidemiology-based model; discussing rumors and
misinformation as malicious viruses to be contained and removed creates stigma on participants
in them, especially where they arise from genuine information behaviors and social interaction.
There are parallels here to the first applications of the Social Ecological Model and other
ecological frameworks in the field of public health, which came in response to criticism that
traditional approaches to studying health and disease — which largely focused on individual
characteristics and behaviors — promoted a victim-blaming mentality in which blame for poor
health outcomes was placed on the shoulders of individuals, often without consideration of the
structural and environmental causes.</p>
      <p>
        Our proposed model is an adaptation of Bronfenbrenner’s Social Ecological Model (SEM),
which was initially developed as a framework through which to understand human development,
with a particular emphasis on the dynamic interactions between individuals and their
environments [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Since its inception in the 1970s, the SEM has been applied in various formats across
a variety of domains including public health [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], health literacy [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], media and communications
[8], risk management [9], and organizational change [10]. The model recognizes individuals as
embedded within multiple levels of interacting systems, and within each of these systems there
are myriad factors that directly and indirectly influence the individual and are influenced by
the individual. This core assumption — that individuals can influence their environment and
that the environment can influence the individual — is known as reciprocal causation.
      </p>
      <p>Where and how you retrieve information matters: information seekers not only analyze
retrieved information, they also co-create the information search terms that they use with
different levels of their personal SEMs, leading to term-based information silos [11]. The
credibility of retrieved information is based on factors that include, according to Self as quoted
by Pasi, "(i) the source of information, (ii) the information that is diffused, i.e., the message,
considering both its structure and its content, and (iii) the media used to diffuse information"
[12, 13]. The Admiralty Code [14] is widely used to assess credibility in information retrieval
and open source intelligence (OSINT) by rating information content and sources separately,
but there is scant research on the effects of the type of source and its relationship to the
information seeker, or on the credibility that seekers assign to information retrieved through personal
communication, online search, networking, and OSINT. This SEM extends information retrieval
by providing ways to consider the source and the effects of source assessment on the credibility
of retrieved information.</p>
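<p>For illustration, the Admiralty Code's two independent scales can be sketched as a small data structure: a source's reliability is graded A through F, the information content is graded 1 through 6, and the two grades are combined into a rating such as "B2". This sketch and its example item are ours, not drawn from the cited sources.</p>

```python
# Sketch of the Admiralty Code's two independent rating scales: source
# reliability (A = completely reliable ... F = cannot be judged) and
# information credibility (1 = confirmed ... 6 = cannot be judged).
# The example item below is invented for illustration.
from dataclasses import dataclass

RELIABILITY = {
    "A": "completely reliable", "B": "usually reliable",
    "C": "fairly reliable", "D": "not usually reliable",
    "E": "unreliable", "F": "reliability cannot be judged",
}
CREDIBILITY = {
    "1": "confirmed by other sources", "2": "probably true",
    "3": "possibly true", "4": "doubtful",
    "5": "improbable", "6": "truth cannot be judged",
}

@dataclass
class RatedItem:
    content: str
    source_reliability: str  # key into RELIABILITY
    info_credibility: str    # key into CREDIBILITY

    def rating(self):
        # Content and source are rated separately, then combined, e.g. "B2".
        return self.source_reliability + self.info_credibility

item = RatedItem("claim retrieved via OSINT", "B", "2")
print(item.rating())  # B2
print(RELIABILITY[item.source_reliability], "/", CREDIBILITY[item.info_credibility])
```

<p>Keeping the two grades separate is the point: a usually reliable source can still carry doubtful information, and vice versa.</p>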
      <p>While the levels of the SEM have been conceptualized and labeled in different ways over
the course of the past five decades, this study builds on a version of the model that is widely
used in public health, health promotion, and behavior change research. This framework, an
adaptation of Bronfenbrenner’s model put forth by McLeroy and colleagues, specifies five levels
of influence that interact with each other and with the individual, starting with the individual
level, which encompasses the most proximal layer of influences such as demographic factors,
identity, political ideology, attitudes, beliefs, emotions, knowledge/skills, behaviors, and more
[15]. The second level of the model, the interpersonal level, comprises the external social
influences of family, friends, and other close relationships, as well as related social factors
such as group norms and social support. The organizational level of influence describes the
organizations and institutions in which social relationships occur and in which policies and
regulations originate. In the public health context, this would include local, state, federal, and
global health agencies such as the Centers for Disease Control and Prevention (CDC) and the
World Health Organization (WHO). The next level of influence is the community level, which
focuses on the networks that connect organizations and institutions, the settings in which they
exist, and the culture and norms that emanate from these spaces. Examples include the public
health community, the global aid community, the information security community, and the
education community. The fifth level is the policy/societal level, which includes broad societal
factors that create a climate in which certain practices, behaviors, and phenomena are either
reinforced/encouraged or inhibited/discouraged, as well as factors such as poverty, inequality,
discrimination and bias, and strength of democracy. This level also includes the policies that
create or reduce poverty, inequality, discrimination, and related factors, as well as policies
focused on technology, information, security, and defense. For the purposes of this study, we
chose to describe these layers separately, as we identified several key areas where policy and
society were moving at different speeds, and/or where coalitions involved in policy-making
spanned numerous, heterogeneous societies and thus were not accurately captured in a single
level.</p>
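<p>As a compact summary, the five levels and example factors named above can be written down as a simple data structure. This sketch is illustrative only; the grouping of factors follows the text, not any formal specification of the model.</p>

```python
# Illustrative summary of the five levels of influence in McLeroy and
# colleagues' adaptation of the SEM, with example factors taken from the
# text. The ordering runs from most proximal to most distal.
SEM_LEVELS = [
    ("individual", ["demographics", "identity", "political ideology",
                    "attitudes", "beliefs", "emotions", "knowledge/skills"]),
    ("interpersonal", ["family", "friends", "close relationships",
                       "group norms", "social support"]),
    ("organizational", ["organizations and institutions",
                        "policies and regulations", "health agencies (CDC, WHO)"]),
    ("community", ["networks connecting organizations",
                   "settings", "culture and norms"]),
    ("policy/societal", ["poverty", "inequality", "discrimination and bias",
                         "strength of democracy", "technology and security policy"]),
]

for depth, (level, factors) in enumerate(SEM_LEVELS, start=1):
    print(f"level {depth}: {level} (e.g., {factors[0]})")
```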
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>The SEM has been used as a framework for research and program planning in a variety of
fields, particularly within the areas of human development, public health, and intervention
planning. In recent years, increasing attention has been given to ecological models like the
SEM, in large part because federal agencies like the National Institutes of Health (NIH) Office
of Behavioral and Social Science Research (OBSSR) and the CDC have issued calls for more
research incorporating transdisciplinary science and systems science methods in an effort to
better understand the multilevel influences on health and disease [16]. Although there are
many parallels between the study of information disorder and the fields of public health and
behavioral science, there are few applications of the SEM and related models in the area of
information-based harms.</p>
      <p>
        Lewin was among the first scientists to adopt an ecological approach to understanding
human behavior [17]. Bronfenbrenner, a student of Lewin, is credited with formulating ecological
systems theory, which views individuals as agents who influence, and are influenced by, their
environments [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The SEM, which is rooted in ecological systems theory, also incorporates
principles from systems science, including Watzlawick, Weakland, and Fisch’s theory of problem
formation and problem resolution, which provides a framework for understanding why certain
problems persist while others are resolved [18]. Watzlawick’s theory is based on the idea that
there are at least two distinct types of change: first-order change and second-order change.
First-order change occurs when a change is made inside a system, or a particular symptom or
need is addressed, but the system itself (and its processes and structures) remains unchanged.
Second-order change, on the other hand, occurs when a modification is made to the system
itself, such as when a process or structure is added or removed from the system.
      </p>
      <p>Expanding on this line of work, Glass and McAtee developed a multilevel model of human
behavior to advance the study of behavioral science within the context of public health [19]. At
the time, the public health community was struggling to develop more effective approaches to
reduce behavioral risk factors such as smoking and physical inactivity. Historically, behavioral
science had approached these problems by focusing on individual characteristics and behaviors,
but by the 1990’s it was becoming increasingly clear that there was a pressing need to better
understand the social context in which these behaviors are shaped. In response to calls from
leading public health scholars to advance a new research agenda focusing on the social causes
and context of disease [20, 21, 22], Glass and McAtee proposed an integrated approach to
studying health behavior that recognized that nearly all public health problems have multiple
causes and are shaped by multiple factors at different levels of influence, and that behavioral
health is the product of both social-environmental and biological processes and systems [19].</p>
      <p>In the fields of public health and behavioral science, there is strong evidence that interventions
are more likely to be effective if they are based on ecological models like the SEM, rather than
individual-level theories, because of the SEM’s focus on multicausality and multilevel influences
on health and behavior [23]. As such, the SEM has been applied in numerous ways to a variety
of health conditions, behaviors, and public health problems, ranging from intimate partner
violence [24] and firearm injuries [25], to obesity [26] and cancer screening [27], to bullying
[28] and suicide prevention [29]. The SEM has also been used during the COVID-19 pandemic
as a framework to study the determinants of preventive behaviors related to the virus, including
vaccine intentions [30] and mask use [31], to understand vulnerability and resilience among
elderly populations [32], to explain country-level variation in COVID-19 abatement efforts [33],
and to conceptualize the impact of the pandemic on other health issues such as opiate use [34].</p>
      <p>McCormack and colleagues applied the SEM to the study of health literacy and patient
engagement, showing how both concepts are influenced by social and contextual factors such as the
delivery of health-related information, the communication skills of public health professionals
and medical providers, the characteristics of public health institutions, and the policies that
affect health-related organizations, providers, and patients [35]. After identifying the factors
at each level that influence health literacy and patient engagement, the authors incorporated
ecological processes such as pooled interdependence — a term that describes the cumulative
impact of intervention effects — to specify intervention strategies that could be used to target
factors at each level of the model.</p>
      <p>The SEM and other ecological models have also been applied to the study of infectious disease
outbreaks [36] and emerging infectious diseases [37] to understand the dynamic interactions
between pathogens, hosts, individuals, and their environments, and how changes to any one of
these can influence the spread of disease, the susceptibility of populations and subgroups, the
severity of disease outcomes, and more. In this context, ecological models have primarily been
used for the purposes of risk and needs assessment, identifying priorities for intervention, and
evaluating the impact of prevention and treatment strategies.</p>
      <p>Additionally, the SEM was used by members of an international coalition funded by the United
States Agency for International Development (USAID) to identify ideal communication strategies
to promote health behavior change in response to the 2014 Ebola epidemic, during which fear,
mistrust, and miscommunication severely hampered outbreak response efforts [38]. Although
the Ebola epidemic differs from the coronavirus pandemic in many key aspects, there are also
many parallels between the two situations — namely, the challenge of effective communication
in the face of an unprecedented crisis, widespread mistrust eroding public health efforts, and a
rapidly evolving, emotionally-charged situation that left the population vulnerable to rumors
and misinformation — that make the Ebola epidemic an important example from which we
can learn key lessons to apply in the present. Initial communication efforts during the Ebola
outbreak were largely focused on psychosocial determinants of behavior change, however as the
authors noted, it soon “became evident that controlling the epidemic required communication
interventions to address levels higher than the individual, namely, community and normative
level factors that could influence the desired behaviors, service-level factors that provided critical
resources for the ill, and policy-level factors to guide a coordinated response within a very
limited timeframe” [38]. As such, members of the USAID-sponsored Health Communication
Capacity Collaborative (HC3) project turned to the SEM to formulate a more comprehensive
strategy that explicitly identified possible causal mechanisms to promote behavior change
through domain-based communication activities focused on community dialogue, social change,
service delivery, and individual and household factors.</p>
      <p>Most recently, during the coronavirus pandemic, the SEM has been used to guide the
exploration of COVID-19 vaccine intentions and identify subgroups with negative vaccine intentions,
who may represent ideal targets for intervention [30]. The study used survey data and broke
down the items into the levels of the SEM, then used univariate and multivariate models to
compare participants who intended to get vaccinated against COVID-19 to respondents who
did not intend to get vaccinated or who were ambivalent about getting vaccinated. The results
pointed to several potential factors to target in vaccine promotion campaigns, including gender
(males were significantly more likely to have negative intentions to get vaccinated), race
(participants who identified as Black were significantly more likely to have negative intentions to get
vaccinated), conservative political ideology, and social norms (participants whose peers did not
engage in or support COVID-19 prevention behaviors were significantly more likely to have
negative intentions). This builds on previous research using the SEM and related ecological
models to investigate vaccine attitudes, including Walker and colleagues’ qualitative study
of confusion, mistrust, and hesitancy among mothers who had accepted the HPV vaccine for
their children but were not confident in their decision [39]. This is an important subgroup for
several reasons. First, many vaccinations require multiple doses to be effective, so ongoing
hesitancy after initial acceptance can be a barrier to completing a full vaccination series.
Secondly, although individuals may choose to accept one vaccine and reject others (or vice versa),
there is a risk that hesitancy about one vaccine could develop into more generalized vaccine
hesitancy. For these reasons, individuals who have accepted a vaccine but remain hesitant are
still a key group to consider in vaccine communication and promotion activities. In the study
of HPV vaccine-accepting mothers, interview data revealed that media and social media were
key sources of mistrust and confusion, and that although most mothers indicated a high degree
of trust in their children’s health care providers, the information they got from providers was
often undermined by information they got from other sources, such as friends, family, and the
media. The authors suggested that, in light of parents’ increased access to and engagement with
credible and noncredible sources of information online, and the subsequent expectation to be
more involved in health decision-making, traditional models of the patient-provider relationship
and communication may need to be revised [39].</p>
      <p>Much like the field of public health at the turn of the century, we now find ourselves facing a
complex challenge that threatens the health of both individuals and societies, but which has
been resistant to most efforts to promote change. To date, the vast majority of research on
information-related harms has focused either on individual characteristics and behaviors — such
as why certain people are more susceptible to mis- and disinformation — or on the platforms and
technologies that facilitate the spread of the problem. Other research has approached this
problem by exploring characteristics of information itself, such as why certain misleading content is
more likely to go viral. However, there is a lack of theory-based research that integrates these
different approaches and explicitly considers the interactions between individuals, information,
and the technologies and environments that enable individuals to encounter and engage with
information. We hope to help fill that gap with our proposal for a novel application of the SEM
to the study of cognitive security.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Social Ecological Model of Cognitive Security</title>
      <sec id="sec-3-1">
        <title>3.1. Individual</title>
        <p>Individual-level determinants of cognitive security and resilience encompass a wide variety
of characteristics including demographic and psychosocial factors, ideology, knowledge and
technical skills, digital literacy and digital numeracy, as well as information needs, information
evaluation, and information behaviors. These factors interact with each other and with the
information environment, creating a dynamic situation in which characteristics of the individual
influence their information needs, evaluation, and behavior, which in turn influences the types
of information they seek, attend to, share, and recall.</p>
        <p>Demographics: Cognitive security and resilience are influenced by a variety of demographic
factors, including age, race, gender, socioeconomic status, language and vocabulary. For
example, older individuals (over the age of 65) have been found to be more vulnerable to false
information disseminated via social media and messaging apps [40, 41]. Among youth, social
and cognitive development are important determinants of cognitive security and resilience
due to their influence on information evaluation and uptake. For example, younger children
struggle with certain website design features such as content lists and maps, but benefit more
than older children from learning cues such as pop-ups explaining the main point of a webpage
[42]. Although children generally struggle more than adults with tasks that require analytical
thinking and complex judgments, youth tend to be more comfortable using new technologies
and are often more motivated to engage with emerging technologies [42]. Other research has
found enduring racial, socioeconomic, and age-based divides in access to and use of
communication technologies [43, 44, 41]. Certain dynamics of religiosity, including fundamentalism
and dogmatism, have been found to be associated with reduced analytical thinking and,
subsequently, greater susceptibility to conspiracy theories and other falsehoods [45]. Political
ideology has also been shown to influence susceptibility to misinformation, such that
individuals who identify as conservative appear to be more susceptible to political misinformation
than left-leaning or ideologically neutral individuals [46]. Language and vocabulary are also
important determinants of cognitive security and vulnerability. These factors can interact with
the technological and information environments, leading to challenges such as the “vocabulary
mismatch problem,” which describes a phenomenon in which different people and/or systems
use different labels to describe the same concept. Put differently, vocabulary mismatch occurs
when “the way users express concepts differs from the way they appear in the corpus” [47] or
when the terms between queries and documents are lexically different but semantically similar
[48].</p>
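<p>The vocabulary mismatch problem can be made concrete with a toy sketch (the query, document, and synonym table below are invented for illustration): exact term matching fails when the query and document use lexically different but semantically similar terms, while a simple synonym expansion recovers the match.</p>

```python
# Toy illustration of the vocabulary mismatch problem: the query and the
# document describe the same concepts with different labels, so exact
# term matching finds no overlap. The synonym table is an invented example.
SYNONYMS = {
    "jab": {"vaccine", "shot"},
    "physician": {"doctor", "clinician"},
}

def exact_match(query_terms, doc_terms):
    # Succeeds only if the query and the document share a literal term.
    return len(set(query_terms).intersection(doc_terms)) > 0

def expanded_match(query_terms, doc_terms):
    # Expand each query term with its synonyms before matching.
    expanded = set(query_terms)
    for term in query_terms:
        expanded.update(SYNONYMS.get(term, set()))
    return len(expanded.intersection(doc_terms)) > 0

doc = ["covid", "vaccine", "doctor", "appointment"]
print(exact_match(["jab", "physician"], doc))     # False: lexical mismatch
print(expanded_match(["jab", "physician"], doc))  # True: semantic overlap
```

<p>Real retrieval systems address the same gap with query expansion or learned semantic representations rather than a hand-built synonym table.</p>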
        <p>Psychosocial factors: Cognitive security and resilience are also influenced by a variety of
psychosocial determinants, including attitudes and beliefs about technology and about the topic
at hand, trust in information and sources of information, as well as the technology used to
access it, subjective norms surrounding source credibility and information-sharing, cognitive
biases, risk perceptions, stress, trauma, emotional state and emotional reactivity, and more
[49, 50].</p>
        <p>Digital literacy and numeracy: Technical knowledge and skills, as well as subject-specific
knowledge and familiarity with the topic at hand, are also important determinants of cognitive
security and resilience. In particular, digital literacy, which the Department of Education defines
as “the skills associated with using technology to enable users to find, evaluate, organize, create,
and communicate information,” [51] and digital numeracy, which involves the ability to process
basic numeric concepts and is closely tied to decision-making ability, have been shown to
be associated with susceptibility to misinformation, such that individuals with low digital
numeracy tend to be more likely to believe misinformation they encounter online [52]. In one
recent study on coronavirus-related misinformation, digital numeracy was found to be the
strongest predictor of susceptibility to misinformation [49].</p>
        <p>Information-related factors: Other individual-level determinants of cognitive security and
resilience include information needs and how they are expressed (e.g., how people interact with
search engines), evaluation of information and sources (e.g., relevance and credibility judgments),
perceived usefulness (of the information), time spent with the information, and a variety of
information behaviors including search behaviors and information seeking, information sharing,
and engaging with information.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Interpersonal</title>
        <p>At the relationships layer, an individual’s cognitive security depends upon the
viewpoints of family, peers, and other social connections. The social environment enables
individuals to affect how others receive, perceive, and understand information, as well as to be
affected by others. Likewise, social norms link individuals through interpersonal relationships
within the social environment, contributing to the development of cognitive security within the
group. Protective information sharing also occurs through prosocial gossip, as a way to
defend members of the social group [53], and between the social strata of leadership [54, 55].</p>
        <p>Families and peers: Families and peers are the first connections in the social environment.
This group serves as a natural support system, a signal for belonging, and a potential source of
and filter for credible information. Family and peers serve as fact-checkers and validators to increase individual
cognitive security [56]. Yet, individual attributes of family members and peers, as well as group
demographics, mediate credibility [41]. Additionally, the role of family and peers as sources of credible
information and agents of attitude change is further mediated by individual cognitive ability [57].</p>
        <p>Relationship quality: In addition to the presence of this social connection, the quality of that
interpersonal relationship matters. Where an individual may initially recognize and dismiss
instances of disinformation, they are more likely to become involved when family and peers
consume or are affected by that disinformation [58]. Attempts to correct false information,
an outward cognitive security exercise, could be interpreted as quarrelsome behavior. Therefore,
it is more likely to occur among family and peers, where the relationship could mitigate
perceptions of aggressive communications [58].</p>
        <p>Homophily: Homophily is another factor that shapes social connections, which in the context
of cognitive security considers diversity of information behaviors and receptiveness to new
information. Degrees of homophily allow individual cognitive security to reflect and transfer
within the group. The greater the homophily within a group, the more close-knit it becomes and
the greater the chance it turns into an echo chamber. Differentiating individual opinions to correctly
recognize disinformation in groups with echo chambers can be difficult if some individuals
lack the “necessary instruments and cognitive abilities to assess the level of credibility of pieces
and sources of information with which they come into contact” [59]. When disinformation is
accepted within groups with high homophily, it diffuses quickly through the group and bridges
to similar groups [60]. On the flip side, groups with low homophily may prevent disinformation
from gaining wide acceptance within the group [60]; such groups offer a greater
chance for individuals to intercept the disinformation and to benefit from differing levels of
individual cognitive security within the group. Malicious actors that produce disinformation
recognize the role of homophily and confirmation bias in social connections and leverage those
relationships to create more sophisticated types of antagonistic information operations [61].</p>
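The contrast between high- and low-homophily diffusion described above can be illustrated with a toy simulation (our illustrative sketch, not a model from the cited sources): a simple susceptible-infected spread over a densely intra-connected "high-homophily" group, whose members readily accept peers' claims, versus a sparsely connected "low-homophily" group with a lower acceptance probability. The graph shapes, group sizes, and probabilities are arbitrary assumptions chosen for illustration.

```python
import random

def spread(adj, seed_node, accept_prob, rng, max_rounds=50):
    """Susceptible-infected spread: each round, every adopter tries to
    convince each non-adopting neighbor with probability accept_prob."""
    adopted = {seed_node}
    for _ in range(max_rounds):
        new = set()
        for node in adopted:
            for nbr in adj[node]:
                if nbr not in adopted and rng.random() < accept_prob:
                    new.add(nbr)
        if not new:          # diffusion has stalled
            break
        adopted |= new
    return len(adopted)

def trial_mean(adj, accept_prob, trials=300, seed=42):
    """Average final adoption count over repeated stochastic runs."""
    rng = random.Random(seed)
    return sum(spread(adj, 0, accept_prob, rng) for _ in range(trials)) / trials

n = 12
# High homophily: a clique where every member hears every other member,
# and peer claims are accepted half the time (assumed parameters).
clique = {i: [j for j in range(n) if j != i] for i in range(n)}
# Low homophily: a sparse ring of more diverse members, where acceptance
# of a neighbor's claim is much less likely (assumed parameters).
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

high = trial_mean(clique, accept_prob=0.5)
low = trial_mean(ring, accept_prob=0.15)
print(f"mean adopters, high-homophily clique: {high:.1f}")
print(f"mean adopters, low-homophily ring:   {low:.1f}")
```

Under these assumptions the clique saturates within a few rounds while diffusion on the ring frequently dies out near the seed, mirroring the qualitative claim that high homophily speeds acceptance and low homophily limits it.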
        <p>Social roles and overloading: Other relationship factors include social roles and overloading
behaviors. Social roles can emerge within interpersonal relationships and affect the development
and transmission of cognitive security capabilities. Social exchanges, which include sharing and
correcting disinformation, may require the use of social capital within the group. Individuals may
weigh their role and value in a relationship or group as a factor in whether to correct
others in the group, based on the potential social costs [60]. In addition to social roles, the overall
volume of information flowing between social connections can affect the ability to recognize
disinformation. Large amounts of information, correct and incorrect, cycle through various
communication platforms and pass through groups. The amount and type of information
presented within a social circle, particularly via a content delivery system, can overload
individuals [62]. Thus, while an overloaded individual could benefit from the cognitive security
of other group members for the information those members share, they would also rely upon
those members to identify information injected into the group by content delivery systems.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Organizations</title>
        <p>An organization is a group of people with a common purpose: this includes government
departments, businesses (e.g., companies, social media platforms), nonprofits (e.g., United Nations
agencies), topic-specific facilities (e.g., hospitals, health facilities), and informal groups
(e.g., cognitive security monitoring and response groups: fact-checking, election monitoring)
[63]. It also includes the groups and businesses that support the creation, dissemination, and
use of mis- and dis-information.</p>
        <p>Organizations’ role in cognitive security: Organizations might maintain their own cognitive
security, or be part of the cognitive security of the vertical (e.g., elections, health, transport)
system. Organizations also affect the cognitive security of individuals, communities, and other
stakeholders, including the end-users that they serve.</p>
        <p>Organizational influence: Organizations influence cognitive security through narrative- and
activity-based mitigations and counters to misinformation and other cyber harms.
Narrative-based counters include prebunking, debunking, and making clear information available in the
spaces where individuals seek, post, and share both information and misinformation. Activity-based
mitigations and counters include reducing the visibility of online misinformation content,
sites, and creators, and training influencers in areas where misinformation has spread offline, in
local languages, or to communities that are hard to reach with broader online campaigns.</p>
        <p>Boundary issues: Organizations implementing cognitive security for themselves have a
boundary problem. Unlike other areas of security, the organization needs to monitor and act on
not just its own systems, but also on systems, e.g., social media platforms, controlled by other
bodies. This forces organizations to cooperate on cognitive security mitigation and counters.
Organizations control the information that they produce, their own responses to external content,
and cooperation with other bodies. Larger organizations have communications and marketing
departments that scan media, such as social media, traditional media, and trade publications, for
mentions of the organization and subjects that affect it, and that respond to and produce information
and media about the organization. Few organizations scan for misinformation that
affects them; fewer still (outside media and social media organizations) scan for misinformation
about their vertical, or affecting their stakeholder populations.</p>
        <p>Factors in organizational cognitive security: Factors that afect organizations’ own cognitive
security include the organization’s access to information monitoring and response resources
(which is often related to organization size), access to collaboration resources (e.g., with other
organizations, and communities in its area), and visibility of subcommunities within or
overlapping the organization. For example, health organizations often contain medical staff who are
part of their own information communities. Organizational structure can create issues for which
the solutions are not always clear, such as determining who is responsible for cognitive
security. The dynamics of an organization’s social power also affect its cognitive security:
external perceptions (e.g., stigma), the accessibility of the group, and the group’s access to the external
mitigation and response resources that it needs. As with individuals, organizational characteristics
can make cognitive security easier or harder for the organization and its people to obtain.</p>
        <p>The importance of plans: An important first step in implementing cognitive security with
organizations is to create a cognitive security plan detailing potential and allowed responses to
a disinformation incident, the steps in those responses, contacts for internal and external
collaboration (e.g., image production, social media contacts), and mitigation steps that could be
taken to reduce the potential spread and effect of future events (e.g., creating and amplifying
narratives in information voids).</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Communities</title>
        <p>A community is a group of people who may or may not be spatially connected and could be local,
national or international, but who share common interests, concerns, or identities. Factors that
affect community cognitive security include the community structure (e.g., community cohesion),
trust levels, existing shared beliefs, and the challenges specific to that community, including
communications challenges. Other factors include literacy, language, existing communications
channels and communication skills (e.g., some communities don’t understand maps, and there
are similar issues with other new information types), access to information and bandwidth, and
the monetary cost of access (for searching, posting, and sharing information).</p>
        <p>Trust: Internews’ work on community-based misinformation response covers several
community factors [64]. Trust is key in community cognitive security incidents and defenses. Trusted
community information sources include individual influencers (e.g., community leaders, librarians,
and local officials), influential organizations (e.g., religious bodies), and meeting points (e.g.,
barbershops). Less-trusted information sources are generally less local, and include government
organizations and websites, science, and mainstream media. Community cognitive security plans
should take these different levels of trust, and their management, into account.</p>
        <p>Existing shared beliefs: Disinformation often takes advantage of existing in-group
narratives and schisms between groups. Examples include the use of Buddhist/Muslim tensions,
Dinka/Nuer tensions in South Sudan, and community-level distrust of government fueling
polarization during COVID-19. Factors listed include political conflict, social upheaval, economic
stress, and other sociological or psychological factors [65].</p>
        <p>Response origins: Where cognitive security plans originate is also important. Heeks
described development work as pro-poor, para-poor, or per-poor, a categorization that
also applies to other types of community [66]. Pro-community work occurs outside
communities, but on their behalf; para-community work is done alongside communities;
and per-community work occurs within and by communities. Local context is important to
cognitive security: pro-community centralized responses miss that context, e.g., that the US
contains multiple Hispanic communities with different cognitive security needs, while grassroots
per-community cognitive security originates from the community’s needs. This can create a
disconnect between what the community thinks is an appropriate intervention and what funders
and information-controlling organizations think it should be, and often gives rise to discussions
about who represents a community, and whose voices in it should be heard.</p>
        <p>Online communities: Online and physical communities differ in several ways [65]. Lieberman
and Schroeder identified four main differences: fewer nonverbal cues, greater anonymity,
more opportunity to form new social ties and bolster weak ties, and wider dissemination of
information [67]. Online disinformation takes advantage of the three “Vs” of big data: greater
volume, at greater velocity, and over a wider variety of channels, languages, formats, and
community structures. Anonymity allows users to spoof (pretend) membership of offline
communities, increasing trust in their roles as community members or influencers. Wider
dissemination of information gives access to many more communities than those bound by
geography or spoken language, with automation and electronic content production making the
management of multiple “sock puppet” accounts and groups feasible [68]. More opportunity to
create and affect social graphs (the ability to form new ties and bolster weak ones) has been
useful to the creators of misinformation-led online groups [69]. Despite this, online communities
do still have geographical factors, including algorithmic containment of what they view because
of their location, language, search terms, and influencers.</p>
        <p>Finally, not all community falsehoods are bad: community information and coherence are
also sometimes based on false information in the form of myths and convenient untruths, such as
backstories and other misinformation. These can be used as signals of belonging to a community
without being believed. Santa Claus dropping down chimneys is clearly a myth, but determining
from the outside which false information is normal to a community, and not to be countered,
may not be easy.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Policy</title>
        <p>Public policy plays a substantial role in the governance of the flow, use, and storage of
information, as well as in guiding actors within the information and social environments. Such policies also
influence and are influenced by individual, group, and organizational culture and dynamics,
creating a symbiotic system that can reinforce systems of cognitive security. Moreover, policy
is one vehicle that guides how individuals navigate and interact with the institutions and other
actors in the cognitive security landscape. Policy affects several key factors which can trickle
down toward the development of individual cognitive security. Such factors include
legislation and executive policy, coordinated planning documents, stakeholder involvement,
policy hesitancy or resistance, funding and resources, and research and reporting endeavors.</p>
        <p>Whether individually or as a collaborative group, countries can take a proactive approach to
stem the flow of disinformation to their populations. For example, the US and the European
Union (EU) actively work with experts to identify and counter sources through legislation [70].
Regulation is one type of legislation and executive policy that has a forcing function on the
transmission of disinformation. Regulation in the US requires advertisers to track, sometimes
publicly, who purchased ads and for how much, in an effort to improve election transparency [71, 70].
At a subnational level, the US states of Washington and California introduced policy to improve
media literacy in schools [72, 73]. Separate but similar policies from different groups, such as
intergovernmental bodies or across sectors, can form overlapping mosaics that enhance or hinder
cognitive security against botnets and offset disinformation. These policy mosaics are program
laboratories that produce vital knowledge developments. Yet, policy mosaics can also result
in a fractured field of consent management platforms for data protection,
where platforms and their use of data vary in response to each privacy policy directive [74, 75].
Governments can also use policy planning documents, such as strategic security plans, to
help set agendas, as well as establish leadership and security topic importance. The Biden
Administration’s recent updates to the United States National AI Initiative through AI.gov
present strategic pillars like education and training to increase and improve the workforce
pipeline and emphasize the importance of incorporating socio-technical perspectives [76].</p>
        <p>The stakeholders involved in policy development and implementation have direct and
downstream impacts on the type, quality, and quantity of policies that affect cognitive security and
disinformation through their formal and informal policy institutions and networks. The number
and type of stakeholders across organizations to be involved in public and other macro-level
policies vary by purpose; Maynard and colleagues identified nine groups of
stakeholders that should be involved in information security processes within an organization: executive
management, ICT specialists, security specialists, legal and regulatory, business unit
representatives, the user community, human resources, public relations, and external representatives [77].
Moreover, stakeholders should remain involved throughout information security life cycles
[78, 79]. Whether within or across stakeholder organizations, each participant and group brings
their own context regarding what matters. Kshetri found the contexts of formal and informal
institutions and their institutional changes to be informative toward the networks formed and
relationship dynamics, which in turn impacted cloud technology and its security development
[80]. As cognitive security would encompass the use of cloud systems, these contextual drivers
and premises may also be applicable more broadly to cognitive security and resilience.</p>
        <p>The private sector may hesitate or resist policy if there is perceived overreach, censorship, or a
lack of limits and boundaries. Technology remains politically contentious. Examples of national
security policy resistance include allowances of data collection and use, disclosure mandates,
costs to small businesses, unclear liabilities, information sharing that becomes adversarial
roadmaps, and failure to use pre-existing legislation [81]. Reevaluating policy through other
lenses and being interdisciplinary are two avenues to help mitigate policy resistance and
align matters of importance, particularly for wicked problems like disinformation. Cognitive
science-based approaches serve as a lens to better understand the ethics and policy implications
of issues such as AI and botnet mitigation [82]. Funding, timing, and resource allocation
are central elements of policy that are also determined by the stakeholders involved, and may
influence policy resistance. They also serve as signals that affect
communities, organizations, and individuals working on cognitive security and resilience topics.
Resource allocation further reflects decision-making priorities and shapes the environment for
future cognitive security efforts. In their work at the National Institute of Standards and
Technology (NIST) advising government executives, Bowen, Chew, and Hash identified capital
planning as an essential element, alongside awareness and training, of information
security program development [83].</p>
        <p>Policy mandates and support for research and reporting endeavors are additional types
of barriers or accelerants for innovation and accountability. The US federal government’s
2020 National Defense Authorization Act (NDAA) homed in on the relationships between
social media platforms and information operations. In response to the NDAA and other rising
disinformation concerns, new organizations, like the Cognitive Security Intelligence Center
and the National Commission for Countering Influence Operations (NCCIO), emerged
to harness academic, civil society, and industry talents to defend against online disinformation
[84]. Inadequately funded organizations, such as university labs and private companies, may
not be able to participate equally with other peer stakeholders. On the flip side, research
organizations with their reputations on the line may engage less if participation brings unwanted
adversarial attention, such as from botnets [82].</p>
        <p>Yet, for all that policy can do for cognitive security, more policy-related questions arise that
may continue to sway security discussions. This is due to the inherently complex and emergent
nature of the socio-technical challenges that these policies are trying to address. As policies
attempt to eliminate disinformation and reduce its impact, those policy solutions will continue
to raise perceptual, economic, sectoral, ideological, ethical, and legal questions [85]. Many of
these are carried out at the society level.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Society</title>
        <p>A variety of societal characteristics have been shown to influence cognitive security and
resilience. This level encompasses some of the most impactful but distal determinants that are also
among the most enduring and resistant to change, such as cultural values and traditions, media
and social media influences, economic factors like poverty and inequality, and discrimination
and marginalization.</p>
        <p>Culture and ideology: A recent cross-national comparison of resilience to online
disinformation found that societal polarization decreases resilience to online disinformation, likely
due to increasingly disparate representations of reality that make it more difficult to
distinguish between false and correct information [86]. Societies characterized by a higher degree
of populism may also be more susceptible to disinformation due to the underlying worldview,
which includes sentiments such as anti-elitism and mistrust of expert knowledge. The same
factors that make populist societies vulnerable to information disorder also make them more
susceptible to anti-vaccine appeals.</p>
        <p>Media and free press: Weak public broadcasting services/public service media is associated
with greater susceptibility to online disinformation [86]. Similarly, societies with stronger media
infrastructures and an independent and free press tend to be more resilient to disinformation,
while those with higher levels of censorship tend to be more vulnerable [87]. However, societies
in which news consumers are distributed across a diverse and fragmented media ecosystem may
have increased susceptibility because of the greater number of entry points for both foreign and
domestic disinformation [86]. Low levels of distrust in institutional sources of knowledge (such
as science and medicine) and higher levels of funding for public service media are associated with
greater resilience to disinformation, as is a lower degree of media polarization and fragmentation
[87].</p>
        <p>Social media: Societies with greater numbers of social media users, higher rates of social
media use, and greater reliance on social media as a news source tend to be characterized by
poorer knowledge of public affairs, reduced political learning, and increased susceptibility to
online disinformation [86].</p>
        <p>Economy: Economic factors such as poverty rates, unemployment, and resource allocation can also
make populations more vulnerable to information disorder [88]. Similarly, societies with larger
advertising markets and more potential consumers tend to be more susceptible to disinformation
than those with smaller advertising markets [86]. This is attributed in part to the significant
amount of false content that is produced for the purpose of generating advertising revenue.</p>
        <p>Discrimination and marginalization: Factors such as racism, discrimination, and oppression
are also important determinants of cognitive security. Societies characterized by a greater
degree of marginalization of minorities may be more susceptible to disinformation, in part
because of associated perceptions that the political, social, and economic systems are “rigged”
[89, 90].</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Use Case</title>
      <p>With the SEM applied to a cognitive security context, we now apply it to a use case: US-based
COVID-19 vaccine hesitancy and refusal driven by disinformation, and the disinformation response.</p>
      <sec id="sec-4-1">
        <title>4.1. Individual</title>
        <p>At the individual level, vaccine-related mis- and disinformation can lead to vaccine hesitancy
through several different pathways, many of which also represent opportunities for possible
intervention and mitigation. Key mediating factors include knowledge and understanding,
personal experiences, cognitive and emotional appraisals, and risk perceptions, as well as
demographic and personal characteristics such as race, religion, and political ideology.</p>
        <p>There is a well-documented pattern of racial disparities in vaccine-related beliefs and
behaviors, and this has continued during the coronavirus pandemic [91]. In the US, members of racial
minority communities have been specifically targeted by vaccine-related mis- and
disinformation, which is believed to have contributed to increased rates of vaccine hesitancy within these
communities. Given the legacy of racism in science and medicine, race may also impact factors
such as trust in scientific and medical institutions, which could influence vaccine hesitancy
among members of racial minority communities by driving them to seek out information from
alternative sources whom they trust but who may not always be credible. Further discussion of the
impact of race and racism on vaccine hesitancy, and their relationship with vaccine-related
misinformation, is available in the societal section.</p>
        <p>Previous experiences with vaccination and/or knowledge of someone who suffered from a
vaccine-preventable disease or an adverse event associated with vaccination are associated with
vaccine hesitancy [92], as is being a parent [93]. During the COVID-19 pandemic, particularly,
research has found that Americans who identify as evangelical Christians are more likely to
be vaccine hesitant and less likely to respond to persuasive messaging aimed at shifting their
attitudes about vaccination [94]. Political affiliation has also been shown to influence risk
perceptions during the COVID-19 pandemic, with Republicans far less likely to view the virus
as a major threat to public health [95]. Given that low levels of perceived risk and susceptibility
are associated with a lower likelihood of vaccine uptake, this may be another avenue through
which exposure to misinformation influences vaccine hesitancy [96].</p>
        <p>Previous research shows that certain psychological factors, such as conspiratorial thinking
and endorsement of conspiracy theories, disgust sensitivity, and higher levels of reactance
and non-conformity are associated with negative vaccine attitudes and lower intentions to get
vaccinated [97, 98]. These patterns have also been documented during the COVID-19 pandemic,
with studies showing that conspiratorial beliefs are associated with negative attitudes about the
COVID-19 vaccine and reduced intentions to get vaccinated [99].</p>
        <p>Concerns about the safety of vaccines and the potential risks are two of the main drivers
of both general vaccine hesitancy and, specifically, COVID-19 vaccine hesitancy [100, 101].
Exposure to vaccine-related misinformation may alter risk perceptions, resulting in negative
attitudes toward vaccination and, in some cases, vaccine hesitancy [98, 102]. This may be
particularly true for personal narratives about adverse events associated with vaccination, which
are a very common and influential form of vaccine misinformation [103]. Cognitive biases may
also contribute to vaccine hesitancy after exposure to vaccine-related mis- and disinformation
[100]. For example, since mis- and disinformation tend to be more emotionally salient than
accurate information, these messages may be easier to recall and lead to misperceptions about
the frequency of rare events, ultimately resulting in vaccine hesitancy. Additionally, emotionally
driven messaging and vaccine-related misinformation that manipulate emotions such as fear
and anger have been identified as key contributors to vaccine hesitancy [104]. Exposure to
COVID-19-related misinformation has been found to be associated with increased fear and
stress, which in turn may impair information processing and lead to poorer health-related
decision-making [105]. Studies show that psychological distress is associated with vaccine
hesitancy, and there is evidence that this relationship is mediated by mistrust and belief in
conspiracy theories [106]. Additionally, research indicates that people may be more likely to
share low-credibility sources of COVID-19 vaccine information as they cope with anxiety, anger,
and fear, suggesting that there may be a feedback loop involving exposure to emotionally-driven
mis- and disinformation, emotional appraisal and response, and information behaviors, which
may in turn result in increasing levels of vaccine hesitancy [107].</p>
        <p>People who rely on Facebook as their primary source of news about COVID-19 are more
likely to be vaccine hesitant than those who get their news from other sources [108]. This may
be at least partially attributable to the types of information individuals are likely to encounter
on Facebook, or it may be a reflection of underlying characteristics that make certain individuals
more likely to seek out news content on Facebook and more likely to be skeptical of vaccination.
Either way, the association between using Facebook as a primary news source and vaccine
hesitancy points to a potential avenue for targeting future vaccine communication efforts.</p>
        <p>The majority of interventions aimed at countering or mitigating the effects of vaccine-related
mis- and disinformation target individual-level factors, such as susceptibility, vaccine-related
beliefs, perceptions of personal risk, or digital literacy. Research suggests that refutation
messages that address the affective and cognitive evaluations of vaccine-related misinformation
may help reduce vaccine hesitancy among people exposed to misinformation. Promoting
vaccine uptake by addressing the motivational roots of vaccine hesitancy, such as concerns
about vaccine safety and effectiveness, may be a promising strategy for countering certain
vaccine-related information harms, while fact-based rebuttals focused on knowledge deficits do
not appear to have much of an impact [109, 110]. During the COVID-19 pandemic, there has
also been a focus on recruiting social media influencers to promote vaccine uptake, but research
suggests that exposure to authoritative information about the vaccine is a stronger incentive
to get vaccinated than endorsement from influencers [109]. As is the case in public health,
individual-level approaches to countering information harms are limited in their potential
impact, and are likely to be more effective when paired with approaches targeting higher-level
factors.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Interpersonal</title>
        <p>When it comes to individuals choosing whether or not to become vaccinated against
COVID-19, family, peers, and other social connections play an important role. These interpersonal
interactions are key factors in shaping individual perceptions of what to believe or reject about
the vaccine. Adolescent doses became available later than adult doses, and the interim period was rife
with a spectrum of mis- and disinformation as to whether children should receive the vaccine,
and in what dose [111]. Rogers and colleagues found that family norms,
particularly parental norms, largely influenced adolescent vaccine intent; peer norms had a
lesser but still significant impact [112]. Stepping back to consider all adults, surveys around
May 2021 consistently found that peer effects, from advice to pressure, had far more effect on
vaccination decisions than political preference, despite divergent political views of COVID-19
largely dividing the nation [113].</p>
        <p>As expected, loose social environments, such as Twitter conversations about vaccine
fraud involving thousands of people, showed strong associations between low vaccination rates and
negative attitudes toward the vaccine; yet that effect disappeared in similar discussions with
family and close friends [114]. Close social connections serving as fact checkers are a final defensive
line against misinformation, out-performing validation from experts such as Dr. Anthony Fauci
[114]. We speculate that the CDC’s recognition of the influence of interpersonal relationships
motivated its creation of entire guides on how to discuss COVID-19 with friends and
family [115].</p>
        <p>Other interpersonal factors continue to yield results for vaccine uptake and for overcoming
disinformation. Homophily, particularly on measures of race and ethnicity, was a primary
driver of becoming vaccinated and of positive views about the vaccine, in line with expectations
from social contagion theory [116]. Similar homophily results have been found when comparing
COVID-19 vaccine uptake to other vaccines and antibiotic use [117], and for the effect of
socio-demographics on various prophylactic measures [118]. Buttenheim’s Congressional expert
testimony on strategies to reduce vaccine hesitancy recommended overcoming misinformation
by leveraging the social capital of well-known individuals within a community, such as stylists
and barbers [119]. Lastly, the length and severity of the COVID-19 pandemic gave rise to large
swaths of information, increasing the potential for individuals to experience information overload,
which in turn leads to issues such as mental fatigue and judging credibility through cognitive
heuristics; groups offset this through collective coping practices that extend individuals’ ability
to communicate about COVID-19 [120].</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Organizations</title>
        <p>Organizations involved in US COVID-19 vaccine promotion efforts include the CDC, which
together with the WHO is active in trying to counter what has been termed an “infodemic”
[121, 122]. CDC’s advice to communities includes using social listening on social media and
traditional media channels; logging and analyzing misinformation in these channels; listening
to the community to identify content gaps, perceptions, information voids, and misinformation;
sharing clear, accurate information; and using trusted messengers (influencers) to boost
credibility. Outputs from the CDC include regular State of Vaccine Confidence Insights reports
[123].</p>
        <p>Other organizations involved in US COVID-19 vaccine promotion efforts include state and
local health departments. For example, the Massachusetts State Department of Health has
a Vaccine Ambassadors scheme, making public health professionals available to community
forums and meetings in 12 languages including American Sign Language [124]. However, these
agencies have also faced significant challenges during the pandemic, in large part due to poor
coordination and communication between local, state, and federal health agencies, particularly
in the early months. Miscommunication, lack of or delayed information sharing, incompatible
databases, inconsistent reporting practices, and insecure communication channels are just a few
of the problems that have arisen. Similarly, efforts to develop effective vaccine communication
strategies have been complicated by local and state-level variation in COVID-19 trends, which
at times has resulted in confusion due to seemingly conflicting information coming from federal
versus state and local health agencies. Hospitals, schools, universities, and churches are other
examples of organizations involved in vaccine promotion and communication eforts.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Communities</title>
        <p>Communities are active in creating, amplifying, mitigating, and countering vaccine
disinformation. Online communities have formed around antivax narratives and vaccine conspiracy
theories, or have been created by admins, some linked to known disinformation creators, seemingly
to advance these narratives and increase the division they create. High-profile online influencers
also amplify these narratives [125].</p>
        <p>Communities affected by vaccine misinformation include immigrant communities. In Boston,
the Haitian immigrant community sees vaccine misinformation in Haitian Creole; community-based
responses to this include faith-leader narratives, such as Pastor Keke appearing on platforms
including local radio, and the Mattapan community health center placing local ads and flyers
[126]. Hispanic communities in the US have also been targeted with vaccine disinformation;
one issue here is the assumption by many higher-level (e.g., government-level) responders
that Hispanic communities in the US are a uniform population, rather than being targeted
separately by factors including their countries of origin: the disinformation, narratives, and
channels aimed at Cuban-American, Puerto Rican, and Mexican communities vary significantly,
from Cuban-American fear of leftist politics to documented histories of medical experimentation
on Puerto Ricans [127].</p>
        <p>Other communities targeted by disinformation in the US include Black communities [128],
already targeted disproportionately with election-related disinformation, wellness communities,
and parents of small children. Each of these communities has online groups and spaces which
could be targeted by fake profiles, misinformation, and artificial amplification of vaccine
misinformation. Healthcare worker communities are also targeted: Doximity, the “LinkedIn for
doctors” connecting 80% of US physicians, also contains vaccine disinformation [129].</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Policy</title>
        <p>With COVID-19 at a global scale, having infected nearly 300 million people and killed
almost 5.5 million, it was inevitable that governments would get involved. Governments at all
levels across the US activated a wide range of policies to reduce vaccine hesitancy and increase
vaccine uptake. Yet differences in policy formation and compliance varied largely along political
lines. Messaging from elected officials and political elites drove narratives and vaccine
endorsements, or the lack thereof, which shaped early individual perceptions of the vaccine
[130, 131, 132]. Despite political fractures, leadership from the top initiated efforts to reduce
misinformation and disinformation, such as through science-based public health campaigns
identified and supported through the US National Strategy documents [133]. Additionally, the
US Department of Health and Human Services (HHS) initiated a 5-year strategic plan on
vaccines, which included goals specifically aimed at partnership development to combat
disinformation and reduce vaccine hesitancy [134]. The plan is national in orientation yet global in
scope, and covers vaccines beyond COVID-19, but maintains targets and metrics at the individual
level. In addition to rolling out policy to reduce disinformation and increase vaccinations, some
policy was inward-looking, mandating governmental self-checks. The US CARES Act set
requirements for the US Government Accountability Office to perform oversight of COVID-19-related
policies, including stakeholder efforts on vaccine administration and information sharing [135].</p>
        <p>In this use case, these examples combine funding and resource considerations with research.
With no end in sight, agencies across the federal government initiated research funding on
disinformation and vaccine hesitancy. Even before the COVID-19 pandemic, HHS was sponsoring
work to “help individuals make informed decisions about immunization for themselves and
their families” [136]. The NIH, of course, is deeply involved in supporting vaccine and
decision-making research; it funds a wide range of grants covering issues such as community-level
interventions for vaccination uptake, evaluation of government policies or initiatives “that mitigate or
exacerbate disparities in vaccine access, uptake, and series completion”, and examination of
barriers, access, and other measures among populations “who experience health disparities” [137].
Even NASA opened grants for access to its remote sensing and satellite data to better understand
spatial effects on the “environmental, economic, and/or societal impacts of the COVID-19 pandemic”
and how its systems can benefit decision-making research [138].</p>
      </sec>
      <sec id="sec-4-6">
        <title>4.6. Society</title>
        <p>Anti-Asian sentiment and discrimination have been widely documented during the COVID-19
pandemic, in large part because the virus was first discovered in Wuhan, China, which has led to
the proliferation of conspiracy theories and attributions of blame for the pandemic. This is even
apparent in search terms about the pandemic, which reflect stigmatizing beliefs about the virus
and its origins [139]. There is concern that these beliefs, combined with existing ethnic and
racial biases, may have spilled over to the healthcare system and public health communication,
resulting in culturally insensitive vaccine messaging and poorer quality interactions with
healthcare providers [140, 141]. Given that trust is a key factor in determining vaccine-related
attitudes and behaviors, it is possible that these negative experiences may have contributed to
vaccine hesitancy among some subgroups of Asian Americans.</p>
        <p>The history of anti-Black discrimination and racism in the U.S. healthcare system is also
believed to play a significant role in driving COVID-19 vaccine hesitancy in Black communities
[128]. This problem is compounded by vaccine-related misinformation targeting Black
communities, which fuels mistrust and negative attitudes toward vaccination. Additionally, high
levels of mistrust can increase susceptibility to misinformation [49]. Inequality-driven mistrust
has been recognized as a distinct phenomenon among communities who have historically
experienced disenfranchisement [142]. During the COVID-19 pandemic, this has manifested
itself in false belief systems such as the idea that vaccines and therapeutics are being deliberately
withheld from certain racial groups. Recognizing the significant role of racism in fueling
mistrust and harmful health beliefs such as vaccine hesitancy, researchers are calling for solutions
to information disorder that directly address racism [142].</p>
        <p>Media coverage is believed to have contributed to fear, mistrust, and stress during the
COVID-19 pandemic, which may have resulted in increased vaccine hesitancy [143, 144]. While
clear, accurate, and timely information from trusted sources is necessary to make informed
decisions during public health crises, there is a delicate balance to strike between providing
too little versus too much information. On the one hand, information vacuums and infrequent
updates during ongoing crises can lead to the proliferation of rumors and increased levels of
uncertainty, psychological distress, and fear, but too much information may cause people to
become overwhelmed, confused, and unsure of who or what to trust. Paradoxically, as reporters
and news outlets tried to keep the public informed about the outbreak, excessive exposure
to news stories about COVID-19 may have had a negative impact on preventive behaviors
such as vaccination. This has been attributed in part to the observed impacts of information
overload, which has been shown to lead to maladaptive behaviors and information avoidance
during emergencies [145, 146]. Additionally, perceptions that the media exaggerated the risk of
COVID-19 are associated with vaccine hesitancy, possibly due in part to disengagement with
authoritative sources of information and increased engagement with “alternative” news sources
[143].</p>
        <p>At a national level, social media use and the prevalence of foreign disinformation online
has been shown to be significantly associated with COVID-19 vaccine hesitancy among the
population [147]. Low levels of societal trust in scientific and biomedical institutions, and low
levels of citizen engagement with the scientific community, are also associated with COVID-19
vaccine hesitancy [148]. Societal norms that prioritize individual freedom over the protection
of vulnerable groups have also been identified as a significant driver of COVID-19 vaccine
hesitancy [149]. This coincides with trends in anti-vaccine messaging, which in recent years
have increasingly framed vaccine refusal as a civil right and vaccine mandates as a form of
tyrannical government overreach [150]. Political and voting trends have been shown to be
associated with COVID-19 vaccination attitudes and behaviors, such that higher percentages of
votes for Donald Trump are significantly associated with lower vaccination rates and increased
vaccine hesitancy [151]. Additionally, research suggests that Christian nationalism is among the
strongest predictors of vaccine hesitancy, in large part because of its association with distrust of
science, hostility towards authorities other than the church, and endorsement of misinformation
espoused by Donald Trump [149].</p>
      </sec>
      <sec id="sec-4-7">
        <title>4.7. Integration</title>
        <p>While application of the SEM allows breakdown analysis of stress points and interventions
for an information-based harm within each level, the levels do not operate in isolation from
one another. This integration section details examples of cross-level analysis for a
microchip-in-the-vaccine misinformation scenario. On October 2, 2020, Charlamagne Tha God, while on
The Breakfast Club radio show, claimed, “[m]illions will line up to take the vaccine, and boom,
microchips for all of y’all, right in time for goddamn Thanksgiving” [152]. From there, the
rumor that the vaccine injects microchips spread rapidly. It was exacerbated and amplified by
media outlets and influencers, spinning comments from Bill Gates in a Reddit Ask Me Anything
conversation about digital vaccine cards into a narrative of microchipped vaccine cards as part
of a larger tracking system [153, 154]. As of March 2021, two percent of surveyed individuals
representing the American adult population believed the vaccine contained a microchip, and
nearly 27 percent of survey respondents were unsure; together these groups account for almost
75 million people [155]. In truth, neither the vaccine nor vaccine passports and certification cards
contain microchips, but the microchip information-based harm is widespread [156, 154].</p>
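        <p>The “almost 75 million” figure can be sanity-checked with back-of-the-envelope arithmetic; note that the adult-population figure below (roughly 258 million US adults in 2021) is our assumption for illustration, not a number taken from the cited survey.

```python
# Plausibility check of the "almost 75 million people" figure.
# Assumption (not from the cited survey): ~258 million US adults in 2021.
us_adults = 258_000_000

believe_share = 0.02  # share who believed the vaccine contained a microchip
unsure_share = 0.27   # share who were unsure ("nearly 27 percent")

affected = us_adults * (believe_share + unsure_share)
print(round(affected / 1e6, 1))  # prints 74.8 -- close to the cited ~75 million
```

Under that assumed denominator, the two survey shares together indeed account for roughly 75 million adults.</p>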
        <p>Despite the prevalence of the microchip rumor, there are observable countermeasures and
counter-initiatives that span the social ecological levels. Making the vaccine ingredient list
publicly available is one approach to demonstrating that it does not contain any microchips. The
CDC provides vaccine ingredients on its website as part of the broader vaccine information packet,
along with guidance as to who should or should not receive a particular vaccine, possible side
effects, other safety data, and clinical trial data [157, 158]. This information is used by other
agencies, such as the FDA when determining vaccine approval status (organization), but these
ingredient lists also help obtain endorsement from faith-based groups (community). Moreover,
ingredient transparency helped Islamic faith leaders to determine that vaccine uptake follows
Sharia (Islamic) law, and leadership in The Church of Jesus Christ of Latter-day Saints to
actively support vaccination and not provide religious waivers to their membership [159, 160].</p>
        <p>Some actors transformed the injected-microchip topic from misinformation into disinformation
by selectively editing video interviews of prominent business leaders and news anchors to
reshape a narrative that portrays the microchips as real [156]. Institutions such as the CDC
[121] and FactCheck.org [161] (organizations) provide mythbusting analysis, and media coverage
carries this message to the public [162, 163] (society). Live conversations between public health
officials, vaccine suppliers, politicians, members of the media, and the public provide another
avenue to overcome the microchip misinformation. A televised question session by the Orange
County Board of Supervisors in California with their public health administrator (organization)
included an inquiry about injected tracking devices, which was quickly debunked; this engagement
may boost public awareness and transparency with constituents (individuals and communities),
and the session was also distributed by NBC News for wider dissemination (society) [162].</p>
        <p>Beyond traditional news outlets, news is increasingly consumed via social media. Younger
individuals are more likely to obtain the majority of their information online [164], and
well-established journalistic publications cite social media sites as references [165]. Microchip
and other vaccine-related rumors spread across social media platforms, and in response, tech
giants like YouTube and Twitter developed COVID-19 misinformation policies that allow for
removal of posts and potential account bans (organization, policy) [166, 167]. Similarly, Facebook
and Instagram, following World Health Organization guidance (organization), incorporated
group administrators on their sites (community) to help control the presence of COVID-19
misinformation and ban users violating those policies (policy) [168]. Although creating and
implementing social media COVID-19 misinformation policies has had mixed success [169], any
actions have the potential to affect how platform users create and share their content
(interpersonal), as well as engage with influencers, celebrities, public officials, and other users
(communities).</p>
        <p>At a more local and personal level, there are intervention efforts to help individuals overcome
COVID-19 information-based harms, including the injectable-microchip falsehood. School educators
have professional training available (organization) to help “leaders, teachers and parents to become
“vaccine ambassadors” to communicate better with parents” (interpersonal), including how to
defuse misconceptions about microchips without being dismissive [170]. Local investigative
reporters, like Mahsa Saeidi in Tampa, Florida with WFLA news (society), pair interviews
with concerned parents with a physician (community) to address the injected-microchip
falsehood and to separate fact from fiction [171]. Other groups skeptical of the government include
communities of color, young people, and the LGBTQ+ community. Recognizing this gap, the
Biden administration brought together Dr. Anthony Fauci and teen social media influencers
(community); those influencers then reached out to their millions of followers (society) to
help dispel microchip and other misinformation while also reducing vaccine hesitancy [172].
The influencers bridged Gen-Z, Black, Spanish-speaking, and LGBTQ+ communities. Moreover,
influencer outreach carried into conversations between parents and their youth followers [172].</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion and Conclusion</title>
      <p>Vaccination-related information-based harms continue to pervade as the COVID-19 pandemic
continues to affect individuals and societies around the world. The web of falsehoods creates
and reinforces vaccination hesitancy. The SEM, adapted from Bronfenbrenner, offers a holistic
approach to understanding information behaviors across a layered spectrum from the individual
through society. Our SEM analysis extended previous information disorder and COVID-19
related research to identify contributing factors and complex relationships affecting vaccine uptake.
Through our SEM factors, practitioners can develop interventions that span interdependent
relationships for greater efficacy. Academics may leverage this adaptation of the SEM to link
research across traditional disciplinary boundaries and encourage future work on the causal
relationships within COVID-19 information behaviors.</p>
      <p>The SEM can be used to classify cognitive security at the individual, interpersonal, community,
organizational, policy, and societal levels. It should be useful for researchers and responders
assessing the coverage of responses, the implications of actions, and barriers that could diminish
the effectiveness of interventions at each SEM level. Expanding analysis to include not just
the individual, but also the effect of family and friends, communities, etc., and considering the
interactions between SEM levels, potentially increases the reach and scalability of cognitive
security.</p>
      <p>
        Our usage of the SEM here relies upon a heavily qualitative approach as part of our effort to
establish its foundation within this information science space. The lack of quantified elements
within this SEM may give some readers pause. However, we believe that the lack of quantitative
aspects within the SEM we present does not make it any less relevant for quantitative research.
Other scholars have identified areas in which the SEM can or could work alongside quantitative
methods. Onwuegbuzie, Collins, and Frels [173] posited that the levels within Bronfenbrenner’s SEM [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
are useful for both qualitative and quantitative research. Moreover, the SEM as a framework aids
the pursuit of generalization, often a focus of quantitative research, by helping scope empirical
methods and designs, such as sampling frames, appropriate to one or more of the model’s levels
[173].
      </p>
      <p>Our paper offers a novel adaptation of the social ecological model for a cognitive security
context which leans heavily on qualitative description to provide a first-step foundation upon
which future research, including quantitative approaches, may identify measurable variables
within each level. One example of this quantitative use within another context assessed political
violence and child adjustment in Northern Ireland with individual (individual level) and family
(interpersonal level) demographic data, social and economic measures from the local political
and religious communities (community level), as well as education system and attainment
values, sectarian segregation, and policy (all society level) [174]. The political violence study
then used the variables associated with the model to derive index scales and conduct exploratory
factor analysis, correlations, and path analysis, also known as structural equation modeling in
other disciplines. Regression coefficients and R² values from the path analysis then mapped
back to the social ecological model levels, illustrating potential relationships for variables
within and between the levels [174]. A second example used the levels of a social ecological
model to frame interrelated, multi-level, quantifiable characteristics to understand alcohol
use behaviors; its statistical approaches included logistic multilevel random effects models
and censored regression (TOBIT) random effects models [175]. Relatedly, the social ecological
model can be used to establish the environment in which computational approaches such as
nested network analysis or agent-based models operate. The levels within a social ecological
model naturally lend themselves to multi-level modeling. Brown and colleagues discussed the
benefits of using computational approaches to explore human-computer interactions situated
in complex social environments within an HIV prevention context [176]. Of the different
computational perspectives mentioned, they included the development of an agent-based model
that fit within a simplified version of the social ecological model with interactions and strategies
based on the environment’s information landscapes [176]. Saha and colleagues took a different
approach to using the social ecological model. Their work focused on using machine learning
models to help impute missing data, but the authors relied upon a social ecological model
as a theoretical foundation to better understand the context of their data and the environment
in which that data originates and interrelates. In turn, they believed opportunity exists to
use their imputation modeling to improve upon and discover new dimensions of intersection
within a social ecological model [176]. Future use of our model could leverage similar variable
identification or modeling perspectives to enhance a quantitatively oriented research design, or
be used in a mixed-methods design to improve triangulation.</p>
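      <p>The agent-based direction can be made concrete with a toy simulation. Everything below (five communities, a 5% seeding rate, the contagion and debunking rates, and the update rule) is an illustrative assumption of ours, not the model from the cited work; it merely shows how SEM levels map onto an agent-based structure.

```python
import random

# Toy agent-based sketch of rumor belief spreading across nested SEM levels.
# All parameters and rules are illustrative assumptions, not from any cited study.
random.seed(0)

class Agent:
    def __init__(self, community):
        self.community = community                      # community-level grouping
        self.believes_rumor = random.random() < 0.05    # individual-level seeding

def step(agents, peer_influence=0.3, debunk_rate=0.1):
    """One round: peers within a community spread the rumor (interpersonal),
    while an organization-level debunking intervention removes some belief."""
    by_community = {}
    for a in agents:
        by_community.setdefault(a.community, []).append(a)
    for members in by_community.values():
        share = sum(a.believes_rumor for a in members) / len(members)
        for a in members:
            if not a.believes_rumor and random.random() < peer_influence * share:
                a.believes_rumor = True   # interpersonal contagion
            elif a.believes_rumor and random.random() < debunk_rate:
                a.believes_rumor = False  # organizational counter-messaging

agents = [Agent(community=i % 5) for i in range(500)]
for _ in range(20):
    step(agents)
print(sum(a.believes_rumor for a in agents))  # believers remaining after 20 rounds
```

With these (assumed) rates, belief settles near the balance point where contagion and debunking offset each other; varying the rates per community would let cross-level interventions be compared within one simulation.</p>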
      <p>The SEM provides a novel framework for conceptualizing cognitive security, but there are
limitations to its use. The primary limitation is a corollary of its strength: the SEM is broad in
scope and is meant to be a comprehensive framework to guide needs assessment, evaluation,
surveillance, and more, but its broad scope comes at the cost of some degree of precision. Relatedly,
the scope of the model also makes it difficult to quantify in a single measure. In previous uses,
efforts to quantify the model have been carried out by using validated measures for factors
at each level of the model, rather than one comprehensive measure. There have also been
successful efforts to validate hypotheses based on factors included in the SEM by using path
analysis to test various models of the relationship(s) between predictor variables at multiple
levels, and between those variables and a specified behavioral outcome [177]. This can be
further systematized by using the SEM to guide meta-analyses and systematic reviews to develop
empirically grounded and testable lists of factors at each level. Integrating the SEM with other
novel approaches like agent-based modeling is another promising approach to harnessing
the breadth and qualitative nature of the SEM. As Rounsevell and colleagues explicated, an
“ABM may include quantitative, equation-based approaches, but the rules that characterize this
approach are qualitative” [178].</p>
      <p>Questions of generalizability arise given the challenges associated with measuring the SEM.
On the issue of generalizability, the findings from the current study would not be expected to
be applicable to other subject matter issues, though the model and its underlying assumptions
and relationships should be expected to remain stable across many different settings. Although
this is a limitation, it is an inherent characteristic of research exploring the dynamic
human-computer-information nexus. As Antill articulated, “By the very act of installing an information
system, one is changing the situation into which it is installed. Therefore, no particular
‘experiment’ can be repeated” [179]. Of course, this does not mean that repeatability is null and void.
Rather, it means that a widely-held notion of repeatability — that the same results should
be produced by any researcher in any laboratory anywhere in the world — may need to be
reconceptualized to consider other forms of repeatability, such as the ability to demonstrate
that the same set of variables or assumptions, held to be controlled and identical, do indeed hold
up in multiple tests of the model. Similarly, there are different mechanisms of achieving validity
in qualitative research. Among the most important is face validity, which simply conveys
whether the results were viewed as credible, recognizable, and trustworthy by others. This is
one of the primary mechanisms of transferability in qualitative research — rather than using
statistical inference based on a defined population, qualitative analysis seeks to produce results,
assumptions, relationships, and models that can be generalized to many settings [180].</p>
      <p>The SEM adaptation in this paper is early-stage work, and we applied it to a single use-case
falsehood: microchips injected with the vaccine. Future efforts could explore additional
factors within each level of the SEM and reinforce the interdependencies between levels. Next
steps to improve the model’s validity would be to apply this SEM approach to other COVID-19
information disorder use-cases. Another interesting application would be looking at algorithms
(e.g., social media recommendation algorithms) through the SEM-for-cognitive-security lens,
where the neighbors of individual algorithms would be models and model instantiations sharing
training datasets and results, and communities could form around the pre-trained models used
in, e.g., text and image understanding, with model poisoning and other machine information
harms being shared across those communities, and so on. Another future pathway extends
this SEM adaptation as a theoretical contribution alongside other prominent theories of
information behavior within an environment, such as Chatman’s small-worlds micro view [181],
Habermas’ lifeworld theory macro view [182], and the multilevel view of Jaeger and Burnett’s
Information Worlds [183]. Additionally, other information behavior theories could help evolve
this SEM adaptation, such as Lee and Butler’s theory of local information landscapes [184] to
consider the materiality of information within the environment as a capacity-based construct,
the directionality of information seeking through Sonnenwald’s information horizons [185],
or chance discovery via Williamson’s incidental information acquisition [186] or
Agarwal’s information serendipity [187].</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We would like to thank Simon van Woerden (WHO) and Alex Ruiz (Phaedrus) for conversations
that helped inform our diagram of the crisis stages of Cognitive Security, including discussions on
UN emergency cycle models. Thanks to the developers of the ACM consolidated LaTeX styles
(https://github.com/borisveytsman/acmart) and to the developers of the updated Elsevier LaTeX
templates (https://www.ctan.org/tex-archive/macros/latex/contrib/els-cas-templates).
American Journal of Community Psychology 44 (2009) 350. doi:10.1007/s10464-009-9270-8.
[8] A. Lindridge, S. Macaskill, W. Gnich, D. Eadie, I. Holme, Applying an ecological model to
social marketing communications, European Journal of Marketing 47 (2013) 1399–1420.
doi:10.1108/EJM-10-2011-0561.
[9] I. Vigna, A. Besana, E. Comino, A. Pezzoli, Application of the Socio-Ecological System
Framework to Forest Fire Risk Management: A Systematic Literature Review,
Sustainability 13 (2021) 2121. doi:10.3390/su13042121.
[10] F. van Gool, N. Theunissen, J. Bierbooms, I. Bongers, Literature study from a
social ecological perspective on how to create flexibility in healthcare organisations,
International Journal of Healthcare Management 10 (2017) 184–195.
doi:10.1080/20479700.2016.1230581.
[11] F. Tripodi, Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative
News Practices, Technical Report, Data &amp; Society, 2018. URL:
https://datasociety.net/wp-content/uploads/2018/05/Data_Society_Searching-for-Alternative-Facts.pdf.
[12] C. C. Self, Credibility, in: D. W. Stacks, M. Salwen (Eds.), An Integrated Approach to
Communication Theory and Research, 2 ed., Routledge, New York, NY, 2014, pp. 449–470.</p>
      <p>[13] G. Pasi, Credibility and Relevance in Information Retrieval, 2021. URL:
https://romcir2021.disco.unimib.it/wp-content/uploads/sites/90/2021/04/KeynoteROMCIR-LAST.pdf.
[14] U. M. of Defence, Joint Doctrine Publication 2-00:
Understanding and Intelligence Support to Joint Operations, 2011. URL: https:
//assets.publishing.service.gov.uk/government/uploads/system/uploads/
attachment_data/file/311572/20110830_jdp2_00_ed3_with_change1 .pdf.
[15] K. R. McLeroy, D. Bibeau, A. Steckler, K. Glanz, An Ecological Perspective on Health
Promotion Programs, Health Education Quarterly 15 (1988) 351–377.
doi:10.1177/109019818801500401.
[16] P. L. Mabry, D. H. Olster, G. D. Morgan, D. B. Abrams, Interdisciplinarity and systems
science to improve population health: a view from the NIH Office of Behavioral and
Social Sciences Research, American Journal of Preventive Medicine 35 (2008) S211–224.
doi:10.1016/j.amepre.2008.05.018.
[17] K. Lewin, Psycho-sociological problems of a minority group, Character &amp; Personality; A
Quarterly for Psychodiagnostic &amp; Allied Studies 3 (1935) 175–187.
doi:10.1111/j.1467-6494.1935.tb01996.x.
[18] P. Watzlawick, J. H. Weakland, R. Fisch, Change: Principles of Problem Formation and
Problem Resolution, W. W. Norton &amp; Company, 1974.
[19] T. A. Glass, M. J. McAtee, Behavioral science at the crossroads in public health:
Extending horizons, envisioning the future, Social Science &amp; Medicine 62 (2006) 1650–1671.
doi:10.1016/j.socscimed.2005.08.044.
[20] N. Krieger, Epidemiology and the web of causation: has anyone seen the spider?, Social
Science &amp; Medicine (1982) 39 (1994) 887–903. doi:10.1016/0277-9536(94)90202-x.
[21] B. G. Link, J. Phelan, Social Conditions As Fundamental Causes of Disease, Journal of
Health and Social Behavior (1995) 80–94. doi:10.2307/2626958.
[22] J. B. McKinlay, The New Public Health Approach to Improving Physical Activity and
Autonomy in Older Populations, in: E. Heikkinen, J. Kuusinen, I. Ruoppila (Eds.),
Preparation for Aging, Springer US, Boston, MA, 1995, pp. 87–103.
doi:10.1007/978-1-4615-1979-9_10.
[23] J. Sallis, N. Owen, E. B. Fisher, Ecological Models of Health Behavior, in: K. Glanz,
B. K. Rimer, K. Viswanath (Eds.), Health Behavior: Theory, Research, and Practice, 4 ed.,
Jossey-Bass, San Francisco, CA, 2015, pp. 465–485.
[24] D. J. Whitaker, D. M. Hall, A. L. Coker, Primary prevention of intimate partner violence:
Toward a developmental, social-ecological model, in: Intimate partner violence: A
health-based perspective, Oxford University Press, New York, NY, US, 2009, pp. 289–305.
[25] A. Durkin, C. Schenck, Y. Narayan, K. Nyhan, K. Khoshnood, S. H. Vermund, Prevention
of Firearm Injury through Policy and Law: The Social Ecological Model, The Journal of
Law, Medicine &amp; Ethics 48 (2020) 191–197. doi:10.1177/1073110520979422.
[26] P. Ohri-Vachaspati, D. DeLia, R. S. DeWeese, N. C. Crespo, M. Todd, M. J. Yedidia, The
relative contribution of layers of the Social Ecological Model to childhood obesity, Public
health nutrition 18 (2015) 2055. doi:10.1017/S1368980014002365.
[27] T. A. Gregory, C. Wilson, A. Duncan, D. Turnbull, S. R. Cole, G. Young, Demographic,
social cognitive and social ecological predictors of intention and participation in screening
for colorectal cancer, BMC Public Health 11 (2011) 38. doi:10.1186/1471-2458-11-38.
[28] D. L. Espelage, S. M. Swearer, A Social-Ecological Model for Bullying Prevention and
Intervention: Understanding the Impact of Adults in the Social Ecology of Youngsters,
in: S. R. Jimerson, S. M. Swearer, D. L. Espelage (Eds.), Handbook of Bullying in Schools,
Routledge, 2009, pp. 71–82.
[29] R. J. Cramer, N. D. Kapusta, A Social-Ecological Framework of Theory, Assessment, and
Prevention of Suicide, Frontiers in Psychology 8 (2017).
[30] C. Latkin, L. A. Dayton, G. Yi, A. Konstantopoulos, J. Park, C. Maulsby, X. Kong,
COVID-19 vaccine intentions in the United States, a social-ecological framework, Vaccine 39
(2021). doi:10.1016/j.vaccine.2021.02.058.
[31] A. R. Casola, B. Kunes, A. Cunningham, R. J. Motley, Mask Use During COVID-19: A
Social-Ecological Analysis, Health Promotion Practice 22 (2021) 152–155.
doi:10.1177/1524839920983922.
[32] H. Igarashi, M. L. Kurth, H. S. Lee, S. Choun, D. Lee, C. M. Aldwin, Resilience in Older
Adults during the COVID-19 Pandemic: A Socioecological Approach, The Journals
of Gerontology. Series B, Psychological Sciences and Social Sciences (2021) gbab058.
doi:10.1093/geronb/gbab058.
[33] N. A. Suhud, G. H. T. Ling, P. C. Leng, A. M. R. A. Matusin, Using A Socio-Ecological
System (SES) Framework to Explain Factors Influencing Countries’ Success Level in
Curbing COVID-19, Technical Report, 2020. URL: https://www.medrxiv.org/content/
10.1101/2020.11.17.20226407v1. doi:10.1101/2020.11.17.20226407.
[34] E. Cowan, M. R. Khan, S. Shastry, E. J. Edelman, Conceptualizing the effects of the
COVID-19 pandemic on people with opioid use disorder: an application of the social ecological
model, Addiction Science &amp; Clinical Practice 16 (2021) 4.
doi:10.1186/s13722-020-00210-w.
[35] L. McCormack, V. Thomas, M. A. Lewis, R. Rudd, Improving low health literacy and
patient engagement: A social ecological approach, Patient Education and Counseling 100</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wardle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Derakhshan</surname>
          </string-name>
          ,
          <article-title>Thinking about 'information disorder': formats of misinformation, disinformation, and mal-information</article-title>
          , in:
          <source>Journalism, 'fake news' &amp; disinformation</source>
          (
          <year>2018</year>
          )
          <fpage>43</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wardle</surname>
          </string-name>
          , Understanding Information disorder,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Ronchetti</surname>
          </string-name>
          , What is Cognitive Security?,
          <year>2020</year>
          . URL: https://xtncognitivesecurity.com/what-is-cognitive-security/.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Waltzman</surname>
          </string-name>
          ,
          <article-title>A Center for Cognitive Security - Draft Proposal</article-title>
          ,
          <year>2017</year>
          . URL: https://www.linkedin.com/pulse/center-cognitive-security-draft-proposal-randwaltzman/?trk=public_profile_article_view.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>U.</given-names>
            <surname>Bronfenbrenner</surname>
          </string-name>
          ,
          <article-title>The experimental ecology of human development</article-title>
          , Harvard University Press: Cambridge,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Richard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gauvin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Raine</surname>
          </string-name>
          ,
          <article-title>Ecological Models Revisited: Their Uses and Evolution in Health Promotion Over Two Decades</article-title>
          ,
          <source>Annual Review of Public Health</source>
          <volume>32</volume>
          (
          <year>2011</year>
          )
          <fpage>307</fpage>
          -
          <lpage>326</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J. Wharf</given-names>
            <surname>Higgins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Begoray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>MacDonald</surname>
          </string-name>
          ,
          <article-title>A Social Ecological Conceptual Framework for Understanding Adolescent Health Literacy in the Health Education Classroom</article-title>
          ,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>