<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Visual Stereotypes of Autism Spectrum in Janus-Pro-7B, DALL-E, Stable Diffusion, SDXL, FLUX, and Midjourney</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maciej Wodziński</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marcin Rządeczka</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anastazja Szuła</string-name>
          <email>anastazja.szula@gmail.com</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kacper Dudzic</string-name>
          <email>kacper.dudzic@ideas.edu.pl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marcin Moskalewicz</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AMU Center for Artificial Intelligence</institution>
          ,
          <addr-line>Uniwersytetu Poznańskiego 4, 61-614 Poznań</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Adam Mickiewicz University</institution>
          ,
          <addr-line>Uniwersytetu Poznańskiego 4, 61-614 Poznań</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>IDEAS Research Institute</institution>
          ,
          <addr-line>Królewska 27, 00-060 Warsaw</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Maria Curie-Skłodowska University</institution>
          ,
          <addr-line>Plac Marii Curie-Skłodowskiej 4, 20-031 Lublin</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Poznań University of Medical Sciences</institution>
          ,
          <addr-line>Rokietnicka 7, 60-806 Poznań</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Avoiding systemic discrimination of neurodiverse individuals is an ongoing challenge in training AI models, which often propagate negative stereotypes. This study examined whether six text-to-image models (Janus-Pro-7B VL2 vs. VL3, DALL-E 3 v. April 2024 vs. August 2025, Stable Diffusion v. 1.6 vs. 3.5, SDXL v. April 2024 vs. FLUX.1 Pro, and Midjourney v. 5.1 vs. 7) perpetuate non-rational beliefs regarding autism by comparing images generated in 2024-2025 with controls. 53 prompts aimed at neutrally visualizing concrete objects and abstract concepts related to autism were used against 53 controls (baseline total N=302, follow-up experimental 280 images plus 265 controls). Expert assessment measuring the presence of common autism-related stereotypes employed a framework of 10 deductive codes followed by statistical analysis. Autistic individuals were depicted with striking homogeneity in skin color (white), gender (male), and age (young), often engaged in solitary activities, interacting with objects rather than people, and exhibiting stereotypical emotional expressions such as sadness, anger, or emotional flatness. In contrast, the images of neurotypical individuals were more diverse and lacked such traits. We found significant differences between the models, albeit with a moderate effect size (baseline η² = 0.05 and follow-up η² = 0.08), and no differences between baseline and follow-up summary values, with the ratio of stereotypical themes to the number of images similar across all models. The control prompts showed a significantly lower degree of stereotyping with large effect sizes (DALL·E 3 η² = 0.39; Midjourney η² = 0.41; FLUX η² = 0.20; Stable Diffusion η² = 0.34; DeepSeek-VL3 η² = 0.45), confirming the hidden biases of the models. In summary, despite improvements in the technical aspects of image generation, the level of reproduction of potentially harmful autism-related stereotypes remained largely unaffected.</p>
      </abstract>
      <kwd-group>
        <kwd>autistic identity</kwd>
        <kwd>autism discrimination</kwd>
        <kwd>neurodiversity fairness</kwd>
        <kwd>visual stereotypes</kwd>
        <kwd>ethics of aesthetic representations</kwd>
        <kwd>medical humanities in AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>Stereotypes in text-to-text and text-to-image models</title>
        <p>
          Analyses of AI cognitive biases and oversimplifications in their representations of various social
phenomena play a significant role in AI ethics and fairness [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. To prevent the perpetuation of
systemic discrimination, it is imperative that users of Large Language Models (LLMs) are cognizant of
their inherent limitations and that developers can identify and rectify them.
        </p>
        <p>
          Previous research has demonstrated that many models reproduce gender, race, age [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], or ethnic
stereotypes, and that AI models underlying assistive technologies contain biased stereotypes [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. For
example, LLMs associate Muslims with violence [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]; even when a model is tasked with generating
content pertaining to Arabic culture, it remains ‘contaminated’ by elements characteristic of the
West [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Furthermore, some models exhibit biases towards the values of specific societies [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] or may
be biased politically [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. While the majority of research in this field has focused on text-to-text models,
a few studies have examined models that generate images from textual prompts [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. This is especially
relevant in terms of human identity. While the deeper aspects of identity that have to do with the
sense of self strongly relate to language, the more superficial self-perception is typically mediated
by appearances. Bianchi demonstrated that text-to-image generation models amplify demographic
stereotypes [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], and Aldahoul highlighted the presence of racial and gender stereotypes in AI-generated
faces across 6 races, 32 professions, and 2 genders, and additionally proposed some debiasing solutions
[
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. When the investigated LLMs were asked to depict an ‘attractive person’, they predominantly depicted
white individuals. In contrast, when LLMs were asked to depict a ’poor person’, they predominantly
depicted black individuals. In a similar vein, LLMs depicted a ‘terrorist’ as a Middle Eastern man. Even
when explicitly instructed to depict a ‘white terrorist,’ the models generated images of a bearded man
who visually resembled a stereotypically Middle Eastern individual. LLMs also perpetuate stereotypes
concerning race, gender, and religion [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. For example, when asked to show ‘people who are political
elites’, they show mainly white males.
        </p>
        <p>
          Most recently, DeepSeek’s Janus-Pro-7B was reported to outperform OpenAI’s DALL-E 3 and Stability
AI’s Stable Diffusion in text-to-image generation benchmarks, reportedly achieving an 80%
accuracy rate on the GenEval [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] benchmark, compared to DALL-E 3’s 67% and Stable Diffusion’s
74%. Its enhanced performance was attributed to improvements in training processes, including the
integration of 72 million high-quality synthetic images balanced with real-world data, resulting in more
stable and detailed image generation [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>These claims rely on specific evaluation metrics and datasets, which may not fully capture
real-world performance across diverse prompts and creative tasks. At the moment, there is no publicly
available information indicating whether DeepSeek’s Janus-Pro-7B model has been evaluated for biases
or stereotypes, including neurodiversity-related stereotypes of interest. The existing benchmarks, such
as GenEval and DPG-Bench, primarily assess the models’ ability to follow text prompts and generate
images accurately in the technical sense of the word.</p>
        <p>To address the issue of reproducing stereotypical beliefs, AI developers often use ‘fairness protocols’,
which are top-down safeguards. These protocols result in the models either refusing to generate specific
content (e.g., a description of a representative of a particular social group) or circumventing the issue
by generating content on similar substitute topics. This method is only a secondary and temporary
solution, as it does not address the fundamental issue of biased training datasets.</p>
        <p>This prospective longitudinal study examines the degree of harmful representations regarding socially
prevalent (and, therefore, likely included in training datasets) stereotypes about the autism spectrum in
text-to-image models in 2024-2025.</p>
      </sec>
      <sec id="sec-1-2">
        <title>Socially prevalent stereotypes about the autism spectrum condition</title>
        <p>
          The example of autism is pertinent for a number of reasons. Firstly, the topic is of significant social
importance and sensitivity, impacting ca. 62/10 000 people in the global population [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Secondly, it is
becoming increasingly prominent in the public eye. Numerous, often detrimental, identity stereotypes
and oversimplifications about autism have been created and disseminated, and have become deeply
embedded in collective awareness. For instance, although there is some evidence suggesting the
prevalence of autistic cognitive style among STEM/IT professionals, stereotypically identifying all
people on the spectrum with the figure of a brilliant computer geek is unsubstantiated [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Thirdly,
the topic is characterized by a high degree of cognitive uncertainty, both in the social and scientific
spheres. A number of studies have highlighted the historical variability and social construction of
the autism category [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. Consequently, numerous beliefs and stereotypes about autism operate
unconsciously in social awareness as the so-called background knowledge, influencing the identities of
autistic individuals [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. The pervasive belief that autism is invariably accompanied by suffering, that
the source of this suffering is the condition itself and not social misunderstanding, and that autism is
primarily diagnosed in children, boys, and white individuals often leads to the perpetuation of hurtful
prejudice against autistic individuals, impeding their social functioning and access to diagnosis and
appropriate therapy.
        </p>
        <p>
          Consequently, the beliefs about autistic identity propagated by AI models significantly influence
opinions in this field as the topic gains increasing popularity in various spheres of public life [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. A
multitude of stereotypes and myths surrounding autism negatively impact the lives of individuals on the
spectrum. Autism communities seek to challenge these stereotypes and hegemonic narratives, aiming
to redefine autism as a distinct mode of functioning, resulting, among other factors, from the atypical
structure of the nervous system. Consequently, they seek to challenge the perception of autism as a
deficit and to depathologize it, thereby reducing social stigma [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. In this context, the growing role
that AI models play in shaping public awareness of neurodiversity makes it increasingly important to
control for cognitive biases and non-rational beliefs in the models’ performance.
        </p>
        <p>Table 1 presents deductive codes representing stereotypes selected for this study based on the
literature review and our previous research, along with their operationalized definitions and a brief
explanation of their harmfulness.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <sec id="sec-2-1">
        <title>Research protocol</title>
        <p>The research protocol involved generating images in two rounds, one year apart, except for Janus
Pro, which was released only in early 2025, hence the gap between rounds was 4 months (total N=302
at baseline and N=280 in the follow-up), based on 53 distinct prompts, selected with the objective
of visualizing, in as neutral a way as possible, concrete objects and abstract concepts related to autism
across five models. The follow-up aimed to determine whether the advances in the technical aspect
of image generation led to a reduction in the degree of use of stereotypical motifs associated with
autism. In addition, 53 control prompts were used in the follow-up to account for the randomness of
stereotypization.</p>
        <p>
          DALL-E 3 [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] (v. April 2024 and v. August 2025) is based on an undisclosed architecture. Stable
Diffusion [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] (v. 1.6 medium at baseline and v. 3.5 medium in the follow-up) employs a latent diffusion
model, which processes images in a compressed feature space and gradually refines the image from
a random noise distribution through a series of learned reverse diffusion steps (that use stochastic
processes to create images from initial noise). FLUX.1 Pro [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] is built upon a hybrid architecture of
multimodal and parallel diffusion transformer blocks, scaled to 12B parameters. Midjourney’s (v. 5.1 at
baseline and v. 7 in the follow-up) architecture is also unknown, while Janus-Pro (v. VL2 at baseline
and v. VL3 in the follow-up) decouples visual encoding into separate pathways, while still utilizing
a single unified transformer architecture for processing. The DALL-E 3, Stable Diffusion, and FLUX
models were used through the dedicated Python APIs provided by their developers; the Midjourney
model was used through the GUI available on its official website, whereas the Janus-Pro model was run
locally on a single NVIDIA A100 80GB GPU using the text-to-image generation script provided on its
GitHub page. For all the models, the default provided inference settings were used.
        </p>
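        <p>For illustration, the generation loop for the API-based models can be sketched as follows. This is a minimal offline sketch, not the actual script: the request fields and the stubbed send callback are hypothetical placeholders, and the real prompt lists and per-model clients are in the supplementary material.</p>
        <preformat>
```python
# Minimal offline sketch of the batch-generation loop for the
# API-based models. The request fields and the 'send' callback are
# hypothetical placeholders, not an actual vendor SDK; network calls
# are stubbed out so the structure can be checked offline.

def build_request(model: str, prompt: str) -> dict:
    """Assemble a request using default inference settings, as in the study."""
    return {"model": model, "prompt": prompt, "n": 1}

def generate_all(prompts, models, send):
    """Administer every prompt once to every model; 'send' performs the call."""
    results = []
    for model in models:
        for prompt in prompts:
            results.append((model, prompt, send(build_request(model, prompt))))
    return results

# Usage with a stub in place of a real API client:
fake_send = lambda req: "image-bytes-for-" + req["model"]
out = generate_all(["Create an image of a person"], ["dall-e-3"], fake_send)
print(len(out))  # 1
```
        </preformat>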
        <p>The experimental prompts were engineered to ensure a neutral form without suggesting the use of
specific symbols or themes. The issues covered were selected by a team of experts, including a person
on the spectrum (hence: participatory prompts co-design with members of the autistic community)
to take into account both the image of autistic people (individually and in groups) and various types
of behavior, interactions, and everyday situations. However, the team also considered more abstract
concepts, such as the visualization of the phenomenon of autism itself or emotions.</p>
        <p>Prompts were phrased to focus on lived experience rather than pathology, so as not to imply
normative judgments: e.g., instead of the deficit-based “difficulty caused by autism”, we used “difficulty
faced by an autistic person”. Our assessment was based on group discussion and iterative testing, where
interfaces allowed (e.g., models asked how they interpret phrases). While some prompts may seem
redundant, it was a deliberate design choice to include closely semantically related prompts but with
minor variance in phrasing (alternate sentence structure) to control for semantic bias (outputs skewed by
lexical choices) and ensure that any stereotypes detected were attributable to the concept (e.g., autism),
rather than prompt form. For the control group, we prepared a symmetrical set of prompts aimed at
representing non-autistic individuals. This was done by either deleting the word “autistic” (e.g. “Create
an image of an autistic person” - “Create an image of a person”) or replacing the abstract concept of
“autism” with “neurotypicality” (e.g. “Describe autism with an image” - “Describe neurotypicality with
an image”, “visualize autism” - “visualize neurotypicality”). The control-prompt group is not fully neutral
due to the concept of neurotypicality, but it fulfills the key function of controlling for autism-related
content. This choice allows for a direct conceptual contrast between “autism” and “neurotypicality,”
thereby strengthening the clarity and consistency of the control condition. Also, some visual biases
(gender or age-related) may appear in the models independently of the concept of autism, and either
represent training data or amplify autism-specific stereotypes, while some prompts (e.g., “at school”)
may implicitly cue certain representations (e.g., “school” often evokes children). We attempted to
minimize such effects, but acknowledge that this priming remains a limitation. To indirectly control
for false positives, we took into account the over-representation of children against the
over-representation of white boys in those few prompts.</p>
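        <p>The derivation of control prompts from experimental prompts described above can be sketched as follows (illustrative Python; the helper and its string rules are hypothetical, and the full 53 prompt pairs are listed in the supplementary material):</p>
        <preformat>
```python
# Illustrative derivation of control prompts from experimental ones:
# delete the word "autistic", or replace the abstract concept "autism"
# with "neurotypicality". The helper and its string rules are a sketch;
# the actual 53 prompt pairs are in the supplementary material.
def to_control(prompt: str) -> str:
    """Derive a control prompt from an autism-related prompt."""
    if "autistic" in prompt:
        # e.g. "an autistic person" becomes "a person"
        return prompt.replace("an autistic ", "a ").replace("autistic ", "")
    # e.g. "visualize autism" becomes "visualize neurotypicality"
    return prompt.replace("autism", "neurotypicality")

print(to_control("Create an image of an autistic person"))
# Create an image of a person
print(to_control("Describe autism with an image"))
# Describe neurotypicality with an image
```
        </preformat>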
        <p>Each prompt was administered once to each model. However, the final number of images exceeds
the number of prompts multiplied by the number of models because some models generated multiple
alternative versions. When requested to generate multiple themes or objects, the models occasionally
returned the results as a single image (e.g., split into three parts) and at other times as three separate
images. Midjourney consistently generated four preliminary images. To avoid arbitrariness in the
selection, all images generated by all models were included in the analysis, resulting in uneven baseline
and follow-up samples. The default hyperparameter values of models were retained between the
baseline and follow-up versions; for closed-source models (all except Janus), this was limited to the
equivalence of the hyperparameter and model snapshot variable values. Reproducibility of the outputs
was only checked ad hoc and not quantitatively assessed, which is a limitation. The results were
subjected to an expert assessment of independent coders via a framework of 10 deductive codes
that represented common stereotypes contested by the autistic community, regarding their presence,
judged by taking into consideration their spatial intensity on an image (see Table 1). The presence
of a stereotype was rated on a categorical scale, yes/no, while the overall level of stereotyping was
determined by adding up the scores across the list of ten deductive codes, with each image subsequently
rated on this 0-10 ordinal scale. This is referred to as the ‘degree of stereotyping’. The results were
subjected to statistical analysis of inter-rater reliability and effect sizes. The full research protocol,
including the comprehensive list of prompts, all generated images, the evaluation form, and the
inter-rater reliability assessment, is attached as supplementary material, which can be downloaded here:
https://figshare.com/s/8caea1bd2c2910598b98.</p>
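        <p>The scoring scheme can be expressed compactly as follows (a sketch; the code names are illustrative stand-ins for the ten deductive codes in Table 1, not the actual evaluation form):</p>
        <preformat>
```python
# Sketch of the scoring scheme: each image receives a yes/no rating on
# ten deductive codes; the "degree of stereotyping" is the sum of these
# binary ratings, yielding a 0-10 ordinal score per image.
# Code names below are illustrative stand-ins for Table 1.
CODES = ["white", "child", "male", "puzzle", "blue", "isolation",
         "nerd", "negative_affect", "brain_motif", "medicalization"]

def degree_of_stereotyping(ratings: dict) -> int:
    """Sum binary presence ratings over the ten deductive codes."""
    assert set(ratings) == set(CODES)
    return sum(1 for code in CODES if ratings[code])

example = {code: False for code in CODES}
example["white"] = example["child"] = example["male"] = True
print(degree_of_stereotyping(example))  # 3
```
        </preformat>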
      </sec>
      <sec id="sec-2-2">
        <title>Community involvement</title>
        <p>The first author of this paper is an active autistic researcher and a parent of two autistic children, and
another co-author is on the spectrum. It has now become generally accepted by the participatory
research community that autistic people provide unique epistemic perspectives to the field of autism
research.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Data Analysis</title>
        <p>In the baseline data analysis (N=302), three subsequent pilot coding sessions on randomized samples of
20 from the dataset were conducted to improve inter-rater reliability using Cohen’s kappa coefficient,
accounting for agreement occurring by chance. We have improved the initial inter-rater reliability of
0.315 in the first pilot coding, through 0.698 in the second, to 0.93 in the third, mainly by redefining
the operational qualitative descriptions of codes. In the follow-up rounds (experimental and control),
a similar process of pilot coding was performed, resulting in an inter-rater reliability of 0.90. Hence,
there were three subsequent coding sessions, both at baseline and in the follow-up; while definitions
remained unchanged, the follow-up images differed; inter-rater reliability is provided independently
for both baseline and follow-up. The final moderate κ values of 0.74 for baseline and 0.79 and 0.54 for
follow-up reflect the kappa paradox, since absolute agreement was &gt; 0.8; in the coding of
controls, the presence of stereotypes was very low, so many 0 values accounted for the
score (see research protocol for details). To calibrate the assessment framework and ensure its accuracy,
these sessions included consensual adjustments to the qualitative evaluation grid based on feedback
from the raters, which led to a progressive increase in inter-rater reliability. In effect, we refined the
coding framework by increasing its specificity and sensitivity, taking into account those cases where
there were divergences between the evaluators, leading to both false-positive and false-negative results.
For example, the “lonely” stereotype was assessed only when there were multiple people or contextual
elements present in the image, e.g., when an individual was visually isolated from a group, placed apart
in a playground. Such criteria were clearly specified in our coder guidelines to avoid over-attribution.
To obtain unambiguous and non-fractional values, remaining minor differences between the two raters
were resolved by a third rater in a final meta-evaluation for all three sets of images.</p>
        <p>Figure 1: (a) Two images combining many stereotypes. Prompts no. 15 (left) and 12 (right); model: Stable Diffusion v. 1.6. (b) Examples of ambiguous images. Prompts no. 8 (left) and 38 (right); model: DALL-E (April 2024).</p>
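        <p>Cohen's kappa, used above to quantify inter-rater reliability while correcting for chance agreement, can be computed for two raters' binary ratings as in the following generic sketch (not the study's analysis script):</p>
        <preformat>
```python
# Generic two-rater Cohen's kappa for binary (present/absent) codes:
# observed agreement corrected for the agreement expected by chance
# from each rater's marginal "yes" rates. A sketch, not the study's
# analysis script.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    if expected == 1.0:
        return 1.0  # degenerate case: chance agreement is total
    return (observed - expected) / (1 - expected)

print(cohens_kappa([1, 1, 0, 0, 1, 0, 1, 0],
                   [1, 1, 0, 0, 1, 0, 0, 1]))  # 0.5
```
        </preformat>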
        <p>Not all stereotypes were as readily apparent as those in Fig. 1a. Fig. 1b illustrates examples of
ambiguous images for which expert raters were required to clarify the definitions of stereotypes (e.g.,
the concepts of a ‘nerd’ or blue color dominance) to achieve the desired level of agreement.</p>
        <p>Images depicting groups of autistic individuals tend to present them in a highly homogeneous and
uniform manner, with limited variation in characteristics such as gender, age, or skin color, which was
not the case for the control group (see Fig. 2a and Fig. 2b). To ascertain the presence of the white boy
stereotype, three distinct codes had to be identified: white, including a greater proportion of individuals
with white skin; child, including a greater number of children (with teenagers) than adults; and male,
including a greater number of males than females.</p>
        <p>Figure 2: (a) A homogeneous group of people presenting stereotypical autistic characteristics. Prompts no. 11 (left) and 10 (right); model: Stable Diffusion v. 1.6. (b) Control images showing more diverse groups of people. Prompt no. 11; models: Stable Diffusion v. 3.5 (left) and DALL-E (August 2025).</p>
        <p>The most frequently repeated stereotypical themes were the puzzle symbol and the blue color. The
overwhelming majority of characters depicted were white boys. These three stereotypes represent
a significant challenge for the autism community, which has been striving to combat them for years.
Consequently, the puzzle stereotype was operationalized more sensitively, with its presence being
considered in any location within the image, including the background and edges. Similarly, the
occurrence of a stereotypical association with the color blue was considered if this color appeared in the
image more often than other colors (it did not have to constitute more than 50% of the image area) or if
blue was associated with a significant object, for example, located in the central, attention-grabbing
part of the image (see Fig. 3).</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>Distributions of the degree of stereotyping for all models
differ significantly from normal. Kruskal-Wallis testing
indicated significant differences in the degree of
autistic stereotyping among the models, however with
a moderate effect size (baseline η² = 0.05, follow-up
η² = 0.08). The highest degree of stereotyping
was observed for Stable Diffusion (M/Me 3.915/4.00) at
baseline and for FLUX (M/Me 4.151/4.00) in the follow-up.</p>
      <p>The lowest were observed for SDXL (M/Me 2.896/3.00) and
DeepSeek-VL3 (M/Me 2.896/3.00), respectively. The Mann-Whitney U
test showed no significant differences between older and
newer versions of the models. The only noticeable difference
between the architectures was the higher presence of
stereotypes related to the use of the color blue and
portraying people on the spectrum as loners, IT geeks, or
artists in the case of the DALL-E model, and the use of
the child motif by diffusion models (see Fig. 4).</p>
      <p>The control prompts showed a statistically significantly lower degree of stereotyping of non-autistic
individuals with harmful autistic traits, with large effect sizes for all the models, thus confirming the
hidden biases of the models. For DALL·E 3, the Mann-Whitney test yielded U = 478.50, Z = -6.63,
p &lt; 0.001, η² = 0.39; for Midjourney v7, U = 390.00, Z = -6.55, p &lt; 0.001, η² = 0.41; for FLUX,
U = 786.00, Z = -4.67, p &lt; 0.001, η² = 0.20; for Stable Diffusion 3.5, U = 487.50, Z = -5.98, p &lt; 0.001,
η² = 0.34; and for DeepSeek-VL3, U = 332.00, Z = -6.91, p &lt; 0.001, η² = 0.45 (see Table 2 for an
overview of the differences between older-newer and experimental-control degrees of stereotyping).</p>
      <p>Figure 4: Comparison of the distribution of the degree of stereotyping (0-10 scale) across five models at baseline and follow-up.</p>
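      <p>For reference, the reported effect sizes can be reproduced from the test statistics; the following self-contained sketch computes the Mann-Whitney U statistic with a normal approximation and the eta-squared effect size (Z squared divided by N), assuming no ties for simplicity, since the study's own analysis used standard statistical software:</p>
      <preformat>
```python
# Self-contained Mann-Whitney U with a normal approximation and the
# eta-squared effect size, eta2 = Z**2 / N. Generic sketch assuming no
# ties; the study's analysis used standard statistical software.
def mann_whitney_eta2(x, y):
    nx, ny = len(x), len(y)
    # U counts (x_i, y_j) pairs in which x_i exceeds y_j (ties add 0.5).
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    mean_u = nx * ny / 2.0
    sd_u = (nx * ny * (nx + ny + 1) / 12.0) ** 0.5
    z = (u - mean_u) / sd_u
    return u, z, z * z / (nx + ny)

u, z, eta2 = mann_whitney_eta2([5, 6, 7, 8], [1, 2, 3, 4])
print(u, round(eta2, 2))  # 16.0 0.67
```
      </preformat>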
      <p>The ratio of stereotypical themes to the number of images generated (baseline vs. follow-up) was
found to be similar across the models (DALL-E: 2.91 vs. 3.53, Midjourney: 3.72 vs. 4.15, SDXL vs.
FLUX: 2.90 vs. 2.93, Stable Diffusion: 3.92 vs. 2.74, Janus-Pro-7B: 3.36 vs. 3.28). This indicates that, in
absolute values, a comparable degree of stereotyping was exhibited by the transformer-based DALL-E
model, the diffusion-based models, and the latest DeepSeek model. It
is noteworthy that the proportion of males to females (281:86) depicted in the generated images closely
resembles the gender proportion in clinical diagnoses. This reflects biases in diagnostic tools
and procedures, which have resulted in autism currently being diagnosed 3 to 4 times more often in
males. In this context, Janus appeared as the most “female-inclusive” model (61:26).</p>
      <p>In addition to the three common stereotypes observed across all models (gender, skin color, and
age), the most frequently repeated motifs for the models were (baseline/follow-up): DALL-E – the
blue color theme / negative emotion; Midjourney – brain or modified head theme for both rounds;
SDXL and FLUX – social isolation themes / blue color; Stable Diffusion – the color blue / negative
affect; and Janus-Pro-7B – negative affect for both rounds. Among the three images with the highest
degree of stereotyping at baseline (degrees of stereotyping 8/10 and 7/10), two were generated by the
Stable Diffusion model and one by Midjourney. In the follow-up (highest degree 7/10), two images
were generated by Midjourney, one by FLUX, and one by Stable Diffusion. A notable distinction was the
prevalence of stereotypes associated with using the color blue and the portrayal of individuals on the
spectrum as isolated, nerds, or artists (in the case of the DALL-E model), and the utilization of the child
motif by diffusion models.</p>
      <p>In contrast, the degree of stereotyping in control images was noticeably lower in all models, except
for the brain/modified head stereotype, which we found more often in the DALL-E and Stable Diffusion
models. We explain this by the presence of the word “neurotypical” in the control prompts as an
alternative to the word “autistic.” The dominant stereotypes of skin color and gender were significantly
less present in all models, indicating that autistic people are largely identified with white men. In
addition, differences in the degree of stereotyping of the categories “child” and “isolation” show that
autistic people are more often than neurotypicals depicted as children and as people uninterested in
social contact, which has serious social and clinical consequences. In the control set, the
stereotype associated with the puzzle symbol (which was one of the more prominent in previous series)
was almost absent. The puzzle motif did not appear at all in the DALL-E and Janus models (it appeared
twice in Midjourney and once each in Stable Diffusion and FLUX). Also, none of the images achieved as
high a degree of stereotyping as in previous series (8/10 and 7/10), reaching a maximum of 5/10 (only 2
cases) and 4/10 (12 cases). 45 of the 265 control images did not contain any of the sought-after
stereotypes, meaning that their level of autistic stereotypization was 0.</p>
      <p>
        Interestingly, stereotypes regarding the medicalization of autism were almost absent from the
generated experimental graphics. This finding is intriguing in light of numerous contemporary analyses
of media representations of autism, which have highlighted the prevalence and detrimental effects of
portraying autism through a medical lens in media discourse [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], which the AI models apparently
avoid. See Fig. 5a, Fig. 5b, and Fig. 5c for the average incidence of the ten stereotypes for all models.
The proportion of average stereotype incidence for each model is defined as the number of images in
which a given stereotype was identified, divided by the total number of images generated by the model,
and normalized by the maximum possible number of stereotypes (10).
      </p>
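As a minimal sketch of this definition, the metric can be computed as follows (the counts below are hypothetical, not the study's data):

```python
# Minimal sketch of the incidence metric defined above; the counts
# used here are hypothetical, not the study's data.

def incidence_proportions(counts: dict[str, int], n_images: int,
                          n_stereotypes: int = 10) -> dict[str, float]:
    """For each stereotype: images containing it / total images generated,
    normalized by the maximum possible number of stereotypes (10)."""
    return {s: c / n_images / n_stereotypes for s, c in counts.items()}

# Hypothetical model that produced 50 images, 20 of them with a puzzle motif:
props = incidence_proportions({"puzzle": 20, "blue": 15}, n_images=50)
print(round(props["puzzle"], 4))  # 0.04
```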
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <sec id="sec-4-1">
        <title>Recurring themes</title>
        <p>
          Regrettably, all the models perpetuated common stereotypes of autism. The most prevalent were:
the white [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ], the young [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], the boy [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ], the puzzle symbol [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ], and the blue color. The puzzle
symbol implies that autistic individuals are analogous to incomplete puzzles, lacking the components
required for completion. This representation may influence the perception of autism as a deficit
rather than as a diversity in human functioning; the symbol may also result in infantilization,
whereby experiences and challenges are perceived as childish or trivial. The color blue has been
criticized for its association with the controversial organization Autism Speaks and a perspective
that focuses on males. This stereotype may contribute to the under-recognition of, and inadequate
support for, women and girls on the spectrum [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. The prevalence of white male children among the depicted
individuals serves to reinforce the erroneous assumption that autism is most prevalent in white boys.
[Fig. 5 panel captions: (a) Proportion of average stereotype incidence across models, baseline; (b) follow-up (2025); (c) control images (2025).]
This one-sided representation results in the marginalization of the experiences of autistic people
from different ethnic groups and cultures [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ]. Consequently, it can result in delays in diagnosis,
inadequate support, and difficulties in accessing services for autistic individuals who do not align
with this narrow perspective [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ]. A lack of diversity in representations also makes it
difficult to build social inclusion and understanding of the wide spectrum of autism across different communities.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Group images</title>
        <p>
          Images of groups including or consisting of autistic individuals appear to be similar to each other and
are less diverse than default groups without specified characteristics (see Fig. 2a and Fig. 2b). In some
instances, DALL-E accompanied its outputs with explanations stating that the prompted
issue is highly complex and that an alternative scene would be created in order to avoid perpetuating
stereotypes (even though some of them were still used). Such cases made it evident that top-down
‘fairness protocols’ did not fully fulfill their role. Moreover, utilizing such safeguards does not address
the underlying issue, which is the existence of biased datasets [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>Interaction: objects vs people</title>
        <p>
          Images featuring multiple characters demonstrated a tendency to portray individuals on the autism
spectrum as preoccupied with physical objects rather than engaging in interpersonal interactions.
Even when these characters were in close proximity to one another, they did not engage in common
activities, reflecting the pervasive and hurtful, but largely erroneous, stereotype that autistic
individuals are antisocial [
          <xref ref-type="bibr" rid="ref33">33</xref>
          ] (see Fig. 6a). The people depicted in the control group’s
images were more often involved in interpersonal relationships (see Fig. 6b).
        </p>
      </sec>
      <sec id="sec-4-4">
        <title>Emotional expressions and behavior</title>
        <p>
          Apart from the images in which the models were directly asked to display strong emotions, most of the
characters presented in the generations exhibited a lack of emotional expressiveness. It is noteworthy
that a greater number of images depicted positive emotions than negative ones. However, when the
models were directly asked to generate images showing autistic people experiencing a strong emotion
or showing a typical mood, the majority of images showed negative emotions. This may falsely suggest
that autistic individuals do not experience intense emotions (or do so infrequently), or that, when they do, these
are challenging, negative emotions rather than positive ones, such as joy or empathy [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ] (see Fig. 7a).
[Fig. 6 captions: (a) People on the spectrum depicted as focused on objects rather than personal relations; prompts no. 9 (left) and 14 (right), model DALL-E (April 2024). (b) People in the control images more focused on interpersonal interactions; prompt no. 9, DALL-E (August 2025) (left), and no. 25, FLUX (right).]
        </p>
      </sec>
      <sec id="sec-4-5">
        <title>Artificial Neural Networks mirror human cognitive biases and mental imagery</title>
        <p>
          On a side note, we observed representational insensitivity regarding the generated images of autism
despite directional prompting aimed at falsifying the stereotypes. For example, the most prevalent
motif employed by models to represent autism was the puzzle symbol. Upon being explicitly instructed
to generate visualizations that did not include this symbol, the models nevertheless incorporated it
into their creations. The only model that handled this task properly was Janus. This could mean that
either the model is better at tackling negative prompting, or it is better at distinguishing between the
object classes it actually includes in the generated images. DALL-E explicitly denied (in the text-to-text
modality) the allegation that it perpetuates the puzzle stereotype, despite generating it. We hypothesize
that insensitivity to negation may stem from encoder architecture, where embeddings of dominant
tokens (e.g., “puzzle”) outweigh modifiers (e.g., “without”), causing cross-attention to preserve the
dominant concept and ignore negation. This may also be interpreted as networks mirroring the human
cognitive architecture regarding the discrepancy between background and reflective knowledge, as
justified by research on autism-related stereotypes in humans. This analogy is grounded in psychological
and neuroscientific research on implicit social cognition and stereotype activation, since artificial neural
networks’ statistical pattern completion may mirror the pattern of activation of entrenched cultural
associations in human background knowledge. Furthermore, the generated images were frequently found to
resemble so-called human “mental images” (as distinct from “perceptual images”) due to the
presence of qualitatively undefined quantitative properties and a lack of adherence to the principle
of individuation [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ]. This resulted, for example, in the simultaneous appearance of objects across
multiple modalities.
[Fig. 7 captions: (a) Difficult emotions and emotional blandness; prompt no. 42, Midjourney v. 5.1 (left), and no. 33, Stable Diffusion v. 1.6 (right). (b) An image created with prompt no. 53, “Visualize autism without using a puzzle theme.”; model DALL-E (April 2024).]
        </p>
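The hypothesized mechanism can be caricatured with a deliberately simplified toy model (the vectors below are hypothetical, not taken from any real text encoder): when a prompt is mean-pooled into a bag of token embeddings, a dominant token such as “puzzle” keeps the pooled representation close to the original concept even when “without” is present.

```python
import math

# Toy 3-d "embeddings" (hypothetical values, not from any real encoder).
EMB = {
    "puzzle":  [0.9, 0.1, 0.0],
    "autism":  [0.1, 0.9, 0.2],
    "without": [0.0, 0.1, 0.3],  # weak modifier token
}

def pool(tokens):
    """Mean-pool token embeddings, a crude stand-in for a text encoder."""
    dims = len(next(iter(EMB.values())))
    return [sum(EMB[t][d] for t in tokens) / len(tokens) for d in range(dims)]

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

with_puzzle = pool(["autism", "puzzle"])
without_puzzle = pool(["autism", "without", "puzzle"])
target = EMB["puzzle"]

# The "without ... puzzle" prompt remains highly similar to the puzzle
# concept, so a generator conditioned on such a pooling keeps drawing it.
print(cos(with_puzzle, target), cos(without_puzzle, target))
```

In this toy setting the negated prompt barely moves the pooled vector away from the “puzzle” direction, which is the intuition behind the insensitivity-to-negation hypothesis; real encoders use cross-attention rather than mean pooling, so this is an illustration, not a claim about any specific model.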
      </sec>
      <sec id="sec-4-6">
        <title>Technological (computational) progress does not equal debiasing (ethical) progress</title>
        <p>This study shows that despite the undeniable technological advances that allow the models to generate
images of higher quality and with fewer technical errors, the level of potentially harmful bias contained
in the images remains largely similar. The aforementioned "low" median of 3 and "high" median of
4 on the 10-point scale are both actually high in absolute terms, given that the scale contains harmful
stereotypes only, with particular images scoring as high as 7-8 on this scale (see Fig. 8a and Fig. 8b for
control images). In view of the intense discussion of the future of AI development and the place that the
ethics of AI aesthetics occupies within it, it should be underlined that image-generating models are also
gaining importance in shaping public epistemic structures. Top-down restrictions on the
ability to generate visual content on certain topics or present a given aesthetic perspective will not solve
the foundational issue arising from the inherent bias of the training data. Ultimately, our evaluation
concerns the amplification and reinforcement, by generative artificial intelligence, of biases present in
human-created data, since pre-existing representations of autism created by humans are full of the
analyzed stereotypes. The discussion of socially just and fair use of AI capabilities must also take this
area into account.</p>
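One family of partial corrections, fine-tuning that penalizes stereotype-related activations, can be caricatured as a scalar toy problem (all quantities here are hypothetical, and real methods such as concept erasure operate on diffusion-model internals): the task gradient pulls a "puzzle-direction" weight toward the biased optimum learned from the data, while the penalty pulls it back toward zero.

```python
# Toy illustration of a penalized fine-tuning objective. A single scalar
# weight w stands in for a model's alignment with a stereotyped concept
# direction (e.g. "puzzle"); every number here is hypothetical.

def train(w: float, penalty: float = 0.3, lr: float = 0.1,
          steps: int = 50) -> float:
    for _ in range(steps):
        # Task gradient (toy quadratic): pulls w toward the biased optimum 1.0.
        g_task = 2 * (w - 1.0)
        # Penalty gradient: pushes the stereotype-aligned component toward 0.
        g_pen = 2 * penalty * w
        w -= lr * (g_task + g_pen)
    return w

# Gradient descent settles below the biased optimum of 1.0:
print(round(train(1.0), 3))  # 0.769
```

The equilibrium trades off data fidelity against harm reduction, which mirrors the point made in the surrounding text: a debiased model deliberately deviates from the statistically dominant patterns in its training data.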
        <p>
          The question remains whether it is possible to create a good representation of an autistic person
without using any stereotypes; in other words, whether such a person would be recognizable as autistic.
In our view, autism often lacks visible features, and the expectation that AI-generated images “should”
reveal visible traits only reinforces stereotypes. We suggest that the problem of generating stereotype-free
images may not be simply difficult but perhaps structurally constrained. Models trained on biased
data are unable to produce representations of autism that are both intelligible and free from stereotypes.
It seems that “recognizability” itself is inherently tied to culturally shared but often reductive visual
markers. Optimal data curation is rarely feasible, but biased generators can be partially corrected
with post-hoc debiasing (e.g., concept erasure, model unlearning) and safety-oriented fine-tuning that
penalizes stereotype-related activations during training [
          <xref ref-type="bibr" rid="ref36">36</xref>
          ]. Future work could therefore examine
whether multimodal models with deeper language understanding yield less stereotypical results from the
same concise prompts. Nevertheless, we believe that AI models should not merely align with the majority
of human-created data but strive for ethical alignment. If a model diverges from dominant biased
patterns, thereby reducing harm, this is a desirable outcome, not an error. Such deviations must be evaluated
within interdisciplinary ethical frameworks, including neurodiverse ones, and not just against statistical norms.
In other words, models providing information not aligned with human-created information may be
“correct”. Identifying persisting harms is a necessary prerequisite for developing practical solutions, but
ethical evaluation often lags behind technical innovation. Our results demonstrate that the analyzed
models at their current stage of development may disseminate prevalent and harmful stereotypes
regarding autism, and can thus be utilized as a repository of knowledge representing these stereotypes
for research purposes. We express the hope that this work may contribute to sensitivity regarding
the appearances of neurodiverse individuals among LLM developers and, in the long run, serve the
purpose of increasing the accountability of AI in the eyes of the autistic community.
[Fig. 8 captions: (a) Examples of highly stereotyped images; prompt no. 15, FLUX (left), and no. 1, Midjourney v. 7 (right). (b) Control images for prompt 15; models FLUX (left) and Midjourney v. 7.]
        </p>
        <p>
The belief that autism concerns mainly children results in
difficulties in diagnosing adults and in the belief that autism can be
‘outgrown’ [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ].
The racial stereotype of autism makes it difficult for non-white
people to access diagnosis and treatment [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ].
The false belief that autism primarily affects males is the result
of bias in diagnostic tools and means that less attention is paid
to symptoms occurring in women or non-binary people [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ].
The puzzle symbol suggests that autistic people are ‘incomplete’. Such
a metaphor may be seen as pejorative, suggesting that autism
is a deficit rather than a difference in functioning [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ].
Blue is often seen as a ‘boy’ colour, which can inadequately
represent and marginalise females and non-binary people on
the autism spectrum.
        </p>
        <p>
          Autism is ‘located’ in the head: this refers to the belief that autism
is the result of a deficit located in the brain, especially when
the depicted head is cracked or falling apart [
          <xref ref-type="bibr" rid="ref37">37</xref>
          ].
Stereotypically, people on the autism spectrum are perceived
as devoid of empathy, unsympathetic, unwilling to establish
contact, and socially isolated [
          <xref ref-type="bibr" rid="ref38">38</xref>
          ].
A stereotype associated with the excessive medicalization of
autism and the belief that it is something undesirable that needs
to be ‘cured’ [
          <xref ref-type="bibr" rid="ref39">39</xref>
          ].
People on the spectrum are often perceived as perpetually
unhappy, “broken”, aggressive, and dangerous to those around
them [
          <xref ref-type="bibr" rid="ref40">40</xref>
          ] [
          <xref ref-type="bibr" rid="ref41">41</xref>
          ].
Autistic people are shown as lonely individuals, focused on
unusual, complex interests that often require extraordinary
abilities, which puts pressure on the majority of this social group
who do not have such abilities [
          <xref ref-type="bibr" rid="ref42">42</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools except for the creation of research images.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y. T.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sotnikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Daumé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rudinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. X.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Theory-grounded measurement of u.s. social stereotypes in english language models</article-title>
          ,
          <source>in: North American Chapter of the Association for Computational Linguistics</source>
          ,
          <year>2022</year>
          . URL: https://api.semanticscholar.org/CorpusID:249319807.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Mattern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sachan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mihalcea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schölkopf</surname>
          </string-name>
          ,
          <article-title>Understanding stereotypes in language models: Towards robust measurement and zero-shot debiasing</article-title>
          ,
          <source>arXiv abs/2212.10678</source>
          (
          <year>2022</year>
          ). URL: https://api.semanticscholar.org/CorpusID:254926728.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zekun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bulathwela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Koshiyama</surname>
          </string-name>
          ,
          <article-title>Towards auditing large language models: Improving textbased stereotype detection</article-title>
          ,
          <source>arXiv abs/2311.14126</source>
          (
          <year>2023</year>
          ). URL: https://api.semanticscholar.org/CorpusID:265445454.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Herold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Waller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kushalnagar</surname>
          </string-name>
          ,
          <article-title>Applying the stereotype content model to assess disability bias in popular pre-trained nlp models underlying ai-based assistive technologies</article-title>
          ,
          <source>in: Ninth Workshop on Speech and Language Processing for Assistive Technologies (SLPAT-2022)</source>
          , Association for Computational Linguistics,
          <year>2022</year>
          , pp.
          <fpage>58</fpage>
          -
          <lpage>65</lpage>
          . doi:10.18653/v1/2022.slpat-1.8.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Farooqi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Large language models associate muslims with violence</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>3</volume>
          (
          <year>2021</year>
          )
          <fpage>461</fpage>
          -
          <lpage>463</lpage>
          . URL: https://api.semanticscholar.org/CorpusID:236384212.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Naous</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Ryan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ritter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>Having beer after prayer? Measuring cultural bias in large language models</article-title>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pistilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Menéndez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D. D.</given-names>
            <surname>Duran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Panai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kalpokiene</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Bertulfo</surname>
          </string-name>
          ,
          <article-title>The ghost in the machine has an american accent: value conflict in gpt-3</article-title>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Almandil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Alkuroud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>AbdulAzeez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>AlSulaiman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elaissari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Borgio</surname>
          </string-name>
          ,
          <article-title>Environmental and genetic factors in autism spectrum disorders: Special emphasis on data from arabian studies</article-title>
          ,
          <source>International Journal of Environmental Research and Public Health</source>
          <volume>16</volume>
          (
          <year>2019</year>
          )
          <fpage>658</fpage>
          . doi:10.3390/ijerph16040658.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Paes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Tanneru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Srinivas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lakkaraju</surname>
          </string-name>
          ,
          <article-title>Word-level explanations for analyzing bias in text-to-image models</article-title>
          ,
          <source>arXiv preprint arXiv:2306.05500</source>
          (
          <year>2023</year>
          ). URL: https://arxiv.org/abs/2306.05500.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bianchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kalluri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Durmus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ladhak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nozza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hashimoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Caliskan</surname>
          </string-name>
          ,
          <article-title>Easily accessible text-to-image generation amplifies demographic stereotypes at large scale</article-title>
          ,
          <source>Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency</source>
          (
          <year>2022</year>
          ). URL: https://api.semanticscholar.org/CorpusID:253383708.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>AlDahoul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rahwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zaki</surname>
          </string-name>
          ,
          <article-title>Ai-generated faces influence gender stereotypes and racial homogenization</article-title>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Bian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>Language agents for detecting implicit stereotypes in text-to-image models at scale</article-title>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hajishirzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <article-title>GenEval: An object-focused framework for evaluating text-to-image alignment</article-title>
          , https://doi.org/10.48550/arXiv.2310.11513,
          <year>2023</year>
          . arXiv preprint arXiv:2310.11513.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ruan</surname>
          </string-name>
          ,
          <article-title>Janus-pro: Unified multimodal understanding and generation with data and model scaling</article-title>
          , https://arxiv.org/abs/2501.17811,
          <year>2025</year>
          . arXiv preprint arXiv:2501.17811.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          , W. Liu,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ruan</surname>
          </string-name>
          ,
          <article-title>JanusFlow: Harmonizing autoregression and rectified flow for unified multimodal understanding and generation</article-title>
          , https://arxiv.org/abs/2411.07975,
          <year>2024</year>
          . arXiv preprint arXiv:2411.07975.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Elsabbagh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Divan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Koh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kauchali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Marcín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Montiel-Nava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Paula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Yasamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Fombonne</surname>
          </string-name>
          ,
          <article-title>Global prevalence of autism and other pervasive developmental disorders</article-title>
          ,
          <source>Autism Research</source>
          <volume>5</volume>
          (
          <year>2012</year>
          )
          <fpage>160</fpage>
          -
          <lpage>179</lpage>
          . doi:10.1002/aur.239.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Silberman</surname>
          </string-name>
          ,
          <article-title>NeuroTribes: The legacy of autism and the future of neurodiversity</article-title>
          (
          <year>2016</year>
          )
          <fpage>548</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Draaisma</surname>
          </string-name>
          ,
          <article-title>Stereotypes of autism</article-title>
          ,
          <source>Philosophical Transactions of the Royal Society B: Biological Sciences</source>
          <volume>364</volume>
          (
          <year>2009</year>
          )
          <fpage>1475</fpage>
          -
          <lpage>1480</lpage>
          . doi:10.1098/rstb.2008.0324.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wodziński</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rządeczka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Moskalewicz</surname>
          </string-name>
          ,
          <article-title>How to minimize the impact of experts' non-rational beliefs on their judgments on autism</article-title>
          ,
          <source>Community Mental Health Journal</source>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . URL: https://link.springer.com/article/10.1007/s10597-022-01062-1. doi:10.1007/s10597-022-01062-1.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Camus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Rajendran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Stewart</surname>
          </string-name>
          ,
          <article-title>Social self-efficacy and mental well-being in autistic adults: Exploring the role of social identity</article-title>
          ,
          <source>Autism</source>
          <volume>28</volume>
          (
          <year>2024</year>
          )
          <fpage>1258</fpage>
          -
          <lpage>1267</lpage>
          . doi:10.1177/13623613231195799.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>C.</given-names>
            <surname>Treweek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Freeth</surname>
          </string-name>
          ,
          <article-title>Autistic people's perspectives on stereotypes: An interpretative phenomenological analysis</article-title>
          ,
          <source>Autism: The International Journal of Research and Practice</source>
          <volume>23</volume>
          (
          <year>2019</year>
          )
          <fpage>759</fpage>
          -
          <lpage>769</lpage>
          . doi:10.1177/1362361318778286.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S. Y.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.-Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bottema-Beutel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Gillespie-Lynch</surname>
          </string-name>
          ,
          <article-title>Time to level up: A systematic review of interventions aiming to reduce stigma toward autistic people</article-title>
          ,
          <source>Autism</source>
          <volume>28</volume>
          (
          <year>2024</year>
          )
          <fpage>798</fpage>
          -
          <lpage>815</lpage>
          . doi:10.1177/13623613231205915.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J.</given-names>
            <surname>Betker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Goh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Brooks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ouyang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Manassra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramesh</surname>
          </string-name>
          ,
          <article-title>Improving image generation with better captions</article-title>
          . URL: https://api.semanticscholar.org/CorpusID:264403242.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rombach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Blattmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lorenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Esser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ommer</surname>
          </string-name>
          ,
          <article-title>High-resolution image synthesis with latent diffusion models</article-title>
          ,
          <year>2022</year>
          . URL: https://arxiv.org/abs/2112.10752. arXiv:2112.10752.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Gordon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mizzi</surname>
          </string-name>
          ,
          <article-title>Representation of autism in fictional media: A systematic review of media content and its impact on viewer knowledge and understanding of autism</article-title>
          ,
          <source>Autism</source>
          <volume>27</volume>
          (
          <year>2023</year>
          )
          <fpage>2205</fpage>
          -
          <lpage>2217</lpage>
          . doi:10.1177/13623613231155770.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>R.</given-names>
            <surname>Brickhill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Atherton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piovesan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cross</surname>
          </string-name>
          ,
          <article-title>Autism, thy name is man: Exploring implicit and explicit gender bias in autism perceptions</article-title>
          ,
          <source>PLOS ONE</source>
          <volume>18</volume>
          (
          <year>2023</year>
          )
          <fpage>e0284013</fpage>
          . doi:10.1371/journal.pone.0284013.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>S.</given-names>
            <surname>Cruz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.-P.</given-names>
            <surname>Zubizarreta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Araújo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martinho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tubío-Fungueiriño</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sampaio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cruz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Carracedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fernández-Prieto</surname>
          </string-name>
          ,
          <article-title>Is there a bias towards males in the diagnosis of autism? a systematic review and meta-analysis</article-title>
          ,
          <source>Neuropsychology Review</source>
          (
          <year>2024</year>
          ).
          doi:10.1007/s11065-023-09630-2.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Aylward</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Gal-Szabo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Taraman</surname>
          </string-name>
          ,
          <article-title>Racial, ethnic, and sociodemographic disparities in diagnosis of children with autism spectrum disorder</article-title>
          ,
          <source>Journal of developmental and behavioral pediatrics : JDBP</source>
          <volume>42</volume>
          (
          <year>2021</year>
          )
          <fpage>682</fpage>
          -
          <lpage>689</lpage>
          . doi:10.1097/DBP.0000000000000996.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Z. J.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <article-title>Race and sex bias in the autism diagnostic observation schedule (ADOS-2) and disparities in autism diagnoses</article-title>
          ,
          <source>JAMA Network Open</source>
          <volume>5</volume>
          (
          <year>2022</year>
          )
          <fpage>e229503</fpage>
          -
          <lpage>e229503</lpage>
          . URL: https://api.semanticscholar.org/CorpusID:248390645.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Gernsbacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Raimond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Stevenson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Boston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Harp</surname>
          </string-name>
          ,
          <article-title>Do puzzle pieces and autism puzzle piece logos evoke negative associations?</article-title>
          ,
          <source>Autism</source>
          <volume>22</volume>
          (
          <year>2018</year>
          )
          <fpage>118</fpage>
          -
          <lpage>125</lpage>
          . doi:10.1177/1362361317727125.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Mandell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Wiggins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Carpenter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Daniels</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>DiGuiseppi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Durkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Giarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Morrier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Nicholas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Pinto-Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. T.</given-names>
            <surname>Shattuck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. C.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yeargin-Allsopp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Kirby</surname>
          </string-name>
          ,
          <article-title>Racial/ethnic disparities in the identification of children with autism spectrum disorders</article-title>
          ,
          <source>American Journal of Public Health</source>
          <volume>99</volume>
          (
          <year>2009</year>
          )
          <fpage>493</fpage>
          -
          <lpage>498</lpage>
          . doi:10.2105/AJPH.2007.131243.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>N.</given-names>
            <surname>Norori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Aellen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Faraci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tzovara</surname>
          </string-name>
          ,
          <article-title>Addressing bias in big data and ai for health care: A call for open science</article-title>
          ,
          <source>Patterns</source>
          <volume>2</volume>
          (
          <year>2021</year>
          )
          <fpage>100347</fpage>
          . doi:10.1016/j.patter.2021.100347.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sinclair</surname>
          </string-name>
          ,
          <article-title>Don't mourn for us</article-title>
          ,
          <source>Autonomy, the Critical Journal of Interdisciplinary Autism Studies</source>
          <volume>1</volume>
          (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kimber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Verrier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Connolly</surname>
          </string-name>
          ,
          <article-title>Autistic people's experience of empathy and the autistic empathy deficit narrative</article-title>
          ,
          <source>Autism in Adulthood</source>
          (
          <year>2023</year>
          ). doi:10.1089/aut.2023.0001.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>P.</given-names>
            <surname>Beckmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Köstner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Hipólito</surname>
          </string-name>
          ,
          <article-title>An alternative to cognitivism: Computational phenomenology for deep learning</article-title>
          ,
          <source>Minds and Machines</source>
          <volume>33</volume>
          (
          <year>2023</year>
          )
          <fpage>397</fpage>
          -
          <lpage>427</lpage>
          . doi:10.1007/s11023-023-09638-w.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>R.</given-names>
            <surname>Gandikota</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Materzynska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fiotto-Kaufman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bau</surname>
          </string-name>
          ,
          <article-title>Erasing concepts from diffusion models</article-title>
          ,
          <source>ArXiv abs/2303.07345</source>
          (
          <year>2023</year>
          ). URL: https://arxiv.org/abs/2303.07345.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>D.</given-names>
            <surname>Crawshaw</surname>
          </string-name>
          ,
          <article-title>Should we continue to tell autistic people that their brains are different?</article-title>
          ,
          <source>Psychological Reports</source>
          (
          <year>2023</year>
          ). doi:10.1177/00332941231174391.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Harwood</surname>
          </string-name>
          ,
          <article-title>Representations of autism in australian print media</article-title>
          ,
          <source>Disability &amp; Society</source>
          <volume>24</volume>
          (
          <year>2009</year>
          )
          <fpage>5</fpage>
          -
          <lpage>18</lpage>
          . doi:10.1080/09687590802535345.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>G.</given-names>
            <surname>Wolbring</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mosig</surname>
          </string-name>
          ,
          <article-title>Autism in the News: Content Analysis of Autism Coverage in Canadian Newspapers</article-title>
          , Praeger,
          <year>2017</year>
          , pp.
          <fpage>63</fpage>
          -
          <lpage>94</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Holton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Farrell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Fudge</surname>
          </string-name>
          ,
          <article-title>A threatening space?: Stigmatization and the framing of autism in the news</article-title>
          ,
          <source>Communication Studies</source>
          <volume>65</volume>
          (
          <year>2014</year>
          )
          <fpage>189</fpage>
          -
          <lpage>207</lpage>
          . URL: https://www.tandfonline.com/doi/abs/10.1080/10510974.2013.855642. doi:10.1080/10510974.2013.855642.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Huws</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S. P.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <article-title>Missing voices: Representations of autism in British newspapers, 1999-2008</article-title>
          ,
          <source>British Journal of Learning Disabilities</source>
          <volume>39</volume>
          (
          <year>2011</year>
          )
          <fpage>98</fpage>
          -
          <lpage>104</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-3156.2010.00624.x. doi:10.1111/j.1468-3156.2010.00624.x.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>J.</given-names>
            <surname>Jack</surname>
          </string-name>
          ,
          <source>From Refrigerator Mothers to Computer Geeks</source>
          , University of Illinois Press,
          <year>2014</year>
          . URL: http://www.jstor.org/stable/10.5406/j.ctt7zw5k5.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>