<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Web Content Filtering Through Knowledge Distillation of Large Language Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tamás Vörös</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sean Bergeron</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Konstantin Berlin</string-name>
        </contrib>
        <aff>Sophos AI</aff>
      </contrib-group>
      <abstract>
        <p>We introduce a state-of-the-art approach for URL categorization that leverages the power of Large Language Models (LLMs) to address the primary objectives of web content filtering: safeguarding organizations from legal and ethical risks, limiting access to high-risk or suspicious websites, and fostering a secure and professional work environment. Our method utilizes LLMs to generate accurate classifications and then employs established knowledge distillation techniques to create smaller, more specialized student models tailored for web content filtering. Distillation results in a student model with a 9% improvement in accuracy when classifying websites, sourced from customer telemetry data collected by a large security vendor, into 30 distinct content categories based on their URLs, surpassing the current state-of-the-art approach. Our student model matches the performance of the teacher LLM with 175 times fewer parameters, allowing the model to be used for in-line scanning of large volumes of URLs, and requires three orders of magnitude less manually labeled training data than the current state-of-the-art approach. Depending on the specific use case, the output generated by our approach can either be directly returned or employed as a pre-filter for more resource-intensive operations involving website images or HTML.</p>
      </abstract>
      <kwd-group>
        <kwd>Machine Learning</kwd>
        <kwd>Web Content Filtering</kwd>
        <kwd>Large Language Models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Web content filtering is crucial for maintaining network security and regulatory compliance in
organizations [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. The aim of a web content filtering system is to prevent employees from
accessing inappropriate content that violates regulatory requirements or company policies. By
filtering out high-risk content categories, such as pornography and weapons, it helps organizations
avoid legal liability, reduces the risk of legal or ethical issues arising from exposure to unsuitable
content, and promotes a professional work environment. Unlike security classification, which
detects hosted malware and phishing attacks, content filtering models address a more general
problem that is independent of the attack mechanism. In this work, we address the problem of
web content categorization.
      </p>
      <p>
        Traditional approaches to website categorization have relied upon creating and
maintaining domain-to-category mappings, which are lists of domains grouped by their
manually assigned categories [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. A natural extension to list-based URL categorization is to
enhance the lists with analyst-created signatures, which generalize better than exact
string matching. In the case of web content filtering, the most straightforward
signature-based approach is to propagate labels based on domains and subdomains, although more
complex rules may be applied [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ]. An example of this kind of label propagation is
to maintain a list of known domains with predetermined labels, such as labeling
“online-shop.com” as an e-commerce site and “news-site.com” as a news site. All URLs under
these domains inherit the label. For instance, any URL under “online-shop.com”, such
as “online-shop.com/products/clothing”, “online-shop.com/products/electronics”, and
“online-shop.com/cart”, can be labeled as e-commerce. Similarly, any URL under “news-site.com”, such
as “news-site.com/politics”, “news-site.com/technology”, and “news-site.com/entertainment”,
can be labeled as news. In this manuscript, we focus on domain label propagation signatures for
acquiring ground truth for the sake of simplicity, but the approach could be trivially extended to longest
prefix matching of the URL for ambiguous websites. To provide comprehensive customer
telemetry coverage for organizations, one of the most resource-effective manual methods involves
ranking domains by frequency and labeling them in descending order. This approach maximizes
the coverage gained by labeling a single domain. As new websites emerge daily and with
over a billion existing websites, maintaining and scaling signature approaches manually for
the long tail has become increasingly challenging. This necessitates the integration of machine
learning into the classification pipeline [
        <xref ref-type="bibr" rid="ref7 ref8 ref9">7, 8, 9</xref>
        ]. Figure 1 illustrates the telemetry coverage
of a large security vendor, with the space above the bars representing the infrequently seen
long tail distribution of domains not already covered by domain labeling and label propagation
signatures.
      </p>
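      <p>As a minimal illustration of domain label propagation, the following Python sketch inherits a domain’s manually assigned category for every URL under it; the DOMAIN_CATEGORIES mapping and the helper function are hypothetical stand-ins for the production domain-to-category database.</p>
      <preformat>
from urllib.parse import urlparse

# Hypothetical domain-to-category mapping; production databases hold
# millions of manually labeled entries.
DOMAIN_CATEGORIES = {
    "online-shop.com": "Shopping",
    "news-site.com": "News",
}

def propagate_label(url):
    """Return the category inherited from the URL's domain, if known."""
    host = urlparse(url).netloc.lower()
    # Walk up the subdomain chain so "blog.news-site.com" also inherits "News".
    parts = host.split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in DOMAIN_CATEGORIES:
            return DOMAIN_CATEGORIES[candidate]
    return None  # long-tail URL: falls through to the machine learning model

print(propagate_label("https://online-shop.com/products/clothing"))  # Shopping
      </preformat>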
      <p>
        Maintaining domain-to-category mapping lists and extending them with signatures remains
critical in the early stages of security pipelines [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. These labels serve as initial shortcuts in
the filtering pipeline to prevent catastrophic false positives and provide low latency on more
commonly seen websites. Websites like “stackoverflow.com” are well known and need not be
evaluated by a model, where a potential false positive would translate into a negative impact on
an organization’s productivity. In this work, we focus our evaluations on the long tail of
the distribution, which aligns with actual deployment scenarios and emphasizes the need for
machine learning to address the challenges associated with classifying this ever-growing subset
of domains.
      </p>
      <p>
        In addition to acting as a pre-filter, domain-to-category mapping lists and label propagation
signatures are often used to create the training sets for machine learning models. However,
machine learning algorithms tend to memorize patterns rather than understand underlying
concepts [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], thus learning from already labeled URLs is insufficient for accurate content
classification in the long tail of the URL distribution. A model whose parameters are configured
to memorize the head of the distribution is undesirable, as signatures already cover such domains
without risking false positives. Therefore, our objective is to identify models with superior
generalization capabilities for out-of-distribution samples.
      </p>
      <p>
        For unknown or new domains, the model must infer a description from the URL. It is useful
to view URL classification, especially for web content filtering, as a natural language processing
task, considering URLs as semi-sentences. For a fair number of our categories, the URL will
frequently have explicit words to advertise its content, specifically semantically related keywords
for the given category. For example, a site selling weapons will often contain keywords such
as “armaments”, “glock”, or “gun”. The current state-of-the-art in URL detection and our
chosen baseline, URLTran [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], frames URL detection as a natural language processing task and
fine-tunes a pre-trained BERT model [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to detect phishing URLs. The BERT model is an early
example of the transformer architecture [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which has since been refined and scaled, giving
rise to large language models. Large language models (LLMs) are state-of-the-art on natural
language tasks [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. LLMs are first pre-trained on large amounts of unlabeled textual data
in a task-agnostic manner, learning a general understanding of language such as syntax and
semantics [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Once pre-trained, LLMs can effectively generalize to new tasks upon fine-tuning
or few-shot prompting with much smaller amounts of data [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The amount of data needed for
LLMs to generalize to new tasks is often several orders of magnitude less than the amount of
data needed to fully train a smaller model.
      </p>
      <p>
        Direct use of LLMs for URL content classification in production is prohibitive due to cost
considerations at scale [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Fine-tuning smaller LLMs that have lower inference costs results
in a loss of performance. Through knowledge distillation [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], the LLM-labeled long tail data
enables a smaller student model to improve its performance while maintaining the necessary
computational efficiency for production. Turc et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] proposed an approach that utilizes
knowledge distillation from the teacher’s predictive distribution (soft labels) followed by
supervised fine-tuning of the student model. In the domain of web content classification, we combine
the steps of distillation and fine-tuning, and our computationally efficient student matches the
performance of the teacher model. Instead of a predictive distribution, we distill the teacher
using hard labels. The student model has a low inference cost and is well-suited for the purposes
of web content filtering in production.
      </p>
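      <p>A minimal sketch of the hard-label distillation loop described above; the teacher and student objects are hypothetical wrappers around a fine-tuned LLM and a compact classifier.</p>
      <preformat>
def distill(teacher, student, signature_labeled, unlabeled_urls, classes):
    # 1. The fine-tuned teacher LLM assigns a single hard label to each
    #    unlabeled long-tail URL; no soft predictive distribution is kept.
    pseudo_labeled = []
    for url in unlabeled_urls:
        label = teacher.predict(url)
        if label in classes:  # drop out-of-vocabulary generations
            pseudo_labeled.append((url, label))

    # 2. Distillation and fine-tuning are combined in one step: the student
    #    trains on signature-labeled and teacher-labeled data together.
    student.train(signature_labeled + pseudo_labeled)
    return student
      </preformat>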
      <p>The main contributions of this paper are as follows:
• We demonstrate that when fine-tuned on data labeled with domain propagation signatures,
large language models outperform standard deep learning models by 9% in terms of
accuracy on the long tail categorization problem.
• We demonstrate that we can fine-tune a large language model using 10,000 samples to
achieve better performance than the current state-of-the-art approach trained on 10
million samples.
• We showcase the effective application of knowledge distillation from a fine-tuned LLM to
boost the performance of a smaller, more computationally efficient model, specifically for
web content filtering tasks. We attain performance levels comparable to the original LLM
using a model that is 175 times smaller, decreasing from 770 million parameters to just 4
million. This reduction in size makes the model more suitable for production and enables
practical deployment across various contexts, such as serving as a general pre-filter for
all incoming network traffic in firewalls.
• We propose a novel validation approach for the community to adopt, which more
accurately assesses model performance in realistic scenarios where it works alongside
a domain-to-category mapping list of ground truth labels, extended via domain label
propagation signatures. In this setting, the model analysis focuses on labeling the long
tail, which is the more relevant metric.</p>
      <p>Our paper is structured as follows: In Section 1 we introduce the research problem and
elucidate the motivation behind our proposed approach. In Section 2 we review relevant
literature and prior work in the field. In Section 3 we provide a comprehensive description of
our methodology, encompassing the dataset and experimental setups. In Section 4 we present
our results, which include a comparison of our approach’s performance against the current
state-of-the-art, an analysis of the benefits of LLMs in terms of accuracy and sample efficiency, as
well as an exploration of deployment challenges and our proposed solution utilizing knowledge
distillation for more compact and computationally efficient models. Lastly, in Section 5 we
conclude the paper, outlining potential avenues for future research in this domain.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Previous work in this field has primarily focused on security classification rather than content
classification and filtering. Since machine learning approaches to security classification can be
readily reformulated from binary classification to multi-class classification through modification
of the last layer in the neural network, approaches to security classification are relevant to the
task of content classification. We will compare and build upon security publications as they are
better studied.</p>
      <p>
        Early work on URL-only classification for phishing detection using manually derived feature
sets employed both generic features and features meant to detect certain obfuscation techniques
such as obfuscation of the host with another domain [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The features were divided into four
groups: Page Based, Domain Based, Type Based, and Word Based. The authors focused on
manual feature engineering and only applied logistic regression as their classifier. A range
of machine learning models, including Random Forests, Logistic Regression, Support Vector
Machines, Naive Bayes, and Gradient Boosting, have been applied to detect phishing URLs
using manually extracted feature sets [
        <xref ref-type="bibr" rid="ref20 ref7">7, 20</xref>
        ]. Feature sets may be entirely lexically derived
such as the length of the URL, the number of digits in the primary domain, and the number
of special characters in the path [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. In addition to lexical features, domain-specific features
such as the number of passive DNS changes or the remaining time of the SSL certificate may be
incorporated [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Manual features may also be extracted from the retrieved information of
lookups (Whois, GSB Reporting, Google Ranking, and Selenium Rendering) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>
        The manual feature extraction approach is difficult to maintain as adversaries tend to adapt
obfuscation methods to avoid detection, so models have shifted to a featureless approach based
on the raw string as input. Deep learning methods learn and then automatically extract the
feature set from the raw URL during training. The use of automatically extracted features does
not, however, preclude the inclusion of manual features, as the optimal input combination of
manual and automatic features can be optimized with genetic algorithms [
        <xref ref-type="bibr" rid="ref24 ref25">24, 25</xref>
        ].
      </p>
      <p>
        Automatic feature extraction can be done on various levels of granularity, starting at the
character level. Saxe et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] encode a URL by replacing each character with its corresponding
ID, whereby features are extracted from the encoded URL with sequential embedding and
convolutional layers. This approach outperformed a baseline which uses a manual feature set.
Learning meaningful context-independent representations is difficult when using character-level
tokenization, as a character token does not carry the same meaning that a word does. More recent
approaches like subword-level and word-level tokenization have been developed in natural
language processing in order to make it easier for models to maintain semantic meaning in
common subwords and learn more meaningful context-independent representations.
      </p>
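      <p>As a brief illustration of the difference, a sketch using the Hugging Face Transformers tokenizer (the example URL is hypothetical):</p>
      <preformat>
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

url = "online-gun-shop.com/glock-armaments"

# Subword tokenization preserves semantically meaningful pieces such as
# "gun" and "shop"; the exact output depends on the tokenizer's vocabulary.
print(tokenizer.tokenize(url))

# Character-level tokenization maps each character to an ID, carrying no
# word-level meaning on its own.
print([ord(c) for c in url])
      </preformat>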
      <p>
        The application of word-level tokenization to URL classification was first proposed by Le
et al. [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] who extracted both character-level and word-level features. Each feature set is fed
through its own series of sequential embedding and convolutional layers before being fused.
Tajaddodianfar et al. [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] expand on this approach by first training the word embeddings
in an unsupervised manner via FastText [28]. The word and character convolutional stems
include several convolutional layers in parallel with dilated convolutions allowing the model to
adaptively grow in depth and width, extracting N-grams of various lengths. In addition to using
both character and word-level feature models, Bu et al. [29] apply a triplet network structure in
order to address class imbalances and better learn the similarity between URLs.
      </p>
      <p>
        In addition to feature set selection, the choice of model architecture plays a large role in
the performance of a URL classification model. Transformers have achieved state-of-the-art
results in many natural language processing tasks, making them a good candidate for URL
classification after fine-tuning or even custom pre-training [
        <xref ref-type="bibr" rid="ref9">30, 31, 9, 32, 33</xref>
        ]. In addition to the
URL, a transformer can leverage tokenized features of the HTML [34]. A URL classification
system might employ different architectures in parallel, fusing the output of models with a
convolutional architecture and a transformer architecture [35]. Instead of fusing model outputs,
a system may employ an ensemble of different architectures including Decision Trees, LSTMs,
and transformers for URL classification [36].
      </p>
      <p>
        Other architectures applied to URL classification include graph neural networks [37, 38] and
GANs [39, 40]. AutoEncoders have proven useful against zero-day attacks [41]. In addition
to the URL and HTML sequences, but beyond the scope of this paper, images of the webpage
may be incorporated [42, 43]. The task of classification may be reformulated by approaching
detection from a reinforcement learning perspective [44] or from the perspective of thwarting
an adversarial opponent [
        <xref ref-type="bibr" rid="ref28">45, 46</xref>
        ].
      </p>
      <p>
        The current state-of-the-art for URL-only classification for phishing detection, URLTran [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ],
utilizes the transformer architecture underpinning LLMs. Maneriker et al. fine-tune a
pre-trained BERT model on Microsoft’s Edge and Internet Explorer production browsing telemetry.
Parallel to URLTran is the Unified Text-to-Text Cybersecurity (UTS) model. Pal et al. [
        <xref ref-type="bibr" rid="ref29">47</xref>
        ] train
a multi-task encoder-decoder LLM on cybersecurity data that includes URL phishing detection.
Although Pal et al. introduce LLMs to URL phishing detection, they do not explore the few-shot
capabilities of LLMs in the URL domain nor test the capabilities of LLMs at scale. Compared to
URLTran, UTS does not consider a methodology which would allow its large model to be used
in production and reports a lower F1 score on a random split compared to URLTran’s evaluation
on the industry-standard time split. Therefore, URLTran will act as our baseline to which all of
our results will be compared.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-0">
        <title>3.1. Data</title>
        <p>In this section, we describe our methodology for collecting data and constructing training,
validation, and test sets. We also explain our experimental setup and provide a detailed account
of how we trained our model.</p>
      <p>We obtained our dataset from a large security vendor’s customer telemetry data sourced from
its firewall and endpoint products over a period spanning July 1, 2022 to December 23, 2022.</p>
      <p>We track 30 categories in our dataset. These categories were defined by a team of expert
analysts to be representative of the most common internet content categories as well as the most
impactful, which we define as the potential to impact productivity, the degree of liability for
the organization, and the degree of associated ethical concerns. The categories include: “Chat”,
“Games”, “Shopping”, “Sports”, “News”, “Job Search”, “Search Engines”, “Alcohol”, “Gambling”,
“Weapons”, “Porn”, “Banking”, “Business”, “Education”, “Entertainment”, “Food and Dining”,
“Government”, “Health and Medicine”, “Motor Vehicles”, “Peer to Peer”, “Real Estate”, “Religion”,
“Travel”, “Translators”, “Computer and Internet”, “Hunting and Fishing”, “Marijuana”, “Radio
and Audio Hosting”, “Social Networking”, and “Video Hosting”. The majority of websites in our
dataset belong to categories such as “Computer and Internet”, “Search Engines”, and “Business”,
while niche categories such as “Hunting and Fishing” and “Marijuana” have fewer instances.
Figure A5 shows the distribution of categories in our dataset. We define our categorization
task as a closed-world problem, meaning every URL belongs to one of the 30 categories. It’s
important to note that, due to limitations in the domain-to-category database, we only consider
a single category per URL, even though some pages may realistically have multiple category
labels.</p>
      </sec>
      <sec id="sec-3-1">
        <title>3.2. Training Sets</title>
        <p>To construct our training dataset, which spans the period from July 1, 2022 to August 19, 2022, we
uniformly sampled 10 million distinct URLs, out of the billions of URL lookups, that have been
labeled using a domain-to-category mapping database with label propagation. Additionally, we
sampled 10 million URLs from this period that did not correspond to a signature (unlabeled).
The unlabeled URLs were set aside for training augmentation purposes.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Validation and Test Sets</title>
        <p>We sampled an evaluation dataset spanning from August 19, 2022 to December 23, 2022 and
divided it into two validation and test sets to assess our model’s performance in different
scenarios. The validation sets were based on data first seen between August 19, 2022 and
November 24, 2022, while the test sets included data first seen between November 24, 2022 and
December 23, 2022.</p>
        <p>We created a domain and time split to simulate a long tail deployment setting. We separated
the data based on the first-seen time of the URL and the first-seen time of the URL’s domain. The
first-seen time of a domain refers to the earliest instance of a URL from that domain. This means
there is no domain overlap between the training, validation, and test sets. This approach allowed
us to better approximate the unlabeled part of the telemetry.</p>
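        <p>A minimal sketch of constructing the domain and time split, assuming illustrative (url, domain, first_seen) records and cut-off timestamps:</p>
        <preformat>
def domain_time_split(records, val_start, test_start):
    # Compute each domain's first-seen time, i.e. the earliest timestamp
    # of any URL from that domain.
    first_seen_by_domain = {}
    for url, domain, ts in records:
        current = first_seen_by_domain.get(domain, ts)
        first_seen_by_domain[domain] = min(current, ts)

    # Assign every URL by its domain's first-seen time, so no domain
    # appears in more than one split.
    train, val, test = [], [], []
    for url, domain, ts in records:
        domain_first = first_seen_by_domain[domain]
        if domain_first >= test_start:
            test.append(url)
        elif domain_first >= val_start:
            val.append(url)
        else:
            train.append(url)
    return train, val, test
        </preformat>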
        <p>To compare our results with the industry-standard evaluation methodology we also created a
time split. This split was sampled from the same time span as the domain and time split but
without the constraint of dividing based on the domain’s first-seen time.</p>
        <p>For the domain and time split, the validation set comprised 79,313 unique URLs from
30,897 unique domains, with a maximum of 5 URLs per domain. The test set included 110,624
unique URLs from 43,996 domains. For the time split, we sampled 183,935 URLs from 62,961
domains.</p>
        <p>To compare the various splits, we display the most common domains and their frequencies
for the labeled training data, both test splits, and the unlabeled training data in Table 1. The
labeled training set and the time split are dominated by common domains such as “google.com”.
The domain and time split is most similar to the unlabeled long tail of the data where the desired
value of machine learning resides.</p>
        <p>To quantitatively assess the disparities between the domain distributions of the time split
and the domain and time split, which more closely models the long tail, we employed the
Kullback-Leibler (KL) divergence as a metric for measuring the dissimilarity between token
distributions. The KL divergence values were calculated between each validation split and
the training dataset as the reference. We tokenized all the URLs in the training dataset using
BERT tokenization and then combined all of the tokens to define the distribution of the base
training dataset. We tokenized all the URLs in both validation splits using BERT tokenization,
and each URL’s token sequence was converted into a probability distribution by computing a
normalized histogram. The KL divergence between the token probability distribution of each
URL and the reference distribution was then determined using the entropy function. Figure 2
illustrates that the token distribution of the domain and time split displays substantially higher
KL divergence values from the reference compared to the time split. This observation highlights
the distinct nature of the domain distributions in the two validation splits and the similarity
between the unlabeled part of the customer telemetry and the domain and time split.</p>
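        <p>A sketch of the per-URL KL divergence computation, assuming BERT tokenization via Hugging Face Transformers; the URL lists are illustrative, and scipy’s entropy(p, q) computes KL(p || q):</p>
        <preformat>
import numpy as np
from scipy.stats import entropy
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Illustrative stand-ins for the training and validation URL sets.
train_urls = ["google.com/search", "news-site.com/politics"]
validation_urls = ["online-shop.com/cart"]

def token_distribution(urls, vocab_size):
    # Normalized histogram over token IDs, smoothed to avoid zeros.
    counts = np.zeros(vocab_size)
    for url in urls:
        for token_id in tokenizer.encode(url, add_special_tokens=False):
            counts[token_id] += 1
    smoothed = counts + 1e-9
    return smoothed / smoothed.sum()

# Reference distribution: all tokens of the training dataset combined.
reference = token_distribution(train_urls, tokenizer.vocab_size)

# One KL value per validation URL against the reference distribution.
kl_values = [
    entropy(token_distribution([url], tokenizer.vocab_size), reference)
    for url in validation_urls
]
        </preformat>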
      </sec>
      <sec id="sec-3-3">
        <title>3.4. Experiments</title>
        <p>The primary objective of our experiments was to identify the best-performing model in terms
of accuracy on our dataset, while using as few training labels and being as small as possible. To
achieve this, we varied the training set size as a hyperparameter for each LLM, compact model,
and the baseline. We explored training set sizes ranging from few-shot to large-scale learning,
increasing the sample size from 10 samples per category to 5 million total samples, growing
by an order of magnitude at each step. For a given sample step size s, the exact number of samples per
category was determined by the minimum of s and the total labeled instances in that category.</p>
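        <p>A sketch of the per-category sampling rule, assuming an illustrative labels_by_category mapping from category name to its labeled samples:</p>
        <preformat>
# Illustrative mapping from category to its signature-labeled samples.
labels_by_category = {
    "News": ["news-site.com/politics", "news-site.com/tech"],
    "Shopping": ["online-shop.com/cart"],
}

def build_training_set(labels_by_category, s):
    # Each category contributes min(s, available) labeled samples.
    subset = []
    for category, samples in labels_by_category.items():
        n = min(s, len(samples))
        subset.extend(samples[:n])
    return subset

# Step sizes grow by an order of magnitude, from few-shot upward.
for s in [10, 100, 1_000, 10_000, 100_000]:
    training_set = build_training_set(labels_by_category, s)
        </preformat>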
        <p>Our next goal was to refine the top-performing large language model (LLM) configuration
into a more compact student model. We achieved this by using labels generated by the
best-performing LLM to train smaller models.</p>
        <p>We labeled 10 million unlabeled URLs from our dataset using the best-performing LLM,
utilizing them as hard labels. This resulted in a total of 20 million training samples: the 10
million signature-labeled base training set and an additional 10 million labels generated by the
LLM. We then investigated the impact of combining these labels using various mixing ratios of
labeled samples from the base training set and LLM-labeled samples. Each compact student
model and the baseline were trained on a variety of dataset configurations, each containing a total
of 10 million samples.</p>
        <p>We began with a 10-million-sample base training set, incorporating LLM-generated labels at 0.0,
0.25, 0.50, 0.75, and 1.0 ratios. The 0.0 ratio used only the base training set, while the 0.25
ratio included 7.5 million base URLs and 2.5 million LLM-generated. At 0.5, the sources were
evenly split with 5 million each. The 0.75 ratio contained 2.5 million base and 7.5 million LLM
URLs, and the 1.0 ratio relied solely on LLM-generated labels. By varying the mixing ratios,
we were able to assess the effectiveness of our knowledge distillation process and compare the
contributions of LLM-generated labels against simply using signature-generated labels.</p>
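        <p>A sketch of constructing a mixed training set at a given ratio, assuming illustrative lists of (url, label) pairs:</p>
        <preformat>
import random

def mix_training_set(base_labeled, llm_labeled, ratio, total=10_000_000):
    # ratio 0.25 corresponds to 2.5M LLM-generated labels and 7.5M
    # signature-generated labels, for a fixed total of 10M samples.
    n_llm = int(total * ratio)
    n_base = total - n_llm
    mixed = random.sample(base_labeled, n_base) + random.sample(llm_labeled, n_llm)
    random.shuffle(mixed)
    return mixed
        </preformat>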
        <p>
          We trained and compared the performance of five models: BERT-based URLTran as the
baseline [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], which demonstrated state-of-the-art performance for URL classification, eXpose
[
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and BERTiny [
          <xref ref-type="bibr" rid="ref30">48</xref>
          ] as the student models, and T5 Large [
          <xref ref-type="bibr" rid="ref31">49</xref>
          ] and GPT-3 Babbage [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] as
the teacher models. The size configurations of our teacher models were limited by budgetary
constraints, precluding larger configurations such as GPT-3 Davinci and T5-11B. Our student
models were chosen for the following reasons: BERTiny is the smallest pre-trained configuration
of the baseline, and the inclusion of eXpose allows us to demonstrate the improvements of the
transformer architecture over convolutional models for natural language tasks, specifically web
content categorization. Unless otherwise noted, all experiments were evaluated on the test set
of both validation splits. The GPT-3 Babbage model was not fine-tuned on 5 million samples
due to cost considerations.
        </p>
      </sec>
      <sec id="sec-3-4">
        <title>3.5. Training</title>
        <p>
          For all models, we pre-processed the data by splitting at the first occurrence of the “?” character
and removing the query parameters. The query is assumed to be noisy and to carry no
meaningful information. All URLs were truncated to a fixed length of 128 characters, as we saw
no improvement from further increasing this length. The base pre-trained models and tokenizers
for all T5 Large, BERT, and BERTiny configurations were the HuggingFace defaults [
          <xref ref-type="bibr" rid="ref32">50</xref>
          ].
        </p>
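        <p>A sketch of this pre-processing step:</p>
        <preformat>
MAX_LEN = 128

def preprocess(url):
    # Drop the query string, assumed to carry no meaningful information.
    url = url.split("?", 1)[0]
    # Truncate to a fixed length; longer inputs showed no improvement.
    return url[:MAX_LEN]

assert preprocess("https://example.com/page?utm_source=x") == "https://example.com/page"
        </preformat>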
        <p>
          For all reported T5 configurations, we fine-tuned all weights of a pre-trained T5 Large model
using the Adafactor optimizer [
          <xref ref-type="bibr" rid="ref33">51</xref>
          ]. Early stopping was applied by monitoring performance
on the validation set of the domain and time split. For all reported GPT-3 configurations, we
fine-tuned the Babbage model using the OpenAI API.
        </p>
        <p>T5 and GPT-3 are generative models that can utilize semantic relationships between class
labels and keywords in a URL for making predictions. Consequently, we employed literal class
labels as our prediction target. When reporting aggregate metrics, out-of-vocabulary (OOV)
predictions are not treated as a separate class; instead, they are counted as misclassifications
for every class. Additionally, any unlabeled data for which the LLM generates an OOV prediction is
excluded from the distillation process.</p>
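        <p>A sketch of the OOV handling, where CLASS_LABELS stands in for the set of 30 literal category names:</p>
        <preformat>
CLASS_LABELS = {"News", "Shopping", "Porn", "Weapons"}  # subset, for brevity

def is_correct(generated, true_label):
    # An OOV generation counts as a misclassification for every class.
    if generated not in CLASS_LABELS:
        return False
    return generated == true_label

def keep_for_distillation(url, generated):
    # Unlabeled URLs with OOV teacher predictions are excluded.
    return (url, generated) if generated in CLASS_LABELS else None
        </preformat>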
        <p>For GPT-3, the temperature was set to 0 to ensure deterministic results upon inference. The
logit bias for tokens associated with the class labels was set to 100 to ensure exclusive selection
of expected tokens. Finally, the stop token was set to the stop sequence seen during training.</p>
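        <p>A sketch of these inference settings using the legacy OpenAI completion API; the fine-tuned model name, prompt format, and label token IDs are hypothetical:</p>
        <preformat>
import openai  # legacy (pre-1.0) openai-python interface

# Hypothetical IDs of the tokens that make up the class labels.
LABEL_TOKEN_IDS = [3347, 5776]

response = openai.Completion.create(
    model="babbage:ft-hypothetical",
    prompt="online-shop.com/products/clothing -",
    temperature=0,  # deterministic decoding
    logit_bias={str(t): 100 for t in LABEL_TOKEN_IDS},  # favor label tokens
    stop="\n",  # the stop sequence seen during fine-tuning
    max_tokens=5,
)
print(response["choices"][0]["text"].strip())
        </preformat>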
        <p>For the student models, we trained a 1D convolutional eXpose model and fine-tuned all
weights of a pre-trained BERTiny model. We fine-tuned all weights of a pre-trained BERT model
to reproduce the architecture of URLTran as our baseline. No custom vocabulary was created
for the BERT-based models. Hyperparameter configurations for T5, BERTiny, BERT, and eXpose
may be found in Tables A7, A4, A5, and A6 respectively.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>In this section, we present the key findings and results of our two sets of experiments. We
report the results in terms of accuracy, with additional metrics for both experiments provided
in the Appendix.</p>
      <p>The performance of various models as a function of the log of the training sample counts is
displayed in Figure 3. The top-scoring configuration for each model is detailed in Table 2. On
the domain and time split, the best performing model, T5 Large, achieves 46.3% accuracy after
being fine-tuned on 10,000 samples. GPT-3 Babbage attains 44.4% accuracy after fine-tuning
on 10,000 samples. Both LLMs surpass the best baseline configuration, which achieves 38.3%
accuracy. BERTiny and eXpose achieve 35.7% and 30.2% accuracy, respectively, when trained
on 5 million samples.</p>
      <p>On the time split, eXpose achieves 92.8% accuracy when trained on 5 million samples. BERTiny,
fine-tuned on 5 million samples, attains 97.6% accuracy. The best configurations for the baseline,
GPT-3 Babbage, and T5 Large achieve 97.1%, 98.14%, and 97.5% accuracy, respectively. Additional
metrics for the time split are reported in Table A10, and for the domain and time split in Table
A9, for all experiments.</p>
      <p>On the domain and time split, the best performance was achieved with T5 on 10,000 training
samples, so we selected it as our teacher model. For the domain and time split, we found the best
ratio to be 1.0, where all 10 million training samples were previously unlabeled URLs labeled
by T5. Training eXpose on all of them increases the accuracy from 31.5% to 45%. Fine-tuning
BERTiny on all 10 million LLM labels, compared to the 10 million base training set, improves
the accuracy from 37.5% to 46.2%. Finally, fine-tuning URLTran on all 10 million LLM labels,
compared to the 10 million base training set, raises the accuracy from 41.6% to 46.8%.</p>
      <p>For the traditional time split, the augmentation at a ratio of 0.75 also increased the performance,
albeit marginally.</p>
      <p>The performance of the students and the baseline trained via knowledge distillation is shown
in the augmentation plot of Figure 3 as a function of the LLM label ratio in the training data.
The top-scoring distillation configuration for each student model is detailed in Table 2.</p>
      <sec id="sec-4-1">
        <title>4.1. Discussion</title>
        <p>A comparison of the models’ performance on the two evaluation splits reveals that results on
the time split, the traditional validation approach, are overly optimistic. Small models such as
BERTiny, trained merely on signature-driven data, exhibit performance comparable to T5 and
GPT-3. The disparity in model performance between the domain and time split versus the time
split, particularly for small models, underscores that signature-sourced data is repetitive and
can be memorized with just a few million parameters. Time split validation measures a model’s
ability to match the signature distribution, while in a production setting the primary concern
within the context of the overall pipeline is a model’s capacity to generalize to new data from
the long tail that falls outside the coverage of signatures.</p>
      <p>When considering the domain and time split—which aligns more closely with real-world
performance on unlabeled data—small models no longer match the performance of LLMs, as
seen in Figure 3. Beyond 10,000 samples, LLMs show minimal to no performance gains when
scaling up further. Conversely, the performance of small models and the baseline has not yet
converged at 5 million training samples. This demonstrates the sample-efficiency of LLMs in
the domain of website content categorization.</p>
      <p>LLMs outperform student models in terms of performance, but they still fall short of perfection
when applied to the domain and time split. This discrepancy can be attributed to two main factors.
First, due to dataset limitations, web content classification is framed as a single-label classification
problem. Table 3 displays a set of LLM misclassifications on the domain and time split, highlighting
that a URL could potentially belong to multiple categories. In the first three samples, the
analyst opted for the more generic label, while the model chose the more generic labels in
the following three samples. Both predicted and true labels could be considered correct in
all six cases, suggesting that the true performance is likely better than the metrics indicate
because of the single-label limitation. This trade-off between equally correct specific and general
labels becomes evident when examining the confusion matrix, displayed in Figure A4 in the
Appendix, for a T5 Large model’s performance on the domain and time split. As the confusion
matrix shows, the LLM tends to generate class labels that are more specific than manual
labels.</p>
      <p>The second factor occurs when a URL lacks keywords or context related to its category, as
demonstrated by the last six entries in Table 3. For the middle two URLs, the model was misled
by a prominent keyword in the URL, which was unrelated to its content. The final four URLs
contain no apparent signal. Consequently, the large-scale pre-training of LLMs struggles to
effectively transfer knowledge to a URL from the long tail. This means that if a URL lacks
clear or strong indicators of its category, the LLM may not accurately classify it, leading to
misclassifications.</p>
      <p>Our results reveal that mixing in the LLM-generated labels significantly enhances the
performance of the student models BERTiny and eXpose, as shown in Figure 3. Through this simple
form of augmentation, we nearly matched the 46.3% accuracy of T5 Large, the best-performing
LLM, with a transformer model that has a parameter count several orders of magnitude smaller
(0.57% of the teacher) and could reasonably be deployed in-line in production.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In conclusion, our paper contributes to the field of web content classification with the
development of lightweight models distilled from fine-tuned LLMs. We have demonstrated that LLMs,
when fine-tuned on data labeled with domain propagation signatures, significantly outperform
the current state-of-the-art approach on the long tail categorization problem. Our
teacher-student training approach enables the distillation of LLMs into models 175 times smaller without
sacrificing accuracy, thus making deployment practical in a wide variety of new contexts. The
amount of manual labels required to fine-tune the teacher LLM is orders of magnitude smaller
than what is required for convergence of the current state-of-the-art approach. Furthermore,
we have proposed a new validation approach that better measures model performance in more
realistic scenarios, which should be adopted by the community to improve generalization
capabilities to unseen data.</p>
      <p>Expanding beyond web content classification, the cybersecurity field could greatly benefit
from proven methods of distilling large language models (LLMs) into more compact versions.
This approach is particularly valuable when dealing with large data volumes and expensive
training samples, especially when the model is applied to out-of-distribution cases. For
addressing web content classification tasks specifically, we suggest future work should focus on
augmenting the training data and feature space with HTML and image data, utilizing GPT-4
as a teacher, allowing URLs to have more than one label, and re-working signatures for the
assignment of general categories.</p>
      <p>[28] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, Fasttext.zip:
Compressing text classification models, arXiv preprint arXiv:1612.03651 (2016).
[29] S.-J. Bu, H.-J. Kim, Learning disentangled representation of web address via
convolutional-recurrent triplet network for classifying phishing urls, in: 2021 International Conference
on Electronics, Information, and Communication (ICEIC), IEEE, 2021, pp. 1–4.
[30] E. M. Rudd, A. Abdallah, Training transformers for information security tasks: A case
study on malicious url prediction, arXiv preprint arXiv:2011.03040 (2020).
[31] E. M. Rudd, M. S. Rahman, P. Tully, Transformers for end-to-end infosec tasks: A feasibility
study, in: Proceedings of the 1st Workshop on Robust Malware Analysis, 2022, pp. 21–31.
[32] W. Chang, F. Du, Y. Wang, Research on malicious url detection technology based on bert
model, in: 2021 IEEE 9th International Conference on Information, Communication and
Networks (ICICN), IEEE, 2021, pp. 340–345.
[33] H. Shirazia, K. Haynesb, I. Raya, Towards performance of nlp transformers on url-based
phishing detection for mobile devices, Journal of Ubiquitous Systems and Pervasive
Networks, Volume 17, No. 1, pp. 35–42 (2022).
[34] Q. Hu, H. Zhou, Q. Liu, Phishing website detection based on multi-feature stacking, in:
2021 2nd International Conference on Artificial Intelligence and Computer Engineering
(ICAICE), IEEE, 2021, pp. 716–720.
[35] C. Wang, Y. Chen, Tcurl: Exploring hybrid transformer and convolutional neural network
on phishing url detection, Knowledge-Based Systems (2022) 109955.
[36] S. Venugopal, S. Y. Panale, M. Agarwal, R. Kashyap, U. Ananthanagu, Detection of malicious
urls through an ensemble of machine learning techniques, in: 2021 IEEE Asia-Pacific
Conference on Computer Science and Data Engineering (CSDE), IEEE, 2021, pp. 1–6.
[37] S. Ariyadasa, S. Fernando, S. Fernando, Combining long-term recurrent convolutional and
graph convolutional networks to detect phishing sites using url and html, IEEE Access 10
(2022) 82355–82375.
[38] T. Bilot, G. Geis, B. Hammi, Phishgnn: A phishing website detection framework using
graph neural networks, SECRYPT 2022, Lisbon (2022).
[39] S. A. Kamran, S. Sengupta, A. Tavakkoli, Semi-supervised conditional gan for simultaneous
generation and detection of phishing urls: A game theoretic perspective, arXiv preprint
arXiv:2108.01852 (2021).
[40] J. Geng, S. Li, Z. Liu, Z. Cheng, L. Fan, Effective malicious url detection by using generative
adversarial networks, in: International Conference on Web Engineering, Springer, 2022,
pp. 341–356.
[41] S.-J. Bu, S.-B. Cho, Deep character-level anomaly detection based on a convolutional
autoencoder for zero-day phishing url detection, Electronics 10 (2021) 1492.
[42] J. Yuan, G. Chen, S. Tian, X. Pei, Malicious url detection based on a parallel neural joint
model, IEEE Access 9 (2021) 9464–9472.
[43] R. Liu, Y. Lin, X. Yang, S. H. Ng, D. M. Divakaran, J. S. Dong, Inferring phishing intention
via webpage appearance and dynamics: A deep vision based approach, in: 30th USENIX
Security Symposium (USENIX Security 21), 2022, p. 1.
[44] O. Lavie, A. Shabtai, G. Katz, A transferable and automatic tuning of deep reinforcement
learning for cost effective phishing detection, arXiv preprint arXiv:2209.09033 (2022).
[45] Z. Peng, Y. He, Z. Sun, J. Ni, B. Niu, X. Deng, Crafting text adversarial examples to attack the</p>
    </sec>
    <sec id="sec-6">
      <title>A. Supplementary Plots and Tables</title>
      <p>[Appendix hyperparameter tables. Recoverable entries: vocab_size 76; filter_size 128;
dropout 0.05; learning rate 1e-3; batch size 49160; optimizer Adam; maximum training
epochs 20; Adafactor options scale_parameter, relative_step, warmup_init.]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>García</surname>
          </string-name>
          ,
          <article-title>Web content filtering</article-title>
          ,
          <source>Advances in Computers</source>
          ,
          <year>2009</year>
          . https://www.academia.edu/11471179/Web_Content_Filtering.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D. S. K.</given-names>
            <surname>Ankur Baishya</surname>
          </string-name>
          ,
          <article-title>A review on web content filtering, its technique and prospects</article-title>
          , http://www.ijcstjournal.org/volume-7/issue-3/IJCST-V7I3P5.pdf (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wardman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Warner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cranor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>An empirical analysis of phishing blacklists</article-title>
          ,
          <source>Proceedings of the Sixth Conference on Email and Anti-Spam (CEAS)</source>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Snyder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Livshits</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kapravelos</surname>
          </string-name>
          ,
          <article-title>Improving web content blocking with event-loop-turn granularity javascript signatures</article-title>
          , arXiv preprint arXiv:2005.11910 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-P.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-L.</given-names>
            <surname>Yeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-T.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Mitigate web phishing using site signatures</article-title>
          ,
          <source>in: TENCON 2010-2010 IEEE Region 10 Conference</source>
          , IEEE,
          <year>2010</year>
          , pp.
          <fpage>803</fpage>
          -
          <lpage>808</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Haruta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Yamazaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Asahina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sasase</surname>
          </string-name>
          ,
          <article-title>A novel visual similarity-based phishing detection scheme using hue information with auto updating database</article-title>
          ,
          <source>in: 2019 25th Asia-Pacific Conference on Communications (APCC)</source>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>280</fpage>
          -
          <lpage>285</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. K.</given-names>
            <surname>Saul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Savage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Voelker</surname>
          </string-name>
          ,
          <article-title>Beyond blacklists: learning to detect malicious web sites from suspicious urls</article-title>
          ,
          <source>in: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>1245</fpage>
          -
          <lpage>1254</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Saxe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Berlin</surname>
          </string-name>
          ,
          <article-title>eXpose: A character-level convolutional neural network with embeddings for detecting malicious urls, file paths and registry keys</article-title>
          ,
          <source>arXiv preprint arXiv:1702.08568</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Maneriker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Stokes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. G.</given-names>
            <surname>Lazo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Carutasu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tajaddodianfar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gururajan</surname>
          </string-name>
          ,
          <article-title>URLTran: Improving phishing url detection using transformers</article-title>
          ,
          <source>in: MILCOM 2021-2021 IEEE Military Communications Conference (MILCOM)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>197</fpage>
          -
          <lpage>204</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Apruzzese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. S.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dambra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Freeman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pierazzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A.</given-names>
            <surname>Roundy</surname>
          </string-name>
          ,
          <article-title>"real attackers don't compute gradients": Bridging the gap between adversarial ml research and practice</article-title>
          , https://doi.org/10.48550/arXiv.2212.14315 (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dhurandhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tajer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <article-title>When neural networks fail to generalize? a model sensitivity perspective</article-title>
          , https://doi.org/10.48550/arXiv.2212.00850 (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhargava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drozd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <article-title>Generalization in nli: Ways (not) to go beyond simple heuristics</article-title>
          ,
          <year>2021</year>
          . arXiv:2110.01518.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          ,
          <article-title>Bert: Pre-training of deep bidirectional transformers for language understanding</article-title>
          , arXiv preprint arXiv:1810.04805 (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ryder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subbiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neelakantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shyam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          , et al.,
          <article-title>Language models are few-shot learners</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>1877</fpage>
          -
          <lpage>1901</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <source>How many websites are there</source>
          ,
          <year>2023</year>
          . URL: https://siteefy.com/how-many-websites-are-there/.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hinton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vinyals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <article-title>Distilling the knowledge in a neural network</article-title>
          ,
          <source>arXiv preprint arXiv:1503.02531</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>I.</given-names>
            <surname>Turc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          ,
          <article-title>Well-read students learn better: The impact of student initialization on knowledge distillation</article-title>
          ,
          <source>arXiv preprint arXiv:1908.08962</source>
          (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/1908.08962.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Garera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Provos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Rubin</surname>
          </string-name>
          ,
          <article-title>A framework for detection and measurement of phishing attacks</article-title>
          ,
          <source>in: Proceedings of the 2007 ACM workshop on Recurring malcode</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>A.</given-names>
            <surname>Oshingbesan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ekoh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Okobi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Munezero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Richard</surname>
          </string-name>
          ,
          <article-title>Detection of malicious websites using machine learning techniques</article-title>
          ,
          <source>arXiv preprint arXiv:2209.09630</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lloyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Westin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Seethapathy</surname>
          </string-name>
          ,
          <article-title>Using lexical features for malicious url detection-a machine learning approach</article-title>
          ,
          <source>arXiv preprint arXiv:1910.06277</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>C.</given-names>
            <surname>Hajaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hason</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dvir</surname>
          </string-name>
          ,
          <article-title>Less is more: Robust and novel features for malicious domain detection</article-title>
          ,
          <source>Electronics</source>
          <volume>11</volume>
          (
          <year>2022</year>
          )
          <fpage>969</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abuadbba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Almashor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Camtepe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nepal</surname>
          </string-name>
          ,
          <article-title>Towards web phishing detection limitations and mitigation</article-title>
          ,
          <source>arXiv preprint arXiv:2204.00985</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>S.-J.</given-names>
            <surname>Bu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Optimized url feature selection based on genetic-algorithm-embedded deep learning for phishing website detection</article-title>
          ,
          <source>Electronics</source>
          <volume>11</volume>
          (
          <year>2022</year>
          )
          <fpage>1090</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>K.-W.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-J.</given-names>
            <surname>Bu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-B.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <article-title>Evolutionary optimization of neuro-symbolic integration for phishing url detection</article-title>
          ,
          <source>in: International Conference on Hybrid Artificial Intelligence Systems</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>88</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>H.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sahoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Hoi</surname>
          </string-name>
          ,
          <article-title>Urlnet: Learning a url representation with deep learning for malicious url detection</article-title>
          ,
          <source>arXiv preprint arXiv:1802.03162</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>F.</given-names>
            <surname>Tajaddodianfar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Stokes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gururajan</surname>
          </string-name>
          ,
          <article-title>Texception: a character/word-level deep learning model for phishing url detection</article-title>
          ,
          <source>in: ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>2857</fpage>
          -
          <lpage>2861</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27a">
        <mixed-citation>
          <article-title>deep-learning-based malicious url detection</article-title>
          ,
          <source>in: ICC 2022-IEEE International Conference on Communications</source>
          , IEEE,
          <year>2022</year>
          , pp.
          <fpage>3118</fpage>
          -
          <lpage>3123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-W.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Phishing url detection: A network-based approach robust to evasion</article-title>
          ,
          <source>arXiv preprint arXiv:2209.01454</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kashihara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Anantheswaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. C.</given-names>
            <surname>Kuznia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jagtap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Baral</surname>
          </string-name>
          ,
          <article-title>Exploring the limits of transfer learning with unified model in the cybersecurity domain</article-title>
          ,
          <source>arXiv preprint arXiv:2302.10346</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhargava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drozd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <article-title>Generalization in nli: Ways (not) to go beyond simple heuristics</article-title>
          ,
          <source>arXiv preprint arXiv:2110.01518</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Exploring the limits of transfer learning with a unified text-to-text transformer</article-title>
          ,
          <source>The Journal of Machine Learning Research</source>
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>5485</fpage>
          -
          <lpage>5551</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Debut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sanh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chaumond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Delangue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cistac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rault</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Louf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Funtowicz</surname>
          </string-name>
          , et al.,
          <article-title>Transformers: State-of-the-art natural language processing</article-title>
          ,
          <source>in: Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stern</surname>
          </string-name>
          ,
          <article-title>Adafactor: Adaptive learning rates with sublinear memory cost</article-title>
          ,
          <source>in: International Conference on Machine Learning, PMLR</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>4596</fpage>
          -
          <lpage>4604</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>