<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Recommender Systems for Renewable Energy Communities: Tailoring LLM-Powered Recommendations to User Personal Values and Literacy</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bianca Maria Deconcini</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giulia Coucourde</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Console</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Malek Anouti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giorgio Gaudio</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Visciola</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Experientia SA</institution>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Turin</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper proposes a multi-step approach to the design of recommender systems in which the adoption of LLMs is choreographed by a more traditional knowledge-based system exploiting a user model. We focus on renewable energy communities and on the task of engaging participants by leveraging their values, expertise, and available resources to provide personalized descriptions of the benefits they could achieve. This is an important step for the ultimate goals of advanced recommender systems, i.e., facilitating the progression of behaviours towards sustainable agency and adaptivity to climate and environmental challenges.</p>
      </abstract>
      <kwd-group>
        <kwd>Recommender Systems</kwd>
        <kwd>Energy Communities</kwd>
        <kwd>LLMs</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Energy communities face unique challenges in participant engagement and retention. Unlike traditional recommender systems that focus on item recommendations, energy communities require personalized approaches that align with users' values, knowledge levels, and available resources to encourage meaningful participation. Large Language Models (LLMs) are demonstrating enormous potential in many areas, including recommender systems, with capabilities that go beyond traditional methods. The survey by [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] provided a systematic analysis of the different roles LLMs can play in recommender systems, examining the ways they can be trained for this specific task and the various approaches exploited for generating recommendations. In this paper we explore innovative directions that led to the design of ReCommE, a recommender that combines "traditional" approaches to recommendation with the adoption of LLMs under the coordination of a flexible choreography tailored to each individual user. While most recommender systems focus on recommending items or content tailored to specific users' features, we concentrate on leveraging users' values and knowledge to engage them to participate in energy communities. Our goal is to provide a personalized narrative describing the benefits the user can obtain after joining the community. Our research addresses three key questions:
      </p>
      <p>How can LLMs be effectively integrated with rule-based systems to create personalized energy community recommendations?</p>
      <p>What role do personal values and literacy levels play in tailoring energy recommendations?</p>
      <p>How can iterative user feedback improve the personalization process?</p>
      <p>
        At an abstract level, our approach interleaves a rule-based classifier, based on knowledge provided by domain experts, and an LLM fine-tuned on the specific domain of application. The classifier starts by generating the initial profile of a user who is approaching the community; the LLM provides textual argumentation based on the classification. The profile is centered around the user's values and knowledge (besides sociodemographic information and information about the user's resources, which are in turn used as context by the LLM to generate the responses). The process is iterative and controlled by a choreography that considers the user's reaction to the text generated by the LLM (the user can agree or disagree with parts of the text); the classifier is then used to refine the user profile, and the LLM to generate more detailed argumentation. This work has been carried out with Experientia, a forward-thinking company involved in a project started at the beginning of 2023. It focuses on supporting energy communities - groups of people that aim to optimize energy usage by sharing energy production and consumption [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Through this synergy, we aim to explore how the integration of LLMs and behavioral models can reshape recommender systems and better support community-driven scenarios. The paper is organized as follows: Section 2 presents the background and related work, highlighting relevant contributions in LLM-based recommender systems and energy communities. Section 3 introduces the Masterpiece project [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] context. Section 4 details the ReCommE system architecture and its components. Section 5 explains the choreography of our approach. Section 6 focuses on LLM implementation, including prompting, training, and usage. Section 7 discusses our preliminary evaluation, and Section 8 concludes with future directions.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Energy Communities and User Engagement</title>
        <p>Energy communities face unique challenges in participant recruitment and ongoing
engagement. Unlike traditional consumer products, the benefits of joining energy communities
can be complex and multifaceted, spanning economic, environmental, and social dimensions.
Effective engagement requires understanding users' behavioral drivers, values, and technical
literacy to communicate these benefits in personally relevant ways.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. LLMs in Recommender Systems</title>
        <p>
          While some of the latest recommender system techniques involve LLMs, most of them use LLMs to support the work of the system itself and enhance backend operations, such as generating descriptions or extracting information from texts. However, despite these advancements, their use in recommendation systems has not yet been fully explored. The idea of this paper is to move the role of LLMs from that of simple content generators for the system to that of tools which can provide context-aware responses. Our goal is to understand if, and how, LLMs can be valuable tools for personalization purposes, able to increase user engagement and satisfaction. The first applications of LLMs in recommender systems mainly focused on content generation. The work of [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] demonstrates the use of LLMs to generate item descriptions, showing that these systems give results similar to those obtained by web-scraping techniques. Similarly, [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] explore how LLMs can produce explanations that users find comparable to, or even better than, the baseline ones. Indeed, users in this study perceived LLM-generated recommendations as more effective and efficient, helping them to decide faster; this is also due to the LLMs' ability to provide detailed explanations. Several studies explored the integration of LLMs for enhancing personalization and user experience. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] introduce LLM-Rec, a framework which leverages the power of LLMs specifically for personalization goals. By employing prompting techniques, this approach aims to generate better quality recommendations without the need for extensive domain-specific training or data. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] emphasize the emerging role of LLMs in reshaping traditional recommendation methodologies. The study shows that contextualizing recommendations can significantly improve their relevance and effectiveness. However, it also suggests a gap in the incorporation of more sophisticated real-time methods, highlighting the need for systems that continuously adapt to changing user preferences with the help of contextual information. Another interesting vision is offered by [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and their evaluation metric of behavior alignment. Traditionally, the evaluation of recommendation systems focuses on metrics such as accuracy or novelty. However, these approaches do not always capture how recommendation systems adapt to user behaviors and preferences.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. User Modeling for Personalization</title>
        <p>Effective personalization relies on robust user/behavioral modeling. Traditional models often focus on demographic information and behavioral data related to the users' digital footprint, but these may be insufficient for complex domains like energy communities. Our approach expands the model to include intangible aspects such as personal values [X], domain literacy [Y], and resource availability [Z]. These elements are particularly relevant for energy community engagement, where decisions are influenced by a combination of practical constraints, knowledge levels, and personal value systems.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Research Gap</title>
        <p>Despite advances in both LLM-based recommender systems and energy community research, several gaps remain:</p>
        <p>Most LLM applications in recommender systems focus on item recommendations rather than complex service engagement.</p>
        <p>Few systems incorporate iterative user feedback to refine recommendations.</p>
        <p>The integration of user values and domain literacy levels in recommendation generation remains underexplored.</p>
        <p>There is limited research on how to effectively combine rule-based systems with generative LLMs.</p>
        <p>Our work addresses these gaps through a choreographed approach that leverages both traditional knowledge-based systems and fine-tuned LLMs.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. The Masterpiece Project Context</title>
      <p>Masterpiece is a project focusing on digital tools for supporting energy communities. The first part of the project focused on studying the domain, exploiting a number of case studies in different countries. In this way, a detailed picture of the instruments, rules, and roles of different types of stakeholders and participants was created. User studies supported the construction of user archetypes, characterized along multiple dimensions such as individual and societal values, levels of expertise, and available resources (including household, types of appliances, etc.). This behavioral model now serves as the foundation for this work. The user studies identified several key archetypes within energy communities, including:</p>
      <p>Sustainability Champions: Primarily motivated by environmental values.</p>
      <p>Financial Optimizers: Focused on energy savings and financial benefits.</p>
      <p>Community Builders: Driven by social cohesion and local development.</p>
      <p>Skeptical Consumers: Mainly interested in convenience of supply.</p>
      <p>Each archetype is characterized by a unique combination of values, knowledge levels, and available resources, which inform our personalization approach.</p>
    </sec>
    <sec id="sec-4">
      <title>4. ReCommE System: Architecture and Components</title>
      <p>The ReCommE system integrates multiple components to deliver personalized recommendations for energy community participation.</p>
      <sec id="sec-4-1">
        <title>4.1. System Overview</title>
        <p>ReCommE combines a rule-based classifier with a fine-tuned LLM to generate personalized
recommendations. The system follows a choreographed workflow that iteratively refines the
user model based on feedback, allowing for increasingly tailored recommendations.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. User/Behavioral Model</title>
        <p>The core of the ReCommE system architecture is a comprehensive user/behavioral model that captures three key dimensions:</p>
        <p>Values: Individual priorities and drivers of decision-making (e.g., environmental sustainability, economic benefits, community well-being, convenience services).</p>
        <p>Literacy/Expertise: Technical knowledge and familiarity with energy concepts.</p>
        <p>Resources: Available assets and infrastructure (e.g., home ownership, renewable
installations, smart devices).</p>
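        <p>A minimal sketch of how these three dimensions could be represented as a data structure; all field names and example values are illustrative assumptions, not the project's actual schema:</p>

```python
from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Illustrative three-dimensional user/behavioral model."""
    values: dict[str, float] = field(default_factory=dict)  # e.g. {"environmental": 0.9}
    literacy: str = "unknown"                               # e.g. "low" / "moderate" / "high"
    resources: set[str] = field(default_factory=set)        # e.g. {"home_ownership"}


# Example profile for a user with strong environmental values.
profile = UserModel(
    values={"environmental": 0.9, "economic": 0.4},
    literacy="moderate",
    resources={"home_ownership", "smart_meter"},
)
```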
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Rule-Based Classifier</title>
        <p>The classifier component employs domain expertise to categorize users into meaningful archetypes based on their responses to targeted questions. Initially, the user's profile is randomly selected from the possible profiles. The user is then prompted with questions for each category (values, literacy, and resources). If the user's responses do not align with the initially selected profile, the questions are dynamically adjusted. This process ensures that the user's profile evolves in response to their answers, leading to a more accurate and personalized classification. This classification provides the foundation for the LLM-generated content, ensuring domain-specific relevance.</p>
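        <p>As an illustration only, the adjust-on-mismatch loop described above could look like the following sketch; the archetype names come from Section 3, but the keyword rules are invented for the example:</p>

```python
# Toy rule base: an answer keyword and the archetype it supports.
RULES = {
    "environment": "Sustainability Champion",
    "savings": "Financial Optimizer",
    "neighbours": "Community Builder",
    "convenience": "Skeptical Consumer",
}


def classify(answers, initial_profile="Sustainability Champion"):
    """Revise a provisional archetype whenever an answer contradicts it."""
    profile = initial_profile
    for answer in answers:
        supported = RULES.get(answer)
        if supported and supported != profile:
            profile = supported  # mismatch: switch hypothesis, adjust later questions
    return profile
```

For instance, `classify(["savings", "savings"])` yields "Financial Optimizer" even when the randomly chosen initial profile was different.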
      </sec>
      <sec id="sec-4-4">
        <title>4.4. LLM Component</title>
        <p>The LLM component is responsible for generating natural language descriptions and
recommendations based on the user profile. Through fine-tuning and structured prompting, the
LLM produces personalized narratives that highlight the benefits of community participation
most relevant to the specific user.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. The Choreography</title>
      <p>In this section we sketch the workflow of the choreography that interleaves traditional rule-based systems with LLMs (see Figure 1).</p>
      <p>Initial Profiling: Whenever a new participant approaches the community, ReCommE is activated with the initial goal of profiling the user. A small set of questions is presented to the user and a first classification into one of the archetypes is performed. In this first attempt, a user model is built with a preliminary rough estimate of the user's values, knowledge (expertise), and resources.</p>
      <p>Profile Validation: This preliminary model is passed to the LLM, which is asked to generate a first abstract description of the hypothesized user profile. The description is presented to the user, who is then asked to highlight specific points where s/he agrees or disagrees.</p>
      <p>Model Refinement: Control is passed back to the rule system, which activates a second group of questions depending on the user's feedback. In this way the preliminary model is either disconfirmed or refined. In the former case the process is restarted; in the latter, the LLM is activated a second time.</p>
      <p>Personalized Recommendation: In this second case, the LLM is asked to provide a first description of the benefits the user can obtain by joining the community. The description is personalized given the user model, specifically along the dimensions of values and available resources, also taking into account the expertise to tailor the level of detail and technicality. We also personalize the tone of the explanation, given the user model.</p>
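      <p>The four steps above can be sketched as a single control loop; the four callables are placeholders for the real components, not actual project code:</p>

```python
def recomme_choreography(ask, classify, llm, get_feedback):
    """One pass of the choreography: profile -> validate -> refine -> recommend.

    ask(stage) returns the user's answers to a question set; classify(answers)
    builds a user model; llm(task, model) generates text; get_feedback(text)
    collects the user's agreements/disagreements.
    """
    answers = ask("initial")                  # 1. initial profiling questions
    model = classify(answers)                 #    provisional user model
    summary = llm("describe_profile", model)  # 2. profile validation text
    feedback = get_feedback(summary)          #    user agrees/disagrees per point
    if feedback["disagreements"]:
        answers += ask("follow_up")           # 3. model refinement
        model = classify(answers)
    return llm("describe_benefits", model)    # 4. personalized recommendation
```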
      <sec id="sec-5-1">
        <title>5.1. Example Walkthrough</title>
        <p>To illustrate the choreography, consider a user initially classified as a "Sustainability Champion":</p>
        <p>The system presents an initial profile: "You seem to prioritize environmental sustainability and have moderate knowledge about renewable energy..."</p>
        <p>The user confirms their environmental values but disagrees with the knowledge assessment.</p>
        <p>The system refines the model, adjusting the expertise level downward.</p>
        <p>The LLM generates a new recommendation emphasizing environmental benefits with
less technical terminology: "By joining this energy community, you'll help reduce
carbon emissions by approximately X tons per year...".</p>
        <p>In this version of ReCommE, the choreography consists of a single cycle. In the future we plan to add further steps that interleave progressive refinement of the user model by the classifier with more focused and detailed argumentation generated by the LLM.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. LLM Implementation</title>
      <sec id="sec-6-1">
        <title>6.1. LLM Prompting</title>
        <p>
          The textual input given to an LLM with the purpose of directing the output's generation is known as a prompt, and prompt engineering is the process of designing and constructing input to elicit desired responses. Advancements in the field of generative AI are remarkable, as demonstrated by the increasing complexity of models, improved training techniques, and the expansion of application possibilities. These reasons, as pointed out by [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], underscore the critical role of prompt engineering in maximizing the precision and usefulness of these models, ensuring they can successfully meet a range of changing user requirements. This section describes our strategy for taking advantage of the LLMs' generation capability by employing prompt engineering techniques to lead the model to produce a basic description of the user profile and the benefits of joining the community. According to the workflow, the goal is to obtain the abstract description by providing the model with details about the user's values, resources, and knowledge (expertise). To obtain the desired outcome, a role-playing prompting approach - a method of influencing the language model's behavior by giving it a specific role within the interaction - was employed, as suggested by [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. The purpose of the experimental framework was to assess the performance of the models Llama-3-8B-Instruct [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] and Zephyr-7b-beta [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] under different conditions. The experiments were performed on both pre-trained and fine-tuned versions of the models. We used a role-prompting strategy for each configuration, employing both the zero-shot and few-shot paradigms. In the zero-shot setting, the models were prompted without any task-specific examples or context; only the role was defined. In the few-shot setting, on the other hand, the models received a limited number of examples to simulate a more informed scenario, as illustrated by [13]. In both cases, adding context in the form of background knowledge helps the model match its responses to the user's intent, which enhances the quality of the output produced. Nevertheless, even with this advancement, the generated content frequently falls short of the level of domain specificity needed for reliable and accurate outcomes, remaining too broad with regard to environmental sustainability and the energy field. This limitation indicates a weakness in the model's ability to fully use context to tailor responses to a given profile type and to adjust them to particular knowledge domains. However, adding context is a good starting point for improving generation, providing a basis for further refinement and domain adaptation to produce more accurate and contextually relevant results.
        </p>
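        <p>The zero-shot versus few-shot role prompting described above can be illustrated with a small helper that assembles a chat-format message list; the wording and message schema are our illustrative assumptions:</p>

```python
def build_prompt(role, profile, examples=()):
    """Assemble a role-playing chat prompt: zero-shot when `examples` is
    empty, few-shot when demonstration pairs are supplied."""
    messages = [{"role": "system", "content": role}]
    for user_text, assistant_text in examples:  # few-shot demonstrations
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user",
                     "content": f"Describe this user profile: {profile}"})
    return messages
```

In the zero-shot case the list contains only the role definition and the task; each few-shot example adds one user/assistant exchange before the final request.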
      </sec>
      <sec id="sec-6-2">
        <title>6.2. LLM Training</title>
        <p>Our training is based on project documents that describe user archetypes along three main dimensions: values, literacy, and resources. These documents, based on user studies conducted by Experientia, are used to train the model so that it learns the characteristics of the user profiles and the specific context. This allows the model to generate responses driven by domain-relevant user studies, rather than relying solely on the general knowledge of the pre-trained model. The initial objective was to generate comprehensive descriptions for each archetype that included the core values and domain-specific information related to that archetype. This approach aims to offer depictions that not only define the archetypes but also contextualize their roles, traits, and applications within the energy domain. To achieve this goal, the training data was manually converted into a chat template format tailored to the model's particular needs. The preprocessing produced 1,000 instruction-response pairs, where each pair represents a conversational exchange between a user and an assistant, carefully extracted from official documents. This step was crucial to enhance data relevance, and with this approach we trained the model to generate responses that are highly adapted to the target domain. For the preliminary testing phase, we chose to fine-tune Llama-3-8B-Instruct. The model was fine-tuned applying selective Parameter-Efficient Fine-Tuning (PEFT), as explained by [14], with Low-Rank Adaptation (LoRA) [15]. The computing environment consisted of two NVIDIA A40 GPUs, selected for their large memory and efficient parallel processing capabilities. Multiple runs were performed, varying the hyperparameter configurations listed in Table 1. The target modules selected for LoRA fine-tuning include the key self-attention components (query, key, value, and output projections) and the MLP layers (gate, up, and down projections), ensuring effective adaptation of both attention and feed-forward transformations. Table 2 compares the best and worst runs, selected based on evaluation loss. The chosen loss function is the cross-entropy loss, which is standard for causal language modeling. By minimizing cross-entropy loss, the model learns to assign higher probabilities to correct next-token predictions, improving its ability to generate coherent and contextually appropriate sequences. The best run (run_7) outperforms the worst (run_18), likely due to its higher learning rate (2.5e-5 vs. 1.5e-5) and constant scheduler. This setup yields lower training loss (1.96 vs. 3.02) and evaluation loss (1.71 vs. 2.56), suggesting more effective weight updates. Additionally, LoRA r and LoRA alpha are higher in the best run (16 and 32 vs. 8 and 16), following the consistent setting where alpha is always set to twice the value of r, which may contribute to better adaptation of the model parameters.</p>
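        <p>For concreteness, the LoRA setup of the best run could be expressed with the Hugging Face peft library roughly as follows; this is a sketch, where the hyperparameters come from the text above while the module names and remaining arguments are standard Llama conventions we assume here:</p>

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hyperparameters of the best run (run_7): r = 16, alpha = 2 * r = 32.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # self-attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = get_peft_model(base, lora_config)  # only the LoRA adapters are trainable
```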
      </sec>
      <sec id="sec-6-3">
        <title>6.3. LLM Usage</title>
        <p>To produce different generations, the same prompt structure was used, modifying only the content of the variables {values} and {profile}, which depend on the classifier's output. The profiles and values obtained from the rule-based system's initial estimation are passed to the system prompt in this initial stage without additional explanation, because they were integrated through fine-tuning. The user prompt incorporates the resulting profile and is not intended as a prompt directly provided by a user, but as a starting point for the generation process. An example for the archetype "Sustainability Champion" is shown in Figure 2. This process enabled the generation of multiple descriptions that could be compared while keeping the structure of the system prompt and user prompt fixed. From these tests we concluded that a minimal, structured system prompt that defines the role and directs the model in the generation was effective.</p>
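        <p>A sketch of how the fixed prompt structure with the {values} and {profile} variables might be filled from the classifier's output; the template wording is invented for illustration, while the actual prompts are the ones shown in Figure 2:</p>

```python
# Illustrative fixed templates; only {values} and {profile} vary per user.
SYSTEM_PROMPT = ("You are an assistant for a renewable energy community. "
                 "Address a user whose profile is '{profile}' and whose core "
                 "values are {values}.")

USER_PROMPT = ("Describe the benefits this {profile} would obtain by "
               "joining the community.")


def render_prompts(profile, values):
    """Fill the fixed templates with the classifier's output."""
    system = SYSTEM_PROMPT.format(profile=profile, values=", ".join(values))
    user = USER_PROMPT.format(profile=profile)
    return system, user


system, user = render_prompts("Sustainability Champion",
                              ["environmental sustainability"])
```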
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Preliminary Evaluation</title>
      <p>The evaluation, at present, is preliminary and was conducted by the Experientia team that carried out the user studies. Different generations were compared, and the best were selected based on their coherence with the goal, the prompt, and the domain. This process made it possible to determine which model was the most suitable: according to expert evaluations, the fine-tuned version of Llama-3-8B-Instruct generated the most relevant outputs.</p>
      <sec id="sec-7-1">
        <title>7.1. Evaluation Criteria</title>
        <p>Our evaluation focused on several key dimensions:</p>
        <p>Relevance: How well the generated content aligned with the user's archetype.</p>
        <p>Accuracy: Correctness of domain-specific information.</p>
        <p>Personalization: Adaptation to the user's values, literacy, and resources.</p>
        <p>Readability: Clarity and accessibility of the generated text.</p>
        <p>Persuasiveness: Potential effectiveness in encouraging community participation.</p>
        <p>Preliminary results indicate that:</p>
        <p>Fine-tuned models consistently outperformed pre-trained models in domain relevance.</p>
        <p>Role-prompting significantly improved the personalization of outputs.</p>
        <p>Few-shot prompts produced more accurate technical information than zero-shot approaches.</p>
        <p>User values were more effectively incorporated than literacy levels in the generated outputs.</p>
        <p>In this phase, the evaluation is intended to collect initial feedback to understand the potential of the system and areas for improvement; it does not represent a final objective assessment of the system's performance, which we plan to conduct in the near future.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusions and Future Work</title>
      <p>In this work, we explored the integration of large language models (LLMs) into recommendation
systems, with a particular focus on dynamic personalization and user-system interaction. Our
key contributions include:</p>
      <p>A novel choreographed approach combining rule-based systems and LLMs.</p>
      <p>A multi-dimensional user model for energy community engagement.</p>
      <p>An iterative feedback mechanism that refines recommendations based on user input.</p>
      <p>A domain-specific implementation for renewable energy communities.</p>
      <p>Several promising directions for future work have emerged. Since a key aspect of our proposal is the use of iterative interaction, where users are asked to express their agreement or disagreement with the description provided by the system, an open question remains whether binary responses are sufficient or whether it is more useful to allow richer feedback for a more detailed refinement. Moreover, an interesting future development concerns how past conversations and interactions could be reintegrated into the LLM tuning process to update the system's knowledge. This approach would allow the system to adapt and enhance its ability to provide personalized recommendations over time. At present, the interaction between the LLM and the recommender system takes place in two main phases, but this exchange may become more frequent and dynamic, involving the LLM more in the recommendation process. Lastly, as mentioned before, our plan is to first refine the system and then conduct a quantitative analysis based on a large-scale evaluation with actual energy community participants to measure engagement effectiveness and behavioral change.</p>
      <p>[13] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei, Language models are few-shot learners, ArXiv abs/2005.14165 (2020). URL: https://api.semanticscholar.org/CorpusID:218971783.</p>
      <p>[14] Z. Han, C. Gao, J. Liu, J. Zhang, S. Q. Zhang, Parameter-efficient fine-tuning for large models: A comprehensive survey, ArXiv abs/2403.14608 (2024). URL: https://api.semanticscholar.org/CorpusID:268553763.</p>
      <p>[15] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, W. Chen, LoRA: Low-rank adaptation of large language models, 2021. URL: https://arxiv.org/abs/2106.09685. arXiv:2106.09685.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Niu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          <article-title>Towards next-generation llm-based recommender systems: A survey and</article-title>
          beyond,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2410.19744. arXiv:
          <volume>2410</volume>
          .
          <fpage>19744</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lowitzsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hoicka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>van Tulder</surname>
          </string-name>
          ,
          <article-title>Renewable energy communities under the 2019 European Clean Energy Package - governance model for the energy clusters of the future?</article-title>
          ,
          <source>Renewable and Sustainable Energy Reviews</source>
          <volume>122</volume>
          (
          <year>2020</year>
          )
          <fpage>109489</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1364032119306975. doi:10.1016/j.rser.2019.109489.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <article-title>Multidisciplinary Approaches and Software Technologies for Engagement, Recruitment and Participation in Innovative Energy Communities in Europe</article-title>
          ,
          <source>Technical Report, Grant agreement no. 101096836</source>
          ,
          <year>2024</year>
          . URL: https://masterpiece-horizon.eu/.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Acharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Onoe</surname>
          </string-name>
          ,
          <article-title>LLM based generation of item-description for recommendation system</article-title>
          ,
          <source>in: Proceedings of the 17th ACM Conference on Recommender Systems, RecSys '23</source>
          , ACM, New York, NY, USA,
          <year>2023</year>
          , pp.
          <fpage>1204</fpage>
          -
          <lpage>1207</lpage>
          . doi:10.1145/3604915.3610647
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lubos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. N. T.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Felfernig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Polat Erdeniz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.-M.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>LLM-generated explanations for recommender systems</article-title>
          ,
          <source>in: Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, UMAP Adjunct '24</source>
          , ACM, New York, NY, USA,
          <year>2024</year>
          , pp.
          <fpage>276</fpage>
          -
          <lpage>285</lpage>
          . doi:10.1145/3631700.3665185.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Leung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <article-title>LLM-Rec: Personalized recommendation via prompting large language models</article-title>
          , in: K. Duh, H. Gomez, S. Bethard (Eds.),
          <source>Findings of the Association for Computational Linguistics: NAACL 2024</source>
          , Association for Computational Linguistics, Mexico City, Mexico,
          <year>2024</year>
          , pp.
          <fpage>583</fpage>
          -
          <lpage>612</lpage>
          . URL: https://aclanthology.org/2024.findings-naacl.39. doi:10.18653/v1/2024.findings-naacl.39
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Mei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Recommender Systems in the Era of Large Language Models (LLMs)</article-title>
          ,
          <source>IEEE Transactions on Knowledge &amp; Data Engineering</source>
          <volume>36</volume>
          (
          <year>2024</year>
          )
          <fpage>6889</fpage>
          -
          <lpage>6907</lpage>
          . doi:10.1109/TKDE.2024.3392335
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <article-title>Behavior alignment: A new perspective of evaluating LLM-based conversational recommendation systems</article-title>
          ,
          <source>in: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          , ACM, New York, NY, USA,
          <year>2024</year>
          , pp.
          <fpage>2286</fpage>
          -
          <lpage>2290</lpage>
          . doi:10.1145/3626772.3657924.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Langrené</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Unleashing the potential of prompt engineering in large language models: a comprehensive review</article-title>
          ,
          <source>ArXiv abs/2310.14735</source>
          (
          <year>2023</year>
          ). URL: https://api.semanticscholar.org/CorpusID:264426395
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Shou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <article-title>Large language models are diverse role-players for summarization evaluation</article-title>
          ,
          <source>in: Natural Language Processing and Chinese Computing</source>
          ,
          <year>2023</year>
          . URL: https://api.semanticscholar.org/CorpusID:257767249.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          AI@Meta,
          <article-title>Llama 3 model card</article-title>
          , https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Tunstall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Beeching</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lambert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Rajani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rasul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Belkada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>von Werra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fourrier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Habib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sarrazin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sanseviero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Rush</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <article-title>Zephyr: Direct distillation of LM alignment</article-title>
          ,
          <year>2023</year>
          . arXiv:2310.16944.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>