<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cooking On The Margins: Probabilistic Soft Logics for Recommending and Adapting Recipes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Johnathan Pagnutti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jim Whitehead</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of California, Santa Cruz</institution>
          ,
          <addr-line>1156 High St, Santa Cruz, CA 95064</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <fpage>269</fpage>
      <lpage>276</lpage>
      <abstract>
        <p>This paper introduces InclinedChef, an entry in the mixology and open challenges of the Computer Cooking Competition. InclinedChef uses Probabilistic Soft Logics (PSLs), a logic formalism that relaxes boolean logic operations to probabilities. PSLs have seen success in recommendation engines, but have yet to be applied to the Case Based Reasoning (CBR) domain. They show considerable promise in their ability to handle contradictory information, multiple data sources, and shifting user constraints.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>This paper will introduce InclinedChef, a system that uses PSLs for recommending
and adapting recipes. We will provide a short introduction to PSLs, show how to
encode ontology information as PSL atoms and predicates, and show how to use a PSL solver
to find highly probable results for a query and highly probable adaptations of
those results. We will conclude by looking at potential future applications and
improvements.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        The use of first order logic to represent Case Based Reasoning (CBR) tasks
is not novel. Delgrande added the ‘default’ operator to a classical first-order
logic to help handle common-sense reasoning tasks[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. CHEF[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], something of
the grandaddy of CBR cooking systems, had rules that could be represented as
first-order logic statements (both CHEF’s rule-based simulations of cooking and
its adaptation reasoning could be expressed as implications). Nearly every CBR
system in the computer cooking competition has used rule-based formalisms to
some degree, with implication (a implies b) being a core component.
      </p>
      <p>
        Other computer cooking systems have used bottom-up learning approaches,
which at their core, are inferring likely ingredient substitutions from datasets of
recipes. PIERRE[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] uses a neural net to learn a regression model to find highly
rated combinations of ingredients. It then uses the regression model along with
a genetic algorithm to come up with new recipes.
      </p>
      <p>
        PSLs express facts, relationships and implications using first-order logic, but
with the twist that all facts and relationships have an associated probability of
being true, and implications have associated weights. PSL programs are
internally converted into hinge-loss Markov random fields, which are evaluated as a
kind of convex optimization problem. PSLs have been used to combine several
evaluation metrics for recommending restaurants and songs[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and for learning
weights on implication rules from data for textual sentiment[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>Probabilistic Soft Logics</title>
      <p>Probabilistic soft logics are composed of two parts: a set of predicates that
describe potential facts about the world, and a set of logical implications that relate
these predicates together. PSLs differ from other logic-based programming
paradigms in that:
1. Logical predicates are soft. Instead of a predicate returning whether an atom is true
or false, it returns the probability that the atom is true.
2. PSLs require that all potential atoms be represented, even those that are
impossible. PSL formulations of problems therefore tend to be space
inefficient.
3. Going along with soft predicates, implications have weights. These weights
can be learned from ground truth, or they can be used to infer the
probability of new predicates given a current set. InclinedChef uses
hand-tuned implication weights to infer the probability of new predicates.
For more detail on how InclinedChef works, see System Overview (section 4).</p>
      <p>A PSL solver converts weighted implication rules and predicates into a
Hinge-Loss Markov Random Field (HL-MRF). To discuss how, we first need to look at
a PSL predicate.</p>
      <p>Friends("Jon", "Delilah") = 0.7</p>
      <sec id="sec-3-1">
        <title>Friends("Delilah", "Philip")</title>
        <p>The first atom states that Jon and Delilah are friends (they are arguments to
the Friends predicate) with a probability of 70%. The second line states that
Delilah and Philip are friends, but we do not yet know how likely that atom
is. Any atoms that are given a probability before the solver runs are treated
as hard constraints: their probability is held fixed while the probabilities
of the remaining atoms are inferred. This allows PSLs to pull in outside
knowledge. Using an example from InclinedChef:</p>
        <p>IngGeneralizes("Game", "Meat") = 0.748</p>
        <p>This states that we can generalize “game” as a “meat” with fairly high
confidence. This value comes from the wikiTaaable ontology (namely, its
Generalization_costs). More information is in System Overview (section 4).</p>
        <p>Because we’d like to discuss atoms without needing to write out their
arguments, we’ll use the convention Friends/2, which states that the Friends predicate
takes two atoms as arguments. Let’s look at a PSL implication rule. For example:
3 : Friends(X, Y) ∧ Friends(Y, Z) ⇒ Friends(X, Z)</p>
        <p>This rule states that friendship is transitive: if Jon is friends with Delilah and
Delilah is friends with Philip, it’s likely that Jon is also friends with Philip. We
also have a weight on this rule, which is how important it is in relation to other
rules. Rules can also be hard constraints that must be upheld when inferring
the probability of unknown atoms; hard constraints follow a slightly different
syntax. Using another example from an older version of InclinedChef:</p>
        <p>MustContain(I) ⇒ TargetIngredients(I).</p>
        <p>This rule states that every MustContain/1 atom must also be a
TargetIngredients/1 atom. InclinedChef uses these constraints to ensure that certain
ingredients are present or absent in the adapted recipes, with more
information in System Overview (section 4).</p>
        <p>Given a set of predicates and implications, the PSL solver relaxes the boolean
logic operators to continuous variables. For example, to the PSL solver, a ⇒ b
is relaxed to max(a − b, 0) and a ∧ b is relaxed to max(a + b − 1, 0). This converts
the logic operators and predicates into a set of continuous-valued variables, which
allows the solver to define an HL-MRF over the set of unknown Y variables
conditioned on the known X variables, such that the probability of Y given
X is related to a hinge-loss potential function, which the solver minimizes using
convex optimization. Due to the nature of the problem, this is parallelizable and
decently fast.</p>
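        <p>As a concrete sketch of these relaxations, the softened operators and a rule's hinge-loss distance to satisfaction can be written in a few lines of Python (our own illustrative code; the function names are not part of any PSL implementation):</p>

```python
# Lukasiewicz-style relaxations used by the PSL solver (sketch).
# Truth values are reals in [0, 1] rather than booleans.

def soft_and(a, b):
    # a AND b relaxed to max(a + b - 1, 0)
    return max(a + b - 1.0, 0.0)

def soft_or(a, b):
    # a OR b relaxed to min(a + b, 1)
    return min(a + b, 1.0)

def soft_not(a):
    # NOT a relaxed to 1 - a
    return 1.0 - a

def distance_to_satisfaction(body, head):
    # For a rule body => head, the hinge-loss potential is
    # max(body - head, 0): zero when the head is at least as
    # true as the body, positive otherwise.
    return max(body - head, 0.0)

# Friends("Jon","Delilah") = 0.7 and Friends("Delilah","Philip") = 0.8
body = soft_and(0.7, 0.8)
# If Friends("Jon","Philip") were 0.2, the transitivity rule would be
# penalized in proportion to its weight:
penalty = distance_to_satisfaction(body, 0.2)
```
        <p>The solver's job is then to choose the unknown truth values so that the weighted sum of such penalties is minimized.</p>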
      </sec>
    </sec>
    <sec id="sec-4">
      <title>System Overview</title>
      <p>The system's work can be divided into two parts: offline conversion, and online
recommendation and adaptation. Because PSL programs need to be in their own
syntax, information from datasets and ontologies needs to be converted into atoms
and rules. After this step, the solver can use the rules to perform recommendation
and adaptation.
The offline parts of InclinedChef are performed only once, before any
user queries are ever fielded by the system. The first few of these convert recipes
into sets of ingredient atoms, as shown in figure 4.1.</p>
      <p>We specify that the IngInRecipe/2 predicate is closed, which means the solver
should treat all predicates of that type as observed, true data, and furthermore,
should not attempt to infer probabilities on them. IngInRecipe/2 simply encodes
which recipes contain which ingredients. We discuss, under Future Work, how
PSL might consider the amounts of each ingredient in its solving steps. We also
use a set of scripts that converts information from the RDF snapshot of the
wikiTaaable ontology to PSL atoms, for the IngGeneralizes/2 predicate. A few
examples are:</p>
      <p>IngGeneralizes(Pork_side_cuts, Pork) = 0.968
IngGeneralizes(Red_martini, Martini) = 0.500
IngGeneralizes(Fresh_bean, Vegetable) = 0.422</p>
      <p>These atoms are calculated by subtracting the Generalization_cost of an
ingredient or ingredient class from 1. These Generalization_costs are retrieved from
the RDF snapshot. However, not all ingredients have Generalization_costs in
the wikiTaaable ontology. For those that do not, we use the subclassOf feature
and assign a probability of only 0.5 to this being a correct way to generalize
about an ingredient.</p>
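      <p>A minimal sketch of this conversion step, assuming the Generalization_costs have already been pulled out of the RDF snapshot into a lookup table (the data and function name below are illustrative, not the actual wikiTaaable contents or InclinedChef code):</p>

```python
# Probability of an IngGeneralizes atom: 1 minus the ontology's
# Generalization_cost; pairs related only by subclassOf fall back to 0.5.
generalization_costs = {
    ("Pork_side_cuts", "Pork"): 0.032,
    ("Fresh_bean", "Vegetable"): 0.578,
}
subclass_of = {("Red_martini", "Martini")}

def ing_generalizes(child, parent):
    """Return the atom's probability, or None if the pair is unrelated."""
    if (child, parent) in generalization_costs:
        return 1.0 - generalization_costs[(child, parent)]
    if (child, parent) in subclass_of:
        return 0.5  # no cost available; weakly trust subclassOf
    return None
```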
      <p>The result is that the lower the Generalization_cost, the more likely the solver
considers a generalization. Greater generalization costs (representing
ingredient classes further away from each other) are less and less likely. The
RDF snapshot only provides costs for ingredients next to each other, but it can
be useful for retrieval and adaptation to consider generalizations further away,
such as IngGeneralizes(Rye_whiskey, Alcohol) or IngGeneralizes(Veal_leg_cut,
Meat). We append these case generalizations without providing probabilities,
and use an implication rule to let the solver figure out how likely each of these
cases should be.</p>
      <p>After a user query is parsed, it is turned into two sets of atoms for the predicates
MustContain/1 and MustNotContain/1. These contain the ingredients the user
specified they want or do not want in a resultant recipe. For retrieval, these are
considered soft constraints, as we can adapt any ill-fitting recipes, but for
adaptation, they are hard constraints. In addition, to fit the cocktail challenge, we
added the reduced list of ingredients as a set of MustContain/1 atoms.</p>
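      <p>The query-handling step above can be sketched as follows (the parse format and function name are our own illustration, not InclinedChef's actual code):</p>

```python
# Turn a parsed user query into MustContain/1 and MustNotContain/1 atom
# sets for the solver. For retrieval these act as soft constraints; for
# adaptation they are enforced as hard constraints.
def query_to_atoms(wanted, unwanted, challenge_ingredients=()):
    must_contain = set(wanted)
    # e.g. the cocktail challenge's reduced ingredient list is appended
    # as extra MustContain/1 atoms:
    must_contain.update(challenge_ingredients)
    must_not_contain = set(unwanted)
    return must_contain, must_not_contain

mc, mnc = query_to_atoms(["Gin", "Lime_juice"], ["Vodka"], ["Soda_water"])
```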
      <p>For retrieval, we’d like to consider each recipe in terms of the classes that its
ingredients generalize to. We capture this with a few rules:</p>
      <p>IngInRecipe(R, I) ∧ IngGeneralizes(I, C) ⇒ RecipeClasses(R, C)</p>
      <p>RecipeClasses(R, C1) ∧ IngGeneralizes(C1, C2) ⇒ RecipeClasses(R, C2)</p>
      <p>MustContain(I) ∧ IngInRecipe(R, I) ⇒ RecommendTarget(R)</p>
      <p>MustContain(I) ∧ IngGeneralizes(I, C) ∧ RecipeClasses(R, C) ⇒ RecommendTarget(R)</p>
      <p>MustNotContain(I) ∧ IngInRecipe(R, I) ⇒ ¬RecommendTarget(R)</p>
      <p>MustNotContain(I) ∧ IngGeneralizes(I, C) ∧ RecipeClasses(R, C) ⇒ ¬RecommendTarget(R)</p>
      <p>The first two rules let us consider a recipe based on the ingredient classes
it is composed of. The remaining rules use that information. We want to
make recipes that contain ingredients in a user’s query more likely. Furthermore,
we also want recipes whose ingredients generalize to the same classes as an
ingredient the user wants to be more likely. The inverse goes for ingredients
that a user does not want.</p>
      <p>For adapting a retrieved case, we need to be a little creative. Because PSLs
require all atoms implied by their predicates, we can’t simply derive a probability
for every pair of potential ingredients to swap. Considering only the 155 ingredients
used in the CCC cocktail case library, we would need to infer probabilities on
155 × 154 Swap/2 atoms.</p>
      <p>To get around this bottleneck, we follow two rules of thumb: it is best to
perform the bare minimum number of swaps needed to satisfy a query, and we only
need to swap in or out ingredients that are part of the user’s query.</p>
      <p>Therefore, we inspect a user’s query and generate the atoms in the Swap/2
predicate for each query. Each atom in Swap/2 contains an ingredient that the
user has specified they wish to have or not have in the resultant recipe. We
perform swaps with the following rules:</p>
      <p>MustContain(I1) ∧ RecommendTarget(R) ∧ IngGeneralizes(I1, C) ∧
IngGeneralizes(I2, C) ∧ IngInRecipe(R, I2) ⇒ Swap(I1, I2)</p>
      <p>MustNotContain(I1) ∧ RecommendTarget(R) ∧ IngGeneralizes(I1, C) ∧
IngGeneralizes(I2, C) ∧ IngInRecipe(R, I2) ⇒ Swap(I2, I1)</p>
      <p>Swaps are always read the same way: the first atom in the predicate replaces
the second. Because the Swap/2 predicate is built on the fly, we don’t need to
specify a hard constraint that an element must be present. Probabilities are only
inferred on the atoms present in the Swap/2 data files, and all of those atoms
are related to a user’s query. We then build the answer XML file based on the
RecommendTarget/1 and Swap/2 predicates, as shown in figure 4.2.</p>
      <p>The recommendation rules give us a set of probabilities on the
RecommendTarget/1 atoms; however, unless the user was very specific with a query,
several atoms are equally likely to fit. We take the set of the most probable
atoms and choose one among them. We then retrieve that case from the CCC
case library and use it as the result of the retrieval half.</p>
      <p>The adaptation rules give us a set of probabilities on potential swaps. We
scan the Swap/2 predicate for the highest-probability swaps that involve both
the user’s query and the retrieved recipe. We adapt the recipe by keeping the
swap quantity units and amounts the same, but changing the resultant ingredients.</p>
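      <p>The retrieval and adaptation selection steps described above can be sketched in Python; the inferred probabilities and recipe data here are invented for illustration:</p>

```python
import random

# Inferred probabilities on RecommendTarget/1 and Swap/2 atoms (illustrative).
recommend = {"Margarita": 0.91, "Daiquiri": 0.91, "Mojito": 0.64}
swaps = {("Lemon_juice", "Lime_juice"): 0.83, ("Gin", "Vodka"): 0.41}

# Retrieval: several atoms may tie at the highest probability, so pick
# one among the most probable set.
best_p = max(recommend.values())
candidates = sorted(r for r, p in recommend.items() if p == best_p)
recipe = random.choice(candidates)

# Adaptation: a Swap/2 atom is read "first replaces second". Keep the
# old ingredient's amount and unit, change only the ingredient itself.
(new_ing, old_ing), _ = max(swaps.items(), key=lambda kv: kv[1])
recipe_ingredients = {"Lime_juice": ("50", "ml"), "Tequila": ("35", "ml")}
if old_ing in recipe_ingredients:
    recipe_ingredients[new_ing] = recipe_ingredients.pop(old_ing)
```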
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>We hope that InclinedChef shows how PSLs can be used for CBR tasks like
retrieval and adaptation. However, many extensions are possible while
using PSL as a framework.</p>
      <p>The overview of PSL (and the rules that InclinedChef uses) has kept to
logical, boolean operations. PSL also supports arithmetic operations, such as
sum constraints (the values of a set of atoms must sum to a particular value). PSL
even affords substituting an arbitrary number of atoms into a logical rule
with sum-augmented predicates, which work like placeholders for the sum of
an arbitrary number of atoms. Select statements restrict which atoms can be
swapped in for sum-augmentation, so one can imagine encoding the amount of
each ingredient in a recipe as a percentage, then using sum-augmentation and
selection to be sensitive to ingredient amounts when adapting recipes.</p>
      <p>In addition, PSL supports the use of arbitrary functions as part of implication
rules, as long as those functions take string arguments and return a real
value in [0, 1].</p>
      <p>
        Name(P1, N1) ∧ Name(P2, N2) ∧ Similar(N1, N2) ⇒ Same(P1, P2)
The above rule, for example, relates two people atoms (P) by their names
(N). Similar/2 is a functional predicate: an external function that takes in two
names and returns how similar they are in [0, 1]. This sort of technique allows
for the unification of many ways to measure similarity, from WordNet
comparisons to similarity metrics built from Long Short-Term Memory networks. More
importantly, by using several different rules with a variety of
recommendation heuristics, we can tune the weights on the rules to fit a variety of user
preferences.
      </p>
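      <p>As a sketch of such a functional predicate, any function mapping two strings to a value in [0, 1] will do; here we use Python's standard-library sequence matcher as a stand-in similarity metric (our example, not one the paper prescribes):</p>

```python
from difflib import SequenceMatcher

# A functional predicate like Similar/2: takes two names and returns a
# real value in [0, 1] measuring how alike they are.
def similar(name1: str, name2: str) -> float:
    return SequenceMatcher(None, name1.lower(), name2.lower()).ratio()
```
      <p>A rule like the one above would call this function when its atoms are grounded, letting several similarity metrics coexist as differently weighted rules.</p>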
      <p>
        In the current iteration of InclinedChef, all of its atoms and rules come from
the wikiTaaable ontology. Other food ontologies exist that also have entities
with labelings, such as the Foodon ontology [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Converting the relevant parts
of other ontologies and integrating them into InclinedChef is currently ongoing
work.
      </p>
      <p>Reflecting back on the opening problems, we can see that InclinedChef shows
how PSLs provide approaches to solving them. Recipe data and ontology
information can be converted into PSL rules and predicates, and PSLs can be used
for the CBR tasks of retrieval and adaptation. Furthermore, PSLs are able to reason
over ingredient amounts, to combine conflicting heuristic scores, and to
pull in reasoning from multiple ontologies. They seem to be a powerful, general
framework for tackling the semantically rich space of recipe generation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Ahuja</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montville</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Omolewa-Tomobi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heendeniya</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Steinfeldt</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anand</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>LaComb</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moshfegh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Usda food and nutrient database for dietary studies, 5.0</article-title>
          . US Department of Agriculture, Agricultural Research Service, Food Surveys Research Group (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bach</surname>
            ,
            <given-names>S.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Broecheler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Getoor</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Hinge-loss markov random fields and probabilistic soft logic</article-title>
          . arXiv:1505.04406 [cs.LG] (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Badra</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cojan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cordier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lieber</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meilender</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mille</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Molli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nauer</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Napoli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skaf-Molli</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , et al.:
          <article-title>Knowledge acquisition and discovery for the textual case-based cooking system wikitaaable</article-title>
          .
          <source>In: 8th International Conference on Case-Based Reasoning-ICCBR</source>
          <year>2009</year>
          , Workshop Proceedings. pp.
          <fpage>249</fpage>
          -
          <lpage>258</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Delgrande</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          :
          <article-title>An approach to default reasoning based on a first-order conditional logic: revised report</article-title>
          .
          <source>Artificial intelligence</source>
          <volume>36</volume>
          (
          <issue>1</issue>
          ),
          <fpage>63</fpage>
          -
          <lpage>90</lpage>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Foulds</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Getoor</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Latent topic networks: A versatile probabilistic programming framework for topic models</article-title>
          .
          <source>In: International Conference on Machine Learning</source>
          . pp.
          <fpage>777</fpage>
          -
          <lpage>786</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Griffiths</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brinkman</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dooley</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsiao</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buttigieg</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoehndorf</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Foodon: A global farm-to-fork food ontology</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Hammond</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          :
          <article-title>Chef: A model of case-based planning</article-title>
          .
          <source>In: AAAI</source>
          . pp.
          <fpage>267</fpage>
          -
          <lpage>271</lpage>
          (
          <year>1986</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kouki</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fakhraei</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foulds</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eirinaki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Getoor</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Hyper: A flexible and extensible probabilistic framework for hybrid recommender systems</article-title>
          .
          <source>In: Proceedings of the 9th ACM Conference on Recommender Systems</source>
          . pp.
          <fpage>99</fpage>
          -
          <lpage>106</lpage>
          . ACM (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Morris</surname>
            ,
            <given-names>R.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burton</surname>
            ,
            <given-names>S.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bodily</surname>
            ,
            <given-names>P.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ventura</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Soup over bean of pure joy: Culinary ruminations of an artificial chef</article-title>
          .
          <source>In: Proceedings of the 3rd International Conference on Computational Creativity</source>
          . pp.
          <fpage>119</fpage>
          -
          <lpage>125</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Renner</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sproesser</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Strohbach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schupp</surname>
            ,
            <given-names>H.T.</given-names>
          </string-name>
          :
          <article-title>Why we eat what we eat. the eating motivation survey (tems)</article-title>
          .
          <source>Appetite</source>
          <volume>59</volume>
          (
          <issue>1</issue>
          ),
          <fpage>117</fpage>
          -
          <lpage>128</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Steptoe</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pollard</surname>
            ,
            <given-names>T.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wardle</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Development of a measure of the motives underlying the selection of food: the food choice questionnaire</article-title>
          .
          <source>Appetite</source>
          <volume>25</volume>
          (
          <issue>3</issue>
          ),
          <fpage>267</fpage>
          -
          <lpage>284</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>