
Learning Personal Food Preferences via Food Logs Embedding

Ahmed A. Metwally (1,2,*), Ariel K. Leong (3,*), Aman Desai (4), Anvith Nagarjuna (1), Dalia Perelman (1), Michael Snyder (1)

arXiv:2110.15498v2 [cs.CL] 22 Nov 2021

(1) Department of Genetics, Stanford University, Stanford, CA, USA
(2) Systems and Biomedical Engineering Department, Faculty of Engineering, Cairo University, Giza, Egypt
(3) Department of Computer Science, Stanford University, Stanford, CA, USA
(4) Department of Computer Science, University of California at Berkeley, Berkeley, USA
(*) These authors contributed equally
Correspondence: [email protected]
This work was supported by the Stanford PHIND award.

Abstract—Diet management is key to managing chronic diseases such as diabetes. Automated food recommender systems may be able to assist by providing meal recommendations that conform to a user's nutrition goals and food preferences. Current recommendation systems suffer from a lack of accuracy that is in part due to a lack of knowledge of food preferences. In this work, we propose a method for learning food preferences from food logs, a comprehensive but noisy source of information about users' dietary habits. We also introduce accompanying metrics for evaluating the learning of personal food preferences. The method generates and compares word embeddings to identify the parent food category of each food entry and then calculates the most popular categories. Our proposed approach identifies 82% of a user's ten most frequently eaten foods. Our method is publicly available at https://github.com/aametwally/LearningFoodPreferences.

Index Terms—Food Preferences, Recommender Systems, Machine Learning.

I. INTRODUCTION

Millions of Americans suffer from chronic diseases whose management could be greatly influenced by diet. Following a dietary recommendation is more successful when the patient or consumer is familiar with the food and its preparation. Automated food recommendation systems can assist patients in identifying healthy meals that align with their diet preferences. However, current food recommendation systems struggle to achieve high accuracies [1]. One reason is a lack of focus on learning users' food preferences: they do not account for what foods users actually eat. Many food recommendation systems use classic strategies such as content-based filtering (CB) and collaborative filtering (CF). The former involves recommending items similar to ones that the user likes. The latter builds on this approach by recommending items that are similar both to ones the user has previously liked and to ones liked by similar users. Examples of CB- and CF-based approaches include that of Forbes and Zhu [2], who used a collaborative filtering algorithm but improved it by directly incorporating information about a recipe's ingredients into the matrix factorization part of the algorithm. Another group created an equation to predict the score a user would assign to a recipe that took into account the number of ingredients in the recipe that the user liked and how much they liked them [3]. There are several disadvantages to these approaches. Their effectiveness relies on having detailed information about users' feelings toward many different items, both for the user for whom the recommendation is made and for similar users. Such detailed information is difficult to obtain. Additionally, many of the previous studies either use data scraped from recipe rating websites or present users with a series of recipes and ask for their ratings [1]. This data is likely not fully representative of users' eating patterns, as most users do not input ratings for all foods they eat; they probably only review foods they feel especially strongly about, positively or negatively. Additionally, many of these methods do not consider the frequency with which a user eats a dish, which is important information that influences how likely a user would be to eat a recommended dish.

In addition to CB- and CF-based methods, others have taken innovative approaches to gauge food preferences in their recommender systems. Ueda et al. use a user's food log to gauge how much users like various recipe ingredients based on their frequency of use [4]. Another group successively presents images to a user, using a convolutional neural network (CNN) algorithm to learn the user's preferences [5]. While the approach is innovative and interesting, it is unclear whether people base eating decisions on food appearance when preparing meals at home (instead of ordering in a restaurant). Toledo et al. integrate food preferences into their recommendation approach by devising menus containing ingredients that users have liked in the past but not eaten recently, and revising them based on user feedback about which ingredients they like and do not like [6]. Other studies treat food recommendation as a query over a knowledge graph. One takes in a food log or a list of liked foods and allergies and outputs the recipes that are most similar to the input [7]. Another outputs sets of foods that are predicted to pair well together [8]; however, these predictions do not take in user input.

Food logs have been used for purposes such as allergy detection and weight and disease management [9]. One common difficulty has been consistently recording entries, for a variety of reasons such as the laboriousness involved in recording certain kinds of foods, negative emotions arising when journaling, and lack of social support [10].
Recent food logging applications such as MyFitnessPal [11], Cronometer [12], and Lose It! [13] have addressed these challenges by providing a platform that allows the user to input the food name, the quantity, the type of meal, and the time at which the user consumed the food. There can also be gamification features or social support to address the barriers to journaling mentioned earlier. However, there are several shortcomings in using the food logs exported directly from these food logging apps. Food names can contain specific brand names, and the food log structure can differ from one app to another, making it difficult to streamline data processing and information retrieval. Our proposed approach handles all of these inconsistencies.

Word embeddings have been deployed in food computing because they can capture relationships between different ingredients and concepts. Recipe embeddings have been learned, for example, to build ingredient maps [14]. Diet2Vec used food name embeddings to model meals and diets, using data from Lose It! journals [15]. The authors showed that similar foods cluster together from embeddings that were 20% name and 80% nutrition, and they averaged word vectors to create food name vectors.

In this work, we propose a method to identify food preferences from food logs. Our contributions include: 1) a method to gauge food preferences that uses food logs, 2) the use of a publicly available dataset, the U.S. Department of Agriculture's Food and Nutrient Database for Dietary Studies [16], 3) evaluation metrics that others can easily adopt, and 4) publicly available code. Our system identifies food preferences by identifying frequently eaten foods. The method first involves assigning each food log entry one of the labels used in the U.S. Department of Agriculture's Food and Nutrient Database for Dietary Studies (FNDDS).

II. METHODS

The method, as detailed in Figure 1, has four components: preprocessing food logs, generating embedding vectors for food log and database entries, computing vector similarities, and using these similarities to identify commonly eaten foods. Evaluation metrics were also developed.

A. Preprocessing of Food Logs and Database

1) Food Logs: Food logs were obtained from Cronometer. This food tracking app was chosen because its nutrition database is company-maintained rather than open to user contributions. (Accurate, comprehensive nutrition information would be helpful for future food recommendations.) A sample of a food log is shown in Figure 1. Visualizations of food log statistics can be found in Figure 2 and Figure 3. Each entry in a food log was labeled with the date, time, and description of the food eaten. Food names consisted of a series of comma-separated phrases. An example is "Trader Joe's, Chicken Sausage, Sweet Italian Style." The first phrase contained either the food manufacturer's brand name, the brand name together with the food name, or only the food name. The following phrases contained specific details about food composition or preparation.

2) Database: The database used was the U.S. Department of Agriculture's Food and Nutrient Database for Dietary Studies (FNDDS). A sample of the database is shown in Figure 1. It consists of 8691 foods belonging to 155 categories that were recorded during the dietary intake component of the National Health and Nutrition Examination Survey. The FNDDS was chosen because it is representative of foods eaten in America across demographics, contains detailed nutritional information that will be useful for the future food recommendation component, and already includes annotations mapping each food to a category. Each category is a different type of food; examples of categories can be found in Figure 3. The number of foods belonging to a category varies widely; the mean is 56.06, the standard deviation is 69.97, and there are 19 categories with fewer than ten foods. FNDDS food names consist of a series of comma-separated phrases; an example is "Yogurt, Greek, whole milk, fruit."

The descriptive name and assigned label of each food entry in the FNDDS database were extracted. Foods assigned a label pertaining to baby foods or formula were removed. Food log processing consisted of extracting food log entry names and annotating them with the list of food labels used in the FNDDS.

B. Learning Food Preferences

Our proposed system identifies food preferences by identifying foods that are eaten frequently. The system first labels each food log entry with one of the 155 food labels used in the Food and Nutrient Database for Dietary Studies (FNDDS). The ten most popular foods are then calculated.

1) Labeling with Food Embeddings: Due to the small dataset size, the relatively large number of classification categories, and the small number of samples in each category, k-Nearest Neighbors classification with k = 1 was used. Numerical representations of length 300 for all food log entries, as well as for all FNDDS entries, were generated using Word2Vec, a word-embedding model pretrained on the Google News corpus. We fine-tuned Word2Vec on artificial sentences that concatenated the FNDDS descriptive food names and the category each food belonged to [17]. Embeddings for food names, which consist of multiple words, are formed by averaging the embeddings of the constituent words. The cosine similarity between the embedding for each food log entry and each entry in the database was calculated:

Cosine(x, y) = (x · y) / (‖x‖ ‖y‖)    (1)

where x is the embedding for a food log entry, y is the embedding of a database entry, and U contains the embeddings for all entries in the database. The label of the database food with the highest similarity was used as the system's prediction for the food log entry's label:

Label(x) = argmax_{y ∈ U} Cosine(x, y)    (2)
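The labeling step in Eqs. (1) and (2) is straightforward to prototype. The snippet below is a minimal, hypothetical sketch rather than the released implementation: it assumes gensim's KeyedVectors for the fine-tuned 300-dimensional Word2Vec vectors, and the file name, embed_name, and predict_label helpers are illustrative names introduced here.

```python
# Minimal, hypothetical sketch of the labeling step in Eqs. (1)-(2).
# Assumes gensim KeyedVectors holding the fine-tuned 300-d Word2Vec vectors;
# the file name and helper names are illustrative, not the paper's API.
import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load("food_word2vec.kv")  # assumed path to saved vectors

def embed_name(name: str) -> np.ndarray:
    """Average the word vectors of a comma-separated food name (Section II-B1)."""
    words = [w.strip(",").lower() for w in name.split()]
    vecs = [kv[w] for w in words if w in kv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(kv.vector_size)

def cosine(x: np.ndarray, y: np.ndarray) -> float:
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom else 0.0

def predict_label(entry_name: str, fndds_rows: list[tuple[str, str]]) -> str:
    """1-nearest-neighbor by cosine similarity; fndds_rows holds (food name, category) pairs."""
    x = embed_name(entry_name)
    return max(fndds_rows, key=lambda row: cosine(x, embed_name(row[0])))[1]
```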
[Figure 1: workflow diagram. Pipeline stages: food consumption data logged → preprocessing → embedding → embedded representation → compute similarities against the WWEIA database → retrieve category of food entry.]

Fig. 1. Workflow of the food preference learning algorithm. Each entry of the food log is processed through an NLP module. A food embedding is then obtained for each food entry in the food log and in the database. Next, for each food log embedding, the cosine similarity is computed against the embeddings of all foods in the WWEIA database. The food category label of the database food with the highest cosine similarity is assigned to the food log entry. This process is repeated for all foods in the food logs, and the most common food categories are then calculated.
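The last part of the workflow, labeling every entry and tallying the most common categories, amounts to a short aggregation loop. A rough sketch, assuming the hypothetical predict_label helper from the snippet above:

```python
# Illustrative aggregation over a whole food log (not the released code):
# label each entry, then count the most frequent FNDDS categories.
from collections import Counter

def top_categories(food_log_entries, fndds_rows, k=10):
    labels = [predict_label(name, fndds_rows) for name in food_log_entries]
    return Counter(labels).most_common(k)  # [(category, count), ...]
```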

2) Food Log Name Preprocessing Methods: In our approach, the correct labeling of a food depends on the similarity between a food log entry and its database counterpart (or the one most closely analogous). Thus, to improve food label accuracy, food log entry names were preprocessed in various ways to remove words that increased similarity to incorrect FNDDS entries or decreased similarity to the correct entry. An example showing the preprocessing strategies applied to one food log entry, "Panera Bread, salad, cobb, green goddess, with chicken & dressing," is shown in Table I. The preprocessing methods, which build on each other, are described below:

• Method 1 was intended to retain only the food's general name. As previously mentioned, Cronometer food log entry names were structured as a series of comma-separated phrases. Generally, the first phrase contained the general food name unless the food brand was specified, in which case the brand name was in the first comma-separated phrase and the general food name in the second, with additional phrases specifying further details about the food following it. Removing the food manufacturer's brand name and the more specific details of the food was hypothesized to increase the similarity between analogous foods. The removal was accomplished by eliminating all but either the first or the second comma-separated phrase before generating an embedding. Determining whether the first comma-separated phrase contained a food brand name and should not be used consisted of counting the number of words in the first and second comma-separated phrases that belonged to the FNDDS vocabulary and choosing the phrase with the larger count.

• Method 2 was similar to Method 1, but retained the comma-separated phrases that contained specific details.

• Method 3, like Method 2, retained most of the food name, but used another heuristic to judge whether the first comma-separated phrase contained a brand name. Instead of counting the number of words that belonged to the FNDDS vocabulary, the percentage of FNDDS words in the comma-separated phrase was used.

• Method 4, in which generic food-related terms were removed from the food log entry name (in addition to the preprocessing done in Method 3), was introduced after noticing that for some food log entries, the most similar database food was one that was wholly unrelated but had a generic word in common. For example, the most similar database food for many fruits, including "Blueberries, Fresh," "Blackberries, Fresh," and "Strawberries, Fresh," was "Fresh corn custard, Puerto Rican style." Thus, the frequency of each word in the FNDDS vocabulary was tabulated, and all of the generic words among the top 250 most common words were removed from the food log name.

• Method 5 addressed the mislabeling of foods such as "Kind, Nuts & Spices Bar, Dark Chocolate Nuts & Sea Salt," where the first comma-separated phrase contains not only the brand name but also the general food name. This method was identical to Method 4 except that instead of removing the whole first comma-separated phrase, only the words not found in the FNDDS vocabulary were removed.

• Method 6 addressed mislabeling errors where the predicted food label was very different from the true label. (For example, "Orowheat, Thin-Sliced Rustic White" was misclassified as "Liquor and cocktails.") Instead of being compared to the embeddings of all FNDDS foods, a food log entry was only compared to foods whose labels had associated FNDDS foods that shared words in common with the food log name.

TABLE I
FOOD LOG NAME PREPROCESSING METHODS EXAMPLES. EACH ROW SHOWS THE RESULT OF PROCESSING THE PHRASE "PANERA BREAD, SALAD, COBB, GREEN GODDESS, WITH CHICKEN & DRESSING".

Method | Result of processing
1 | "bread"
2 | "bread, salad, cobb, green, with, chicken, &, dressing" → embedding
3 | "salad, cobb, green, with, chicken, &, dressing" → embedding
4 | "salad, cobb, green, chicken, dressing"
5 | "bread, salad, cobb, green, chicken, dressing" → embedding
6 | "salad, cobb, green, chicken, dressing", with the database restricted to foods in the "Vegetable mixed dishes", "Chicken, whole pieces", "Chicken patties, nuggets, and tenders", "Yeast breads", and "Salad dressings and vegetable oils" categories → embedding

C. Evaluation Metrics

1) Labeling Accuracy Metrics: Several evaluation metrics were employed to assess how well the system assigned the correct food label to each food entry. All metrics were calculated individually for each food log and then averaged. Accuracy was computed for each food log as:

Accuracy_label = (# of correct assignments of unique foods) / (# of unique food log entries)    (3)

Due to the presence of overlapping food categories (for example, "Milk, whole" and "Milk shakes and other dairy drinks"), the "synonymous accuracy" was also calculated:

Accuracy_syn = (# of synonymous assignments of unique foods) / (# of unique food log entries)    (4)

Food categories were considered synonymous if they shared at least one word in common. Accounting for synonymous categories reduced the number of database food categories from 155 to 98. Since the model might not predict the correct label yet still rank it highly, the mean reciprocal rank (MRR) was also assessed:

MRR = (1/n) Σ_{i=1..n} 1/rank_i    (5)

where n is the total number of unique foods in the food log and rank_i is the rank the model assigned to the correct label of food i.

Similarly, the synonymous mean reciprocal rank (SMRR) was calculated:

SMRR = (1/n) Σ_{i=1..n} 1/synrank_i    (6)

where synrank_i is the rank of the highest-ranking synonymous food category.

2) Identifying Food Preferences Metrics: Effectiveness at identifying food preferences was evaluated in several ways. Again, all metrics were calculated individually for each food log and then averaged. Food preference accuracy was defined as

Accuracy_preference = (1/|categories|) Σ_{i ∈ categories} 1{p_i^d = p_i^f}    (7)

where the categories are grains, vegetables, proteins, fruits, and dairy, p_i^d is the most popular food for category i in the dataset, and p_i^f is the most popular food for category i in the food log. A corresponding synonymous accuracy (whether the food identified as the most popular was from a synonymous category) was also calculated. To measure food preferences beyond one favorite, the percentage of the user's top ten most commonly eaten foods that the model was able to identify was also calculated, along with a synonymous percentage.
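As a hedged illustration (not the paper's evaluation code), the labeling metrics of Eqs. (3) to (6) can be computed from a per-entry ranking of candidate categories; the data layout and function names below are assumptions:

```python
# predictions: unique food log entry -> list of categories ranked by similarity
# truth: unique food log entry -> annotated FNDDS category
def share_word(cat_a: str, cat_b: str) -> bool:
    """Categories are treated as 'synonymous' if they share at least one word."""
    return bool(set(cat_a.lower().split()) & set(cat_b.lower().split()))

def labeling_metrics(predictions: dict[str, list[str]], truth: dict[str, str]):
    n = len(truth)
    accuracy = sum(predictions[e][0] == truth[e] for e in truth) / n
    syn_accuracy = sum(share_word(predictions[e][0], truth[e]) for e in truth) / n
    mrr = sum(1.0 / (predictions[e].index(truth[e]) + 1)
              for e in truth if truth[e] in predictions[e]) / n
    return accuracy, syn_accuracy, mrr
```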
III. RESULTS AND DISCUSSION

A. Food Logs and Database Summary

Figure 2 demonstrates that most of the 34 food logs used in the analysis contain over 100 entries. This suggests that the samples contain a representative selection of different foods that can be generalized to a larger population of subjects. Figure 3 affirms this idea, illustrating the wide diversity in food selection among the 34 sampled individuals. Even relatively common food item categories like "Yeast breads" rarely make up more than 15% of any individual's log, although it should be noted that the current food log entry system does not measure the exact amount of food consumed. Interestingly, none of the 10 most chosen food items are meat dishes; this is likely because a large number of food log categories refer to dishes containing meat (e.g., "Ground beef", "Pork", "Turkey, duck, other poultry"). Overall, the lack of any overwhelmingly common food category selection indicates that the dataset has enough variation for us to draw preliminary conclusions.

[Figure 2: bar chart of the number of food entries in each subject's log (S_1 through S_34), with the number of recording days shown above each bar.]
Fig. 2. Distribution of the number of food entries per food log. Each bar represents the total number of entries in each subject's food log, with the number at the top of each bar displaying the number of days across which the data were recorded.

[Figure 3: grouped bar chart of the relative frequency, per subject, of the ten most common food labels: Beef; Berries; Cakes and Pies; Cheese; Dairy Desserts; Dips; Nutrition Bars; Pasta; White Potatoes; Yeast Breads.]
Fig. 3. The relative frequency distribution for the ten most common food labels across the food logs.

B. Performance Evaluation

Table II summarizes the performance evaluation of each of the six methods in terms of the food labeling metrics and the food preference metrics. The highest-performing method, Method 4, achieved an accuracy of 49% and identified 82% of users' 10 most frequently eaten foods (with synonymous categories included). A mean reciprocal rank of 0.57 suggests that for many of the food log entries, the correct food label was one of the top two predicted choices. Comparisons to other food preference evaluation work were difficult because their approaches involved different evaluation metrics.

TABLE II
RESULTS OF EVALUATING THE FOOD PREFERENCE LEARNING ALGORITHM ON THE INTRODUCED FOOD LABELING AND FOOD PREFERENCE METRICS.

Method | Food Labeling: Accuracy | Synonymous Accuracy | MRR | SMRR | Food Preference: Accuracy | Synonymous Accuracy | % Top 10 Foods Identified | % Top 10 Synonymous Foods Identified
1 | 0.42 | 0.48 | 0.49 | 0.55 | 0.41 | 0.47 | 0.46 | 0.76
2 | 0.42 | 0.47 | 0.48 | 0.52 | 0.35 | 0.40 | 0.44 | 0.75
3 | 0.43 | 0.48 | 0.52 | 0.56 | 0.37 | 0.41 | 0.47 | 0.77
4 | 0.49 | 0.54 | 0.57 | 0.62 | 0.47 | 0.51 | 0.52 | 0.82
5 | 0.45 | 0.49 | 0.53 | 0.58 | 0.39 | 0.45 | 0.49 | 0.81
6 | 0.30 | 0.37 | 0.57 | 0.62 | 0.23 | 0.31 | 0.32 | 0.74

The work shed light on some of the challenges involved in working with food logs. The varying performance of the different methods underscored the importance of distinguishing between words that could bias the embeddings and should be removed and words that were instrumental in establishing similarity with the correct analogous database food. Methods 4 and 5's high performance on most of the tasks is likely due in part to the removal of generic food-related words. The poor performance of Method 6, the method that restricted the FNDDS foods a food log entry was compared with to only those belonging to a category containing foods that shared at least one word with the food log entry, supports the importance of comparing food log entries based on the contexts of their component words rather than on the words themselves. Method 6 incorrectly labeled foods such as "Creme Fraiche," which was predicted to have the "Doughnuts, sweet rolls, pastries" label since neither "creme" nor "fraiche" appeared in a food name belonging to the correct category, "Cream cheese, sour cream, whipped cream."

Some incorrect label predictions were due to dataset limitations. For several of the non-Western foods in the food logs, such as sev, there were few or no similar foods in the dataset. The database also did not contain alternate spellings, such as "yoghurt," or abbreviations such as "froyo." The heuristic for determining whether the first comma-separated phrase contained a company name was misled by company names that contain food names, such as "Chipotle."

IV. CONCLUSION & FUTURE WORK

In this work, we introduce an approach that uses embeddings to identify food preferences from food logs. We also propose accompanying evaluation metrics. Our highest-performing method identifies 82% of a user's 10 most frequently eaten foods. This information about a user's favored foods can be used to generate healthy and realistic meal recommendations that feature ingredients the user commonly consumes.

Our proposed approach can be generalized to food logging apps other than Cronometer and to food preference details beyond a user's most frequently eaten foods. Each food logging application has its own structure for a food name, and this work introduces several methods of preprocessing food log names; for a given application, one method may identify dietary preferences more accurately than another. The approach also provides a guide for identifying other dietary preferences, using a procedure similar to the one used to identify frequently eaten foods: create a set of vectors corresponding to each available option for a dietary preference. For example, to identify a user's favored cuisines, one could create vectors for the cuisines "Chinese food" and "Mediterranean food" and, for each food entry, use cosine similarity to identify the cuisine most likely to apply to that food.
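A rough sketch of that cuisine example, reusing the hypothetical embed_name and cosine helpers from the earlier snippet (the cuisine list and function name are illustrative assumptions, not part of the released code):

```python
from collections import Counter

CUISINES = ["Chinese food", "Mediterranean food"]  # illustrative options only

def favored_cuisine(food_log_entries):
    """Assign each entry to the nearest cuisine anchor vector and tally the votes."""
    anchors = {c: embed_name(c) for c in CUISINES}
    votes = Counter()
    for name in food_log_entries:
        x = embed_name(name)
        votes[max(anchors, key=lambda c: cosine(x, anchors[c]))] += 1
    return votes.most_common(1)[0][0]
```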

Limitations of the work include the small number of food logs used; annotating all of the entries with the corresponding FNDDS food category required enormous effort. The large decrease in accuracy for Method 6 suggests that using improved embeddings, such as those generated using BERT [18] or ELMo [19], could lead to better performance. The greater incorporation of context into the embeddings could assist in overcoming dataset limitations and in labeling food log entries that do not have an identical analog in FNDDS. The increase in performance when common words were removed suggests that other ways of weighting words that differ in importance when determining similarity, such as incorporating the TF-IDF statistic, may lead to improved performance.

In the future, we plan to add a recommendation component to our food preference learning system. After learning the kinds of foods the user prefers to eat, the system will use nutritional information to recommend healthy variants of the favored foods that fit the user's metabolic goals. We also plan to evaluate our system with real users and to improve the system by taking cuisine preferences into account.

V. CODE AVAILABILITY

The project source code is publicly available at https://github.com/aametwally/LearningFoodPreferences.

REFERENCES

[1] Christoph Trattner and David Elsweiler. Food recommender systems: important contributions, challenges and future research directions. arXiv preprint arXiv:1711.02760, 2017.
[2] Peter Forbes and Mu Zhu. Content-boosted matrix factorization for recommender systems: experiments with recipe recommendation. In Proceedings of the Fifth ACM Conference on Recommender Systems, pages 261–264, 2011.
[3] Morgan Harvey, Bernd Ludwig, and David Elsweiler. You are what you eat: Learning user tastes for rating prediction. In International Symposium on String Processing and Information Retrieval, pages 153–164. Springer, 2013.
[4] Mayumi Ueda, Mari Takahata, and Shinsuke Nakajima. User's food preference extraction for personalized cooking recipe recommendation. In Workshop of ISWC, pages 98–105, 2011.
[5] Longqi Yang, Cheng-Kang Hsieh, Hongjian Yang, John P Pollak, Nicola Dell, Serge Belongie, Curtis Cole, and Deborah Estrin. Yum-me: a personalized nutrient-based meal recommender system. ACM Transactions on Information Systems (TOIS), 36(1):1–31, 2017.
[6] Raciel Yera Toledo, Ahmad A Alzahrani, and Luis Martinez. A food recommender system considering nutritional information and user preferences. IEEE Access, 7:96695–96711, 2019.
[7] Yu Chen, Ananya Subburathinam, Ching-Hua Chen, and Mohammed J Zaki. Personalized food recommendation as constrained question answering over a large-scale food knowledge graph. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 544–552, 2021.
[8] Donghyeon Park, Keonwoo Kim, Seoyoon Kim, Michael Spranger, and Jaewoo Kang. FlavorGraph: a large-scale food-chemical graph for generating food representations and recommending food pairings. Scientific Reports, 11(1):1–13, 2021.
[9] Felicia Cordeiro, Daniel A Epstein, Edison Thomaz, Elizabeth Bales, Arvind K Jagannathan, Gregory D Abowd, and James Fogarty. Barriers and negative nudges: Exploring challenges in food journaling. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 1159–1162, 2015.
[10] Xu Ye, Guanling Chen, Yang Gao, Honghao Wang, and Yu Cao. Assisting food journaling with automatic eating detection. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pages 3255–3262, 2016.
[11] MyFitnessPal. https://www.myfitnesspal.com/. Accessed: 2021-08-17.
[12] Cronometer. https://cronometer.com/. Accessed: 2021-08-17.
[13] Lose It! https://www.loseit.com/. Accessed: 2021-08-17.
[14] Andrea Morales-Garzón, Juan Gómez-Romero, and Maria J Martin-Bautista. A word embedding-based method for unsupervised adaptation of cooking recipes. IEEE Access, 9:27389–27404, 2021.
[15] Wesley Tansey, Edward W Lowe Jr, and James G Scott. Diet2Vec: Multi-scale analysis of massive dietary data. arXiv preprint arXiv:1612.00388, 2016.
[16] 2017–2018 Food and Nutrient Database for Dietary Studies. https://www.ars.usda.gov/. Accessed: 2021-08-17.
[17] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[19] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
