Preface
Some readers have said I don't read many papers. Well... that's not quite true: I don't write a blog post for every paper I read, and a single post often draws on several papers.
Since I still haven't settled on a research direction, I plan to work through the classic papers in each area. After chatting with Haixiang, who works on recommendation, the next few weeks will focus on recommendation.
To understand this paper you first need to understand Word2vec. My earlier posts touched on it, but the derivations were incomplete, especially for Negative Sampling; see this post: Negative Sampling.
Of course, I only cover the parts I needed to understand; for more detail see these posts: the official Airbnb translation; an explanation of the key parts.
Main Content
First, the Listing Embeddings are trained with multiple objectives: (1) maximize the probability of the listings that appear as neighbours inside the context window; (2) maximize the predicted probability of the listing that was eventually "Booked"; (3) minimize the probability of listings from the same market that are not window neighbours. Objectives (1) and (3) both use negative sampling; the way these objectives are written follows the Negative Sampling post.
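For reference, this is how I would reconstruct the combined objective from the paper (a sketch in the notation of the Negative Sampling post, not a quote of the paper's exact equation): $\mathbf{v}_l$ is the embedding of the central listing $l$, $\mathcal{D}_p$ and $\mathcal{D}_n$ are the positive and sampled negative (listing, context) pairs, $l_b$ is the listing booked at the end of the session, and $\mathcal{D}_{m_n}$ are the extra negatives drawn from the same market:

$$\underset{\theta}{\arg\max}\ \sum_{(l,c)\in\mathcal{D}_p}\log\frac{1}{1+e^{-\mathbf{v}_c'\mathbf{v}_l}}+\sum_{(l,c)\in\mathcal{D}_n}\log\frac{1}{1+e^{\mathbf{v}_c'\mathbf{v}_l}}+\log\frac{1}{1+e^{-\mathbf{v}_{l_b}'\mathbf{v}_l}}+\sum_{(l,m_n)\in\mathcal{D}_{m_n}}\log\frac{1}{1+e^{\mathbf{v}_{m_n}'\mathbf{v}_l}}$$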
Is a list of similar listings enough? For a static similar-listings module, yes; for real-time recommendation, clearly not. First, for example, if a user is currently searching in Los Angeles and has previously clicked listings in New York and London, those earlier clicks can be combined to find further similar listings (measured with cosine similarity). Second, Airbnb also takes into account whether the user actually clicked the similar listings it showed: roughly speaking, if something I recommended was shown but not clicked, the recommendation was probably off, and listings similar to it should preferably not be shown again.
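A minimal sketch of what such real-time similarity features could look like, assuming we already have the trained listing embeddings (the function and variable names are my own, not Airbnb's):

import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def realtime_similarity_features(candidate_vec, clicked_vecs, skipped_vecs):
    # candidate_vec: embedding of a candidate listing (e.g. one in Los Angeles)
    # clicked_vecs:  embeddings of listings the user recently clicked (e.g. in New York, London)
    # skipped_vecs:  embeddings of listings that were shown but not clicked
    sim_click = cosine(candidate_vec, np.mean(clicked_vecs, axis=0)) if len(clicked_vecs) else 0.0
    sim_skip = cosine(candidate_vec, np.mean(skipped_vecs, axis=0)) if len(skipped_vecs) else 0.0
    # candidates similar to clicked listings get boosted, those similar to skipped listings get demoted
    return sim_click, sim_skip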
Second, the way they evaluate the embeddings is something I hadn't expected: they use clustering and similarity comparisons to check whether the embeddings have picked up information such as price and location. For example, the paper clusters the learned embeddings and finds geographically coherent groups, and compares cosine similarities across listings with different price ranges and types.
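A rough sketch of this kind of sanity check, assuming we have the embedding matrix plus some listing metadata (the 'city'/'price' fields and the thresholds below are invented for illustration):

import collections
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def inspect_embeddings(listing_vecs, meta, n_clusters=20):
    # listing_vecs: (n_listings, dim) array; meta: list of dicts with 'city' and 'price' keys
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(listing_vecs)
    # if the embeddings encode location, each cluster should be dominated by one city / market
    for k in range(n_clusters):
        cities = [m['city'] for m, lab in zip(meta, labels) if lab == k]
        print(k, collections.Counter(cities).most_common(3))
    # if they encode price, similarity within a price band should beat similarity across bands
    cheap = listing_vecs[[i for i, m in enumerate(meta) if m['price'] < 100]]
    pricey = listing_vecs[[i for i, m in enumerate(meta) if m['price'] >= 300]]
    print('within cheap   :', cosine_similarity(cheap).mean())
    print('cheap vs pricey:', cosine_similarity(cheap, pricey).mean())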
Third, measuring listing-to-listing similarity alone is clearly not enough; Airbnb also wants to fold in user preferences and measure user-to-listing similarity. That runs into many problems (cold-start issues such as few co-occurrences and sparse user behaviour), so they bucket users and listings, working with Types instead of raw IDs. The training data is the booked-listing sequences of users of the same type, and the optimization objective is again Word2vec-style: embeddings of same-type users are pushed closer together, and the embeddings of listings booked by the same user types are pushed closer as well.
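To make the bucketing concrete, here is a toy sketch of how raw attributes might be mapped to a Type; the attributes and thresholds are invented for illustration, the paper defines its own buckets:

def listing_type(listing):
    # hypothetical buckets: country, room type, price band, review-count band
    price_band = 'p1' if listing['price'] < 60 else ('p2' if listing['price'] < 160 else 'p3')
    review_band = 'r1' if listing['n_reviews'] < 5 else ('r2' if listing['n_reviews'] < 50 else 'r3')
    return '_'.join([listing['country'], listing['room_type'], price_band, review_band])

def user_type(user):
    # hypothetical buckets: market, device, number-of-bookings band
    booking_band = 'b1' if user['n_bookings'] < 2 else ('b2' if user['n_bookings'] < 10 else 'b3')
    return '_'.join([user['market'], user['device'], booking_band])

# every user / listing that falls into the same Type shares one embedding,
# so even ids that are rarely seen still get enough training signal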
They also bring in information about listings (really, the hosts) rejecting guests, and add those listings as negative samples, as shown below (the third term):
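As far as I can reconstruct it from the paper, the objective for the user-type side with this rejection term added (the third sum) looks roughly like this, where $u_t$ is a user type, $l_t$ a listing type, $\mathcal{D}_{book}$ the booked (user type, listing type) pairs, $\mathcal{D}_{neg}$ the random negatives, and $\mathcal{D}_{reject}$ the pairs in which the host rejected the user:

$$\underset{\theta}{\arg\max}\ \sum_{(u_t,c)\in\mathcal{D}_{book}}\log\frac{1}{1+e^{-\mathbf{v}_c'\mathbf{v}_{u_t}}}+\sum_{(u_t,c)\in\mathcal{D}_{neg}}\log\frac{1}{1+e^{\mathbf{v}_c'\mathbf{v}_{u_t}}}+\sum_{(u_t,l_t)\in\mathcal{D}_{reject}}\log\frac{1}{1+e^{\mathbf{v}_{l_t}'\mathbf{v}_{u_t}}}$$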
Fourth, having these listing embeddings, Airbnb does not recommend directly from them. Instead, the embeddings are turned into features, combined with a pile of other features, and fed into a GBDT model for prediction, with labels derived from user behaviour (booked, added to wish list, and so on); the two metrics DCU and NDCU are then used for the A/B test. (The real-time recommendation part was moved up into the first point to make it easier to follow.)
Code Implementation
Airbnb has not open-sourced the code, and it doesn't really need to: this work is mostly data-driven. On the technical side, the two pieces to understand are Word2vec and GBDT, both of which are covered in my blog.
Word2vec
import collections
import math
import random
import zipfile
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import matplotlib as mpl
filename = 'text_words.zip'
def read_data(filename):
'''
    :param filename: name of the zip file
    :return: list of words
'''
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filename)
def remove_fre_stop_word(words):
'''
    Subsample very frequent words (stop words such as "的")
    :param words: list of words
    :return: list of words
'''
    t = 1e-5  # subsampling threshold t
    threshold = 0.8  # words whose drop probability exceeds this are removed
int_word_counts = collections.Counter(words)
total_count = len(words)
word_freqs = {w: c / total_count for w, c in int_word_counts.items()}
    prob_drop = {w: 1 - np.sqrt(t / f) for w, f in word_freqs.items()}  # drop probability, as in word2vec subsampling
train_words = [w for w in words if prob_drop[w] < threshold]
return train_words
words = remove_fre_stop_word(words)
words_size = len(words)  # number of tokens in words (with duplicates)
vocabulary_size = len(set(words))  # number of unique words
def build_dataset(words):
'''
    Build the dataset
    :param words: list of words
    :return: data                each word in the list mapped to its integer id, length words_size
             count               word frequency counts in descending order, length vocabulary_size
             dictionary          word -> id, the more frequent the word the smaller the id, e.g. ('的', 1)
             reverse_dictionary  the inverse mapping id -> word, e.g. (1, '的')
'''
count = [['UNK', -1]]
    # sort by frequency in descending order
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
    # words not in the dictionary are counted as UNK
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0
unk_count += 1
data.append(index)
count[0][1] = unk_count
    # invert the dictionary's key/value pairs with zip()
reverse_dictionary = dict(zip(dictionary.values(),dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
del words
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
'''
    :param batch_size: size of one batch
    :param num_skips: number of (center, context) samples drawn per center word
    :param skip_window: radius of the context window around each word
    :return: batch   array of shape (batch_size,)
             labels  array of shape (batch_size, 1)
'''
    # data_index is module-level state that this function keeps advancing, hence global
    global data_index
    # sanity checks on the arguments
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    # a span covers skip_window words on each side plus the word itself: 2 * skip_window + 1
    span = 2 * skip_window + 1
    # a fixed-length queue of length span
    buffer = collections.deque(maxlen=span)
    # fill the queue for the first time
    # note that words enter the training data as integer ids, which index the vocabulary
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
    # the core sampling loop, step by step
    for i in range(batch_size // num_skips):
        target = skip_window  # e.g. with skip_window=2, target == 2 (the center position)
        targets_to_avoid = [skip_window]  # targets_to_avoid == [2]
        for j in range(num_skips):  # draw num_skips samples for this center word
            while target in targets_to_avoid:  # pick a random window position that has not been used yet
                target = random.randint(0, span - 1)  # e.g. 3
            targets_to_avoid.append(target)  # [2, 3]
            # i.e. the center position [2, 2, 2, ...] gets paired with context positions such as [1, 3, 4]
            batch[i * num_skips + j] = buffer[skip_window]  # the center word, repeated num_skips times
            labels[i * num_skips + j, 0] = buffer[target]  # one word from its context
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
batch_size = 128
embedding_size = 128
skip_window = 2
num_skips = 4
valid_size = 16
valid_window = 100
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])  # placeholder for a batch of center-word ids, fed in at each step
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
nce_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Negative sampling is wrapped inside nce_loss: for each positive example we randomly draw num_sampled negatives to optimize against.
loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weights, biases=nce_biases,labels=train_labels, inputs=embed, num_sampled=num_sampled, num_classes=vocabulary_size))
# loss = tf.reduce_mean(
# tf.nn.sampled_softmax_loss(nce_weights, nce_biases, train_labels,
# embed, num_sampled, vocabulary_size))
# With num_sampled set, the negatives are drawn by default from the log-uniform distribution P(k) = (log(k + 2) - log(k + 1)) / log(range_max + 1)
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm  # L2-normalize so that dot products become cosine similarities
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)  # cosine similarity between the validation words and the whole vocabulary
num_steps = 100001
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
average_loss = 0
for step in range(num_steps):
batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}
_, loss_val = sess.run([optimizer, loss], feed_dict=feed_dict)
average_loss += loss_val
if step % 2000 == 0:
if step > 0:
average_loss /= 2000
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8
                nearest = (-sim[i, :]).argsort()[1:top_k + 1]  # a word's dot product with itself is largest, so skip index 0; argsort is ascending, hence sorting the negated similarities
log_str = "nearest to %s:" % valid_word
for k in range(top_k):
close_word = reverse_dictionary.get(nearest[k], None)
log_str = "%s %s," % (log_str, close_word)
print(log_str)
final_embeddings = normalized_embeddings.eval()
print("*" * 10 + "final_embeddings:" + "*" * 10 + "\n", final_embeddings)
fp = open('vector_skip_gram.txt', 'w', encoding='utf8')
for k, v in reverse_dictionary.items():
t = tuple(final_embeddings[k])
s = ''
for i in t:
i = str(i)
s += i + " "
fp.write(v + " " + s + "\n")
fp.close()
# reduce the word vectors to 2-D with t-SNE and plot them
def plot_with_labels(low_dim_embs, plot_labels, filename='tsne_skip_gram.png'):
assert low_dim_embs.shape[0] >= len(plot_labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) # in inches
for i, label in enumerate(plot_labels):
x, y = low_dim_embs[i, :]
plt.scatter(x, y)
plt.annotate(u'{}'.format(label),
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
plt.savefig(filename)
try:
    mpl.rcParams['font.sans-serif'] = ['SimHei']  # so Chinese characters are displayed correctly
    mpl.rcParams['axes.unicode_minus'] = False  # so the minus sign is displayed correctly
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
plot_labels = [reverse_dictionary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, plot_labels)
except ImportError:
print("Please install sklearn, matplotlib, and scipy to visualize embeddings.")
GBDT
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
def build_model_gbdt(x_train,y_train):
    # base model; learning_rate, max_depth and n_estimators are tuned by the grid search below
    estimator = GradientBoostingClassifier(loss='deviance', subsample=0.85, max_depth=5, n_estimators=30)
    param_grid = {
        'learning_rate': [0.01, 0.05, 0.08, 0.1, 0.2],
        'max_depth': range(3, 10, 2),
        'n_estimators': range(20, 100, 10)
    }
    gbdt = GridSearchCV(estimator, param_grid, cv=3)
    gbdt.fit(x_train, y_train)
    print(gbdt.best_params_)
return gbdt
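For completeness, a usage sketch with random dummy data; the feature layout is made up (imagine embedding-similarity features plus other listing features, with labels derived from user actions such as booked vs. skipped):

import numpy as np
x_train = np.random.rand(200, 10)        # e.g. embedding-similarity features + other listing features
y_train = np.random.randint(0, 2, 200)   # e.g. 1 = booked / clicked, 0 = skipped
model = build_model_gbdt(x_train, y_train)
print(model.best_score_)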