
Graph representation learning with node2vec

Author: Khalid Salama
Date created: 2021/05/15
Last modified: 2021/05/15
Description: Implementation of the node2vec model to generate embeddings for movies from the MovieLens dataset.

ⓘ This example uses Keras 2



Introduction

Learning useful representations from objects structured as graphs is useful for a variety of machine learning (ML) applications, such as social and communication network analysis, biomedical research, and recommendation systems. Graph representation learning aims to learn embeddings for the graph nodes, which can be used for a variety of ML tasks such as node label prediction (e.g. categorizing an article based on its citations) and link prediction (e.g. recommending an interest group to a user in a social network).

node2vec is a simple, yet scalable and effective technique for learning low-dimensional embeddings for nodes in a graph by optimizing a neighborhood-preserving objective. The aim is to learn similar embeddings for neighboring nodes, with respect to the graph structure.

Given your data items structured as a graph (where the items are represented as nodes and the relationships between items are represented as edges), node2vec works as follows:

  1. Generate item sequences using (biased) random walks.
  2. Create positive and negative training examples from these sequences.
  3. Train a word2vec model (skip-gram) to learn embeddings for the items.

In this example, we demonstrate the node2vec technique on the small version of the MovieLens dataset to learn movie embeddings. Such a dataset can be represented as a graph by treating the movies as nodes, and creating edges between movies that have similar ratings by users. The learned movie embeddings can be used for tasks such as movie recommendation or movie genre prediction.

This example requires the networkx package, which can be installed using the following command:

pip install networkx

Setup

import os
from collections import defaultdict
import math
import networkx as nx
import random
from tqdm import tqdm
from zipfile import ZipFile
from urllib.request import urlretrieve
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt

Download the MovieLens dataset and prepare the data

The small version of the MovieLens dataset includes around 100k ratings from 610 users on 9,742 movies.

First, let's download the dataset. The downloaded folder will contain the data files; in this example, we only need the movies.csv and ratings.csv files.

urlretrieve(
    "http://files.grouplens.org/datasets/movielens/ml-latest-small.zip", "movielens.zip"
)
ZipFile("movielens.zip", "r").extractall()

Then, we load the data into a Pandas DataFrame and perform some basic preprocessing.

# Load movies to a DataFrame.
movies = pd.read_csv("ml-latest-small/movies.csv")
# Create a `movieId` string.
movies["movieId"] = movies["movieId"].apply(lambda x: f"movie_{x}")

# Load ratings to a DataFrame.
ratings = pd.read_csv("ml-latest-small/ratings.csv")
# Convert the `ratings` to floating point
ratings["rating"] = ratings["rating"].apply(lambda x: float(x))
# Create the `movie_id` string.
ratings["movieId"] = ratings["movieId"].apply(lambda x: f"movie_{x}")

print("Movies data shape:", movies.shape)
print("Ratings data shape:", ratings.shape)
Movies data shape: (9742, 3)
Ratings data shape: (100836, 4)

Let's inspect a sample instance of the ratings DataFrame.

ratings.head()
userId movieId rating timestamp
0 1 movie_1 4.0 964982703
1 1 movie_3 4.0 964981247
2 1 movie_6 4.0 964982224
3 1 movie_47 5.0 964983815
4 1 movie_50 5.0 964982931

Next, let's check a sample instance of the movies DataFrame.

movies.head()
movieId title genres
0 movie_1 Toy Story (1995) Adventure|Animation|Children|Comedy|Fantasy
1 movie_2 Jumanji (1995) Adventure|Children|Fantasy
2 movie_3 Grumpier Old Men (1995) Comedy|Romance
3 movie_4 Waiting to Exhale (1995) Comedy|Drama|Romance
4 movie_5 Father of the Bride Part II (1995) Comedy

Implement two utility functions for the movies DataFrame.

def get_movie_title_by_id(movieId):
    return list(movies[movies.movieId == movieId].title)[0]


def get_movie_id_by_title(title):
    return list(movies[movies.title == title].movieId)[0]
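
A quick usage check of these helpers (the expected output below is read off the movies table shown above):

print(get_movie_title_by_id("movie_1"))           # Toy Story (1995)
print(get_movie_id_by_title("Toy Story (1995)"))  # movie_1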

Construct the movies graph

We create an edge between two movie nodes in the graph if both movies are rated by the same user with rating >= min_rating. The weight of the edge will be based on the pointwise mutual information between the two movies, computed as log(xy) - log(x) - log(y) + log(D) (a small worked example follows the list), where:

  • xy is how many users rated both movie x and movie y with >= min_rating.
  • x is how many users rated movie x >= min_rating.
  • y is how many users rated movie y >= min_rating.
  • D is the total number of movie ratings >= min_rating.
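
A small worked example of this formula, using made-up counts (the numbers below are purely illustrative and are not taken from the dataset):

xy = 25     # users who rated both movie x and movie y >= min_rating
x = 100     # users who rated movie x >= min_rating
y = 150     # users who rated movie y >= min_rating
D = 50000   # total number of ratings >= min_rating
pmi = math.log(xy) - math.log(x) - math.log(y) + math.log(D)
print(round(pmi, 2))       # 4.42: the two movies co-occur far more often than chance
print(round(pmi * xy, 2))  # 110.57: the edge weight used in Step 2 also scales with the co-occurrence count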

Step 1: create the weighted edges between movies.

min_rating = 5
pair_frequency = defaultdict(int)
item_frequency = defaultdict(int)

# Filter instances where rating is greater than or equal to min_rating.
rated_movies = ratings[ratings.rating >= min_rating]
# Group instances by user.
movies_grouped_by_users = list(rated_movies.groupby("userId"))
for group in tqdm(
    movies_grouped_by_users,
    position=0,
    leave=True,
    desc="Compute movie rating frequencies",
):
    # Get a list of movies rated by the user.
    current_movies = list(group[1]["movieId"])

    for i in range(len(current_movies)):
        item_frequency[current_movies[i]] += 1
        for j in range(i + 1, len(current_movies)):
            x = min(current_movies[i], current_movies[j])
            y = max(current_movies[i], current_movies[j])
            pair_frequency[(x, y)] += 1
Compute movie rating frequencies: 100%|███████████████████████████████████████████████████████████████████████████| 573/573 [00:00<00:00, 1049.83it/s]

Step 2: create the graph with the nodes and the edges

To reduce the number of edges between nodes, we only add an edge between two movies if the weight of the edge is at least min_weight.

min_weight = 10
D = math.log(sum(item_frequency.values()))

# Create the movies undirected graph.
movies_graph = nx.Graph()
# Add weighted edges between movies.
# This automatically adds the movie nodes to the graph.
for pair in tqdm(
    pair_frequency, position=0, leave=True, desc="Creating the movie graph"
):
    x, y = pair
    xy_frequency = pair_frequency[pair]
    x_frequency = item_frequency[x]
    y_frequency = item_frequency[y]
    pmi = math.log(xy_frequency) - math.log(x_frequency) - math.log(y_frequency) + D
    weight = pmi * xy_frequency
    # Only include edges with weight >= min_weight.
    if weight >= min_weight:
        movies_graph.add_edge(x, y, weight=weight)
Creating the movie graph: 100%|███████████████████████████████████████████████████████████████████████████| 298586/298586 [00:00<00:00, 552893.62it/s]

Let's display the total number of nodes and edges in the graph. Note that the number of nodes is less than the total number of movies, since only movies that have edges to other movies are added.

print("Total number of graph nodes:", movies_graph.number_of_nodes())
print("Total number of graph edges:", movies_graph.number_of_edges())
Total number of graph nodes: 1405
Total number of graph edges: 40043

Let's display the average node degree (number of neighbours) in the graph.

degrees = []
for node in movies_graph.nodes:
    degrees.append(movies_graph.degree[node])

print("Average node degree:", round(sum(degrees) / len(degrees), 2))
Average node degree: 57.0

Step 3: create vocabulary and a mapping from tokens to integer indices

The vocabulary is the nodes (movie IDs) in the graph.

vocabulary = ["NA"] + list(movies_graph.nodes)
vocabulary_lookup = {token: idx for idx, token in enumerate(vocabulary)}

Implement the biased random walk

A random walk starts from a given node, and randomly picks a neighbour node to move to. If the edges are weighted, the neighbour is selected probabilistically with respect to the weights of the edges between the current node and its neighbours. This procedure is repeated for num_steps to generate a sequence of related nodes.

The biased random walk balances between breadth-first sampling (where only local neighbours are visited) and depth-first sampling (where distant neighbours are visited) by introducing the following two parameters:

  1. Return parameter (p): Controls the likelihood of immediately revisiting a node in the walk. Setting it to a high value encourages moderate exploration, while setting it to a low value would keep the walk local.
  2. In-out parameter (q): Allows the search to differentiate between inward and outward nodes. Setting it to a high value biases the random walk towards local nodes, while setting it to a low value biases the walk to visit nodes which are further away (a toy illustration of this effect follows the implementation below).
def next_step(graph, previous, current, p, q):
    neighbors = list(graph.neighbors(current))

    weights = []
    # Adjust the weights of the edges to the neighbors with respect to p and q.
    for neighbor in neighbors:
        if neighbor == previous:
            # Control the probability to return to the previous node.
            weights.append(graph[current][neighbor]["weight"] / p)
        elif graph.has_edge(neighbor, previous):
            # The probability of visiting a local node.
            weights.append(graph[current][neighbor]["weight"])
        else:
            # Control the probability to move forward.
            weights.append(graph[current][neighbor]["weight"] / q)

    # Compute the probabilities of visiting each neighbor.
    weight_sum = sum(weights)
    probabilities = [weight / weight_sum for weight in weights]
    # Probabilistically select a neighbor to visit.
    next = np.random.choice(neighbors, size=1, p=probabilities)[0]
    return next


def random_walk(graph, num_walks, num_steps, p, q):
    walks = []
    nodes = list(graph.nodes())
    # Perform multiple iterations of the random walk.
    for walk_iteration in range(num_walks):
        random.shuffle(nodes)

        for node in tqdm(
            nodes,
            position=0,
            leave=True,
            desc=f"Random walks iteration {walk_iteration + 1} of {num_walks}",
        ):
            # Start the walk with a random node from the graph.
            walk = [node]
            # Randomly walk for num_steps.
            while len(walk) < num_steps:
                current = walk[-1]
                previous = walk[-2] if len(walk) > 1 else None
                # Compute the next node to visit.
                next = next_step(graph, previous, current, p, q)
                walk.append(next)
            # Replace node ids (movie ids) in the walk with token ids.
            walk = [vocabulary_lookup[token] for token in walk]
            # Add the walk to the generated sequence.
            walks.append(walk)

    return walks
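
As a toy illustration (on a hypothetical four-node graph, not the movies graph), the sketch below shows how the in-out parameter q reshapes the transition probabilities computed by next_step: walking a -> b, node "d" is only reachable by moving outward, so a small q makes the walk visit it far more often than a large q.

toy = nx.Graph()
toy.add_edge("a", "b", weight=1.0)
toy.add_edge("b", "c", weight=1.0)  # "c" is also a neighbour of "a", i.e. a "local" node
toy.add_edge("a", "c", weight=1.0)
toy.add_edge("b", "d", weight=1.0)  # "d" is only reachable by moving outward

np.random.seed(42)
for q_value in [0.25, 4.0]:
    # Take many single steps from "b" (having arrived from "a") and count visits to "d".
    samples = [next_step(toy, previous="a", current="b", p=1, q=q_value) for _ in range(1000)]
    print(f"q={q_value}: fraction of steps to 'd' =", samples.count("d") / len(samples))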

Generate training data using the biased random walk

You can explore different configurations of p and q to get different results of related movies.

# Random walk return parameter.
p = 1
# Random walk in-out parameter.
q = 1
# Number of iterations of random walks.
num_walks = 5
# Number of steps of each random walk.
num_steps = 10
walks = random_walk(movies_graph, num_walks, num_steps, p, q)

print("Number of walks generated:", len(walks))
Random walks iteration 1 of 5: 100%|█████████████████████████████████████████████████████████████████████████████| 1405/1405 [00:04<00:00, 291.76it/s]
Random walks iteration 2 of 5: 100%|█████████████████████████████████████████████████████████████████████████████| 1405/1405 [00:04<00:00, 302.56it/s]
Random walks iteration 3 of 5: 100%|█████████████████████████████████████████████████████████████████████████████| 1405/1405 [00:04<00:00, 294.52it/s]
Random walks iteration 4 of 5: 100%|█████████████████████████████████████████████████████████████████████████████| 1405/1405 [00:04<00:00, 304.06it/s]
Random walks iteration 5 of 5: 100%|█████████████████████████████████████████████████████████████████████████████| 1405/1405 [00:04<00:00, 302.15it/s]

Number of walks generated: 7025

Generate positive and negative examples

To train a skip-gram model, we use the generated walks to create positive and negative training examples (a toy sketch of the underlying skipgrams utility follows the list). Each example includes the following features:

  1. target: a movie in a walk sequence.
  2. context: another movie in a walk sequence.
  3. weight: how many times these two movies occurred in walk sequences.
  4. label: the label is 1 if these two movies are samples from the walk sequences, otherwise (i.e. if randomly sampled) the label is 0.
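
Before aggregating the pairs into weighted examples, it can help to see what keras.preprocessing.sequence.skipgrams produces on its own. A minimal sketch, using a tiny hypothetical walk of token ids (the ids and vocabulary size below are made up):

toy_walk = [3, 7, 12, 7]
toy_pairs, toy_labels = keras.preprocessing.sequence.skipgrams(
    toy_walk,
    vocabulary_size=20,    # hypothetical vocabulary size
    window_size=2,
    negative_samples=1.0,  # one random negative pair per positive pair
)
for (target, context), label in zip(toy_pairs, toy_labels):
    print(f"target={target}, context={context}, label={label}")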

Generate examples

def generate_examples(sequences, window_size, num_negative_samples, vocabulary_size):
    example_weights = defaultdict(int)
    # Iterate over all sequences (walks).
    for sequence in tqdm(
        sequences,
        position=0,
        leave=True,
        desc=f"Generating positive and negative examples",
    ):
        # Generate positive and negative skip-gram pairs for a sequence (walk).
        pairs, labels = keras.preprocessing.sequence.skipgrams(
            sequence,
            vocabulary_size=vocabulary_size,
            window_size=window_size,
            negative_samples=num_negative_samples,
        )
        for idx in range(len(pairs)):
            pair = pairs[idx]
            label = labels[idx]
            target, context = min(pair[0], pair[1]), max(pair[0], pair[1])
            if target == context:
                continue
            entry = (target, context, label)
            example_weights[entry] += 1

    targets, contexts, labels, weights = [], [], [], []
    for entry in example_weights:
        weight = example_weights[entry]
        target, context, label = entry
        targets.append(target)
        contexts.append(context)
        labels.append(label)
        weights.append(weight)

    return np.array(targets), np.array(contexts), np.array(labels), np.array(weights)


num_negative_samples = 4
targets, contexts, labels, weights = generate_examples(
    sequences=walks,
    window_size=num_steps,
    num_negative_samples=num_negative_samples,
    vocabulary_size=len(vocabulary),
)
Generating positive and negative examples: 100%|██████████████████████████████████████████████████████████████████| 7025/7025 [00:11<00:00, 617.64it/s]

Let's display the shapes of the outputs.

print(f"Targets shape: {targets.shape}")
print(f"Contexts shape: {contexts.shape}")
print(f"Labels shape: {labels.shape}")
print(f"Weights shape: {weights.shape}")
Targets shape: (881412,)
Contexts shape: (881412,)
Labels shape: (881412,)
Weights shape: (881412,)

Convert the data into tf.data.Dataset objects

batch_size = 1024


def create_dataset(targets, contexts, labels, weights, batch_size):
    inputs = {
        "target": targets,
        "context": contexts,
    }
    dataset = tf.data.Dataset.from_tensor_slices((inputs, labels, weights))
    dataset = dataset.shuffle(buffer_size=batch_size * 2)
    dataset = dataset.batch(batch_size, drop_remainder=True)
    dataset = dataset.prefetch(tf.data.AUTOTUNE)
    return dataset


dataset = create_dataset(
    targets=targets,
    contexts=contexts,
    labels=labels,
    weights=weights,
    batch_size=batch_size,
)
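
A quick, illustrative sanity check of the pipeline is to pull a single batch and inspect its structure; each element is an (inputs, labels, weights) tuple of batch_size-long tensors.

for inputs, labels, weights in dataset.take(1):
    # The inputs dict holds the "target" and "context" token ids.
    print(inputs["target"].shape, inputs["context"].shape, labels.shape, weights.shape)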

Train the skip-gram model

Our skip-gram is a simple binary classification model that works as follows (a numeric sketch follows the list):

  1. An embedding is looked up for the target movie.
  2. An embedding is looked up for the context movie.
  3. The dot product is computed between these two embeddings.
  4. The result (after a sigmoid activation) is compared to the label.
  5. A binary crossentropy loss is used.
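
A minimal numeric sketch of steps 3-5 for a single (hypothetical) pair of 3-dimensional embeddings, independent of the Keras model defined below:

target_vec = np.array([0.2, -0.1, 0.4])   # hypothetical embedding of the target movie
context_vec = np.array([0.3, 0.0, 0.5])   # hypothetical embedding of the context movie
logit = np.dot(target_vec, context_vec)   # step 3: dot product -> 0.26
prob = 1.0 / (1.0 + np.exp(-logit))       # step 4: sigmoid -> ~0.565
loss = -np.log(prob)                      # step 5: binary crossentropy for label 1 -> ~0.572
print(round(float(logit), 3), round(float(prob), 3), round(float(loss), 3))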
learning_rate = 0.001
embedding_dim = 50
num_epochs = 10

Implement the model

def create_model(vocabulary_size, embedding_dim):

    inputs = {
        "target": layers.Input(name="target", shape=(), dtype="int32"),
        "context": layers.Input(name="context", shape=(), dtype="int32"),
    }
    # Initialize item embeddings.
    embed_item = layers.Embedding(
        input_dim=vocabulary_size,
        output_dim=embedding_dim,
        embeddings_initializer="he_normal",
        embeddings_regularizer=keras.regularizers.l2(1e-6),
        name="item_embeddings",
    )
    # Lookup embeddings for target.
    target_embeddings = embed_item(inputs["target"])
    # Lookup embeddings for context.
    context_embeddings = embed_item(inputs["context"])
    # Compute dot similarity between target and context embeddings.
    logits = layers.Dot(axes=1, normalize=False, name="dot_similarity")(
        [target_embeddings, context_embeddings]
    )
    # Create the model.
    model = keras.Model(inputs=inputs, outputs=logits)
    return model

Train the model

We instantiate the model and compile it.

model = create_model(len(vocabulary), embedding_dim)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate),
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
)

Let's plot the model.

keras.utils.plot_model(
    model,
    show_shapes=True,
    show_dtype=True,
    show_layer_names=True,
)

(Plot of the model architecture.)

Now, let's train the model on the dataset.

history = model.fit(dataset, epochs=num_epochs)
Epoch 1/10
860/860 [==============================] - 5s 5ms/step - loss: 2.4527
Epoch 2/10
860/860 [==============================] - 4s 5ms/step - loss: 2.3431
Epoch 3/10
860/860 [==============================] - 4s 4ms/step - loss: 2.3351
Epoch 4/10
860/860 [==============================] - 4s 4ms/step - loss: 2.3301
Epoch 5/10
860/860 [==============================] - 4s 5ms/step - loss: 2.3259
Epoch 6/10
860/860 [==============================] - 4s 4ms/step - loss: 2.3223
Epoch 7/10
860/860 [==============================] - 4s 5ms/step - loss: 2.3191
Epoch 8/10
860/860 [==============================] - 4s 4ms/step - loss: 2.3160
Epoch 9/10
860/860 [==============================] - 4s 4ms/step - loss: 2.3130
Epoch 10/10
860/860 [==============================] - 4s 5ms/step - loss: 2.3104

Finally, we plot the learning history.

plt.plot(history.history["loss"])
plt.ylabel("loss")
plt.xlabel("epoch")
plt.show()

(Plot of the training loss per epoch.)


Analyze the learned embeddings

movie_embeddings = model.get_layer("item_embeddings").get_weights()[0]
print("Embeddings shape:", movie_embeddings.shape)
Embeddings shape: (1406, 50)

Define a list of some movies called query_movies.

query_movies = [
    "Matrix, The (1999)",
    "Star Wars: Episode IV - A New Hope (1977)",
    "Lion King, The (1994)",
    "Terminator 2: Judgment Day (1991)",
    "Godfather, The (1972)",
]

Get the embeddings of the movies in query_movies.

query_embeddings = []

for movie_title in query_movies:
    movieId = get_movie_id_by_title(movie_title)
    token_id = vocabulary_lookup[movieId]
    movie_embedding = movie_embeddings[token_id]
    query_embeddings.append(movie_embedding)

query_embeddings = np.array(query_embeddings)

Compute the cosine similarity between the embeddings of query_movies and all the other movies, then pick the top k for each.

similarities = tf.linalg.matmul(
    tf.math.l2_normalize(query_embeddings),
    tf.math.l2_normalize(movie_embeddings),
    transpose_b=True,
)

_, indices = tf.math.top_k(similarities, k=5)
indices = indices.numpy().tolist()

Display the top related movies for each movie in query_movies.

for idx, title in enumerate(query_movies):
    print(title)
    print("".rjust(len(title), "-"))
    similar_tokens = indices[idx]
    for token in similar_tokens:
        similar_movieId = vocabulary[token]
        similar_title = get_movie_title_by_id(similar_movieId)
        print(f"- {similar_title}")
    print()
Matrix, The (1999)
------------------
- Matrix, The (1999)
- Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
- Schindler's List (1993)
- Star Wars: Episode IV - A New Hope (1977)
- Lord of the Rings: The Fellowship of the Ring, The (2001)
Star Wars: Episode IV - A New Hope (1977)
-----------------------------------------
- Star Wars: Episode IV - A New Hope (1977)
- Schindler's List (1993)
- Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
- Matrix, The (1999)
- Pulp Fiction (1994)
Lion King, The (1994)
---------------------
- Lion King, The (1994)
- Jurassic Park (1993)
- Independence Day (a.k.a. ID4) (1996)
- Beauty and the Beast (1991)
- Mrs. Doubtfire (1993)
Terminator 2: Judgment Day (1991)
---------------------------------
- Schindler's List (1993)
- Jurassic Park (1993)
- Terminator 2: Judgment Day (1991)
- Star Wars: Episode IV - A New Hope (1977)
- Back to the Future (1985)
Godfather, The (1972)
---------------------
- Apocalypse Now (1979)
- Fargo (1996)
- Godfather, The (1972)
- Schindler's List (1993)
- Casablanca (1942)

Visualize the embeddings using the Embedding Projector

import io

out_v = io.open("embeddings.tsv", "w", encoding="utf-8")
out_m = io.open("metadata.tsv", "w", encoding="utf-8")

for idx, movie_id in enumerate(vocabulary[1:]):
    movie_title = list(movies[movies.movieId == movie_id].title)[0]
    vector = movie_embeddings[idx]
    out_v.write("\t".join([str(x) for x in vector]) + "\n")
    out_m.write(movie_title + "\n")

out_v.close()
out_m.close()

Download the embeddings.tsv and metadata.tsv to analyze the obtained embeddings in the Embedding Projector.

Example available on HuggingFace

Trained model | Demo