
GPT text generation from scratch with KerasHub

Author: Jesse Chan
Date created: 2022/07/25
Last modified: 2022/07/25
Description: Using KerasHub to train a mini-GPT model for text generation.

ⓘ This example uses Keras 3



Introduction

In this example, we will use KerasHub to build a scaled down Generative Pre-Trained (GPT) model. GPT is a Transformer-based model that allows you to generate sophisticated text from a prompt.

We will train the model on the simplebooks-92 corpus, a dataset made from several novels. It is a good dataset for this example since it has a small vocabulary and high word frequency, which is beneficial when training a model with few parameters.

This example combines concepts from Text generation with a miniature GPT with KerasHub abstractions. We will demonstrate how KerasHub tokenization, layers and metrics simplify the training process, and then show how to generate output text using the KerasHub sampling utilities.

Note: if you are running this example on Colab, make sure to enable GPU runtime for faster training.

This example requires KerasHub. You can install it via the following command: pip install keras-hub


Setup

!pip install -q --upgrade keras-hub
!pip install -q --upgrade keras  # Upgrade to Keras 3.
import os
import keras_hub
import keras

import tensorflow.data as tf_data
import tensorflow.strings as tf_strings

Settings & hyperparameters

# Data
BATCH_SIZE = 64
MIN_STRING_LEN = 512  # Strings shorter than this will be discarded
SEQ_LEN = 128  # Length of training sequences, in tokens

# Model
EMBED_DIM = 256
FEED_FORWARD_DIM = 128
NUM_HEADS = 3
NUM_LAYERS = 2
VOCAB_SIZE = 5000  # Limits parameters in model.

# Training
EPOCHS = 5

# Inference
NUM_TOKENS_TO_GENERATE = 80

Load the data

Now, let's download the dataset! The SimpleBooks dataset consists of 1,573 Gutenberg books, and has one of the smallest vocabulary size to word-level tokens ratios. It has a vocabulary size of ~98k, a third of WikiText-103's, with around the same number of tokens (~100M). This makes it easy to fit a small model.

keras.utils.get_file(
    origin="https://dldata-public.s3.us-east-2.amazonaws.com/simplebooks.zip",
    extract=True,
)
dir = os.path.expanduser("~/.keras/datasets/simplebooks/")

# Load simplebooks-92 train set and filter out short lines.
raw_train_ds = (
    tf_data.TextLineDataset(dir + "simplebooks-92-raw/train.txt")
    .filter(lambda x: tf_strings.length(x) > MIN_STRING_LEN)
    .batch(BATCH_SIZE)
    .shuffle(buffer_size=256)
)

# Load simplebooks-92 validation set and filter out short lines.
raw_val_ds = (
    tf_data.TextLineDataset(dir + "simplebooks-92-raw/valid.txt")
    .filter(lambda x: tf_strings.length(x) > MIN_STRING_LEN)
    .batch(BATCH_SIZE)
)
Downloading data from https://dldata-public.s3.us-east-2.amazonaws.com/simplebooks.zip
 282386239/282386239 ━━━━━━━━━━━━━━━━━━━━ 7s 0us/step

Train the tokenizer

We train the tokenizer from the training dataset for a vocabulary size of VOCAB_SIZE, which is a tuned hyperparameter. We want to limit the vocabulary as much as possible, since as we will see later, it has a large effect on the number of model parameters. We also don't want to include too few vocabulary terms, or there would be too many out-of-vocabulary (OOV) sub-words. In addition, three tokens are reserved in the vocabulary:

  • "[PAD]" 用于将序列填充到 SEQ_LEN。此标记在 reserved_tokensvocab 中索引为 0,因为 WordPieceTokenizer(及其他层)将 0/vocab[0] 视作默认的填充。
  • "[UNK]" 用于 OOV 子词,应与 WordPieceTokenizer 中的默认值 oov_token="[UNK]" 匹配。
  • "[BOS]" 代表句子开头,但在这里技术上它表示每行训练数据的开头标记。
# Train tokenizer vocabulary
vocab = keras_hub.tokenizers.compute_word_piece_vocabulary(
    raw_train_ds,
    vocabulary_size=VOCAB_SIZE,
    lowercase=True,
    reserved_tokens=["[PAD]", "[UNK]", "[BOS]"],
)

Load tokenizer

We use the vocabulary data to initialize keras_hub.tokenizers.WordPieceTokenizer. WordPieceTokenizer is an efficient implementation of the WordPiece algorithm used by BERT and other models. It will strip, lower-case, and perform other irreversible preprocessing operations.

tokenizer = keras_hub.tokenizers.WordPieceTokenizer(
    vocabulary=vocab,
    sequence_length=SEQ_LEN,
    lowercase=True,
)
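
As a quick sanity check (a sketch assuming the cells above have run), we can confirm that the reserved tokens occupy the indices described earlier, and that the preprocessing is irreversible:

# The reserved tokens sit at the expected indices.
print(tokenizer.token_to_id("[PAD]"))  # 0
print(tokenizer.token_to_id("[UNK]"))  # 1
print(tokenizer.token_to_id("[BOS]"))  # 2

# Lower-casing is irreversible: differently cased inputs map to the same ids.
print(tokenizer(["The Quick Brown Fox"])[0, :4])
print(tokenizer(["the quick brown fox"])[0, :4])  # same ids as above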

Tokenize data

We preprocess the dataset by tokenizing and splitting it into features and labels.

# packer adds a start token
start_packer = keras_hub.layers.StartEndPacker(
    sequence_length=SEQ_LEN,
    start_value=tokenizer.token_to_id("[BOS]"),
)


def preprocess(inputs):
    outputs = tokenizer(inputs)
    features = start_packer(outputs)
    labels = outputs
    return features, labels


# Tokenize and split into train and label sequences.
train_ds = raw_train_ds.map(preprocess, num_parallel_calls=tf_data.AUTOTUNE).prefetch(
    tf_data.AUTOTUNE
)
val_ds = raw_val_ds.map(preprocess, num_parallel_calls=tf_data.AUTOTUNE).prefetch(
    tf_data.AUTOTUNE
)
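
Each features sequence is the labels sequence shifted right by one position, with the [BOS] token prepended. We can peek at a batch to confirm this (a sketch; the exact token ids vary with shuffling):

# `features` starts with the [BOS] id (2), and `labels` holds the same tokens
# shifted one position to the left, so labels[i] is the token the model should
# predict after seeing features[: i + 1].
features, labels = next(iter(train_ds))
print(features[0, :5])  # e.g. [2, t0, t1, t2, t3]
print(labels[0, :5])    # e.g. [t0, t1, t2, t3, t4]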

Build the model

We create our scaled down GPT model with the following layers:

  • One keras_hub.layers.TokenAndPositionEmbedding layer, which combines the embedding for the token and its position.
  • Multiple keras_hub.layers.TransformerDecoder layers with default causal masking; the layer has no cross-attention when run with a decoder sequence only.
  • One final dense linear layer.

inputs = keras.layers.Input(shape=(None,), dtype="int32")
# Embedding.
embedding_layer = keras_hub.layers.TokenAndPositionEmbedding(
    vocabulary_size=VOCAB_SIZE,
    sequence_length=SEQ_LEN,
    embedding_dim=EMBED_DIM,
    mask_zero=True,
)
x = embedding_layer(inputs)
# Transformer decoders.
for _ in range(NUM_LAYERS):
    decoder_layer = keras_hub.layers.TransformerDecoder(
        num_heads=NUM_HEADS,
        intermediate_dim=FEED_FORWARD_DIM,
    )
    x = decoder_layer(x)  # Giving one argument only skips cross-attention.
# Output.
outputs = keras.layers.Dense(VOCAB_SIZE)(x)
model = keras.Model(inputs=inputs, outputs=outputs)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
perplexity = keras_hub.metrics.Perplexity(from_logits=True, mask_token_id=0)
model.compile(optimizer="adam", loss=loss_fn, metrics=[perplexity])

Let's take a look at our model summary: a large majority of the parameters sit in the token_and_position_embedding layer and the output dense layer! This means that the vocabulary size (VOCAB_SIZE) has a large effect on the size of the model, while the number of Transformer decoder layers (NUM_LAYERS) affects it comparatively less.

model.summary()
Model: "functional_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Layer (type)                     Output Shape                  Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ input_layer (InputLayer)        │ (None, None)              │          0 │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ token_and_position_embedding    │ (None, None, 256)         │  1,312,768 │
│ (TokenAndPositionEmbedding)     │                           │            │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ transformer_decoder             │ (None, None, 256)         │    329,085 │
│ (TransformerDecoder)            │                           │            │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ transformer_decoder_1           │ (None, None, 256)         │    329,085 │
│ (TransformerDecoder)            │                           │            │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dense (Dense)                   │ (None, None, 5000)        │  1,285,000 │
└─────────────────────────────────┴───────────────────────────┴────────────┘
 Total params: 3,255,938 (12.42 MB)
 Trainable params: 3,255,938 (12.42 MB)
 Non-trainable params: 0 (0.00 B)
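
The two dominant parameter counts in the table are easy to verify by hand from our hyperparameters:

# Token table + position table in TokenAndPositionEmbedding:
embedding_params = VOCAB_SIZE * EMBED_DIM + SEQ_LEN * EMBED_DIM
print(embedding_params)  # 5000 * 256 + 128 * 256 = 1,312,768

# Kernel + bias in the output Dense layer:
dense_params = EMBED_DIM * VOCAB_SIZE + VOCAB_SIZE
print(dense_params)  # 256 * 5000 + 5000 = 1,285,000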

Training

Now that we have our model, let's train it with the fit() method.

model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
Epoch 1/5
 2445/2445 ━━━━━━━━━━━━━━━━━━━━ 216s 66ms/step - loss: 5.0008 - perplexity: 180.0715 - val_loss: 4.2176 - val_perplexity: 68.0438
Epoch 2/5
 2445/2445 ━━━━━━━━━━━━━━━━━━━━ 127s 48ms/step - loss: 4.1699 - perplexity: 64.7740 - val_loss: 4.0553 - val_perplexity: 57.7996
Epoch 3/5
 2445/2445 ━━━━━━━━━━━━━━━━━━━━ 126s 47ms/step - loss: 4.0286 - perplexity: 56.2138 - val_loss: 4.0134 - val_perplexity: 55.4446
Epoch 4/5
 2445/2445 ━━━━━━━━━━━━━━━━━━━━ 134s 50ms/step - loss: 3.9576 - perplexity: 52.3643 - val_loss: 3.9900 - val_perplexity: 54.1153
Epoch 5/5
 2445/2445 ━━━━━━━━━━━━━━━━━━━━ 135s 51ms/step - loss: 3.9080 - perplexity: 49.8242 - val_loss: 3.9500 - val_perplexity: 52.0006

<keras.src.callbacks.history.History at 0x7f7de0365ba0>

Inference

With our trained model, we can test it out to gauge its performance. To do this we can seed the model with an input sequence starting with the "[BOS]" token, and progressively sample the model by making predictions for each subsequent token in a loop.

To start, let's build a prompt with the same shape as our model inputs, containing only the "[BOS]" token.

# The "packer" layers adds the [BOS] token for us.
prompt_tokens = start_packer(tokenizer([""]))
prompt_tokens
<tf.Tensor: shape=(1, 128), dtype=int32, numpy=
array([[2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
      dtype=int32)>

We will use the keras_hub.samplers module for inference, which requires a callback function wrapping the model we just trained. This wrapper calls the model and returns the logit predictions for the token we are currently generating.

Note: there are two pieces of more advanced functionality available when defining your callback. The first is the ability to take in a cache of states computed in previous generation steps, which can be used to speed up generation. The second is the ability to output the final dense "hidden state" of each generated token. This is used by keras_hub.samplers.ContrastiveSampler, which avoids repetition by penalizing repeated hidden states. Both are optional, and we will ignore them for now.

def next(prompt, cache, index):
    logits = model(prompt)[:, index - 1, :]
    # Ignore hidden states for now; only needed for contrastive search.
    hidden_states = None
    return logits, hidden_states, cache
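
For completeness, here is a sketch (an assumption for illustration, not part of the original example) of a callback that also returns hidden states, as keras_hub.samplers.ContrastiveSampler would need. It uses a hypothetical feature_extractor sub-model exposing the output of the last TransformerDecoder, i.e. the representation right before the final Dense layer:

# `model.layers[-2]` is the last TransformerDecoder in the model built above.
feature_extractor = keras.Model(model.inputs, model.layers[-2].output)


def next_with_hidden_states(prompt, cache, index):
    hidden_states = feature_extractor(prompt)[:, index - 1, :]
    logits = model(prompt)[:, index - 1, :]
    return logits, hidden_states, cache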

Creating the wrapper function is the most complex part of using these functions. Now that it's done, let's test out the different utilities, starting with greedy search.

We greedily pick the most probable token at each timestep. In other words, we take the argmax of the model output.
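
Conceptually, greedy decoding amounts to the following loop (a minimal sketch using the model and prompt defined above; the GreedySampler call below does the same thing efficiently):

import numpy as np

# Fill the prompt in place, always taking the argmax of the next-token logits.
tokens = np.array(prompt_tokens)  # (1, SEQ_LEN): [BOS] followed by padding.
for i in range(1, NUM_TOKENS_TO_GENERATE):
    logits = model.predict(tokens, verbose=0)[0, i - 1, :]
    tokens[0, i] = int(np.argmax(logits))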

sampler = keras_hub.samplers.GreedySampler()
output_tokens = sampler(
    next=next,
    prompt=prompt_tokens,
    index=1,  # Start sampling immediately after the [BOS] token.
)
txt = tokenizer.detokenize(output_tokens)
print(f"Greedy search generated text: \n{txt}\n")
Greedy search generated text: 
[b'[BOS] " i \' m going to tell you , " said the boy , " i \' ll tell you , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good friend , and you \' ll be a good']

As you can see, greedy search starts out making some sense, but quickly starts repeating itself. This is a common problem with text generation, and one that some of the probabilistic text generation utilities shown later on can fix!

At a high level, beam search keeps track of the num_beams most probable sequences at each timestep, and predicts the best next token from all of those sequences. It is an improvement over greedy search since it stores more possibilities. However, it is less efficient than greedy search since it has to compute and store multiple potential sequences.

Note: beam search with num_beams=1 is identical to greedy search.
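
To make the bookkeeping concrete, here is a toy sketch of a single beam-search step over a 3-token vocabulary (hypothetical log-probabilities, not the KerasHub internals):

import numpy as np

# Each beam is a (token sequence, cumulative log-probability) pair.
beams = [([2], 0.0), ([3], -0.1)]
# Hypothetical next-token log-probabilities for each of the two beams.
next_log_probs = np.log(np.array([[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]]))

# Expand every beam by every possible token, then keep the best 2 overall.
candidates = [
    (seq + [tok], score + next_log_probs[b, tok])
    for b, (seq, score) in enumerate(beams)
    for tok in range(3)
]
beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:2]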

sampler = keras_hub.samplers.BeamSampler(num_beams=10)
output_tokens = sampler(
    next=next,
    prompt=prompt_tokens,
    index=1,
)
txt = tokenizer.detokenize(output_tokens)
print(f"Beam search generated text: \n{txt}\n")
Beam search generated text: 
[b'[BOS] " i don \' t know anything about it , " she said . " i don \' t know anything about it . i don \' t know anything about it , but i don \' t know anything about it . i don \' t know anything about it , but i don \' t know anything about it . i don \' t know anything about it , but i don \' t know it . i don \' t know it , but i don \' t know it . i don \' t know it , but i don \' t know it . i don \' t know it , but i don \' t know it . i don \'']

Similar to greedy search, beam search quickly starts repeating itself, since it is still a deterministic method.

Random search is our first probabilistic method. At each time step, it samples the next token using the softmax probabilities provided by the model.

sampler = keras_hub.samplers.RandomSampler()
output_tokens = sampler(
    next=next,
    prompt=prompt_tokens,
    index=1,
)
txt = tokenizer.detokenize(output_tokens)
print(f"Random search generated text: \n{txt}\n")
Random search generated text: 
[b'[BOS] eleanor . like ice , not children would have suspicious forehead . they will see him , no goods in her plums . i have made a stump one , on the occasion , - - it is sacred , and one is unholy - plaything - - the partial consequences , and one refuge in a style of a boy , who was his grandmother . it was a young gentleman who bore off upon the middle of the day , rush and as he maltreated the female society , were growing at once . in and out of the craid little plays , stopping']

Voilà, no repetitions! However, with random search we may see some nonsensical words appearing, since any word in the vocabulary has a chance of being drawn with this sampling method. This is fixed by our next search utility, top-k search.

Similar to random search, we sample the next token from the probability distribution provided by the model. The only difference is that here, we select out the top k most probable tokens and distribute the probability mass over them before sampling. This way, we will never sample from low-probability tokens, and hence fewer nonsensical words show up!
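
The filtering step can be sketched as follows (an illustration with a hypothetical 6-token distribution, not the KerasHub internals):

import numpy as np

probs = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])  # softmax output
k = 3

top_ids = np.argsort(probs)[::-1][:k]  # ids of the k most probable tokens
top_probs = probs[top_ids] / probs[top_ids].sum()  # redistribute the mass
sampled = np.random.choice(top_ids, p=top_probs)  # low-prob tokens never drawn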

sampler = keras_hub.samplers.TopKSampler(k=10)
output_tokens = sampler(
    next=next,
    prompt=prompt_tokens,
    index=1,
)
txt = tokenizer.detokenize(output_tokens)
print(f"Top-K search generated text: \n{txt}\n")
Top-K search generated text: 
[b'[BOS] " the young man was not the one , and the boy went away to the green forest . they were a little girl \' s wife , and the child loved him as much as he did , and he had often heard of a little girl who lived near the house . they were too tired to go , and when they went down to the barns and get into the barn , and they got the first of the barns that they had been taught to do so , and the little people went to their homes . she did , she told them that she had been a very clever , and they had made the first . she knew they']

Even with top-k search, there is something to improve upon. With top-k search, the number k is fixed, which means it selects the same number of tokens for any probability distribution. Consider two cases: one where the probability mass is concentrated over 2 words, and another where it is evenly spread across 10. Should we choose k=2 or k=10? There is no one-size-fits-all k here.

This is where top-p search comes in! Instead of choosing a k, we choose a probability p that we want the probabilities of the top tokens to sum up to. This way, we can dynamically adjust k based on the probability distribution. By setting p=0.9, if 90% of the probability mass is concentrated on the top 2 tokens, we filter out those top 2 tokens to sample from. If instead the 90% is distributed over 10 tokens, it will similarly filter out the top 10 tokens to sample from.
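
The dynamic cutoff can be sketched like this (again with a hypothetical distribution, not the KerasHub internals):

import numpy as np

probs = np.array([0.50, 0.42, 0.04, 0.02, 0.01, 0.01])  # softmax output
p = 0.9

order = np.argsort(probs)[::-1]  # token ids sorted by probability
cumulative = np.cumsum(probs[order])  # running probability mass
cutoff = np.searchsorted(cumulative, p) + 1  # smallest set with mass >= p
nucleus = order[:cutoff]  # here: the top 2 tokens (0.50 + 0.42 = 0.92)

nucleus_probs = probs[nucleus] / probs[nucleus].sum()
sampled = np.random.choice(nucleus, p=nucleus_probs)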

sampler = keras_hub.samplers.TopPSampler(p=0.5)
output_tokens = sampler(
    next=next,
    prompt=prompt_tokens,
    index=1,
)
txt = tokenizer.detokenize(output_tokens)
print(f"Top-P search generated text: \n{txt}\n")
Top-P search generated text: 
[b'[BOS] the children were both born in the spring , and the youngest sister were very much like the other children , but they did not see them . they were very happy , and their mother was a beautiful one . the youngest was one of the youngest sister of the youngest , and the youngest baby was very fond of the children . when they came home , they would see a little girl in the house , and had the beautiful family , and the children of the children had to sit and look on their backs , and the eldest children were very long , and they were so bright and happy , as they were , they had never noticed their hair ,']

Using callbacks for text generation

We can also wrap the utilities in a callback, which allows you to print out a prediction sequence for every epoch of the model! Here is an example of a callback for top-k search:

class TopKTextGenerator(keras.callbacks.Callback):
    """A callback to generate text from a trained model using top-k."""

    def __init__(self, k):
        self.sampler = keras_hub.samplers.TopKSampler(k)

    def on_epoch_end(self, epoch, logs=None):
        output_tokens = self.sampler(
            next=next,
            prompt=prompt_tokens,
            index=1,
        )
        txt = tokenizer.detokenize(output_tokens)
        print(f"Top-K search generated text: \n{txt}\n")


text_generation_callback = TopKTextGenerator(k=10)
# Dummy training loop to demonstrate callback.
model.fit(train_ds.take(1), verbose=2, epochs=2, callbacks=[text_generation_callback])
Epoch 1/2
Top-K search generated text: 
[b"[BOS] the young man was in the middle of a month , and he was able to take the crotch , but a long time , for he felt very well for himself in the sepoys ' s hands were chalks . he was the only boy , and he had a few years before been married , and the man said he was a tall one . he was a very handsome , and he was a very handsome young fellow , and a handsome , noble young man , but a boy , and man . he was a very handsome man , and was tall and handsome , and he looked like a gentleman . he was an"]
1/1 - 16s - 16s/step - loss: 3.9454 - perplexity: 51.6987
Epoch 2/2
Top-K search generated text: 
[b'[BOS] " well , it is true . it is true that i should go to the house of a collector , in the matter of prussia that there is no other way there . there is no chance of being in the habit of being in the way of an invasion . i know not what i have done , but i have seen the man in the middle of a day . the next morning i shall take him to my father , for i am not the very day of the town , which would have been a little more than the one \' s daughter , i think it over and the whole affair will be']
1/1 - 17s - 17s/step - loss: 3.7860 - perplexity: 44.0932

<keras.src.callbacks.history.History at 0x7f7de0325600>

Conclusion

To recap, in this example we used KerasHub layers to train a sub-word vocabulary, tokenize the training data, create a miniature GPT model, and perform inference with the text generation library.

If you would like to understand how Transformers work, or learn more about training full GPT models, here are some recommended further readings: the original Transformer paper, Attention Is All You Need (Vaswani et al., 2017), and the GPT-3 paper, Language Models are Few-Shot Learners (Brown et al., 2020).