Gemma3CausalLMPreprocessor class

keras_hub.models.Gemma3CausalLMPreprocessor(
    tokenizer,
    image_converter=None,
    sequence_length=1024,
    add_start_token=True,
    add_end_token=True,
    max_images_per_prompt=2,
    num_vision_tokens_per_image=256,
    **kwargs
)
Gemma3 causal language model preprocessor.
This preprocessing layer is meant for use with `keras_hub.models.Gemma3CausalLM`. It can be configured in two ways: text-only and text + vision, based on whether the passed value of `image_converter` is `None`. In the former case, it takes in batches of strings; in the latter, it takes in batches of images and strings. It returns outputs in a `(x, y, sample_weight)` format, where the `y` label is the next token ID in the `x` sequence. `sample_weight` is 0 for "prompt" tokens and 1 for "response" tokens, so that the loss is computed only on the "response" tokens.
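To make that format concrete, here is a minimal sketch of unpacking the tuple for a single text-only sample (the `"token_ids"` and `"padding_mask"` keys follow the usual KerasHub causal-LM convention; verify the exact schema against your installed version):

import keras_hub

# Sketch: inspect the (x, y, sample_weight) output for one sample.
preprocessor = keras_hub.models.Gemma3CausalLMPreprocessor.from_preset(
    "gemma3_instruct_1b"
)
x, y, sample_weight = preprocessor(
    {
        "prompts": "What is the capital of India?",
        "responses": "New Delhi",
    }
)
print(x["token_ids"].shape)  # Packed input IDs, sequence_length long.
print(y.shape)               # Same length; y[i] == x["token_ids"][i + 1].
print(sample_weight)         # 0 over prompt tokens, 1 over response tokens.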
For the text + vision case, this layer replaces each instance of the `<start_of_image>` token in the prompt with `num_vision_tokens_per_image` placeholder tokens. It also returns the indices at which these vision tokens sit, so that in the model, image embeddings can be placed at the correct positions in the sequence of text embeddings. Note that if `max_images_per_prompt` is 2, you can pass either 0, 1, or 2 images per sample; a value of 0 corresponds to text-only input.
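A sketch of the vision case with one image and the default num_vision_tokens_per_image=256. The key name for the returned indices, assumed here to be "vision_indices", may differ across KerasHub versions:

import numpy as np
import keras_hub

# Sketch: one <start_of_image> tag in the prompt is expanded into 256
# placeholder tokens in the packed sequence.
preprocessor = keras_hub.models.Gemma3CausalLMPreprocessor.from_preset(
    "gemma3_instruct_4b"
)
x, y, sample_weight = preprocessor(
    {
        "prompts": "this is a lily <start_of_image>",
        "responses": "pristine!",
        "images": np.ones((896, 896, 3), dtype="float32"),
    }
)
# Assumed key name: positions of the placeholder vision tokens, used by
# the model to scatter image embeddings into the text embedding sequence.
print(x["vision_indices"])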
For use with generation, the layer also exposes two methods: `generate_preprocess()` and `generate_postprocess()`. When this preprocessor is attached to a `keras_hub.models.Gemma3CausalLM` instance, these methods will be called implicitly in `generate()`. They can also be called standalone (e.g., to precompute the preprocessing inputs for generation in a separate process).
Arguments

- tokenizer: A `keras_hub.models.Gemma3Tokenizer` instance.
- image_converter: A `keras_hub.layers.ImageConverter` instance. Defaults to `None`.
- sequence_length: The length of the packed inputs. Defaults to 1024.
- add_start_token: If `True`, the preprocessor will prepend the tokenizer start token to each input sequence. Defaults to `True`.
- add_end_token: If `True`, the preprocessor will append the tokenizer end token to each input sequence. Defaults to `True`.
- max_images_per_prompt: The maximum number of images permitted per sample in the batch. Defaults to 2.
- num_vision_tokens_per_image: The number of placeholder tokens each image is expanded to. Defaults to 256.
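Besides loading via from_preset() as in the examples below, the preprocessor can also be constructed directly from its parts; a minimal text-only sketch, assuming the gemma3_instruct_1b tokenizer preset:

import keras_hub

# Direct construction: text-only (image_converter=None), with a shorter
# packed sequence length than the default of 1024.
tokenizer = keras_hub.models.Gemma3Tokenizer.from_preset("gemma3_instruct_1b")
preprocessor = keras_hub.models.Gemma3CausalLMPreprocessor(
    tokenizer,
    sequence_length=512,
)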
Call arguments

- x: A string, `tf.Tensor`, or list of Python strings.
- y: Label data. Should always be `None`, as the layer generates labels.
- sample_weight: Label weights. Should always be `None`, as the layer generates label weights.
- sequence_length: Pass to override the configured `sequence_length` of the layer.

Examples
# === Language Gemma3 model ===
# Load the preprocessor from a preset.
preprocessor = keras_hub.models.Gemma3CausalLMPreprocessor.from_preset(
"gemma3_instruct_1b"
)
# Unbatched inputs.
preprocessor(
{
"prompts": "What is the capital of India?",
"responses": "New Delhi",
}
)
# Batched inputs.
preprocessor(
{
"prompts": [
"What is the capital of India?",
"What is the capital of Spain?"
],
"responses": ["New Delhi", "Madrid"],
}
)
# Apply preprocessing to a tf.data.Dataset.
features = {
"prompts": [
"What is the capital of India?",
"What is the capital of Spain?"
],
"responses": ["New Delhi", "Madrid"],
}
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Prepare tokens for generation (no end token).
preprocessor.generate_preprocess(["The quick brown fox jumped."])
# Map generation outputs back to strings.
preprocessor.generate_postprocess({
'token_ids': np.array([[2, 818, 3823, 8864, 37423, 32694, 236761, 0]]),
'padding_mask': np.array([[ 1, 1, 1, 1, 1, 1, 1, 0]]),
})
# === Vision and Language Gemma3 model ===
# Load the preprocessor from a preset.
preprocessor = keras_hub.models.Gemma3CausalLMPreprocessor.from_preset(
"gemma3_instruct_4b"
)
# text-only inputs (unbatched)
preprocessor(
{
"prompts": "What is the capital of India?",
"responses": "New Delhi",
}
)
# text-only inputs (batched)
preprocessor(
{
"prompts": [
"What is the capital of India?",
"What is the capital of Spain?"
],
"responses": ["New Delhi", "Madrid"],
}
)
# Unbatched inputs, with one image.
preprocessor(
{
"prompts": "this is a lily <start_of_image>",
"responses": "pristine!",
"images": np.ones((896, 896, 3), dtype="float32")
}
)
# Unbatched inputs, with two images.
preprocessor(
{
"prompts": "lily: <start_of_image>, sunflower: <start_of_image>",
"responses": "pristine!",
"images": [
np.ones((896, 896, 3), dtype="float32"),
np.ones((896, 896, 3), dtype="float32")
],
}
)
# Batched inputs, one image per prompt.
preprocessor(
{
"prompts": [
"this is a lily: <start_of_image>",
"this is a sunflower: <start_of_image>"
],
"responses": ["pristine!", "radiant!"],
"images": [
np.ones((896, 896, 3), dtype="float32"),
np.ones((896, 896, 3), dtype="float32")
]
}
)
# Can also be written this way.
preprocessor(
{
"prompts": [
"this is a lily: <start_of_image>",
"this is a sunflower: <start_of_image>"
],
"responses": ["pristine!", "radiant!"],
"images": [
[np.ones((896, 896, 3), dtype="float32")],
[np.ones((896, 896, 3), dtype="float32")]
]
}
)
# Different number of images in every sample.
preprocessor(
{
"prompts": [
"Who is this singer: <start_of_image>?",
"Who are these musicians <start_of_image>, <start_of_image>?"
],
"responses": ["Arijit Singh", "John Lennon, Paul Mccartney"],
"images": [
[
np.ones((896, 896, 3), dtype="float32"),
np.ones((896, 896, 3), dtype="float32")
],
[np.ones((896, 896, 3), dtype="float32")]
]
}
)
# Apply preprocessing to a tf.data.Dataset.
inputs = {
"prompts": [
"Who are these two: <start_of_image>, <start_of_image>",
"Who is this: <start_of_image>?",
"What is the capital of India?"
],
"responses": [
"John Lennon, Paul Mccartney",
"Arijit Singh",
"New Delhi"
],
"images": (
tf.ragged.constant(
[
[np.ones((10, 10, 3)), np.ones((10, 10, 3))],
[np.ones((10, 10, 3))],
[],
]
)
)
}
ds = tf.data.Dataset.from_tensor_slices(inputs)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
from_preset method

Gemma3CausalLMPreprocessor.from_preset(
preset, config_file="preprocessor.json", **kwargs
)
Instantiate a `keras_hub.models.Preprocessor` from a model preset.
A preset is a directory of configs, weights, and other file assets used to save and load a pretrained model. The `preset` can be passed as one of:

1. a built-in preset identifier like `'bert_base_en'`
2. a Kaggle Models handle like `'kaggle://user/bert/keras/bert_base_en'`
3. a Hugging Face handle like `'hf://user/bert_base_en'`
4. a path to a local preset directory like `'./bert_base_en'`

For any `Preprocessor` subclass, you can run `cls.presets.keys()` to list all built-in presets available on the class.
As there are usually multiple preprocessing classes for a given model, this method should be called on a specific subclass, e.g. `keras_hub.models.BertTextClassifierPreprocessor.from_preset()`.
Arguments

- preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local preset directory.
- config_file: string. The name of the config file to load inside the preset directory. Defaults to `"preprocessor.json"`.
Examples
# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.CausalLMPreprocessor.from_preset(
"gemma_2b_en",
)
# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset(
"bert_base_en",
)
| Preset | Parameters | Description |
|---|---|---|
| gemma3_270m | 268.10M | 270-million parameter model (170 million embedding parameters, 100 million transformer parameters), 18 layers, text-only model designed for hyper-efficient AI, especially well suited to task-specific fine-tuning. |
| gemma3_instruct_270m | 268.10M | 270-million parameter model (170 million embedding parameters, 100 million transformer parameters), 18 layers, text-only instruction-tuned model designed for hyper-efficient AI, especially well suited to task-specific fine-tuning. |
| gemma3_1b | 999.89M | 1 billion parameter, 26-layer, text-only pretrained Gemma3 model. |
| gemma3_instruct_1b | 999.89M | 1 billion parameter, 26-layer, text-only instruction-tuned Gemma3 model. |
| gemma3_4b_text | 3.88B | 4 billion parameter, 34-layer, text-only pretrained Gemma3 model. |
| gemma3_instruct_4b_text | 3.88B | 4 billion parameter, 34-layer, text-only instruction-tuned Gemma3 model. |
| gemma3_4b | 4.30B | 4 billion parameter, 34-layer, vision + text pretrained Gemma3 model. |
| gemma3_instruct_4b | 4.30B | 4 billion parameter, 34-layer, vision + text instruction-tuned Gemma3 model. |
| gemma3_12b_text | 11.77B | 12 billion parameter, 48-layer, text-only pretrained Gemma3 model. |
| gemma3_instruct_12b_text | 11.77B | 12 billion parameter, 48-layer, text-only instruction-tuned Gemma3 model. |
| gemma3_12b | 12.19B | 12 billion parameter, 48-layer, vision + text pretrained Gemma3 model. |
| gemma3_instruct_12b | 12.19B | 12 billion parameter, 48-layer, vision + text instruction-tuned Gemma3 model. |
| gemma3_27b_text | 27.01B | 27 billion parameter, 62-layer, text-only pretrained Gemma3 model. |
| gemma3_instruct_27b_text | 27.01B | 27 billion parameter, 62-layer, text-only instruction-tuned Gemma3 model. |
| gemma3_27b | 27.43B | 27 billion parameter, 62-layer, vision + text pretrained Gemma3 model. |
| gemma3_instruct_27b | 27.43B | 27 billion parameter, 62-layer, vision + text instruction-tuned Gemma3 model. |
tokenizer property

keras_hub.models.Gemma3CausalLMPreprocessor.tokenizer

The tokenizer used to tokenize strings.
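For example (a sketch; tokenize() and detokenize() are the standard KerasHub tokenizer methods):

import keras_hub

# Sketch: use the attached tokenizer directly to round-trip a string.
preprocessor = keras_hub.models.Gemma3CausalLMPreprocessor.from_preset(
    "gemma3_instruct_1b"
)
token_ids = preprocessor.tokenizer.tokenize("What is the capital of India?")
text = preprocessor.tokenizer.detokenize(token_ids)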