Structured data preprocessing utilities

[source]

FeatureSpace

keras.utils.FeatureSpace(
    features,
    output_mode="concat",
    crosses=None,
    crossing_dim=32,
    hashing_dim=32,
    num_discretization_bins=32,
    name=None,
)

One-stop utility for preprocessing and encoding structured data.

Arguments

  • features: Dict mapping feature names to their type specification, e.g. `{"my_feature": "integer_categorical"}` or `{"my_feature": FeatureSpace.integer_categorical()}`. For a complete list of all supported types, see the "Available feature types" paragraph below.
  • output_mode: One of `"concat"` or `"dict"` (see the sketch after this list). In concat mode, all features get concatenated together into a single vector. In dict mode, the FeatureSpace returns a dict of individually encoded features (with the same keys as the input dict keys).
  • crosses: List of features to be crossed together, e.g. `crosses=[("feature_1", "feature_2")]`. The features will be "crossed" by hashing their combined value into a fixed-length vector.
  • crossing_dim: Default vector size for hashing crossed features. Defaults to `32`.
  • hashing_dim: Default vector size for hashing features of type `"integer_hashed"` and `"string_hashed"`. Defaults to `32`.
  • num_discretization_bins: Default number of bins to be used for discretizing features of type `"float_discretized"`. Defaults to `32`.
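
The difference between the two output modes can be illustrated with a small sketch (not part of the API reference above; the feature names and data are made up):

# Illustrative sketch contrasting output_mode="concat" and output_mode="dict".
import tensorflow as tf
from keras.utils import FeatureSpace

data = {
    "age": [20.0, 35.0, 50.0],
    "color": ["red", "green", "blue"],
}

concat_space = FeatureSpace(
    features={"age": "float_normalized", "color": "string_categorical"},
    output_mode="concat",
)
concat_space.adapt(tf.data.Dataset.from_tensor_slices(data))
print(concat_space(data).shape)  # a single concatenated vector per sample

dict_space = FeatureSpace(
    features={"age": "float_normalized", "color": "string_categorical"},
    output_mode="dict",
)
dict_space.adapt(tf.data.Dataset.from_tensor_slices(data))
print(dict_space(data).keys())   # one encoded tensor per feature: "age", "color"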

Available feature types

Note that all features can be referred to by their string name, e.g. `"integer_categorical"`. When using the string name, the default argument values are used.

# Plain float values.
FeatureSpace.float(name=None)

# Float values to be preprocessed via featurewise standardization
# (i.e. via a `keras.layers.Normalization` layer).
FeatureSpace.float_normalized(name=None)

# Float values to be preprocessed via linear rescaling
# (i.e. via a `keras.layers.Rescaling` layer).
FeatureSpace.float_rescaled(scale=1., offset=0., name=None)

# Float values to be discretized. By default, the discrete
# representation will then be one-hot encoded.
FeatureSpace.float_discretized(
    num_bins, bin_boundaries=None, output_mode="one_hot", name=None)

# Integer values to be indexed. By default, the discrete
# representation will then be one-hot encoded.
FeatureSpace.integer_categorical(
    max_tokens=None, num_oov_indices=1, output_mode="one_hot", name=None)

# String values to be indexed. By default, the discrete
# representation will then be one-hot encoded.
FeatureSpace.string_categorical(
    max_tokens=None, num_oov_indices=1, output_mode="one_hot", name=None)

# Integer values to be hashed into a fixed number of bins.
# By default, the discrete representation will then be one-hot encoded.
FeatureSpace.integer_hashed(num_bins, output_mode="one_hot", name=None)

# String values to be hashed into a fixed number of bins.
# By default, the discrete representation will then be one-hot encoded.
FeatureSpace.string_hashed(num_bins, output_mode="one_hot", name=None)

Examples

Basic usage with a dict of input data:

import keras
import tensorflow as tf
from keras.utils import FeatureSpace

raw_data = {
    "float_values": [0.0, 0.1, 0.2, 0.3],
    "string_values": ["zero", "one", "two", "three"],
    "int_values": [0, 1, 2, 3],
}
dataset = tf.data.Dataset.from_tensor_slices(raw_data)

feature_space = FeatureSpace(
    features={
        "float_values": "float_normalized",
        "string_values": "string_categorical",
        "int_values": "integer_categorical",
    },
    crosses=[("string_values", "int_values")],
    output_mode="concat",
)
# Before you start using the FeatureSpace,
# you must `adapt()` it on some data.
feature_space.adapt(dataset)

# You can call the FeatureSpace on a dict of data (batched or unbatched).
output_vector = feature_space(raw_data)

Basic usage with `tf.data`:

# Unlabeled data
preprocessed_ds = unlabeled_dataset.map(feature_space)

# Labeled data
preprocessed_ds = labeled_dataset.map(lambda x, y: (feature_space(x), y))
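
From here the preprocessed dataset is typically batched and prefetched so that preprocessing runs asynchronously inside the `tf.data` pipeline. A minimal sketch, assuming `labeled_dataset` yields `(features_dict, label)` pairs as above:

# Batch before mapping so the FeatureSpace preprocesses whole batches at once,
# then prefetch so preprocessing overlaps with training.
preprocessed_ds = (
    labeled_dataset
    .batch(32)
    .map(lambda x, y: (feature_space(x), y), num_parallel_calls=tf.data.AUTOTUNE)
    .prefetch(tf.data.AUTOTUNE)
)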

Basic usage with the Keras Functional API:

# Retrieve a dict of Keras Input objects
inputs = feature_space.get_inputs()
# Retrieve the corresponding encoded Keras tensors
encoded_features = feature_space.get_encoded_features()
# Build a Functional model
outputs = keras.layers.Dense(1, activation="sigmoid")(encoded_features)
model = keras.Model(inputs, outputs)
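
A common extension of this pattern (a sketch, not part of the reference above) is to keep two views of the same graph: a training model that consumes the already-encoded features produced by the `tf.data` pipeline, and an end-to-end inference model that accepts raw feature dicts and runs the preprocessing itself:

# Sketch: train on encoded features, serve on raw feature dicts.
dict_inputs = feature_space.get_inputs()                 # raw inputs
encoded_features = feature_space.get_encoded_features()  # encoded tensors

x = keras.layers.Dense(32, activation="relu")(encoded_features)
predictions = keras.layers.Dense(1, activation="sigmoid")(x)

# Trained on datasets that were already mapped through `feature_space`.
training_model = keras.Model(inputs=encoded_features, outputs=predictions)

# Applies the preprocessing internally, so it can be called on raw data.
inference_model = keras.Model(inputs=dict_inputs, outputs=predictions)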

Customizing each feature or feature cross:

feature_space = FeatureSpace(
    features={
        "float_values": FeatureSpace.float_normalized(),
        "string_values": FeatureSpace.string_categorical(max_tokens=10),
        "int_values": FeatureSpace.integer_categorical(max_tokens=10),
    },
    crosses=[
        FeatureSpace.cross(("string_values", "int_values"), crossing_dim=32)
    ],
    output_mode="concat",
)

Returning a dict of integer-encoded features:

feature_space = FeatureSpace(
    features={
        "string_values": FeatureSpace.string_categorical(output_mode="int"),
        "int_values": FeatureSpace.integer_categorical(output_mode="int"),
    },
    crosses=[
        FeatureSpace.cross(
            feature_names=("string_values", "int_values"),
            crossing_dim=32,
            output_mode="int",
        )
    ],
    output_mode="dict",
)

Specify your own Keras preprocessing layer:

from keras import layers

# Let's say that one of the features is a short text paragraph that
# we want to encode as a vector (one vector per paragraph) via TF-IDF.
data = {
    "text": ["1st string", "2nd string", "3rd string"],
}

# There's a Keras layer for this: TextVectorization.
custom_layer = layers.TextVectorization(output_mode="tf_idf")

# We can use FeatureSpace.feature to create a custom feature
# that will use our preprocessing layer.
feature_space = FeatureSpace(
    features={
        "text": FeatureSpace.feature(
            preprocessor=custom_layer, dtype="string", output_mode="float"
        ),
    },
    output_mode="concat",
)
feature_space.adapt(tf.data.Dataset.from_tensor_slices(data))
output_vector = feature_space(data)

Retrieving the underlying Keras preprocessing layers:

# The preprocessing layer of each feature is available in `.preprocessors`.
preprocessing_layer = feature_space.preprocessors["feature1"]

# The crossing layer of each feature cross is available in `.crossers`.
# It's an instance of keras.layers.HashedCrossing.
crossing_layer = feature_space.crossers["feature1_X_feature2"]
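
For example, the vocabulary learned during `adapt()` by a `string_categorical` feature can be inspected through its preprocessor. A sketch, assuming a feature named "string_values" whose underlying preprocessor is a `keras.layers.StringLookup` layer:

# Assumes the preprocessor of this feature is a StringLookup layer.
lookup_layer = feature_space.preprocessors["string_values"]
print(lookup_layer.get_vocabulary())   # OOV token followed by the seen strings
print(lookup_layer.vocabulary_size())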

Saving and reloading a FeatureSpace:

feature_space.save("featurespace.keras")
reloaded_feature_space = keras.models.load_model("featurespace.keras")
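
The reloaded FeatureSpace keeps the state learned during `adapt()`, so it can be used right away. A sketch reusing `raw_data` and `unlabeled_dataset` from the earlier examples:

# Call the reloaded FeatureSpace directly, or reuse it in a tf.data pipeline.
output_vector = reloaded_feature_space(raw_data)
preprocessed_ds = unlabeled_dataset.map(reloaded_feature_space)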