Newbie asking for help: TensorFlow model cannot be saved
As the title says: I grabbed a neural collaborative filtering model, but when I try to save it I get a warning about some kind of custom layer. I don't understand it; I don't see any custom layer in the code. The exact warning is:
CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument. warnings.warn('Custom mask layers require a config and must override '
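To make it concrete, the saving I'm attempting looks roughly like this (the file names and user/item counts are just placeholders; get_compiled_neumf_model is the builder pasted below):

num_users, num_items = 6040, 3706        # placeholder sizes, not my real data
model = get_compiled_neumf_model(num_users, num_items)

model.save_weights('neumf_weights.h5')   # what the original code does: weights only
model.save('neumf_full.h5')              # what I want to try: save the whole model; the warning shows up here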
The model code is as follows:
from tensorflow.keras import layers, initializers, regularizers, models, optimizers


def get_compiled_neumf_model(num_users, num_items, lr=0.001, mf_dim=10,
                             layers_num=(64, 32, 16, 8), reg_layers=(0, 0, 0, 0), reg_mf=0):
    # NOTE: the defaults for layers_num / reg_layers are assumed (the usual NeuMF sizes);
    # layers_num[0] is the width of the concatenated MLP input, the rest are the hidden layers.
    assert len(layers_num) == len(reg_layers)
    num_layer = len(layers_num)  # number of layers in the MLP

    # Input variables
    user_input = layers.Input(shape=(1,), dtype='int32', name='user_input')
    item_input = layers.Input(shape=(1,), dtype='int32', name='item_input')

    # Embedding layers
    mf_embedding_user = layers.Embedding(input_dim=num_users, output_dim=mf_dim, name='mf_embedding_user',
                                         embeddings_initializer=initializers.RandomNormal(),
                                         embeddings_regularizer=regularizers.l2(reg_mf),
                                         input_length=1)
    mf_embedding_item = layers.Embedding(input_dim=num_items, output_dim=mf_dim, name='mf_embedding_item',
                                         embeddings_initializer=initializers.RandomNormal(),
                                         embeddings_regularizer=regularizers.l2(reg_mf),
                                         input_length=1)
    mlp_embedding_user = layers.Embedding(input_dim=num_users, output_dim=int(layers_num[0] / 2),
                                          name='mlp_embedding_user',
                                          embeddings_initializer=initializers.RandomNormal(),
                                          embeddings_regularizer=regularizers.l2(reg_layers[0]),
                                          input_length=1)
    mlp_embedding_item = layers.Embedding(input_dim=num_items, output_dim=int(layers_num[0] / 2),
                                          name='mlp_embedding_item',
                                          embeddings_initializer=initializers.RandomNormal(),
                                          embeddings_regularizer=regularizers.l2(reg_layers[0]),
                                          input_length=1)

    # MF part: element-wise product of the two latent vectors
    mf_user_latent = layers.Flatten()(mf_embedding_user(user_input))
    mf_item_latent = layers.Flatten()(mf_embedding_item(item_input))
    mf_vector = layers.multiply([mf_user_latent, mf_item_latent])

    # MLP part: concatenate the two latent vectors, then a tower of Dense layers
    mlp_user_latent = layers.Flatten()(mlp_embedding_user(user_input))
    mlp_item_latent = layers.Flatten()(mlp_embedding_item(item_input))
    mlp_vector = layers.concatenate([mlp_user_latent, mlp_item_latent])
    for idx in range(1, num_layer):
        layer = layers.Dense(layers_num[idx], kernel_regularizer=regularizers.l2(reg_layers[idx]),
                             activation='relu', name="layer%d" % idx)
        mlp_vector = layer(mlp_vector)

    # Concatenate MF and MLP parts
    predict_vector = layers.concatenate([mf_vector, mlp_vector])

    # Final prediction layer
    prediction = layers.Dense(1, activation='sigmoid', kernel_initializer=initializers.lecun_normal(),
                              name='prediction')(predict_vector)

    model_nuemf = models.Model(inputs=[user_input, item_input], outputs=prediction)
    model_nuemf.compile(optimizer=optimizers.Adam(learning_rate=lr, clipnorm=0.5), loss='binary_crossentropy')
    return model_nuemf
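For reference, a minimal sketch of how I exercise the builder end to end; every number below is a made-up placeholder, not my actual data:

import numpy as np

num_users, num_items = 1000, 2000                      # placeholder vocabulary sizes
model = get_compiled_neumf_model(num_users, num_items)
model.summary()

# dummy implicit-feedback batch: (user id, item id) -> 0/1 label
users = np.random.randint(0, num_users, size=(256, 1))
items = np.random.randint(0, num_items, size=(256, 1))
labels = np.random.randint(0, 2, size=(256, 1)).astype('float32')
model.fit([users, items], labels, batch_size=64, epochs=1)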
It's a neural collaborative filtering model, from Dr. He Xiangnan's PhD thesis. The original code only saves the weights, but I wanted to first try whether the whole model can be saved.
Reply: What topic is this even about? You didn't introduce it at all.
Reply: As a newbie I don't dare speak up or offend anyone, so I'll just quietly bump the thread and walk away. Be quick, look sharp, and keep the merit hidden.