TensorFlow Pitfalls

While reproducing a model built in TensorFlow (1.5) with PyTorch, I ran into a few TensorFlow-specific problems. This post collects them in one place.

Weight Initialization

The prior work being reproduced:
Paper: https://daooshee.github.io/BMVC2018website/
Code: https://github.com/weichen582/RetinexNet
First, here is the model I want to port. To build the same model in PyTorch, I need to understand and test how the TensorFlow version runs, printing some tensor values so I can confirm that both frameworks produce the same results.

import tensorflow as tf

A = tf.random_uniform((3, 2, 3, 4), minval=0.0, maxval=4.0, seed=0)
B = tf.ones((3, 2, 3, 4))


def concat(layers):
    return tf.concat(layers, axis=3)


def DecomNet(input_im, layer_num, channel=64, kernel_size=3):
    input_max = tf.reduce_max(input_im, axis=3, keepdims=True)
    input_im = concat([input_max, input_im])
    with tf.variable_scope('DecomNet', reuse=tf.AUTO_REUSE):
        conv = tf.layers.conv2d(input_im, channel, kernel_size * 3, padding='same', activation=None,
                                name="shallow_feature_extraction")
        for idx in range(layer_num):
            conv = tf.layers.conv2d(conv, channel, kernel_size, padding='same', activation=tf.nn.relu,
                                    name='activated_layer_%d' % idx)
        conv = tf.layers.conv2d(conv, 4, kernel_size, padding='same', activation=None, name='recon_layer')

    R = tf.sigmoid(conv[:, :, :, 0:3])
    L = tf.sigmoid(conv[:, :, :, 3:4])

    return R, L


with tf.Session():
    print(A.eval())
# init = tf.global_variables_initializer()
with tf.Session() as sess:
    [R_low, I_low] = DecomNet(A, layer_num=5)
    # sess.run(tf.global_variables_initializer())  # left commented out: this is what triggers the error below
    print(R_low.eval())

Running this fails with: FailedPreconditionError (see above for traceback): Attempting to use uninitialized value
In other words, the network weights must be initialized before the model is run.

Solution

Initialize the weights with sess.run(tf.global_variables_initializer()), and make sure the call comes immediately after the statements that build the model (so that every variable already exists when the initializer runs). The working code:

import tensorflow as tf

A = tf.random_uniform((3, 2, 3, 4), minval=0.0, maxval=4.0, seed=0)
B = tf.ones((3, 2, 3, 4))


def concat(layers):
    return tf.concat(layers, axis=3)


def DecomNet(input_im, layer_num, channel=64, kernel_size=3):
    input_max = tf.reduce_max(input_im, axis=3, keepdims=True)
    input_im = concat([input_max, input_im])
    with tf.variable_scope('DecomNet', reuse=tf.AUTO_REUSE):
        conv = tf.layers.conv2d(input_im, channel, kernel_size * 3, padding='same', activation=None,
                                name="shallow_feature_extraction")
        for idx in range(layer_num):
            conv = tf.layers.conv2d(conv, channel, kernel_size, padding='same', activation=tf.nn.relu,
                                    name='activated_layer_%d' % idx)
        conv = tf.layers.conv2d(conv, 4, kernel_size, padding='same', activation=None, name='recon_layer')

    R = tf.sigmoid(conv[:, :, :, 0:3])
    L = tf.sigmoid(conv[:, :, :, 3:4])

    return R, L


with tf.Session():
    print(A.eval())
# init = tf.global_variables_initializer()
with tf.Session() as sess:
    [R_low, I_low] = DecomNet(A, layer_num=5)
    sess.run(tf.global_variables_initializer())  # the key line that prevents the error
    print(R_low.eval())
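For the PyTorch side of the reproduction, the pitfall above disappears: PyTorch modules initialize their weights in the constructor, so no separate initializer call is needed. Below is a minimal sketch of a PyTorch counterpart of DecomNet. It is my own translation, not code from the RetinexNet repository: it assumes a 3-channel RGB input in NCHW layout (channel axis 1 instead of TensorFlow's axis 3), and the 'same' padding is computed by hand from the kernel sizes.

```python
import torch
import torch.nn as nn


class DecomNet(nn.Module):
    """Sketch of a PyTorch equivalent of the TF DecomNet above (NCHW layout)."""

    def __init__(self, layer_num=5, channel=64, kernel_size=3):
        super().__init__()
        # Input has 4 channels: the per-pixel max concatenated with RGB.
        # padding = k // 2 reproduces TF's 'same' padding for odd kernel sizes.
        self.shallow = nn.Conv2d(4, channel, kernel_size * 3,
                                 padding=(kernel_size * 3) // 2)
        self.activated = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channel, channel, kernel_size, padding=kernel_size // 2),
                nn.ReLU(),
            )
            for _ in range(layer_num)
        ])
        self.recon = nn.Conv2d(channel, 4, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel axis is 1 in PyTorch (NCHW), not 3 as in TensorFlow (NHWC).
        input_max = torch.max(x, dim=1, keepdim=True)[0]
        x = torch.cat([input_max, x], dim=1)
        conv = self.recon(self.activated(self.shallow(x)))
        R = torch.sigmoid(conv[:, 0:3, :, :])  # reflectance, 3 channels
        L = torch.sigmoid(conv[:, 3:4, :, :])  # illumination, 1 channel
        return R, L


net = DecomNet(layer_num=5)        # weights are initialized here, in the constructor
A = torch.rand(3, 3, 8, 8) * 4.0   # NCHW input, values in [0, 4)
R_low, I_low = net(A)
print(R_low.shape, I_low.shape)
```

To actually match outputs across frameworks, both models would still need identical weights (e.g. loaded from the same checkpoint), since the default initializations differ.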

