Predictive coding, study notes TF020

Sequence labelling: predict one class for every frame of the input sequence. The example task is OCR (Optical Character Recognition).

The OCR dataset (http://ai.stanford.edu/~btaskar/ocr/) was collected by Rob Kassel of the MIT Spoken Language Systems Group and preprocessed by Ben Taskar of the Stanford AI Lab. It contains a large number of individual handwritten lowercase letters, each sample a 16x8-pixel binary image. The letters are assembled into sequences, and each sequence corresponds to a word; there are about 6,800 words of at most 14 letters. The data is a gzip-compressed, tab-separated text file that Python's csv module can read directly. Each line holds the attributes of one normalized letter: its ID, the label, the pixel values, the ID of the next letter, and so on.
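
A minimal sketch of reading one record from that file (assuming it has already been downloaded as letter.data.gz, the file name used by the OcrDataset class further down; the field positions follow the description above):

    import gzip
    import csv

    # The archive is a tab-separated text file readable with the csv module.
    with gzip.open('letter.data.gz', 'rt') as file_:
        reader = csv.reader(file_, delimiter='\t')
        line = next(reader)
        letter_id = int(line[0])   # unique ID of this letter
        label = line[1]            # the lowercase character
        next_id = int(line[2])     # ID of the next letter; -1 at the end of a word
        pixels = [int(x) for x in line[6:134]]  # 16x8 = 128 binary pixel values
    print(letter_id, label, next_id, len(pixels))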

Sort by the next-letter ID so that the letters of each word are read in the correct order. Collect letters until the field for the next ID is unset, then start a new sequence. Once the target letters and their pixel data have been read, pad the sequences with zero images so that they fit into a NumPy array large enough to hold the pixel data of the longest word.
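
A toy illustration of the padding step (made-up sequences only; the dataset code later implements this in OcrDataset._pad):

    import numpy as np

    # Two "words" of different lengths, each letter a 16x8 image.
    words = [[np.ones((16, 8))] * 3, [np.ones((16, 8))] * 5]
    max_length = max(len(word) for word in words)

    # Pad shorter words with all-zero images so every sequence has the same length.
    padding = np.zeros((16, 8))
    padded = [word + [padding] * (max_length - len(word)) for word in words]
    batch = np.array(padded)
    print(batch.shape)  # (2, 5, 16, 8)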

Share the softmax layer across time steps. The data and target arrays contain sequences, one image frame per target letter. For the RNN extension, a softmax classifier is added to the output for every letter, and the classifier evaluates its predictions per frame rather than per sequence. The sequence length is computed as well. To add one softmax layer to all frames, we could either add a separate classifier for each frame or let all frames share the same classifier. With a shared classifier the weights are adjusted more often during training, once for every letter of every word. A fully connected layer multiplies by a weight matrix of shape in_size*out_size, but here the weights must be applied across two input dimensions, batch_size and sequence_steps. So flatten the input (the RNN's output activations) of shape batch_size*sequence_steps*in_size so that the weight matrix simply sees one larger batch, then unflatten the result afterwards.
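
A minimal sketch of the flatten/unflatten trick, written against the same older TensorFlow API used throughout these notes (the shapes are example values):

    import tensorflow as tf

    batch_size, sequence_steps, in_size, out_size = 10, 14, 300, 26
    rnn_output = tf.placeholder(tf.float32, [batch_size, sequence_steps, in_size])

    weight = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.01))
    bias = tf.Variable(tf.constant(0.1, shape=[out_size]))

    # Fold the time steps into the batch dimension so one weight matrix serves all frames.
    flat = tf.reshape(rnn_output, [-1, in_size])             # (batch*steps, in_size)
    flat_prediction = tf.nn.softmax(tf.matmul(flat, weight) + bias)
    # Unflatten back to one prediction per frame.
    prediction = tf.reshape(flat_prediction, [-1, sequence_steps, out_size])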

Cost function: the sequence has one prediction-target pair per frame, so we average over the corresponding dimension. tf.reduce_mean cannot be used here, because it would normalize by the tensor length, i.e. the maximum sequence length. Instead, normalize by the actual sequence length by calling tf.reduce_sum and a division manually.
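
A sketch of the manual normalization (assuming prediction and target are batch x steps x classes tensors and padded frames have all-zero targets):

    import tensorflow as tf

    prediction = tf.placeholder(tf.float32, [None, 14, 26])
    target = tf.placeholder(tf.float32, [None, 14, 26])

    # Cross entropy per frame.
    cross_entropy = -tf.reduce_sum(target * tf.log(prediction), reduction_indices=2)
    # Mask out the padded frames, whose targets are all zero.
    mask = tf.sign(tf.reduce_max(tf.abs(target), reduction_indices=2))
    cross_entropy *= mask
    # Normalize by the actual sequence length instead of the padded length.
    length = tf.reduce_sum(mask, reduction_indices=1)
    cost_per_word = tf.reduce_sum(cross_entropy, reduction_indices=1) / length
    cost = tf.reduce_mean(cost_per_word)  # average over all words in the batch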

For the error measure, tf.argmax operates on axis 2 rather than axis 1, the padded frames are masked out, and the mean is taken over the actual sequence length. tf.reduce_mean then averages over all words in the batch.
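
The error measure follows the same masking pattern; only the per-frame quantity changes (a self-contained sketch that repeats the mask setup from above):

    import tensorflow as tf

    prediction = tf.placeholder(tf.float32, [None, 14, 26])
    target = tf.placeholder(tf.float32, [None, 14, 26])
    mask = tf.sign(tf.reduce_max(tf.abs(target), reduction_indices=2))
    length = tf.reduce_sum(mask, reduction_indices=1)

    # argmax over axis 2 picks the predicted and true class of every frame.
    mistakes = tf.cast(
        tf.not_equal(tf.argmax(prediction, 2), tf.argmax(target, 2)), tf.float32)
    mistakes *= mask  # ignore the padded frames
    error = tf.reduce_mean(tf.reduce_sum(mistakes, reduction_indices=1) / length)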

TensorFlow computes the derivatives automatically, so the same optimization operation as for sequence classification can be reused; only the new cost function needs to be plugged in. Clip the gradients of all RNN parameters to keep training from diverging and to avoid other negative effects.
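
A sketch of clipping the gradients between compute_gradients and apply_gradients (a toy variable and cost stand in for the real model; the clip limit of 5 matches the parameters used later):

    import tensorflow as tf

    variable = tf.Variable(tf.ones([3]))
    cost = tf.reduce_sum(tf.square(variable))

    optimizer = tf.train.RMSPropOptimizer(0.002)
    limit = 5.0
    gradients = optimizer.compute_gradients(cost)
    # Clip every gradient to [-limit, limit] to keep training from diverging.
    gradients = [
        (tf.clip_by_value(g, -limit, limit), v) if g is not None else (None, v)
        for g, v in gradients]
    optimize = optimizer.apply_gradients(gradients)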

To train the model, get_dataset downloads the handwriting images, preprocesses them, and one-hot encodes the lowercase letters. The data order is shuffled randomly before splitting into training and test sets.
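
A small sketch of the one-hot encoding and the shuffle/split (toy targets only; the full get_dataset script appears further down):

    import numpy as np

    # Toy stand-in for the padded target array: two words, three frames each.
    targets = np.array([['c', 'a', 't'], ['h', 'i', '']])
    one_hot = np.zeros(targets.shape + (26,))
    for index, letter in np.ndenumerate(targets):
        if letter:  # padded frames stay all-zero
            one_hot[index][ord(letter) - ord('a')] = 1

    # Shuffle whole words, then split into training and test sets.
    order = np.random.permutation(len(one_hot))
    one_hot = one_hot[order]
    split = int(0.66 * len(one_hot))
    train_target, test_target = one_hot[:split], one_hot[split:]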

Adjacent letters of a word depend on each other (they share mutual information), and the RNN stores all the input information about a word in its hidden activations. For the classification of the first few letters, however, the network has seen only a little input from which to infer anything extra. A bidirectional RNN overcomes this shortcoming: two RNNs observe the input sequence, one reading the word from the left in the usual order, the other reading it from the right in reverse order. Every time step thus yields two output activations, which are concatenated before being fed into the shared softmax layer. That way the classifier has access to the complete word information at every letter. A bidirectional_rnn implementation already ships with TensorFlow.

Implementing the bidirectional RNN: split the prediction property into two functions so that each only has to deal with a small piece. The _shared_softmax function takes the tensor data as input and infers the input size from it; it reuses the approach of the other architecture functions, the same flattening trick sharing one softmax layer across all time steps. rnn.dynamic_rnn creates the two RNNs.
Reversing the sequences is easier than implementing a new RNN operation that runs backwards. The tf.reverse_sequence function reverses the first sequence_lengths frames of the frame data. Nodes in the data flow graph have names; the scope parameter is the variable scope name of rnn_dynamic_cell and defaults to RNN. Since the two RNNs have different parameters, they need different scopes.
The reversed sequence is fed into the backward RNN, and the network's output is reversed again so that it lines up with the forward output. The two tensors are concatenated along the dimension of the RNN neurons' outputs and returned. The bidirectional RNN model performs better.
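
A toy demonstration of tf.reverse_sequence with the old seq_dim keyword used in the model code below (shapes and values are made up):

    import tensorflow as tf
    import numpy as np

    # Two sequences of up to 4 frames; the second one has only 2 real frames.
    frames = np.array([[1, 2, 3, 4],
                       [5, 6, 0, 0]], dtype=np.float32)[:, :, None]
    lengths = tf.constant([4, 2], dtype=tf.int64)

    # Reverse only the first `lengths` frames of each sequence; padding stays in place.
    reversed_ = tf.reverse_sequence(tf.constant(frames), lengths, seq_dim=1)

    with tf.Session() as sess:
        print(sess.run(reversed_)[:, :, 0])
        # -> rows [4, 3, 2, 1] and [6, 5, 0, 0]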

    import requests
    import os
    from bs4 import BeautifulSoup

    from helpers import ensure_directory

    class ArxivAbstracts:

        ENDPOINT = 'http://export.arxiv.org/api/query'
        PAGE_SIZE = 100

        def __init__(self, cache_dir, categories, keywords, amount=None):
            self.categories = categories
            self.keywords = keywords
            cache_dir = os.path.expanduser(cache_dir)
            ensure_directory(cache_dir)
            filename = os.path.join(cache_dir, 'abstracts.txt')
            if not os.path.isfile(filename):
                with open(filename, 'w') as file_:
                    for abstract in self._fetch_all(amount):
                        file_.write(abstract + '\n')
            with open(filename) as file_:
                self.data = file_.readlines()

        def _fetch_all(self, amount):
            page_size = type(self).PAGE_SIZE
            count = self._fetch_count()
            if amount:
                count = min(count, amount)
            for offset in range(0, count, page_size):
                print('Fetch papers {}/{}'.format(offset + page_size, count))
                yield from self._fetch_page(page_size, offset)

        def _fetch_page(self, amount, offset):
            url = self._build_url(amount, offset)
            response = requests.get(url)
            soup = BeautifulSoup(response.text, 'lxml')
            for entry in soup.findAll('entry'):
                text = entry.find('summary').text
                text = text.strip().replace('\n', ' ')
                yield text

        def _fetch_count(self):
            url = self._build_url(0, 0)
            response = requests.get(url)
            soup = BeautifulSoup(response.text, 'lxml')
            count = int(soup.find('opensearch:totalresults').string)
            print(count, 'papers found')
            return count

        def _build_url(self, amount, offset):
            categories = ' OR '.join('cat:' + x for x in self.categories)
            keywords = ' OR '.join('all:' + x for x in self.keywords)
            url = type(self).ENDPOINT
            url += '?search_query=(({}) AND ({}))'.format(categories, keywords)
            # The arXiv API names its paging parameter 'start'.
            url += '&max_results={}&start={}'.format(amount, offset)
            return url

    import random
    import numpy as np

    class Preprocessing:

        VOCABULARY = \
            " $%'()+,-./0123456789:;=?ABCDEFGHIJKLMNOPQRSTUVWXYZ" \
            "\\^_abcdefghijklmnopqrstuvwxyz{|}"

        def __init__(self, texts, length, batch_size):
            self.texts = texts
            self.length = length
            self.batch_size = batch_size
            self.lookup = {x: i for i, x in enumerate(self.VOCABULARY)}

        def __call__(self, texts):
            batch = np.zeros((len(texts), self.length, len(self.VOCABULARY)))
            for index, text in enumerate(texts):
                text = [x for x in text if x in self.lookup]
                assert 2 <= len(text) <= self.length
                for offset, character in enumerate(text):
                    code = self.lookup[character]
                    batch[index, offset, code] = 1
            return batch

        def __iter__(self):
            windows = []
            for text in self.texts:
                for i in range(0, len(text) - self.length + 1, self.length // 2):
                    windows.append(text[i: i + self.length])
            assert all(len(x) == len(windows[0]) for x in windows)
            while True:
                random.shuffle(windows)
                for i in range(0, len(windows), self.batch_size):
                    batch = windows[i: i + self.batch_size]
                    yield self(batch)

    import tensorflow as tf
    from helpers import lazy_property

    class PredictiveCodingModel:

        def __init__(self, params, sequence, initial=None):
            self.params = params
            self.sequence = sequence
            self.initial = initial
            self.prediction
            self.state
            self.cost
            self.error
            self.logprob
            self.optimize

        @lazy_property
        def data(self):
            max_length = int(self.sequence.get_shape()[1])
            return tf.slice(self.sequence, (0, 0, 0), (-1, max_length - 1, -1))

        @lazy_property
        def target(self):
            return tf.slice(self.sequence, (0, 1, 0), (-1, -1, -1))

        @lazy_property
        def mask(self):
            return tf.reduce_max(tf.abs(self.target), reduction_indices=2)

        @lazy_property
        def length(self):
            return tf.reduce_sum(self.mask, reduction_indices=1)

        @lazy_property
        def prediction(self):
            prediction, _ = self.forward
            return prediction

        @lazy_property
        def state(self):
            _, state = self.forward
            return state

        @lazy_property
        def forward(self):
            cell = self.params.rnn_cell(self.params.rnn_hidden)
            cell = tf.nn.rnn_cell.MultiRNNCell([cell] * self.params.rnn_layers)
            hidden, state = tf.nn.dynamic_rnn(
                inputs=self.data,
                cell=cell,
                dtype=tf.float32,
                initial_state=self.initial,
                sequence_length=self.length)
            vocabulary_size = int(self.target.get_shape()[2])
            prediction = self._shared_softmax(hidden, vocabulary_size)
            return prediction, state

        @lazy_property
        def cost(self):
            prediction = tf.clip_by_value(self.prediction, 1e-10, 1.0)
            cost = self.target * tf.log(prediction)
            cost = -tf.reduce_sum(cost, reduction_indices=2)
            return self._average(cost)

        @lazy_property
        def error(self):
            error = tf.not_equal(
                tf.argmax(self.prediction, 2), tf.argmax(self.target, 2))
            error = tf.cast(error, tf.float32)
            return self._average(error)

        @lazy_property
        def logprob(self):
            logprob = tf.mul(self.prediction, self.target)
            logprob = tf.reduce_max(logprob, reduction_indices=2)
            logprob = tf.log(tf.clip_by_value(logprob, 1e-10, 1.0)) / tf.log(2.0)
            return self._average(logprob)

        @lazy_property
        def optimize(self):
            gradient = self.params.optimizer.compute_gradients(self.cost)
            if self.params.gradient_clipping:
                limit = self.params.gradient_clipping
                gradient = [
                    (tf.clip_by_value(g, -limit, limit), v)
                    if g is not None else (None, v)
                    for g, v in gradient]
            optimize = self.params.optimizer.apply_gradients(gradient)
            return optimize

        def _average(self, data):
            data *= self.mask
            # Normalize by each sequence's actual length rather than the padded length.
            data = tf.reduce_sum(data, reduction_indices=1) / self.length
            data = tf.reduce_mean(data)
            return data

        def _shared_softmax(self, data, out_size):
            max_length = int(data.get_shape()[1])
            in_size = int(data.get_shape()[2])
            weight = tf.Variable(tf.truncated_normal(
                [in_size, out_size], stddev=0.01))
            bias = tf.Variable(tf.constant(0.1, shape=[out_size]))
            # Flatten to apply same weights to all time steps.
            flat = tf.reshape(data, [-1, in_size])
            output = tf.nn.softmax(tf.matmul(flat, weight) + bias)
            output = tf.reshape(output, [-1, max_length, out_size])
            return output

    import os
    import re
    import tensorflow as tf
    import numpy as np

    from helpers import overwrite_graph
    from helpers import ensure_directory
    from ArxivAbstracts import ArxivAbstracts
    from Preprocessing import Preprocessing
    from PredictiveCodingModel import PredictiveCodingModel

    class Training:

        @overwrite_graph
        def __init__(self, params, cache_dir, categories, keywords, amount=None):
            self.params = params
            self.texts = ArxivAbstracts(cache_dir, categories, keywords, amount).data
            self.prep = Preprocessing(
                self.texts, self.params.max_length, self.params.batch_size)
            self.sequence = tf.placeholder(
                tf.float32,
                [None, self.params.max_length, len(self.prep.VOCABULARY)])
            self.model = PredictiveCodingModel(self.params, self.sequence)
            self._init_or_load_session()

        def __call__(self):
            print('Start training')
            self.logprobs = []
            batches = iter(self.prep)
            for epoch in range(self.epoch, self.params.epochs + 1):
                self.epoch = epoch
                for _ in range(self.params.epoch_size):
                    self._optimization(next(batches))
                self._evaluation()
            return np.array(self.logprobs)

        def _optimization(self, batch):
            logprob, _ = self.sess.run(
                (self.model.logprob, self.model.optimize),
                {self.sequence: batch})
            if np.isnan(logprob):
                raise Exception('training diverged')
            self.logprobs.append(logprob)

        def _evaluation(self):
            self.saver.save(self.sess, os.path.join(
                self.params.checkpoint_dir, 'model'), self.epoch)
            perplexity = 2 ** -(sum(self.logprobs[-self.params.epoch_size:]) /
                            self.params.epoch_size)
            print('Epoch {:2d} perplexity {:5.4f}'.format(self.epoch, perplexity))

        def _init_or_load_session(self):
            self.sess = tf.Session()
            self.saver = tf.train.Saver()
            checkpoint = tf.train.get_checkpoint_state(self.params.checkpoint_dir)
            if checkpoint and checkpoint.model_checkpoint_path:
                path = checkpoint.model_checkpoint_path
                print('Load checkpoint', path)
                self.saver.restore(self.sess, path)
                self.epoch = int(re.search(r'-(\d+)$', path).group(1)) + 1
            else:
                ensure_directory(self.params.checkpoint_dir)
                print('Randomly initialize variables')
                self.sess.run(tf.initialize_all_variables())
                self.epoch = 1

    from Training import Training
    from get_params import get_params

    Training(
        get_params(),
        cache_dir = './arxiv',
        categories = [
            'Machine Learning',
            'Neural and Evolutionary Computing',
            'Optimization'
        ],
        keywords = [
            'neural',
            'network',
            'deep'
        ]
        )()

    import tensorflow as tf
    import numpy as np

    from helpers import overwrite_graph
    from Preprocessing import Preprocessing
    from PredictiveCodingModel import PredictiveCodingModel

    class Sampling:

        @overwrite_graph
        def __init__(self, params):
            self.params = params
            self.prep = Preprocessing([], 2, self.params.batch_size)
            self.sequence = tf.placeholder(
                tf.float32, [1, 2, len(self.prep.VOCABULARY)])
            self.state = tf.placeholder(
                tf.float32, [1, self.params.rnn_hidden * self.params.rnn_layers])
            self.model = PredictiveCodingModel(
                self.params, self.sequence, self.state)
            self.sess = tf.Session()
            checkpoint = tf.train.get_checkpoint_state(self.params.checkpoint_dir)
            if checkpoint and checkpoint.model_checkpoint_path:
                tf.train.Saver().restore(
                    self.sess, checkpoint.model_checkpoint_path)
            else:
                print('Sampling from untrained model.')
            print('Sampling temperature', self.params.sampling_temperature)

        def __call__(self, seed, length=100):
            text = seed
            state = np.zeros((1, self.params.rnn_hidden * self.params.rnn_layers))
            for _ in range(length):
                feed = {self.state: state}
                feed[self.sequence] = self.prep([text[-1] + '?'])
                prediction, state = self.sess.run(
                    [self.model.prediction, self.model.state], feed)
                text += self._sample(prediction[0, 0])
            return text

        def _sample(self, dist):
            dist = np.log(dist) / self.params.sampling_temperature
            dist = np.exp(dist) / np.exp(dist).sum()
            choice = np.random.choice(len(dist), p=dist)
            choice = self.prep.VOCABULARY[choice]
            return choice
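
A hypothetical invocation of the sampling class, mirroring how the training script above is driven (get_params is the same assumed helper, and the class is assumed to be saved as Sampling.py like the other modules):

    from get_params import get_params
    from Sampling import Sampling

    sampling = Sampling(get_params())
    # Continue the seed text by 500 sampled characters.
    print(sampling('We', 500))
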
    import gzip
    import csv
    import numpy as np

    from helpers import download

    class OcrDataset:

        URL = 'http://ai.stanford.edu/~btaskar/ocr/letter.data.gz'

        def __init__(self, cache_dir):
            path = download(type(self).URL, cache_dir)
            lines = self._read(path)
            data, target = self._parse(lines)
            self.data, self.target = self._pad(data, target)

        @staticmethod
        def _read(filepath):
            with gzip.open(filepath, 'rt') as file_:
                reader = csv.reader(file_, delimiter='\t')
                lines = list(reader)
                return lines

        @staticmethod
        def _parse(lines):
            # Sort by letter ID so the letters of each word appear in order.
            lines = sorted(lines, key=lambda x: int(x[0]))
            data, target = [], []
            next_ = None
            for line in lines:
                if not next_:
                    data.append([])
                    target.append([])
                else:
                    assert next_ == int(line[0])
                # Field 2 holds the ID of the next letter; -1 marks the end of a word.
                next_ = int(line[2]) if int(line[2]) > -1 else None
                pixels = np.array([int(x) for x in line[6:134]])
                pixels = pixels.reshape((16, 8))
                data[-1].append(pixels)
                target[-1].append(line[1])
            return data, target

        @staticmethod
        def _pad(data, target):
            max_length = max(len(x) for x in target)
            padding = np.zeros((16, 8))
            data = [x + ([padding] * (max_length - len(x))) for x in data]
            target = [x + ([''] * (max_length - len(x))) for x in target]
            return np.array(data), np.array(target)

    import tensorflow as tf

    from helpers import lazy_property

    class SequenceLabellingModel:

        def __init__(self, data, target, params):
            self.data = data
            self.target = target
            self.params = params
            self.prediction
            self.cost
            self.error
            self.optimize

        @lazy_property
        def length(self):
            used = tf.sign(tf.reduce_max(tf.abs(self.data), reduction_indices=2))
            length = tf.reduce_sum(used, reduction_indices=1)
            length = tf.cast(length, tf.int32)
            return length

        @lazy_property
        def prediction(self):
            output, _ = tf.nn.dynamic_rnn(
                tf.nn.rnn_cell.GRUCell(self.params.rnn_hidden),
                self.data,
                dtype=tf.float32,
                sequence_length=self.length,
            )
            # Softmax layer.
            max_length = int(self.target.get_shape()[1])
            num_classes = int(self.target.get_shape()[2])
            weight = tf.Variable(tf.truncated_normal(
                [self.params.rnn_hidden, num_classes], stddev=0.01))
            bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))
            # Flatten to apply same weights to all time steps.
            output = tf.reshape(output, [-1, self.params.rnn_hidden])
            prediction = tf.nn.softmax(tf.matmul(output, weight) + bias)
            prediction = tf.reshape(prediction, [-1, max_length, num_classes])
            return prediction

        @lazy_property
        def cost(self):
            # Compute cross entropy for each frame.
            cross_entropy = self.target * tf.log(self.prediction)
            cross_entropy = -tf.reduce_sum(cross_entropy, reduction_indices=2)
            mask = tf.sign(tf.reduce_max(tf.abs(self.target), reduction_indices=2))
            cross_entropy *= mask
            # Average over actual sequence lengths.
            cross_entropy = tf.reduce_sum(cross_entropy, reduction_indices=1)
            cross_entropy /= tf.cast(self.length, tf.float32)
            return tf.reduce_mean(cross_entropy)

        @lazy_property
        def error(self):
            mistakes = tf.not_equal(
                tf.argmax(self.target, 2), tf.argmax(self.prediction, 2))
            mistakes = tf.cast(mistakes, tf.float32)
            mask = tf.sign(tf.reduce_max(tf.abs(self.target), reduction_indices=2))
            mistakes *= mask
            # Average over actual sequence lengths.
            mistakes = tf.reduce_sum(mistakes, reduction_indices=1)
            mistakes /= tf.cast(self.length, tf.float32)
            return tf.reduce_mean(mistakes)

        @lazy_property
        def optimize(self):
            gradient = self.params.optimizer.compute_gradients(self.cost)
            try:
                limit = self.params.gradient_clipping
                gradient = [
                    (tf.clip_by_value(g, -limit, limit), v)
                    if g is not None else (None, v)
                    for g, v in gradient]
            except AttributeError:
                print('No gradient clipping parameter specified.')
            optimize = self.params.optimizer.apply_gradients(gradient)
            return optimize

    import random

    import tensorflow as tf
    import numpy as np

    from helpers import AttrDict

    from OcrDataset import OcrDataset
    from SequenceLabellingModel import SequenceLabellingModel
    from batched import batched

    params = AttrDict(
        rnn_cell=tf.nn.rnn_cell.GRUCell,
        rnn_hidden=300,
        optimizer=tf.train.RMSPropOptimizer(0.002),
        gradient_clipping=5,
        batch_size=10,
        epochs=5,
        epoch_size=50
    )

    def get_dataset():
        dataset = OcrDataset('./ocr')
        # Flatten images into vectors.
        dataset.data = dataset.data.reshape(dataset.data.shape[:2] + (-1,))
        # One-hot encode targets.
        target = np.zeros(dataset.target.shape + (26,))
        for index, letter in np.ndenumerate(dataset.target):
            if letter:
                target[index][ord(letter) - ord('a')] = 1
        dataset.target = target
        # Shuffle order of examples.
        order = np.random.permutation(len(dataset.data))
        dataset.data = dataset.data[order]
        dataset.target = dataset.target[order]
        return dataset

    # Split into training and test data.
    dataset = get_dataset()
    split = int(0.66 * len(dataset.data))
    train_data, test_data = dataset.data[:split], dataset.data[split:]
    train_target, test_target = dataset.target[:split], dataset.target[split:]

    # Compute graph.
    _, length, image_size = train_data.shape
    num_classes = train_target.shape[2]
    data = tf.placeholder(tf.float32, [None, length, image_size])
    target = tf.placeholder(tf.float32, [None, length, num_classes])
    model = SequenceLabellingModel(data, target, params)
    batches = batched(train_data, train_target, params.batch_size)

    sess = tf.Session()
    sess.run(tf.initialize_all_variables())
    for index, batch in enumerate(batches):
        batch_data = batch[0]
        batch_target = batch[1]
        epoch = batch[2]
        if epoch >= params.epochs:
            break
        feed = {data: batch_data, target: batch_target}
        error, _ = sess.run([model.error, model.optimize], feed)
        print('{}: {:3.6f}%'.format(index + 1, 100 * error))

    test_feed = {data: test_data, target: test_target}
    test_error = sess.run(model.error, test_feed)
    print('Test error: {:3.6f}%'.format(100 * test_error))

    import tensorflow as tf

    from helpers import lazy_property

    class BidirectionalSequenceLabellingModel:

        def __init__(self, data, target, params):
            self.data = data
            self.target = target
            self.params = params
            self.prediction
            self.cost
            self.error
            self.optimize

        @lazy_property
        def length(self):
            used = tf.sign(tf.reduce_max(tf.abs(self.data), reduction_indices=2))
            length = tf.reduce_sum(used, reduction_indices=1)
            length = tf.cast(length, tf.int32)
            return length

        @lazy_property
        def prediction(self):
            output = self._bidirectional_rnn(self.data, self.length)
            num_classes = int(self.target.get_shape()[2])
            prediction = self._shared_softmax(output, num_classes)
            return prediction

        def _bidirectional_rnn(self, data, length):
            length_64 = tf.cast(length, tf.int64)
            forward, _ = tf.nn.dynamic_rnn(
                cell=self.params.rnn_cell(self.params.rnn_hidden),
                inputs=data,
                dtype=tf.float32,
                sequence_length=length,
                scope='rnn-forward')
            backward, _ = tf.nn.dynamic_rnn(
                cell=self.params.rnn_cell(self.params.rnn_hidden),
                inputs=tf.reverse_sequence(data, length_64, seq_dim=1),
                dtype=tf.float32,
                sequence_length=self.length,
                scope='rnn-backward')
            backward = tf.reverse_sequence(backward, length_64, seq_dim=1)
            output = tf.concat(2, [forward, backward])
            return output

        def _shared_softmax(self, data, out_size):
            max_length = int(data.get_shape()[1])
            in_size = int(data.get_shape()[2])
            weight = tf.Variable(tf.truncated_normal(
                [in_size, out_size], stddev=0.01))
            bias = tf.Variable(tf.constant(0.1, shape=[out_size]))
            # Flatten to apply same weights to all time steps.
            flat = tf.reshape(data, [-1, in_size])
            output = tf.nn.softmax(tf.matmul(flat, weight) + bias)
            output = tf.reshape(output, [-1, max_length, out_size])
            return output

        @lazy_property
        def cost(self):
            # Compute cross entropy for each frame.
            cross_entropy = self.target * tf.log(self.prediction)
            cross_entropy = -tf.reduce_sum(cross_entropy, reduction_indices=2)
            mask = tf.sign(tf.reduce_max(tf.abs(self.target), reduction_indices=2))
            cross_entropy *= mask
            # Average over actual sequence lengths.
            cross_entropy = tf.reduce_sum(cross_entropy, reduction_indices=1)
            cross_entropy /= tf.cast(self.length, tf.float32)
            return tf.reduce_mean(cross_entropy)

        @lazy_property
        def error(self):
            mistakes = tf.not_equal(
                tf.argmax(self.target, 2), tf.argmax(self.prediction, 2))
            mistakes = tf.cast(mistakes, tf.float32)
            mask = tf.sign(tf.reduce_max(tf.abs(self.target), reduction_indices=2))
            mistakes *= mask
            # Average over actual sequence lengths.
            mistakes = tf.reduce_sum(mistakes, reduction_indices=1)
            mistakes /= tf.cast(self.length, tf.float32)
            return tf.reduce_mean(mistakes)

        @lazy_property
        def optimize(self):
            gradient = self.params.optimizer.compute_gradients(self.cost)
            try:
                limit = self.params.gradient_clipping
                gradient = [
                    (tf.clip_by_value(g, -limit, limit), v)
                    if g is not None else (None, v)
                    for g, v in gradient]
            except AttributeError:
                print('No gradient clipping parameter specified.')
            optimize = self.params.optimizer.apply_gradients(gradient)
            return optimize
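
To try the bidirectional variant, only the model construction in the training script above needs to change; a sketch, assuming the class is saved as BidirectionalSequenceLabellingModel.py like the other modules:

    from BidirectionalSequenceLabellingModel import \
        BidirectionalSequenceLabellingModel

    # Drop-in replacement for SequenceLabellingModel in the training script.
    model = BidirectionalSequenceLabellingModel(data, target, params)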

 

 

Reference:
TensorFlow for Machine Intelligence (《面向机器智能的TensorFlow实践》)

Feel free to add me on WeChat: qingxingfengzi
My WeChat public account: qingxingfengzigz
My wife Zhang Xingqing's WeChat public account: qingqingfeifangz
