Word Vectors from the Official TensorFlow Tutorial: Explanation and Implementation
Why do we use Word2Vec for text? Unlike audio and images, raw text does not carry that much dense information: words are discrete symbols, which is what makes them special, and a model that treats each discrete word independently needs a huge amount of data to learn anything about it. Word vectors (embeddings) are used to overcome this, by mapping words into a continuous vector space.


Implementation logic

Here we walk through the implementation step by step. The overall strategy is: dataset –> convert it into 2-D training data –> Skip-gram / CBOW model –> feed the data into the graph –> fetch the results from the graph –> view them in TensorBoard.

If you are interested in how the Skip-gram model works, see the separate write-up on the principles of Skip-gram; a minimal sketch of the idea is also given right below.
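To make the "convert into 2-D training data" step concrete, here is a minimal, self-contained sketch (the helper name skipgram_pairs and the toy sentence are my own illustration, not part of the tutorial code): each word in turn is treated as the center word, and every word within skip_window positions of it yields a (center word, context word) pair. The real generate_batch in the full code below does the same thing on word IDs, but only samples num_skips context words per center word.

# Illustrative sketch only (not part of the tutorial code): how Skip-gram turns
# a sentence into (center word -> context word) training pairs.
def skipgram_pairs(tokens, skip_window=1):
    pairs = []
    for i, center in enumerate(tokens):
        # Context = the words within `skip_window` positions to the left and right.
        for j in range(max(0, i - skip_window), min(len(tokens), i + skip_window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(['the', 'quick', 'brown', 'fox'], skip_window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]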

  • Step 1
    First we need to load our training dataset. The example defines a maybe_download function that downloads the file if it is not already present, and then verifies that its size matches the expected number of bytes.
url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
    """Download a file if not present, and make sure it's the right size."""
    if not os.path.exists(filename):
        filename, _ = urllib.request.urlretrieve(url + filename, filename)
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified', filename)
    else:
        print(statinfo.st_size)
        raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
    return filename

filename = maybe_download('text8.zip', 31344016)
  • Step 2
    This step takes the downloaded text8.zip, reads the contents of the file inside the archive into a string, and splits it on whitespace into a list of words.
# Read the data into a list of strings.
def read_data(filename):
    """Extract the first file enclosed in a zip file as a list of words"""
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

words = read_data(filename)
print('Data size', len(words))
  • Step 3
    This step builds our vocabulary dictionary and replaces rare words with the string UNK. A toy example of what build_dataset produces is shown right after the code.
# Step 2: Build the dictionary and replace rare words with UNK token.
# Vocabulary size is 50000.
vocabulary_size = 50000

# @param words            the array of input words
# @param vocabulary_size  how many of the most frequent words to keep
def build_dataset(words, vocabulary_size):
    count = [['UNK', -1]]
    # List how many times each word occurs and keep the most common ones.
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    # Create an empty map object used as our word dictionary: word -> integer id.
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words, vocabulary_size)
del words  # Hint to reduce memory.
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])

data_index = 0
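To see what build_dataset returns, here is a small illustrative call on a toy word list (the toy corpus and the vocabulary size of 4 are made up for illustration; it reuses the build_dataset function defined above):

# Illustration only: a tiny corpus instead of text8.
toy_words = ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the', 'cat']
toy_data, toy_count, toy_dict, toy_rev = build_dataset(toy_words, 4)
print(toy_count)  # e.g. [['UNK', 2], ('the', 3), ('cat', 2), ('sat', 1)] -- 'on' and 'mat' fall outside the top 4
print(toy_dict)   # e.g. {'UNK': 0, 'the': 1, 'cat': 2, 'sat': 3}
print(toy_data)   # e.g. [1, 2, 3, 0, 1, 0, 1, 2] -- every word replaced by its id, rare words by 0 (UNK)

Words kept in the dictionary get ids ordered by frequency, and data is simply the whole corpus rewritten as those ids, which is what generate_batch will later slide its window over.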
  • Step 4
    Now we set up the model, using the Skip-gram model trained with an NCE loss; the objective it optimizes is sketched as a formula after the code.


if __name__ == "__main__":
    # Step 4: Build and train a skip-gram model.
    batch_size = 128
    embedding_size = 128  # Dimension of the embedding vector.
    skip_window = 1       # How many words to consider left and right.
    num_skips = 2         # How many times to reuse an input to generate a label.

    # We pick a random validation set to sample nearest neighbors. Here we limit the
    # validation samples to the words that have a low numeric ID, which by
    # construction are also the most frequent.
    valid_size = 16      # Random set of words to evaluate similarity on.
    valid_window = 100   # Only pick dev samples in the head of the distribution.
    valid_examples = np.random.choice(valid_window, valid_size, replace=False)
    num_sampled = 64     # Number of negative examples to sample.

    # graph = tf.Graph()
    #
    # with graph.as_default():

    # Input data.
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Ops and variables pinned to the CPU because of missing GPU implementation
    with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)

        # Construct the variables for the NCE loss
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

    # Compute the average NCE loss for the batch.
    # tf.nce_loss automatically draws a new sample of the negative labels each
    # time we evaluate the loss.
    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights,
                       biases=nce_biases,
                       labels=train_labels,
                       inputs=embed,
                       num_sampled=num_sampled,
                       num_classes=vocabulary_size))

    # Construct the SGD optimizer using a learning rate of 1.0.
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

    # Compute the cosine similarity between minibatch examples and all embeddings.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)

    # Add variable initializer.
    init = tf.global_variables_initializer()
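For reference (this formula block is my own addition; the notation is not from the tutorial): write v_{w_I} for the embedding row looked up for the input word, u_w and b_w for the corresponding row of nce_weights and entry of nce_biases, σ for the sigmoid, and P_n for the noise distribution. The per-example objective that negative-sampling-style losses minimize is approximately

$$
J(w_I, w_O) \approx -\log \sigma\!\left(u_{w_O}^{\top} v_{w_I} + b_{w_O}\right) - \sum_{k=1}^{K} \log \sigma\!\left(-\left(u_{w_k}^{\top} v_{w_I} + b_{w_k}\right)\right), \qquad w_k \sim P_n(w),
$$

where w_O is the observed context word and K = num_sampled. tf.nn.nce_loss implements full NCE, which additionally corrects the logits for the noise distribution, but the simplified form above conveys the idea: raise the score of the observed (input, context) pair and lower the scores of the K sampled noise words.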

The full code is below:


from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import math
import os
import random
import zipfile

import numpy as np
from six.moves import urllib
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
from pandas import DataFrame

# Step 1: Download the data.
url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
    """Download a file if not present, and make sure it's the right size."""
    if not os.path.exists(filename):
        filename, _ = urllib.request.urlretrieve(url + filename, filename)
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified', filename)
    else:
        print(statinfo.st_size)
        raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
    return filename

filename = maybe_download('text8.zip', 31344016)

# Read the data into a list of strings.
def read_data(filename):
    """Extract the first file enclosed in a zip file as a list of words"""
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

words = read_data(filename)
print('Data size', len(words))

# Step 2: Build the dictionary and replace rare words with UNK token.
vocabulary_size = 50000

def build_dataset(words, vocabulary_size):
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words, vocabulary_size)
del words  # Hint to reduce memory.
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])

data_index = 0

# Step 3: Function to generate a training batch for the skip-gram model.
def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    # Backtrack a little bit to avoid skipping words in the end of a batch
    data_index = (data_index + len(data) - span) % len(data)
    return batch, labels

batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
    print(batch[i], reverse_dictionary[batch[i]], '->', labels[i, 0], reverse_dictionary[labels[i, 0]])

if __name__ == "__main__":
    # Step 4: Build and train a skip-gram model.
    batch_size = 128
    embedding_size = 128  # Dimension of the embedding vector.
    skip_window = 1       # How many words to consider left and right.
    num_skips = 2         # How many times to reuse an input to generate a label.

    # We pick a random validation set to sample nearest neighbors. Here we limit the
    # validation samples to the words that have a low numeric ID, which by
    # construction are also the most frequent.
    valid_size = 16      # Random set of words to evaluate similarity on.
    valid_window = 100   # Only pick dev samples in the head of the distribution.
    valid_examples = np.random.choice(valid_window, valid_size, replace=False)
    num_sampled = 64     # Number of negative examples to sample.

    # graph = tf.Graph()
    #
    # with graph.as_default():

    # Input data.
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Ops and variables pinned to the CPU because of missing GPU implementation
    with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)

        # Construct the variables for the NCE loss
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

    # Compute the average NCE loss for the batch.
    # tf.nce_loss automatically draws a new sample of the negative labels each
    # time we evaluate the loss.
    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights,
                       biases=nce_biases,
                       labels=train_labels,
                       inputs=embed,
                       num_sampled=num_sampled,
                       num_classes=vocabulary_size))

    # Construct the SGD optimizer using a learning rate of 1.0.
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

    # Compute the cosine similarity between minibatch examples and all embeddings.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)

    # Add variable initializer.
    init = tf.global_variables_initializer()

    # Step 5: Begin training.
    num_steps = 100001
    LOG_DIR = 'D:/Project_coding/Tensorflow/Season2/Word2Vec/log/'

    with tf.Session() as session:
        # We must initialize all variables before we use them.
        init.run()
        print("Initialized")

        average_loss = 0
        for step in xrange(num_steps):
            batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
            feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}

            # We perform one update step by evaluating the optimizer op (including it
            # in the list of returned values for session.run())
            _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
            average_loss += loss_val

            if step % 2000 == 0:
                if step > 0:
                    average_loss /= 2000
                # The average loss is an estimate of the loss over the last 2000 batches.
                print("Average loss at step ", step, ": ", average_loss)
                average_loss = 0

        """Use TensorBoard to visualize our model. This is not included in the TensorFlow website tutorial."""
        words_to_visualize = 3000
        final_embeddings = normalized_embeddings.eval()[:words_to_visualize]

        embedding_var = tf.Variable(final_embeddings)
        session.run(embedding_var.initializer)
        saver = tf.train.Saver([embedding_var])
        saver.save(session, os.path.join(LOG_DIR, "model.ckpt"), 0)

        # Format: tensorflow/contrib/tensorboard/plugins/projector/projector_config.proto
        config = projector.ProjectorConfig()
        # You can add multiple embeddings. Here we add only one.
        embedding = config.embeddings.add()
        embedding.tensor_name = embedding_var.name
        # Link this tensor to its metadata file (e.g. labels).
        embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
        # Use the same LOG_DIR where you stored your checkpoint.
        summary_writer = tf.summary.FileWriter(LOG_DIR, session.graph)
        # summary_writer.add_graph()
        # The next line writes a projector_config.pbtxt in the LOG_DIR. TensorBoard will
        # read this file during startup.
        projector.visualize_embeddings(summary_writer, config)

        labels = [(reverse_dictionary[i], i) for i in range(words_to_visualize)]
        # Write the metadata file into LOG_DIR so it matches embedding.metadata_path above.
        DataFrame(labels, columns=['word', 'freq_rank']).to_csv(
            os.path.join(LOG_DIR, 'metadata.tsv'), index=False, sep='\t')
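The training loop above only prints the loss. As a follow-up, here is a minimal sketch (not part of the listing above; adapted from the nearest-neighbour check the official tutorial performs) of how the similarity op and reverse_dictionary can be used inside the same session, for example once after the loop, to sanity-check the embeddings:

# Sketch only: run inside the same tf.Session, after some training steps.
sim = similarity.eval()                            # shape: [valid_size, vocabulary_size]
top_k = 8                                          # how many nearest neighbours to show
for i in xrange(valid_size):
    valid_word = reverse_dictionary[valid_examples[i]]
    nearest = (-sim[i, :]).argsort()[1:top_k + 1]  # index 0 is the word itself
    neighbours = ', '.join(reverse_dictionary[k] for k in nearest)
    print('Nearest to %s: %s' % (valid_word, neighbours))

Once the script has written the checkpoint and metadata.tsv, launching TensorBoard with --logdir pointing at LOG_DIR and opening the Projector tab shows the 3000 saved embeddings labelled by word.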
