Deep Learning Algorithms Tutorial

Google's AI sits at the global forefront, with image recognition, speech recognition, and self-driving technology already deployed. Baidu has effectively taken up the banner of AI in China, covering autonomous driving, intelligent assistants, image recognition, and many other areas. Apple has also begun to fully embrace machine learning, entering the smart home speaker market with new products and building workstation-class Macs. Meanwhile, Tencent's deep learning platform Mariana already powers WeChat's speech recognition for voice input, its open speech platform, and long-press voice-message-to-text conversion, and is beginning to be applied to image recognition in WeChat. The world's top ten technology companies are all investing heavily in AI theory and applications. Getting started is hard, but once you are in, mastery is not far away! AI development is inseparable from algorithms, so let's start learning the algorithms!

Locally Linear Embedding (LLE)

Dimensionality reduction is the process of transforming high-dimensional data into low-dimensional data through a linear or nonlinear mapping. In general, the low-dimensional data captures the principal components of the original high-dimensional data and describes its spatial distribution structure. Because reduced data is easier to classify, recognise, visualise, and store, dimensionality reduction has attracted growing attention in many research fields.

In terms of the intrinsic character of the data, dimensionality reduction techniques fall roughly into two families: linear and nonlinear. Linear techniques only work well on data of relatively low dimension with a globally linear structure. In practice, however, researchers must deal with vast amounts of nonlinear high-dimensional data, so methods that can handle such data effectively are urgently needed. This article introduces one such nonlinear dimensionality reduction method.

Locally Linear Embedding (LLE) is a nonlinear dimensionality reduction method proposed by Sam T. Roweis and Lawrence K. Saul in 2000 and published in Science. LLE tries to preserve the local structure of the original high-dimensional data: it assumes that each local patch of the data lies approximately on a hyperplane, so that every point can be linearly reconstructed from its neighbours. The low-dimensional output preserves the original topological structure, and the algorithm has been widely applied to nonlinear dimensionality reduction, clustering, and image segmentation.

LLE combines the ability to handle nonlinear data with the computational efficiency of linear methods. Informally, it is as if the high-dimensional manifold were cut with scissors into many small patches, each patch approximated by a plane, and the patches then reassembled in the low-dimensional space while keeping the topological relations between points unchanged. The whole problem is ultimately reduced to two quadratic optimisation problems.
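Concretely, for input points x_i with neighbourhoods N(i), the two optimisation problems are the standard ones from Roweis and Saul's formulation. First, the reconstruction weights are found by solving

\min_{W}\; \varepsilon(W) = \sum_{i}\Bigl\| x_i - \sum_{j \in N(i)} W_{ij}\, x_j \Bigr\|^2 \quad \text{subject to} \quad \sum_{j} W_{ij} = 1,

with W_{ij} = 0 whenever x_j is not a neighbour of x_i. Then, holding W fixed, the low-dimensional outputs y_i are found by solving

\min_{Y}\; \Phi(Y) = \sum_{i}\Bigl\| y_i - \sum_{j} W_{ij}\, y_j \Bigr\|^2,

whose solution is given by the eigenvectors of M = (I - W)^{\top}(I - W) associated with the smallest eigenvalues, after discarding the constant eigenvector whose eigenvalue is zero.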

The LLE algorithm can be summarised in three steps (a runnable sketch follows the list):

  1. Find the k nearest neighbours of each sample point;
  2. Compute each sample point's local reconstruction weight matrix from its neighbours;
  3. Compute each sample point's low-dimensional output from its reconstruction weights and its neighbours.
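As a quick illustration of these three steps, scikit-learn ships a ready-made LLE implementation. The sketch below (the swiss-roll dataset and the parameter values are illustrative choices, not part of the original text) embeds 3-dimensional manifold data into 2 dimensions:

# Minimal LLE sketch using scikit-learn; parameters are illustrative.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Sample 1000 points from a 3-D "swiss roll" manifold.
X, t = make_swiss_roll(n_samples=1000, random_state=0)

# Steps 1-3 (neighbour search, reconstruction weights, eigendecomposition)
# are all handled internally by fit_transform.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)

print(Y.shape)                    # (1000, 2)
print(lle.reconstruction_error_)  # residual of the embedding cost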

The first step of LLE is to find the k nearest neighbours of each sample point: the k points closest to a given sample are taken as its neighbours, where k is a value fixed in advance. Roweis and Saul's algorithm uses the Euclidean distance, which keeps the computation light. Here, however, the data in the high-dimensional space are assumed to be nonlinearly distributed, so the Dijkstra distance is used instead. The Dijkstra distance is a geodesic distance: it preserves the curved structure between sample points and is widely used in the ISOMAP algorithm. When there are many sample points, the plain Dijkstra algorithm cannot meet the requirements of LLE.
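A sketch of this neighbour search under the graph-distance view (the function name geodesic_knn and the parameter choices are illustrative): build a k-nearest-neighbour graph with Euclidean edge weights, then run Dijkstra's algorithm on it to obtain geodesic distances, as ISOMAP does.

# Sketch: k nearest neighbours under a graph (geodesic) distance.
import numpy
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

def geodesic_knn(X, k):
    """Return the k nearest neighbours of every point, measured along
    a k-NN graph rather than by straight-line Euclidean distance."""
    # Sparse graph whose edges are Euclidean distances to the k nearest points.
    G = kneighbors_graph(X, n_neighbors=k, mode='distance')
    # All-pairs shortest paths on the graph give the geodesic distances.
    D = dijkstra(G, directed=False)
    # Sort by geodesic distance and drop the point itself (column 0).
    return numpy.argsort(D, axis=1)[:, 1:k + 1]

X = numpy.random.RandomState(0).rand(200, 3)
print(geodesic_knn(X, k=10)[:3])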

Application example: the following script applies LLE to learn meta-embeddings from several source word embeddings (it depends on the repository's sources and wordreps helper modules):

import numpy
import sys
import time
import argparse
import sources                  # repo-local module: configures the source embedding files
from wordreps import WordReps   # repo-local module: loads pre-trained word embeddings
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix, eye
from scipy.sparse.linalg import eigsh, ArpackNoConvergence
from scipy.io import savemat, loadmat


class MetaEmbed:

    def __init__(self, wordreps, words):
        self.words = words
        self.ids = {}
        for (i, word) in enumerate(words):
            self.ids[word] = i
        self.reps = wordreps
        self.dims = [x.dim for x in self.reps]
        self.embeds = []
        N = len(words)
        # Create one (dim x N) matrix per source embedding.
        write("Creating source embedding matrices...")
        for i in range(len(self.reps)):
            M = numpy.zeros((self.reps[i].dim, N), dtype=numpy.float64)
            for j in range(N):
                M[:, j] = self.reps[i].vects[self.words[j]]
            self.embeds.append(M)
        write("done\n")

    def compute_neighbours(self, nns):
        """
        Compute the nearest neighbours for each embedding (step 1 of LLE).
        """
        self.NNS = []
        for i in range(len(self.embeds)):
            start_time = time.time()
            write("Computing nearest neighbours for embedding no = %d ..." % i)
            # Request nns + 1 neighbours because the closest "neighbour" of a
            # point is the point itself, which is dropped below.
            nbrs = NearestNeighbors(n_neighbors=nns + 1, algorithm='ball_tree').fit(self.embeds[i].T)
            distances, indices = nbrs.kneighbors(self.embeds[i].T)
            self.NNS.append(indices[:, 1:])
            end_time = time.time()
            write("Done (%s sec.)\n" % str(end_time - start_time))

    def show_nns(self, word, nns):
        """
        Print nearest neighbours for a word in different embeddings.
        """
        for i in range(len(self.embeds)):
            print("Showing nearest neighbours for = %s" % word)
            print("\nEmbedding no = %d" % i)
            for s in self.NNS[i][self.ids[word], :][:nns]:
                print(self.words[s])

    def compute_weights(self):
        """
        Compute the reconstruction weights (step 2 of LLE) by gradient descent.
        """
        start_time = time.time()
        T = 10        # no. of gradient-descent iterations
        alpha = 0.01  # learning rate
        N = len(self.words)
        self.W = numpy.zeros((N, N), dtype=numpy.float64)
        # Initialise the weights uniformly over the union of neighbourhoods.
        for i in range(N):
            nns = set()
            for j in range(len(self.embeds)):
                for x in self.NNS[j][i, :]:
                    nns.add(x)
            val = 1.0 / float(len(nns))
            for j in nns:
                self.W[i, j] = val
        # Minimise the reconstruction error summed over all source embeddings.
        for i in range(N):
            write("\x1b[2K\rLearning weights for (%d of %d) = %s" % (i, N, self.words[i]))
            for t in range(T):
                # Residual between word i and its weighted reconstruction in each embedding.
                d = [self.embeds[j][:, i] - numpy.sum([self.W[i, k] * self.embeds[j][:, k]
                     for k in self.NNS[j][i, :]], axis=0) for j in range(len(self.embeds))]
                grad = numpy.zeros(N, dtype=numpy.float64)
                for j in range(len(self.embeds)):
                    for k in self.NNS[j][i, :]:
                        grad[k] += -2.0 * numpy.dot(d[j], self.embeds[j][:, k])
                self.W[i, :] -= alpha * grad
                # Re-normalise so that the weights of each word sum to one.
                total = numpy.sum(self.W[i, :])
                if total != 0:
                    self.W[i, :] = self.W[i, :] / total
        write("\n")
        end_time = time.time()
        write("Done (took %s seconds)\n" % str(end_time - start_time))

    def save_weights(self, fname):
        """
        Save the weight matrix to a disk file.
        """
        savemat(fname, {"W": self.W})

    def load_weights(self, fname):
        """
        Load the weight matrix from a disk file.
        """
        self.W = loadmat(fname)["W"]

    def test_compute_weights(self):
        """
        Check whether the weights were computed correctly.
        """
        N = len(self.words)
        # Non-neighbours must have zero weight.
        write("Checking whether non-neighbours have zero weights...\n")
        for i in range(N):
            pred_nns = set(numpy.where(self.W[i, :] != 0)[0])
            nns = set()
            for j in range(len(self.embeds)):
                nns = nns.union(set(self.NNS[j][i, :]))
            assert pred_nns == nns
        # Reconstruction weights must add up to one.
        write("Checking whether weights add to 1...\n")
        for i in range(N):
            assert numpy.allclose(numpy.sum(self.W[i, :]), 1)
        # Write the nearest neighbours and their weights to a CSV file.
        with open("../work/nn.csv", 'w') as nn_file:
            for i in range(N):
                nn_file.write("%s, " % self.words[i])
                L = []
                for j in range(N):
                    if self.W[i, j] != 0:
                        L.append((self.words[j], self.W[i, j]))
                L.sort(key=lambda x: x[1], reverse=True)
                for (w, val) in L:
                    nn_file.write("%s, %f, " % (w, val))
                nn_file.write("\n")

    def compute_M(self):
        """
        Compute M = (I - W')^T (I - W'), whose smallest eigenvectors
        give the low-dimensional embedding.
        """
        # Building W': scale each weight by the number of embeddings in
        # which the corresponding word pair are neighbours.
        N = len(self.words)
        start_time = time.time()
        write("Computing W'...")
        for i in range(N):
            z = numpy.zeros(N)
            write("Completed %d of %d\r" % (i, N))
            for nns in self.NNS:
                z[nns[i, :]] += 1
            self.W[i, :] = z * self.W[i, :]
        end_time = time.time()
        write("Done (took %s seconds)\n" % str(end_time - start_time))
        # Computing M.
        start_time = time.time()
        write("Computing M....")
        self.W = csr_matrix(self.W)
        M = eye(N, format=self.W.format) - self.W
        M = (M.T * M).tocsr()
        end_time = time.time()
        write("Done (took %s seconds)\n" % str(end_time - start_time))
        return M

    def compute_embeddings(self, k, M, embed_fname):
        """
        Perform the eigendecomposition (step 3 of LLE) and save the result.
        """
        N = len(self.words)
        start_time = time.time()
        write("Computing Eigen decomposition...")
        s, V = eigsh(M, k + 1, tol=1E-6, which="SA", maxiter=100)
        end_time = time.time()
        write("Done (took %s seconds)\n" % str(end_time - start_time))
        # Discard the bottom (constant) eigenvector, as in standard LLE.
        P = V[:, 1:]
        err = numpy.sum(s[1:])
        write("Projection error = %f\n" % err)
        write("Writing embeddings to file...")
        with open(embed_fname, 'w') as embed_file:
            for i in range(N):
                embed_file.write("%s %s\n" % (self.words[i], " ".join([str(x) for x in P[i, :]])))
        write("Done\n")


def write(msg):
    sys.stdout.write(msg)
    sys.stdout.flush()


def meta_embed(embeddings, words, nns, comps, embed_path):
    """
    Perform meta-embedding using LLE.
    """
    ME = MetaEmbed(embeddings, words)
    ME.compute_neighbours(nns)
    #ME.show_nns("king", 5)
    ME.compute_weights()
    #ME.save_weights("../work/weights_%d" % nns)
    #ME.load_weights("../work/weights+n=%d.meta" % nns)
    #ME.test_compute_weights()
    M = ME.compute_M()
    for k in comps:
        embed_fname = "%s/n=%d+k=%d" % (embed_path, nns, k)
        write("Embedding NNS = %d, Components (k) = %d\n" % (nns, k))
        try:
            ME.compute_embeddings(k, M, embed_fname)
        except ArpackNoConvergence as e:
            print(e)
    return ME


def baseline_concatenate(embeddings, words, embed_fname):
    """
    Concatenate embeddings to create co-embeddings.
    """
    dim = sum([x.dim for x in embeddings])
    print("Concatenation dimension =", dim)
    # Concatenate the vectors.
    with open(embed_fname, 'w') as embed_file:
        for (i, word) in enumerate(words):
            L = []
            for x in embeddings:
                # Empirical up-weighting of the 300-dimensional source.
                w = 8 if x.dim == 300 else 1
                L.append(w * x.vects[word])
            z = numpy.concatenate(L)
            embed_file.write("%s %s\n" % (word, " ".join([str(x) for x in z])))


def get_common_words(embeddings):
    # Words that appear in the vocabulary of every source embedding.
    words = set(embeddings[0].vocab)
    for i in range(1, len(embeddings)):
        words = words.intersection(set(embeddings[i].vocab))
    return words


def get_selected_words(fname):
    words = []
    with open(fname) as F:
        for line in F:
            words.append(line.strip())
    return words


def perform_embedding(nns, comps):
    print("Neighbourhood size = %d" % nns)
    embed_settings = sources.embed_settings
    embeddings = []
    for (embd_fname, dim) in embed_settings:
        start_time = time.time()
        sys.stdout.write("Loading %s -- (%d dim) ..." % (embd_fname, dim))
        sys.stdout.flush()
        WR = WordReps()
        WR.read_model(embd_fname, dim)
        end_time = time.time()
        sys.stdout.write("\nDone. took %s seconds\n" % str(end_time - start_time))
        sys.stdout.flush()
        embeddings.append(WR)
    common_words = get_common_words(embeddings)
    selected_words = get_selected_words("../work/selected-words")
    words = []
    for word in selected_words:
        if word in common_words and word not in words:
            words.append(word)
    print("No. of common words =", len(common_words))
    print("Vocabulary size =", len(words))
    ME = meta_embed(embeddings, words, nns, comps, "../work/meta-embeds")


def save_embedding(words, WR, fname):
    with open(fname, 'w') as F:
        for w in words:
            if w in WR.vects:
                F.write("%s " % w)
                F.write("%s\n" % " ".join([str(x) for x in WR.vects[w]]))


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-nns", type=int, help="number of nearest neighbours")
    parser.add_argument("-comps", type=str, help="components for the projection")
    args = parser.parse_args()
    comps = [int(x) for x in args.comps.split(',')]
    perform_embedding(args.nns, comps)


if __name__ == '__main__':
    main()
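Assuming the script above is saved as, say, meta_embed.py, and that the repo-local sources and wordreps modules plus the ../work/ input files are in place (these paths come from the code; the filename is an assumption), it can be run as, for example, python meta_embed.py -nns 5 -comps 50,100 to produce LLE meta-embeddings with 50- and 100-dimensional projections.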