Compare commits


No commits in common. 'c10-beta' and 'c9' have entirely different histories.
c10-beta ... c9

.gitignore

@@ -1,2 +1 @@
-SOURCES/squid-6.10.tar.xz
-SOURCES/pgp.asc
+SOURCES/squid-5.5.tar.xz

@@ -1,2 +1 @@
-70e90865df0e4e9ba7765b622da40bda9bb8fc5d SOURCES/squid-6.10.tar.xz
-8e3de63f3bef0c9c4edbcfe000c567119f687143 SOURCES/pgp.asc
+42302bd9b8feff851a41420334cb8eaeab2806ab SOURCES/squid-5.5.tar.xz

File diff suppressed because it is too large

@@ -1,10 +1,10 @@
 diff --git a/contrib/url-normalizer.pl b/contrib/url-normalizer.pl
-index e965e9e..ed5ffcb 100755
+index 4cb0480..4b89910 100755
 --- a/contrib/url-normalizer.pl
 +++ b/contrib/url-normalizer.pl
 @@ -1,4 +1,4 @@
 -#!/usr/local/bin/perl -Tw
 +#!/usr/bin/perl -Tw
  #
-# * Copyright (C) 1996-2023 The Squid Software Foundation and contributors
+# * Copyright (C) 1996-2022 The Squid Software Foundation and contributors
  # *

@@ -0,0 +1,95 @@
------------------------------------------------------------
revno: 14311
revision-id: squid3@treenet.co.nz-20150924130537-lqwzd1z99a3l9gt4
parent: squid3@treenet.co.nz-20150924032241-6cx3g6hwz9xfoybr
fixes bug: http://bugs.squid-cache.org/show_bug.cgi?id=4323
author: Francesco Chemolli <kinkie@squid-cache.org>
committer: Amos Jeffries <squid3@treenet.co.nz>
branch nick: trunk
timestamp: Thu 2015-09-24 06:05:37 -0700
message:
Bug 4323: Netfilter broken cross-includes with Linux 4.2
------------------------------------------------------------
# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: squid3@treenet.co.nz-20150924130537-lqwzd1z99a3l9gt4
# target_branch: http://bzr.squid-cache.org/bzr/squid3/trunk/
# testament_sha1: c67cfca81040f3845d7c4caf2f40518511f14d0b
# timestamp: 2015-09-24 13:06:33 +0000
# source_branch: http://bzr.squid-cache.org/bzr/squid3/trunk
# base_revision_id: squid3@treenet.co.nz-20150924032241-\
# 6cx3g6hwz9xfoybr
#
# Begin patch
=== modified file 'compat/os/linux.h'
--- compat/os/linux.h 2015-01-13 07:25:36 +0000
+++ compat/os/linux.h 2015-09-24 13:05:37 +0000
@@ -30,6 +30,21 @@
#endif
/*
+ * Netfilter header madness. (see Bug 4323)
+ *
+ * Netfilter have a history of defining their own versions of network protocol
+ * primitives without sufficient protection against the POSIX defines which are
+ * aways present in Linux.
+ *
+ * netinet/in.h must be included before any other sys header in order to properly
+ * activate include guards in <linux/libc-compat.h> the kernel maintainers added
+ * to workaround it.
+ */
+#if HAVE_NETINET_IN_H
+#include <netinet/in.h>
+#endif
+
+/*
* sys/capability.h is only needed in Linux apparently.
*
* HACK: LIBCAP_BROKEN Ugly glue to get around linux header madness colliding with glibc

@@ -1,8 +1,7 @@
-diff --git a/src/cf.data.pre b/src/cf.data.pre
-index 44aa34d..12225bc 100644
---- a/src/cf.data.pre
-+++ b/src/cf.data.pre
-@@ -5453,7 +5453,7 @@ DOC_END
+diff -up squid-4.0.11/src/cf.data.pre.config squid-4.0.11/src/cf.data.pre
+--- squid-4.0.11/src/cf.data.pre.config	2016-06-09 22:32:57.000000000 +0200
++++ squid-4.0.11/src/cf.data.pre	2016-07-11 21:08:35.090976840 +0200
+@@ -4658,7 +4658,7 @@ DOC_END
 
 NAME: logfile_rotate
 TYPE: int
@@ -11,7 +10,7 @@ index 44aa34d..12225bc 100644
 LOC: Config.Log.rotateNumber
 DOC_START
 Specifies the default number of logfile rotations to make when you
-@@ -7447,11 +7447,11 @@ COMMENT_END
+@@ -6444,11 +6444,11 @@ COMMENT_END
 
 NAME: cache_mgr
 TYPE: string

@@ -0,0 +1,68 @@
From fc01451000eaa5592cd5afbd6aee14e53f7dd2c3 Mon Sep 17 00:00:00 2001
From: Amos Jeffries <amosjeffries@squid-cache.org>
Date: Sun, 18 Oct 2020 20:23:10 +1300
Subject: [PATCH] Update translations integration
* Add credits for es-mx translation moderator
* Use es-mx for default of all Spanish (Central America) texts
* Update translation related .am files
---
doc/manuals/language.am | 2 +-
errors/TRANSLATORS | 1 +
errors/aliases | 3 ++-
errors/language.am | 3 ++-
errors/template.am | 2 +-
5 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/doc/manuals/language.am b/doc/manuals/language.am
index 7670c88380c..f03c4cf71b4 100644
--- a/doc/manuals/language.am
+++ b/doc/manuals/language.am
@@ -18,4 +18,4 @@ TRANSLATE_LANGUAGES = \
oc.lang \
pt.lang \
ro.lang \
- ru.lang
+ ru.lang
diff --git a/errors/aliases b/errors/aliases
index 36f17f4b80f..cf0116f297d 100644
--- a/errors/aliases
+++ b/errors/aliases
@@ -14,7 +14,8 @@ da da-dk
de de-at de-ch de-de de-li de-lu
el el-gr
en en-au en-bz en-ca en-cn en-gb en-ie en-in en-jm en-nz en-ph en-sg en-tt en-uk en-us en-za en-zw
-es es-ar es-bo es-cl es-co es-cr es-do es-ec es-es es-gt es-hn es-mx es-ni es-pa es-pe es-pr es-py es-sv es-us es-uy es-ve es-xl
+es es-ar es-bo es-cl es-cu es-co es-do es-ec es-es es-pe es-pr es-py es-us es-uy es-ve es-xl spq
+es-mx es-bz es-cr es-gt es-hn es-ni es-pa es-sv
et et-ee
fa fa-fa fa-ir
fi fi-fi
diff --git a/errors/language.am b/errors/language.am
index 12b1b2b3b43..029e8c1eb2f 100644
--- a/errors/language.am
+++ b/errors/language.am
@@ -17,6 +17,7 @@ TRANSLATE_LANGUAGES = \
de.lang \
el.lang \
en.lang \
+ es-mx.lang \
es.lang \
et.lang \
fa.lang \
@@ -51,4 +52,4 @@ TRANSLATE_LANGUAGES = \
uz.lang \
vi.lang \
zh-hans.lang \
- zh-hant.lang
+ zh-hant.lang
diff --git a/errors/template.am b/errors/template.am
index 6c12781e6f4..715c65aa22b 100644
--- a/errors/template.am
+++ b/errors/template.am
@@ -48,4 +48,4 @@ ERROR_TEMPLATES = \
templates/ERR_UNSUP_REQ \
templates/ERR_URN_RESOLVE \
templates/ERR_WRITE_ERROR \
- templates/ERR_ZERO_SIZE_OBJECT
+ templates/ERR_ZERO_SIZE_OBJECT

@@ -0,0 +1,127 @@
diff --git a/src/clients/FtpClient.cc b/src/clients/FtpClient.cc
index 747ed35..f2b7126 100644
--- a/src/clients/FtpClient.cc
+++ b/src/clients/FtpClient.cc
@@ -795,7 +795,8 @@ Ftp::Client::connectDataChannel()
bool
Ftp::Client::openListenSocket()
{
- return false;
+ debugs(9, 3, HERE);
+ return false;
}
/// creates a data channel Comm close callback
diff --git a/src/clients/FtpClient.h b/src/clients/FtpClient.h
index eb5ea1b..e92c007 100644
--- a/src/clients/FtpClient.h
+++ b/src/clients/FtpClient.h
@@ -137,7 +137,7 @@ public:
bool sendPort();
bool sendPassive();
void connectDataChannel();
- bool openListenSocket();
+ virtual bool openListenSocket();
void switchTimeoutToDataChannel();
CtrlChannel ctrl; ///< FTP control channel state
diff --git a/src/clients/FtpGateway.cc b/src/clients/FtpGateway.cc
index 05db817..2989cd2 100644
--- a/src/clients/FtpGateway.cc
+++ b/src/clients/FtpGateway.cc
@@ -86,6 +86,13 @@ struct GatewayFlags {
class Gateway;
typedef void (StateMethod)(Ftp::Gateway *);
+} // namespace FTP
+
+static void ftpOpenListenSocket(Ftp::Gateway * ftpState, int fallback);
+
+namespace Ftp
+{
+
/// FTP Gateway: An FTP client that takes an HTTP request with an ftp:// URI,
/// converts it into one or more FTP commands, and then
/// converts one or more FTP responses into the final HTTP response.
@@ -136,7 +143,11 @@ public:
/// create a data channel acceptor and start listening.
void listenForDataChannel(const Comm::ConnectionPointer &conn);
-
+ virtual bool openListenSocket() {
+ debugs(9, 3, HERE);
+ ftpOpenListenSocket(this, 0);
+ return Comm::IsConnOpen(data.conn);
+ }
int checkAuth(const HttpHeader * req_hdr);
void checkUrlpath();
void buildTitleUrl();
@@ -1786,6 +1797,7 @@ ftpOpenListenSocket(Ftp::Gateway * ftpState, int fallback)
}
ftpState->listenForDataChannel(temp);
+ ftpState->data.listenConn = temp;
}
static void
@@ -1821,13 +1833,19 @@ ftpSendPORT(Ftp::Gateway * ftpState)
// pull out the internal IP address bytes to send in PORT command...
// source them from the listen_conn->local
+ struct sockaddr_in addr;
+ socklen_t addrlen = sizeof(addr);
+ getsockname(ftpState->data.listenConn->fd, (struct sockaddr *) &addr, &addrlen);
+ unsigned char port_high = ntohs(addr.sin_port) >> 8;
+ unsigned char port_low = ntohs(addr.sin_port) & 0xff;
+
struct addrinfo *AI = NULL;
ftpState->data.listenConn->local.getAddrInfo(AI, AF_INET);
unsigned char *addrptr = (unsigned char *) &((struct sockaddr_in*)AI->ai_addr)->sin_addr;
- unsigned char *portptr = (unsigned char *) &((struct sockaddr_in*)AI->ai_addr)->sin_port;
+ // unsigned char *portptr = (unsigned char *) &((struct sockaddr_in*)AI->ai_addr)->sin_port;
snprintf(cbuf, CTRL_BUFLEN, "PORT %d,%d,%d,%d,%d,%d\r\n",
addrptr[0], addrptr[1], addrptr[2], addrptr[3],
- portptr[0], portptr[1]);
+ port_high, port_low);
ftpState->writeCommand(cbuf);
ftpState->state = Ftp::Client::SENT_PORT;
@@ -1880,14 +1898,27 @@ ftpSendEPRT(Ftp::Gateway * ftpState)
return;
}
+
+ unsigned int port;
+ struct sockaddr_storage addr;
+ socklen_t addrlen = sizeof(addr);
+ getsockname(ftpState->data.listenConn->fd, (struct sockaddr *) &addr, &addrlen);
+ if (addr.ss_family == AF_INET) {
+ struct sockaddr_in *addr4 = (struct sockaddr_in*) &addr;
+ port = ntohs( addr4->sin_port );
+ } else {
+ struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *) &addr;
+ port = ntohs( addr6->sin6_port );
+ }
+
char buf[MAX_IPSTRLEN];
/* RFC 2428 defines EPRT as IPv6 equivalent to IPv4 PORT command. */
/* Which can be used by EITHER protocol. */
- snprintf(cbuf, CTRL_BUFLEN, "EPRT |%d|%s|%d|\r\n",
+ snprintf(cbuf, CTRL_BUFLEN, "EPRT |%d|%s|%u|\r\n",
( ftpState->data.listenConn->local.isIPv6() ? 2 : 1 ),
ftpState->data.listenConn->local.toStr(buf,MAX_IPSTRLEN),
- ftpState->data.listenConn->local.port() );
+ port);
ftpState->writeCommand(cbuf);
ftpState->state = Ftp::Client::SENT_EPRT;
@@ -1906,7 +1937,7 @@ ftpReadEPRT(Ftp::Gateway * ftpState)
ftpSendPORT(ftpState);
return;
}
-
+ ftpState->ctrl.message = NULL;
ftpRestOrList(ftpState);
}
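The core of this patch is replacing the stale port recorded in listenConn->local with the port the kernel actually assigned, fetched via getsockname(2) right before building the PORT/EPRT command. A self-contained sketch of that technique (plain POSIX C++, not Squid code), including the high/low byte split that ftpSendPORT() performs:

#include <arpa/inet.h>
#include <cstdio>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    const int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0; // let the kernel pick an ephemeral port
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0 || listen(fd, 1) != 0)
        return 1;

    // The address we bound with still says port 0; ask the kernel what it
    // really chose before advertising it to the FTP peer.
    socklen_t addrlen = sizeof(addr);
    getsockname(fd, (struct sockaddr *)&addr, &addrlen);
    const unsigned port = ntohs(addr.sin_port);
    std::printf("PORT byte pair: %u,%u\n", port >> 8, port & 0xff);
    close(fd);
    return 0;
}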

@@ -0,0 +1,185 @@
diff --git a/src/ssl/support.cc b/src/ssl/support.cc
index 3ad135d..73912ce 100644
--- a/src/ssl/support.cc
+++ b/src/ssl/support.cc
@@ -557,7 +557,11 @@ Ssl::VerifyCallbackParameters::At(Security::Connection &sconn)
}
// "dup" function for SSL_get_ex_new_index("cert_err_check")
-#if SQUID_USE_CONST_CRYPTO_EX_DATA_DUP
+#if OPENSSL_VERSION_MAJOR >= 3
+static int
+ssl_dupAclChecklist(CRYPTO_EX_DATA *, const CRYPTO_EX_DATA *, void **,
+ int, long, void *)
+#elif SQUID_USE_CONST_CRYPTO_EX_DATA_DUP
static int
ssl_dupAclChecklist(CRYPTO_EX_DATA *, const CRYPTO_EX_DATA *, void *,
int, long, void *)
diff --git a/src/security/PeerOptions.cc b/src/security/PeerOptions.cc
index cf1d4ba..4346ba5 100644
--- a/src/security/PeerOptions.cc
+++ b/src/security/PeerOptions.cc
@@ -297,130 +297,130 @@ static struct ssl_option {
} ssl_options[] = {
-#if SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG
+#ifdef SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG
{
"NETSCAPE_REUSE_CIPHER_CHANGE_BUG", SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG
},
#endif
-#if SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG
+#ifdef SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG
{
"SSLREF2_REUSE_CERT_TYPE_BUG", SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG
},
#endif
-#if SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER
+#ifdef SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER
{
"MICROSOFT_BIG_SSLV3_BUFFER", SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER
},
#endif
-#if SSL_OP_SSLEAY_080_CLIENT_DH_BUG
+#ifdef SSL_OP_SSLEAY_080_CLIENT_DH_BUG
{
"SSLEAY_080_CLIENT_DH_BUG", SSL_OP_SSLEAY_080_CLIENT_DH_BUG
},
#endif
-#if SSL_OP_TLS_D5_BUG
+#ifdef SSL_OP_TLS_D5_BUG
{
"TLS_D5_BUG", SSL_OP_TLS_D5_BUG
},
#endif
-#if SSL_OP_TLS_BLOCK_PADDING_BUG
+#ifdef SSL_OP_TLS_BLOCK_PADDING_BUG
{
"TLS_BLOCK_PADDING_BUG", SSL_OP_TLS_BLOCK_PADDING_BUG
},
#endif
-#if SSL_OP_TLS_ROLLBACK_BUG
+#ifdef SSL_OP_TLS_ROLLBACK_BUG
{
"TLS_ROLLBACK_BUG", SSL_OP_TLS_ROLLBACK_BUG
},
#endif
-#if SSL_OP_ALL
+#ifdef SSL_OP_ALL
{
"ALL", (long)SSL_OP_ALL
},
#endif
-#if SSL_OP_SINGLE_DH_USE
+#ifdef SSL_OP_SINGLE_DH_USE
{
"SINGLE_DH_USE", SSL_OP_SINGLE_DH_USE
},
#endif
-#if SSL_OP_EPHEMERAL_RSA
+#ifdef SSL_OP_EPHEMERAL_RSA
{
"EPHEMERAL_RSA", SSL_OP_EPHEMERAL_RSA
},
#endif
-#if SSL_OP_PKCS1_CHECK_1
+#ifdef SSL_OP_PKCS1_CHECK_1
{
"PKCS1_CHECK_1", SSL_OP_PKCS1_CHECK_1
},
#endif
-#if SSL_OP_PKCS1_CHECK_2
+#ifdef SSL_OP_PKCS1_CHECK_2
{
"PKCS1_CHECK_2", SSL_OP_PKCS1_CHECK_2
},
#endif
-#if SSL_OP_NETSCAPE_CA_DN_BUG
+#ifdef SSL_OP_NETSCAPE_CA_DN_BUG
{
"NETSCAPE_CA_DN_BUG", SSL_OP_NETSCAPE_CA_DN_BUG
},
#endif
-#if SSL_OP_NON_EXPORT_FIRST
+#ifdef SSL_OP_NON_EXPORT_FIRST
{
"NON_EXPORT_FIRST", SSL_OP_NON_EXPORT_FIRST
},
#endif
-#if SSL_OP_CIPHER_SERVER_PREFERENCE
+#ifdef SSL_OP_CIPHER_SERVER_PREFERENCE
{
"CIPHER_SERVER_PREFERENCE", SSL_OP_CIPHER_SERVER_PREFERENCE
},
#endif
-#if SSL_OP_NETSCAPE_DEMO_CIPHER_CHANGE_BUG
+#ifdef SSL_OP_NETSCAPE_DEMO_CIPHER_CHANGE_BUG
{
"NETSCAPE_DEMO_CIPHER_CHANGE_BUG", SSL_OP_NETSCAPE_DEMO_CIPHER_CHANGE_BUG
},
#endif
-#if SSL_OP_NO_SSLv3
+#ifdef SSL_OP_NO_SSLv3
{
"NO_SSLv3", SSL_OP_NO_SSLv3
},
#endif
-#if SSL_OP_NO_TLSv1
+#ifdef SSL_OP_NO_TLSv1
{
"NO_TLSv1", SSL_OP_NO_TLSv1
},
#else
{ "NO_TLSv1", 0 },
#endif
-#if SSL_OP_NO_TLSv1_1
+#ifdef SSL_OP_NO_TLSv1_1
{
"NO_TLSv1_1", SSL_OP_NO_TLSv1_1
},
#else
{ "NO_TLSv1_1", 0 },
#endif
-#if SSL_OP_NO_TLSv1_2
+#ifdef SSL_OP_NO_TLSv1_2
{
"NO_TLSv1_2", SSL_OP_NO_TLSv1_2
},
#else
{ "NO_TLSv1_2", 0 },
#endif
-#if SSL_OP_NO_TLSv1_3
+#ifdef SSL_OP_NO_TLSv1_3
{
"NO_TLSv1_3", SSL_OP_NO_TLSv1_3
},
#else
{ "NO_TLSv1_3", 0 },
#endif
-#if SSL_OP_NO_COMPRESSION
+#ifdef SSL_OP_NO_COMPRESSION
{
"No_Compression", SSL_OP_NO_COMPRESSION
},
#endif
-#if SSL_OP_NO_TICKET
+#ifdef SSL_OP_NO_TICKET
{
"NO_TICKET", SSL_OP_NO_TICKET
},
#endif
-#if SSL_OP_SINGLE_ECDH_USE
+#ifdef SSL_OP_SINGLE_ECDH_USE
{
"SINGLE_ECDH_USE", SSL_OP_SINGLE_ECDH_USE
},
@@ -512,7 +512,7 @@ Security::PeerOptions::parseOptions()
}
-#if SSL_OP_NO_SSLv2
+#ifdef SSL_OP_NO_SSLv2
// compliance with RFC 6176: Prohibiting Secure Sockets Layer (SSL) Version 2.0
op = op | SSL_OP_NO_SSLv2;
#endif
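The whole patch is one mechanical substitution, but the distinction matters: #if evaluates the macro's value as a constant expression, while #ifdef only asks whether the name is defined. Newer OpenSSL releases define some SSL_OP_* symbols as 0 and others with casts the preprocessor cannot evaluate, so #if either silently drops a table entry or fails to preprocess at all. A minimal illustration (generic macros, not the real OpenSSL definitions):

#define OPT_A 0x00000001U
#define OPT_B 0                       /* name known, but a no-op value */
#define OPT_C ((unsigned long)1 << 3) /* cast: #if cannot evaluate this */

#if OPT_B   /* false: the table entry would silently vanish */
int optB_listed;
#endif

#ifdef OPT_B /* true: the name exists, keep the entry even if it is 0 */
int optB_kept;
#endif

#ifdef OPT_C /* "#if OPT_C" would be a preprocessing error; #ifdef is safe */
int optC_kept;
#endif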

@@ -0,0 +1,24 @@
diff --git a/src/tests/testStoreHashIndex.cc b/src/tests/testStoreHashIndex.cc
index 0564380..fcd60b9 100644
--- a/src/tests/testStoreHashIndex.cc
+++ b/src/tests/testStoreHashIndex.cc
@@ -102,6 +102,8 @@ void commonInit()
if (inited)
return;
+ inited = true;
+
Mem::Init();
Config.Store.avgObjectSize = 1024;
@@ -109,6 +111,10 @@ void commonInit()
Config.Store.objectsPerBucket = 20;
Config.Store.maxObjectSize = 2048;
+
+ Config.memShared.defaultTo(false);
+
+ Config.store_dir_select_algorithm = xstrdup("round-robin");
}
/* TODO make this a cbdata class */

@@ -0,0 +1,120 @@
diff --git a/src/gopher.cc b/src/gopher.cc
index 576a3f7..2645b6b 100644
--- a/src/gopher.cc
+++ b/src/gopher.cc
@@ -364,7 +364,6 @@ gopherToHTML(GopherStateData * gopherState, char *inbuf, int len)
char *lpos = NULL;
char *tline = NULL;
LOCAL_ARRAY(char, line, TEMP_BUF_SIZE);
- LOCAL_ARRAY(char, tmpbuf, TEMP_BUF_SIZE);
char *name = NULL;
char *selector = NULL;
char *host = NULL;
@@ -374,7 +373,6 @@ gopherToHTML(GopherStateData * gopherState, char *inbuf, int len)
char gtype;
StoreEntry *entry = NULL;
- memset(tmpbuf, '\0', TEMP_BUF_SIZE);
memset(line, '\0', TEMP_BUF_SIZE);
entry = gopherState->entry;
@@ -409,7 +407,7 @@ gopherToHTML(GopherStateData * gopherState, char *inbuf, int len)
return;
}
- String outbuf;
+ SBuf outbuf;
if (!gopherState->HTML_header_added) {
if (gopherState->conversion == GopherStateData::HTML_CSO_RESULT)
@@ -577,34 +575,34 @@ gopherToHTML(GopherStateData * gopherState, char *inbuf, int len)
break;
}
- memset(tmpbuf, '\0', TEMP_BUF_SIZE);
-
if ((gtype == GOPHER_TELNET) || (gtype == GOPHER_3270)) {
if (strlen(escaped_selector) != 0)
- snprintf(tmpbuf, TEMP_BUF_SIZE, "<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"telnet://%s@%s%s%s/\">%s</A>\n",
- icon_url, escaped_selector, rfc1738_escape_part(host),
- *port ? ":" : "", port, html_quote(name));
+ outbuf.appendf("<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"telnet://%s@%s%s%s/\">%s</A>\n",
+ icon_url, escaped_selector, rfc1738_escape_part(host),
+ *port ? ":" : "", port, html_quote(name));
else
- snprintf(tmpbuf, TEMP_BUF_SIZE, "<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"telnet://%s%s%s/\">%s</A>\n",
- icon_url, rfc1738_escape_part(host), *port ? ":" : "",
- port, html_quote(name));
+ outbuf.appendf("<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"telnet://%s%s%s/\">%s</A>\n",
+ icon_url, rfc1738_escape_part(host), *port ? ":" : "",
+ port, html_quote(name));
} else if (gtype == GOPHER_INFO) {
- snprintf(tmpbuf, TEMP_BUF_SIZE, "\t%s\n", html_quote(name));
+ outbuf.appendf("\t%s\n", html_quote(name));
} else {
if (strncmp(selector, "GET /", 5) == 0) {
/* WWW link */
- snprintf(tmpbuf, TEMP_BUF_SIZE, "<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"http://%s/%s\">%s</A>\n",
- icon_url, host, rfc1738_escape_unescaped(selector + 5), html_quote(name));
+ outbuf.appendf("<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"http://%s/%s\">%s</A>\n",
+ icon_url, host, rfc1738_escape_unescaped(selector + 5), html_quote(name));
+ } else if (gtype == GOPHER_WWW) {
+ outbuf.appendf("<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"gopher://%s/%c%s\">%s</A>\n",
+ icon_url, rfc1738_escape_unescaped(selector), html_quote(name));
} else {
/* Standard link */
- snprintf(tmpbuf, TEMP_BUF_SIZE, "<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"gopher://%s/%c%s\">%s</A>\n",
- icon_url, host, gtype, escaped_selector, html_quote(name));
+ outbuf.appendf("<IMG border=\"0\" SRC=\"%s\"> <A HREF=\"gopher://%s/%c%s\">%s</A>\n",
+ icon_url, host, gtype, escaped_selector, html_quote(name));
}
}
safe_free(escaped_selector);
- outbuf.append(tmpbuf);
} else {
memset(line, '\0', TEMP_BUF_SIZE);
continue;
@@ -637,13 +635,12 @@ gopherToHTML(GopherStateData * gopherState, char *inbuf, int len)
break;
if (gopherState->cso_recno != recno) {
- snprintf(tmpbuf, TEMP_BUF_SIZE, "</PRE><HR noshade size=\"1px\"><H2>Record# %d<br><i>%s</i></H2>\n<PRE>", recno, html_quote(result));
+ outbuf.appendf("</PRE><HR noshade size=\"1px\"><H2>Record# %d<br><i>%s</i></H2>\n<PRE>", recno, html_quote(result));
gopherState->cso_recno = recno;
} else {
- snprintf(tmpbuf, TEMP_BUF_SIZE, "%s\n", html_quote(result));
+ outbuf.appendf("%s\n", html_quote(result));
}
- outbuf.append(tmpbuf);
break;
} else {
int code;
@@ -671,8 +668,7 @@ gopherToHTML(GopherStateData * gopherState, char *inbuf, int len)
case 502: { /* Too Many Matches */
/* Print the message the server returns */
- snprintf(tmpbuf, TEMP_BUF_SIZE, "</PRE><HR noshade size=\"1px\"><H2>%s</H2>\n<PRE>", html_quote(result));
- outbuf.append(tmpbuf);
+ outbuf.appendf("</PRE><HR noshade size=\"1px\"><H2>%s</H2>\n<PRE>", html_quote(result));
break;
}
@@ -688,13 +684,12 @@ gopherToHTML(GopherStateData * gopherState, char *inbuf, int len)
} /* while loop */
- if (outbuf.size() > 0) {
- entry->append(outbuf.rawBuf(), outbuf.size());
+ if (outbuf.length() > 0) {
+ entry->append(outbuf.rawContent(), outbuf.length());
/* now let start sending stuff to client */
entry->flush();
}
- outbuf.clean();
return;
}

@@ -0,0 +1,38 @@
commit 4031c6c2b004190fdffbc19dab7cd0305a2025b7 (refs/remotes/origin/v4, refs/remotes/github/v4, refs/heads/v4)
Author: Amos Jeffries <yadij@users.noreply.github.com>
Date: 2022-08-09 23:34:54 +0000
Bug 3193 pt2: NTLM decoder truncating strings (#1114)
The initial bug fix overlooked large 'offset' causing integer
wrap to extract a too-short length string.
Improve debugs and checks sequence to clarify cases and ensure
that all are handled correctly.
diff --git a/lib/ntlmauth/ntlmauth.cc b/lib/ntlmauth/ntlmauth.cc
index 5d9637290..f00fd51f8 100644
--- a/lib/ntlmauth/ntlmauth.cc
+++ b/lib/ntlmauth/ntlmauth.cc
@@ -107,10 +107,19 @@ ntlm_fetch_string(const ntlmhdr *packet, const int32_t packet_size, const strhdr
int32_t o = le32toh(str->offset);
// debug("ntlm_fetch_string(plength=%d,l=%d,o=%d)\n",packet_size,l,o);
- if (l < 0 || l > NTLM_MAX_FIELD_LENGTH || o + l > packet_size || o == 0) {
- debug("ntlm_fetch_string: insane data (pkt-sz: %d, fetch len: %d, offset: %d)\n", packet_size,l,o);
+ if (l < 0 || l > NTLM_MAX_FIELD_LENGTH) {
+ debug("ntlm_fetch_string: insane string length (pkt-sz: %d, fetch len: %d, offset: %d)\n", packet_size,l,o);
return rv;
}
+ else if (o <= 0 || o > packet_size) {
+ debug("ntlm_fetch_string: insane string offset (pkt-sz: %d, fetch len: %d, offset: %d)\n", packet_size,l,o);
+ return rv;
+ }
+ else if (l > packet_size - o) {
+ debug("ntlm_fetch_string: truncated string data (pkt-sz: %d, fetch len: %d, offset: %d)\n", packet_size,l,o);
+ return rv;
+ }
+
rv.str = (char *)packet + o;
rv.l = 0;
if ((flags & NTLM_NEGOTIATE_ASCII) == 0) {
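The reordered checks above avoid a classic integer-overflow trap: with 32-bit signed values, the old combined test o + l > packet_size can wrap around for a huge offset and pass, pointing the extracted string far outside the packet. Validating the offset first and then comparing l against packet_size - o cannot overflow. A standalone sketch of the pattern (plain C++, not the Squid code; the length cap is an illustrative stand-in for NTLM_MAX_FIELD_LENGTH):

#include <cstdint>

static const int32_t kMaxFieldLength = 300; // illustrative stand-in

static const char *fetch_field(const char *packet, int32_t packet_size,
                               int32_t o, int32_t l)
{
    if (l < 0 || l > kMaxFieldLength)
        return nullptr;          // insane length
    if (o <= 0 || o > packet_size)
        return nullptr;          // insane offset
    if (l > packet_size - o)     // cannot wrap: packet_size - o >= 0 here
        return nullptr;          // string would run past the packet end
    return packet + o;           // [o, o + l) is now provably in bounds
}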

@@ -0,0 +1,24 @@
diff --git a/src/anyp/Uri.cc b/src/anyp/Uri.cc
index 20b9bf1..81ebb18 100644
--- a/src/anyp/Uri.cc
+++ b/src/anyp/Uri.cc
@@ -173,6 +173,10 @@ urlInitialize(void)
assert(0 == matchDomainName("*.foo.com", ".foo.com", mdnHonorWildcards));
assert(0 != matchDomainName("*.foo.com", "foo.com", mdnHonorWildcards));
+ assert(0 != matchDomainName("foo.com", ""));
+ assert(0 != matchDomainName("foo.com", "", mdnHonorWildcards));
+ assert(0 != matchDomainName("foo.com", "", mdnRejectSubsubDomains));
+
/* more cases? */
}
@@ -756,6 +760,8 @@ matchDomainName(const char *h, const char *d, MatchDomainNameFlags flags)
return -1;
dl = strlen(d);
+ if (dl == 0)
+ return 1;
/*
* Start at the ends of the two strings and work towards the

File diff suppressed because it is too large

@@ -0,0 +1,178 @@
From 05f6af2f4c85cc99323cfff6149c3d74af661b6d Mon Sep 17 00:00:00 2001
From: Amos Jeffries <yadij@users.noreply.github.com>
Date: Fri, 13 Oct 2023 08:44:16 +0000
Subject: [PATCH] RFC 9112: Improve HTTP chunked encoding compliance (#1498)
---
src/http/one/Parser.cc | 8 +-------
src/http/one/Parser.h | 4 +---
src/http/one/TeChunkedParser.cc | 23 ++++++++++++++++++-----
src/parser/Tokenizer.cc | 12 ++++++++++++
src/parser/Tokenizer.h | 7 +++++++
5 files changed, 39 insertions(+), 15 deletions(-)
diff --git a/src/http/one/Parser.cc b/src/http/one/Parser.cc
index c78ddd7f0..291ae39f0 100644
--- a/src/http/one/Parser.cc
+++ b/src/http/one/Parser.cc
@@ -65,16 +65,10 @@ Http::One::Parser::DelimiterCharacters()
void
Http::One::Parser::skipLineTerminator(Tokenizer &tok) const
{
- if (tok.skip(Http1::CrLf()))
- return;
-
if (Config.onoff.relaxed_header_parser && tok.skipOne(CharacterSet::LF))
return;
- if (tok.atEnd() || (tok.remaining().length() == 1 && tok.remaining().at(0) == '\r'))
- throw InsufficientInput();
-
- throw TexcHere("garbage instead of CRLF line terminator");
+ tok.skipRequired("line-terminating CRLF", Http1::CrLf());
}
/// all characters except the LF line terminator
diff --git a/src/http/one/Parser.h b/src/http/one/Parser.h
index f83c01a9a..aab895583 100644
--- a/src/http/one/Parser.h
+++ b/src/http/one/Parser.h
@@ -124,9 +124,7 @@ protected:
* detect and skip the CRLF or (if tolerant) LF line terminator
* consume from the tokenizer.
*
- * \throws exception on bad or InsuffientInput.
- * \retval true only if line terminator found.
- * \retval false incomplete or missing line terminator, need more data.
+ * \throws exception on bad or InsufficientInput
*/
void skipLineTerminator(Tokenizer &) const;
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index 1434100b6..8bdb65abb 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -91,6 +91,11 @@ Http::One::TeChunkedParser::parseChunkSize(Tokenizer &tok)
{
Must(theChunkSize <= 0); // Should(), really
+ static const SBuf bannedHexPrefixLower("0x");
+ static const SBuf bannedHexPrefixUpper("0X");
+ if (tok.skip(bannedHexPrefixLower) || tok.skip(bannedHexPrefixUpper))
+ throw TextException("chunk starts with 0x", Here());
+
int64_t size = -1;
if (tok.int64(size, 16, false) && !tok.atEnd()) {
if (size < 0)
@@ -121,7 +126,7 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
parseChunkExtensions(tok); // a possibly empty chunk-ext list
- skipLineTerminator(tok);
+ tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
parsingStage_ = theChunkSize ? Http1::HTTP_PARSE_CHUNK : Http1::HTTP_PARSE_MIME;
return true;
@@ -132,12 +137,14 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// other exceptions bubble up to kill message parsing
}
-/// Parses the chunk-ext list (RFC 7230 section 4.1.1 and its Errata #4667):
+/// Parses the chunk-ext list (RFC 9112 section 7.1.1:
/// chunk-ext = *( BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ] )
void
-Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &tok)
+Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &callerTok)
{
do {
+ auto tok = callerTok;
+
ParseBws(tok); // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
if (!tok.skip(';'))
@@ -145,6 +152,7 @@ Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &tok)
parseOneChunkExtension(tok);
buf_ = tok.remaining(); // got one extension
+ callerTok = tok;
} while (true);
}
@@ -158,11 +166,14 @@ Http::One::ChunkExtensionValueParser::Ignore(Tokenizer &tok, const SBuf &extName
/// Parses a single chunk-ext list element:
/// chunk-ext = *( BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ] )
void
-Http::One::TeChunkedParser::parseOneChunkExtension(Tokenizer &tok)
+Http::One::TeChunkedParser::parseOneChunkExtension(Tokenizer &callerTok)
{
+ auto tok = callerTok;
+
ParseBws(tok); // Bug 4492: ICAP servers send SP before chunk-ext-name
const auto extName = tok.prefix("chunk-ext-name", CharacterSet::TCHAR);
+ callerTok = tok; // in case we determine that this is a valueless chunk-ext
ParseBws(tok);
@@ -176,6 +187,8 @@ Http::One::TeChunkedParser::parseOneChunkExtension(Tokenizer &tok)
customExtensionValueParser->parse(tok, extName);
else
ChunkExtensionValueParser::Ignore(tok, extName);
+
+ callerTok = tok;
}
bool
@@ -209,7 +222,7 @@ Http::One::TeChunkedParser::parseChunkEnd(Tokenizer &tok)
Must(theLeftBodySize == 0); // Should(), really
try {
- skipLineTerminator(tok);
+ tok.skipRequired("chunk CRLF", Http1::CrLf());
buf_ = tok.remaining(); // parse checkpoint
theChunkSize = 0; // done with the current chunk
parsingStage_ = Http1::HTTP_PARSE_CHUNK_SZ;
diff --git a/src/parser/Tokenizer.cc b/src/parser/Tokenizer.cc
index edaffd8d3..15df793b8 100644
--- a/src/parser/Tokenizer.cc
+++ b/src/parser/Tokenizer.cc
@@ -147,6 +147,18 @@ Parser::Tokenizer::skipAll(const CharacterSet &tokenChars)
return success(prefixLen);
}
+void
+Parser::Tokenizer::skipRequired(const char *description, const SBuf &tokenToSkip)
+{
+ if (skip(tokenToSkip) || tokenToSkip.isEmpty())
+ return;
+
+ if (tokenToSkip.startsWith(buf_))
+ throw InsufficientInput();
+
+ throw TextException(ToSBuf("cannot skip ", description), Here());
+}
+
bool
Parser::Tokenizer::skipOne(const CharacterSet &chars)
{
diff --git a/src/parser/Tokenizer.h b/src/parser/Tokenizer.h
index 7bae1ccbb..3cfa7dd6c 100644
--- a/src/parser/Tokenizer.h
+++ b/src/parser/Tokenizer.h
@@ -115,6 +115,13 @@ public:
*/
SBuf::size_type skipAll(const CharacterSet &discardables);
+ /** skips a given character sequence (string);
+ * does nothing if the sequence is empty
+ *
+ * \throws exception on mismatching prefix or InsufficientInput
+ */
+ void skipRequired(const char *description, const SBuf &tokenToSkip);
+
/** Removes a single trailing character from the set.
*
* \return whether a character was removed
--
2.25.1
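The interesting part of the new skipRequired() is how it separates a hard parse error from input that is merely incomplete: if the bytes buffered so far are a prefix of the required token, more input may still complete the match, so it throws InsufficientInput instead of failing. A rough standalone model of that decision (plain C++, not the Squid Tokenizer API):

#include <string>

enum class SkipResult { Skipped, NeedMoreData, Mismatch };

SkipResult skipRequired(std::string &buf, const std::string &token)
{
    if (buf.compare(0, token.size(), token) == 0) {
        buf.erase(0, token.size());  // token fully present: consume it
        return SkipResult::Skipped;
    }
    // token "startsWith" buf: everything read so far still matches, so the
    // caller should retry after buffering more input
    if (buf.size() < token.size() && token.compare(0, buf.size(), buf) == 0)
        return SkipResult::NeedMoreData;   // ~ InsufficientInput
    return SkipResult::Mismatch;           // ~ "cannot skip ..." exception
}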

@@ -0,0 +1,43 @@
From 052cf082b0faaef4eaaa4e94119d7a1437aac4a3 Mon Sep 17 00:00:00 2001
From: squidadm <squidadm@users.noreply.github.com>
Date: Wed, 18 Oct 2023 04:50:56 +1300
Subject: [PATCH] Fix stack buffer overflow when parsing Digest Authorization
(#1517)
The bug was discovered and detailed by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/digest-overflow.html
where it was filed as "Stack Buffer Overflow in Digest Authentication".
---------
Co-authored-by: Alex Bason <nonsleepr@gmail.com>
Co-authored-by: Amos Jeffries <yadij@users.noreply.github.com>
---
src/auth/digest/Config.cc | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/src/auth/digest/Config.cc b/src/auth/digest/Config.cc
index d42831a55..be9f3c433 100644
--- a/src/auth/digest/Config.cc
+++ b/src/auth/digest/Config.cc
@@ -844,11 +844,15 @@ Auth::Digest::Config::decode(char const *proxy_auth, const HttpRequest *request,
break;
case DIGEST_NC:
- if (value.size() != 8) {
+ if (value.size() == 8) {
+ // for historical reasons, the nc value MUST be exactly 8 bytes
+ static_assert(sizeof(digest_request->nc) == 8 + 1, "bad nc buffer size");
+ xstrncpy(digest_request->nc, value.rawBuf(), value.size() + 1);
+ debugs(29, 9, "Found noncecount '" << digest_request->nc << "'");
+ } else {
debugs(29, 9, "Invalid nc '" << value << "' in '" << temp << "'");
+ digest_request->nc[0] = 0;
}
- xstrncpy(digest_request->nc, value.rawBuf(), value.size() + 1);
- debugs(29, 9, "Found noncecount '" << digest_request->nc << "'");
break;
case DIGEST_CNONCE:
--
2.25.1
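The fix works because digest_request->nc is a fixed nine-byte array (eight characters plus the terminator): the old code logged invalid lengths but copied the attacker-controlled value anyway, so xstrncpy with value.size() + 1 wrote past the stack buffer. A stripped-down model of the corrected flow (plain C++, not Squid code; names are illustrative):

#include <cstddef>
#include <cstring>

struct DigestRequest {
    char nc[8 + 1]; // nonce count: exactly 8 hex digits plus NUL
};

void setNonceCount(DigestRequest &req, const char *value, std::size_t len)
{
    if (len == 8) {
        static_assert(sizeof(req.nc) == 8 + 1, "bad nc buffer size");
        std::memcpy(req.nc, value, 8);
        req.nc[8] = '\0';
    } else {
        req.nc[0] = '\0'; // reject malformed nc instead of copying it
    }
}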

@@ -0,0 +1,46 @@
From c67bf049871a49e9871efe50b230a7f37b7039f6 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 25 May 2023 02:10:28 +0000
Subject: [PATCH] Fix userinfo percent-encoding (#1367)
%X expects an unsigned int, and that is what we were giving it. However,
to get to the correct unsigned int value from a (signed) char, one has
to cast to an unsigned char (or equivalent) first.
Broken since inception in commit 7b75100.
Also adjusted similar (commented out) ext_edirectory_userip_acl code.
---
src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc | 2 +-
src/anyp/Uri.cc | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc b/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc
index dbc20ae54..9028d1562 100644
--- a/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc
+++ b/src/acl/external/eDirectory_userip/ext_edirectory_userip_acl.cc
@@ -1612,7 +1612,7 @@ MainSafe(int argc, char **argv)
/* BINARY DEBUGGING *
local_printfx("while() -> bufa[%" PRIuSIZE "]: %s", k, bufa);
for (i = 0; i < k; ++i)
- local_printfx("%02X", bufa[i]);
+ local_printfx("%02X", static_cast<unsigned int>(static_cast<unsigned char>(bufa[i])));
local_printfx("\n");
* BINARY DEBUGGING */
/* Check for CRLF */
diff --git a/src/anyp/Uri.cc b/src/anyp/Uri.cc
index a6a5d5d9e..3d19188e9 100644
--- a/src/anyp/Uri.cc
+++ b/src/anyp/Uri.cc
@@ -70,7 +70,7 @@ AnyP::Uri::Encode(const SBuf &buf, const CharacterSet &ignore)
while (!tk.atEnd()) {
// TODO: Add Tokenizer::parseOne(void).
const auto ch = tk.remaining()[0];
- output.appendf("%%%02X", static_cast<unsigned int>(ch)); // TODO: Optimize using a table
+ output.appendf("%%%02X", static_cast<unsigned int>(static_cast<unsigned char>(ch))); // TODO: Optimize using a table
(void)tk.skip(ch);
if (tk.prefix(goodSection, ignore))
--
2.25.1
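The double cast is the whole fix. Where plain char is signed, a byte such as 0xE9 becomes a negative int, and converting that straight to unsigned int sign-extends it to a huge value, so %02X prints eight hex digits instead of two. A compilable demonstration:

#include <cstdio>

int main()
{
    const char ch = static_cast<char>(0xE9);
    std::printf("%%%02X\n", static_cast<unsigned int>(ch));  // %FFFFFFE9 where char is signed
    std::printf("%%%02X\n", static_cast<unsigned int>(
                    static_cast<unsigned char>(ch)));        // %E9 everywhere
    return 0;
}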

@@ -0,0 +1,30 @@
commit 77b3fb4df0f126784d5fd4967c28ed40eb8d521b
Author: Alex Rousskov <rousskov@measurement-factory.com>
Date: Wed Oct 25 19:41:45 2023 +0000
RFC 1123: Fix date parsing (#1538)
The bug was discovered and detailed by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/datetime-overflow.html
where it was filed as "1-Byte Buffer OverRead in RFC 1123 date/time
Handling".
diff --git a/lib/rfc1123.c b/lib/rfc1123.c
index e5bf9a4d7..cb484cc00 100644
--- a/lib/rfc1123.c
+++ b/lib/rfc1123.c
@@ -50,7 +50,13 @@ make_month(const char *s)
char month[3];
month[0] = xtoupper(*s);
+ if (!month[0])
+ return -1; // protects *(s + 1) below
+
month[1] = xtolower(*(s + 1));
+ if (!month[1])
+ return -1; // protects *(s + 2) below
+
month[2] = xtolower(*(s + 2));
for (i = 0; i < 12; i++)
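The guards stop make_month() from reading past a short or empty string: the function needs three bytes, and each NUL check proves the next index is still inside the caller's buffer, since a C string is only guaranteed valid up to and including its terminator. The same pattern with standard-library functions (a sketch, not the Squid code):

#include <cctype>

// Returns a month index in the real code; this sketch shows only the
// bounds checks. s must be a NUL-terminated string.
static int month_index(const char *s)
{
    char month[3];
    month[0] = std::toupper(static_cast<unsigned char>(s[0]));
    if (!month[0])
        return -1; // empty string: s[1] may lie past the buffer
    month[1] = std::tolower(static_cast<unsigned char>(s[1]));
    if (!month[1])
        return -1; // protects the s[2] read below
    month[2] = std::tolower(static_cast<unsigned char>(s[2]));
    // ... compare month[0..2] against "Jan".."Dec" here ...
    return -1;
}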

@@ -0,0 +1,62 @@
diff --git a/src/ipc.cc b/src/ipc.cc
index 42e11e6..a68e623 100644
--- a/src/ipc.cc
+++ b/src/ipc.cc
@@ -19,6 +19,11 @@
#include "SquidConfig.h"
#include "SquidIpc.h"
#include "tools.h"
+#include <cstdlib>
+
+#if HAVE_UNISTD_H
+#include <unistd.h>
+#endif
static const char *hello_string = "hi there\n";
#ifndef HELLO_BUF_SZ
@@ -365,6 +370,22 @@ ipcCreate(int type, const char *prog, const char *const args[], const char *name
}
PutEnvironment();
+
+ // A dup(2) wrapper that reports and exits the process on errors. The
+ // exiting logic is only suitable for this child process context.
+ const auto dupOrExit = [prog,name](const int oldFd) {
+ const auto newFd = dup(oldFd);
+ if (newFd < 0) {
+ const auto savedErrno = errno;
+ debugs(54, DBG_CRITICAL, "ERROR: Helper process initialization failure: " << name <<
+ Debug::Extra << "helper (CHILD) PID: " << getpid() <<
+ Debug::Extra << "helper program name: " << prog <<
+ Debug::Extra << "dup(2) system call error for FD " << oldFd << ": " << xstrerr(savedErrno));
+ _exit(EXIT_FAILURE);
+ }
+ return newFd;
+ };
+
/*
* This double-dup stuff avoids problems when one of
* crfd, cwfd, or debug_log are in the rage 0-2.
@@ -372,17 +393,16 @@ ipcCreate(int type, const char *prog, const char *const args[], const char *name
do {
/* First make sure 0-2 is occupied by something. Gets cleaned up later */
- x = dup(crfd);
- assert(x > -1);
- } while (x < 3 && x > -1);
+ x = dupOrExit(crfd);
+ } while (x < 3);
close(x);
- t1 = dup(crfd);
+ t1 = dupOrExit(crfd);
- t2 = dup(cwfd);
+ t2 = dupOrExit(cwfd);
- t3 = dup(fileno(debug_log));
+ t3 = dupOrExit(fileno(debug_log));
assert(t1 > 2 && t2 > 2 && t3 > 2);
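The surrounding loop relies on a POSIX guarantee: dup() always returns the lowest unused descriptor, so duplicating until the result is 3 or higher proves descriptors 0-2 are occupied and the later duplicates cannot shadow stdin/stdout/stderr. The patch's contribution is exiting the child on dup() failure instead of tripping an assert. A compact model of both pieces (plain C++, not Squid code):

#include <cstdio>
#include <cstdlib>
#include <unistd.h>

static int dupOrExit(int oldFd)
{
    const int newFd = dup(oldFd);
    if (newFd < 0) {
        perror("dup");       // report which dup failed
        _exit(EXIT_FAILURE); // child-only: abandon initialization
    }
    return newFd;
}

static void occupyStdioFds(int srcFd)
{
    int fd;
    do {
        fd = dupOrExit(srcFd); // fills 0, 1, 2 first if they are free
    } while (fd < 3);
    close(fd);                 // keep 0-2 occupied, drop the extra copy
}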

@@ -0,0 +1,50 @@
diff --git a/src/ClientRequestContext.h b/src/ClientRequestContext.h
index 55a7a43..94a8700 100644
--- a/src/ClientRequestContext.h
+++ b/src/ClientRequestContext.h
@@ -80,6 +80,10 @@ public:
#endif
ErrorState *error; ///< saved error page for centralized/delayed processing
bool readNextRequest; ///< whether Squid should read after error handling
+
+#if FOLLOW_X_FORWARDED_FOR
+ size_t currentXffHopNumber = 0; ///< number of X-Forwarded-For header values processed so far
+#endif
};
#endif /* SQUID_CLIENTREQUESTCONTEXT_H */
diff --git a/src/client_side_request.cc b/src/client_side_request.cc
index f44849e..c7c09d4 100644
--- a/src/client_side_request.cc
+++ b/src/client_side_request.cc
@@ -80,6 +80,11 @@
static const char *const crlf = "\r\n";
#if FOLLOW_X_FORWARDED_FOR
+
+#if !defined(SQUID_X_FORWARDED_FOR_HOP_MAX)
+#define SQUID_X_FORWARDED_FOR_HOP_MAX 64
+#endif
+
static void clientFollowXForwardedForCheck(Acl::Answer answer, void *data);
#endif /* FOLLOW_X_FORWARDED_FOR */
@@ -485,8 +490,16 @@ clientFollowXForwardedForCheck(Acl::Answer answer, void *data)
/* override the default src_addr tested if we have to go deeper than one level into XFF */
Filled(calloutContext->acl_checklist)->src_addr = request->indirect_client_addr;
}
- calloutContext->acl_checklist->nonBlockingCheck(clientFollowXForwardedForCheck, data);
- return;
+ if (++calloutContext->currentXffHopNumber < SQUID_X_FORWARDED_FOR_HOP_MAX) {
+ calloutContext->acl_checklist->nonBlockingCheck(clientFollowXForwardedForCheck, data);
+ return;
+ }
+ const auto headerName = Http::HeaderLookupTable.lookup(Http::HdrType::X_FORWARDED_FOR).name;
+ debugs(28, DBG_CRITICAL, "ERROR: Ignoring trailing " << headerName << " addresses" <<
+ Debug::Extra << "addresses allowed by follow_x_forwarded_for: " << calloutContext->currentXffHopNumber <<
+ Debug::Extra << "last/accepted address: " << request->indirect_client_addr <<
+ Debug::Extra << "ignored trailing addresses: " << request->x_forwarded_for_iterator);
+ // fall through to resume clientAccessCheck() processing
}
}
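The hardening is a simple hop budget: every X-Forwarded-For entry costs one (asynchronous) ACL check, so an attacker-supplied header with thousands of entries used to translate into unbounded work and recursion depth. A loop-shaped sketch of the capped traversal (illustrative names, not Squid's actual recursive callback flow):

#include <cstddef>
#include <string>
#include <vector>

constexpr std::size_t kXffHopMax = 64; // mirrors SQUID_X_FORWARDED_FOR_HOP_MAX

std::string effectiveClientAddr(const std::vector<std::string> &xffChain,
                                const std::string &directClient)
{
    std::string client = directClient;
    std::size_t hops = 0;
    // Walk the chain right to left, one trusted hop at a time.
    for (auto it = xffChain.rbegin(); it != xffChain.rend(); ++it) {
        if (++hops >= kXffHopMax)
            break;    // cap reached: ignore the remaining entries
        client = *it; // accept this hop as the new indirect client
    }
    return client;
}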

File diff suppressed because it is too large

@@ -0,0 +1,30 @@
commit 8fcff9c09824b18628f010d26a04247f6a6cbcb8
Author: Alex Rousskov <rousskov@measurement-factory.com>
Date: Sun Nov 12 09:33:20 2023 +0000
Do not update StoreEntry expiration after errorAppendEntry() (#1580)
errorAppendEntry() is responsible for setting entry expiration times,
which it does by calling StoreEntry::storeErrorResponse() that calls
StoreEntry::negativeCache().
This change was triggered by a vulnerability report by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/cache-uaf.html where
it was filed as "Use-After-Free in Cache Manager Errors". The reported
"use after free" vulnerability was unknowingly addressed by 2022 commit
1fa761a that removed excessively long "reentrant" store_client calls
responsible for the disappearance of the properly locked StoreEntry in
this (and probably other) contexts.
diff --git a/src/cache_manager.cc b/src/cache_manager.cc
index 61c7f65be..65bf22dd0 100644
--- a/src/cache_manager.cc
+++ b/src/cache_manager.cc
@@ -326,7 +326,6 @@ CacheManager::start(const Comm::ConnectionPointer &client, HttpRequest *request,
err->url = xstrdup(entry->url());
err->detailError(new ExceptionErrorDetail(Here().id()));
errorAppendEntry(entry, err);
- entry->expires = squid_curtime;
return;
}

@@ -0,0 +1,192 @@
diff --git a/src/http.cc b/src/http.cc
index 98e3969..8b55bf3 100644
--- a/src/http.cc
+++ b/src/http.cc
@@ -54,6 +54,7 @@
#include "rfc1738.h"
#include "SquidConfig.h"
#include "SquidTime.h"
+#include "SquidMath.h"
#include "StatCounters.h"
#include "Store.h"
#include "StrList.h"
@@ -1235,18 +1236,26 @@ HttpStateData::readReply(const CommIoCbParams &io)
* Plus, it breaks our lame *HalfClosed() detection
*/
- Must(maybeMakeSpaceAvailable(true));
- CommIoCbParams rd(this); // will be expanded with ReadNow results
- rd.conn = io.conn;
- rd.size = entry->bytesWanted(Range<size_t>(0, inBuf.spaceSize()));
+ const auto moreDataPermission = canBufferMoreReplyBytes();
+ if (!moreDataPermission) {
+ abortTransaction("ready to read required data, but the read buffer is full and cannot be drained");
+ return;
+ }
+
+ const auto readSizeMax = maybeMakeSpaceAvailable(moreDataPermission.value());
+ // TODO: Move this logic inside maybeMakeSpaceAvailable():
+ const auto readSizeWanted = readSizeMax ? entry->bytesWanted(Range<size_t>(0, readSizeMax)) : 0;
- if (rd.size <= 0) {
+ if (readSizeWanted <= 0) {
assert(entry->mem_obj);
AsyncCall::Pointer nilCall;
entry->mem_obj->delayRead(DeferredRead(readDelayed, this, CommRead(io.conn, NULL, 0, nilCall)));
return;
}
+ CommIoCbParams rd(this); // will be expanded with ReadNow results
+ rd.conn = io.conn;
+ rd.size = readSizeWanted;
switch (Comm::ReadNow(rd, inBuf)) {
case Comm::INPROGRESS:
if (inBuf.isEmpty())
@@ -1617,8 +1626,10 @@ HttpStateData::maybeReadVirginBody()
if (!Comm::IsConnOpen(serverConnection) || fd_table[serverConnection->fd].closing())
return;
- if (!maybeMakeSpaceAvailable(false))
+ if (!canBufferMoreReplyBytes()) {
+ abortTransaction("more response bytes required, but the read buffer is full and cannot be drained");
return;
+ }
// XXX: get rid of the do_next_read flag
// check for the proper reasons preventing read(2)
@@ -1636,40 +1647,78 @@ HttpStateData::maybeReadVirginBody()
Comm::Read(serverConnection, call);
}
-bool
-HttpStateData::maybeMakeSpaceAvailable(bool doGrow)
+/// Desired inBuf capacity based on various capacity preferences/limits:
+/// * a smaller buffer may not hold enough for look-ahead header/body parsers;
+/// * a smaller buffer may result in inefficient tiny network reads;
+/// * a bigger buffer may waste memory;
+/// * a bigger buffer may exceed SBuf storage capabilities (SBuf::maxSize);
+size_t
+HttpStateData::calcReadBufferCapacityLimit() const
{
- // how much we are allowed to buffer
- const int limitBuffer = (flags.headers_parsed ? Config.readAheadGap : Config.maxReplyHeaderSize);
-
- if (limitBuffer < 0 || inBuf.length() >= (SBuf::size_type)limitBuffer) {
- // when buffer is at or over limit already
- debugs(11, 7, "will not read up to " << limitBuffer << ". buffer has (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
- debugs(11, DBG_DATA, "buffer has {" << inBuf << "}");
- // Process next response from buffer
- processReply();
- return false;
+ if (!flags.headers_parsed)
+ return Config.maxReplyHeaderSize;
+
+ // XXX: Our inBuf is not used to maintain the read-ahead gap, and using
+ // Config.readAheadGap like this creates huge read buffers for large
+ // read_ahead_gap values. TODO: Switch to using tcp_recv_bufsize as the
+ // primary read buffer capacity factor.
+ //
+ // TODO: Cannot reuse throwing NaturalCast() here. Consider removing
+ // .value() dereference in NaturalCast() or add/use NaturalCastOrMax().
+ const auto configurationPreferences = NaturalSum<size_t>(Config.readAheadGap).value_or(SBuf::maxSize);
+
+ // TODO: Honor TeChunkedParser look-ahead and trailer parsing requirements
+ // (when explicit configurationPreferences are set too low).
+
+ return std::min<size_t>(configurationPreferences, SBuf::maxSize);
+}
+
+/// The maximum number of virgin reply bytes we may buffer before we violate
+/// the currently configured response buffering limits.
+/// \retval std::nullopt means that no more virgin response bytes can be read
+/// \retval 0 means that more virgin response bytes may be read later
+/// \retval >0 is the number of bytes that can be read now (subject to other constraints)
+std::optional<size_t>
+HttpStateData::canBufferMoreReplyBytes() const
+{
+#if USE_ADAPTATION
+ // If we do not check this now, we may say the final "no" prematurely below
+ // because inBuf.length() will decrease as adaptation drains buffered bytes.
+ if (responseBodyBuffer) {
+ debugs(11, 3, "yes, but waiting for adaptation to drain read buffer");
+ return 0; // yes, we may be able to buffer more (but later)
+ }
+#endif
+
+ const auto maxCapacity = calcReadBufferCapacityLimit();
+ if (inBuf.length() >= maxCapacity) {
+ debugs(11, 3, "no, due to a full buffer: " << inBuf.length() << '/' << inBuf.spaceSize() << "; limit: " << maxCapacity);
+ return std::nullopt; // no, configuration prohibits buffering more
}
+ const auto maxReadSize = maxCapacity - inBuf.length(); // positive
+ debugs(11, 7, "yes, may read up to " << maxReadSize << " into " << inBuf.length() << '/' << inBuf.spaceSize());
+ return maxReadSize; // yes, can read up to this many bytes (subject to other constraints)
+}
+
+/// prepare read buffer for reading
+/// \return the maximum number of bytes the caller should attempt to read
+/// \retval 0 means that the caller should delay reading
+size_t
+HttpStateData::maybeMakeSpaceAvailable(const size_t maxReadSize)
+{
// how much we want to read
- const size_t read_size = calcBufferSpaceToReserve(inBuf.spaceSize(), (limitBuffer - inBuf.length()));
+ const size_t read_size = calcBufferSpaceToReserve(inBuf.spaceSize(), maxReadSize);
- if (!read_size) {
+ if (read_size < 2) {
debugs(11, 7, "will not read up to " << read_size << " into buffer (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
- return false;
+ return 0;
}
- // just report whether we could grow or not, do not actually do it
- if (doGrow)
- return (read_size >= 2);
-
// we may need to grow the buffer
inBuf.reserveSpace(read_size);
- debugs(11, 8, (!flags.do_next_read ? "will not" : "may") <<
- " read up to " << read_size << " bytes info buf(" << inBuf.length() << "/" << inBuf.spaceSize() <<
- ") from " << serverConnection);
-
- return (inBuf.spaceSize() >= 2); // only read if there is 1+ bytes of space available
+ debugs(11, 7, "may read up to " << read_size << " bytes info buffer (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
+ return read_size;
}
/// called after writing the very last request byte (body, last-chunk, etc)
diff --git a/src/http.h b/src/http.h
index e70cd7e..f7ed40d 100644
--- a/src/http.h
+++ b/src/http.h
@@ -15,6 +15,8 @@
#include "http/StateFlags.h"
#include "sbuf/SBuf.h"
+#include <optional>
+
class FwdState;
class HttpHeader;
class String;
@@ -112,16 +114,9 @@ private:
void abortTransaction(const char *reason) { abortAll(reason); } // abnormal termination
- /**
- * determine if read buffer can have space made available
- * for a read.
- *
- * \param grow whether to actually expand the buffer
- *
- * \return whether the buffer can be grown to provide space
- * regardless of whether the grow actually happened.
- */
- bool maybeMakeSpaceAvailable(bool grow);
+ size_t calcReadBufferCapacityLimit() const;
+ std::optional<size_t> canBufferMoreReplyBytes() const;
+ size_t maybeMakeSpaceAvailable(size_t maxReadSize);
// consuming request body
virtual void handleMoreRequestBodyAvailable();
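canBufferMoreReplyBytes() packs a three-way answer into std::optional<size_t>: std::nullopt means the buffering limits are final, stop reading; 0 means not now, retry once the buffer drains (the adaptation case); a positive value is the current read budget. A minimal model of that contract (not the Squid code; names are illustrative):

#include <cstddef>
#include <optional>

std::optional<std::size_t>
canBufferMore(std::size_t buffered, std::size_t capacity, bool draining)
{
    if (draining)
        return 0;               // yes, but later: wait for the drain
    if (buffered >= capacity)
        return std::nullopt;    // no: configured limit already reached
    return capacity - buffered; // yes: read up to this many bytes now
}

// Caller pattern mirroring readReply(): no value -> abort the transaction;
// *n == 0 -> defer the read; *n > 0 -> read up to *n bytes.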

@@ -0,0 +1,105 @@
diff --git a/src/SquidString.h b/src/SquidString.h
index e36cd27..ea613ad 100644
--- a/src/SquidString.h
+++ b/src/SquidString.h
@@ -140,7 +140,16 @@ private:
size_type len_ = 0; /* current length */
- static const size_type SizeMax_ = 65535; ///< 64K limit protects some fixed-size buffers
+ /// An earlier 64KB limit was meant to protect some fixed-size buffers, but
+ /// (a) we do not know where those buffers are (or whether they still exist)
+ /// (b) too many String users unknowingly exceeded that limit and asserted.
+ /// We are now using a larger limit to reduce the number of (b) cases,
+ /// especially cases where "compact" lists of items grow 50% in size when we
+ /// convert them to canonical form. The new limit is selected to withstand
+ /// concatenation and ~50% expansion of two HTTP headers limited by default
+ /// request_header_max_size and reply_header_max_size settings.
+ static const size_type SizeMax_ = 3*64*1024 - 1;
+
/// returns true after increasing the first argument by extra if the sum does not exceed SizeMax_
static bool SafeAdd(size_type &base, size_type extra) { if (extra <= SizeMax_ && base <= SizeMax_ - extra) { base += extra; return true; } return false; }
diff --git a/src/cache_cf.cc b/src/cache_cf.cc
index cb746dc..c4ade96 100644
--- a/src/cache_cf.cc
+++ b/src/cache_cf.cc
@@ -950,6 +950,18 @@ configDoConfigure(void)
(uint32_t)Config.maxRequestBufferSize, (uint32_t)Config.maxRequestHeaderSize);
}
+ // Warn about the dangers of exceeding String limits when manipulating HTTP
+ // headers. Technically, we do not concatenate _requests_, so we could relax
+ // their check, but we keep the two checks the same for simplicity sake.
+ const auto safeRawHeaderValueSizeMax = (String::SizeMaxXXX()+1)/3;
+ // TODO: static_assert(safeRawHeaderValueSizeMax >= 64*1024); // no WARNINGs for default settings
+ if (Config.maxRequestHeaderSize > safeRawHeaderValueSizeMax)
+ debugs(3, DBG_CRITICAL, "WARNING: Increasing request_header_max_size beyond " << safeRawHeaderValueSizeMax <<
+ " bytes makes Squid more vulnerable to denial-of-service attacks; configured value: " << Config.maxRequestHeaderSize << " bytes");
+ if (Config.maxReplyHeaderSize > safeRawHeaderValueSizeMax)
+ debugs(3, DBG_CRITICAL, "WARNING: Increasing reply_header_max_size beyond " << safeRawHeaderValueSizeMax <<
+ " bytes makes Squid more vulnerable to denial-of-service attacks; configured value: " << Config.maxReplyHeaderSize << " bytes");
+
/*
* Disable client side request pipelining if client_persistent_connections OFF.
* Waste of resources queueing any pipelined requests when the first will close the connection.
diff --git a/src/cf.data.pre b/src/cf.data.pre
index 67a66b0..61a66f1 100644
--- a/src/cf.data.pre
+++ b/src/cf.data.pre
@@ -6489,11 +6489,14 @@ TYPE: b_size_t
DEFAULT: 64 KB
LOC: Config.maxRequestHeaderSize
DOC_START
- This specifies the maximum size for HTTP headers in a request.
- Request headers are usually relatively small (about 512 bytes).
- Placing a limit on the request header size will catch certain
- bugs (for example with persistent connections) and possibly
- buffer-overflow or denial-of-service attacks.
+ This directives limits the header size of a received HTTP request
+ (including request-line). Increasing this limit beyond its 64 KB default
+ exposes certain old Squid code to various denial-of-service attacks. This
+ limit also applies to received FTP commands.
+
+ This limit has no direct affect on Squid memory consumption.
+
+ Squid does not check this limit when sending requests.
DOC_END
NAME: reply_header_max_size
@@ -6502,11 +6505,14 @@ TYPE: b_size_t
DEFAULT: 64 KB
LOC: Config.maxReplyHeaderSize
DOC_START
- This specifies the maximum size for HTTP headers in a reply.
- Reply headers are usually relatively small (about 512 bytes).
- Placing a limit on the reply header size will catch certain
- bugs (for example with persistent connections) and possibly
- buffer-overflow or denial-of-service attacks.
+ This directives limits the header size of a received HTTP response
+ (including status-line). Increasing this limit beyond its 64 KB default
+ exposes certain old Squid code to various denial-of-service attacks. This
+ limit also applies to FTP command responses.
+
+ Squid also checks this limit when loading hit responses from disk cache.
+
+ Squid does not check this limit when sending responses.
DOC_END
NAME: request_body_max_size
diff --git a/src/http.cc b/src/http.cc
index 7c9ae70..98e3969 100644
--- a/src/http.cc
+++ b/src/http.cc
@@ -1926,8 +1926,9 @@ HttpStateData::httpBuildRequestHeader(HttpRequest * request,
String strFwd = hdr_in->getList(Http::HdrType::X_FORWARDED_FOR);
- // if we cannot double strFwd size, then it grew past 50% of the limit
- if (!strFwd.canGrowBy(strFwd.size())) {
+ // Detect unreasonably long header values. And paranoidly check String
+ // limits: a String ought to accommodate two reasonable-length values.
+ if (strFwd.size() > 32*1024 || !strFwd.canGrowBy(strFwd.size())) {
// There is probably a forwarding loop with Via detection disabled.
// If we do nothing, String will assert on overflow soon.
// TODO: Terminate all transactions with huge XFF?

@@ -0,0 +1,13 @@
diff --git a/lib/libTrie/TrieNode.cc b/lib/libTrie/TrieNode.cc
index b379856..5d87279 100644
--- a/lib/libTrie/TrieNode.cc
+++ b/lib/libTrie/TrieNode.cc
@@ -32,7 +32,7 @@ TrieNode::add(char const *aString, size_t theLength, void *privatedata, TrieChar
/* We trust that privatedata and existant keys have already been checked */
if (theLength) {
- int index = transform ? (*transform)(*aString): *aString;
+ const unsigned char index = transform ? (*transform)(*aString): *aString;
if (!internal[index])
internal[index] = new TrieNode;
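The one-word change matters because plain char may be signed: any byte at or above 0x80 converts to a negative int, and internal[index] then reads before the start of the 256-entry array. Routing the byte through unsigned char maps it to 0..255 on every platform. A compilable demonstration:

int main()
{
    const char c = static_cast<char>(0xC3); // e.g. a UTF-8 lead byte
    const int bad = c;                      // -61 where plain char is signed
    const unsigned char good = c;           // 195 everywhere
    // With "bad" as an index, internal[bad] would read out of bounds.
    return (bad < 0 && good == 195) ? 0 : 1;
}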

@@ -1,8 +1,8 @@
 diff --git a/src/client_side.cc b/src/client_side.cc
-index f488fc4..69586df 100644
+index 4eb6976..63f1b66 100644
 --- a/src/client_side.cc
 +++ b/src/client_side.cc
-@@ -932,7 +932,7 @@ ConnStateData::kick()
+@@ -957,7 +957,7 @@ ConnStateData::kick()
 * We are done with the response, and we are either still receiving request
 * body (early response!) or have already stopped receiving anything.
 *
@@ -11,7 +11,7 @@ index f488fc4..69586df 100644
 * (XXX: but then we will call readNextRequest() which may succeed and
 * execute a smuggled request as we are not done with the current request).
 *
-@@ -952,28 +952,12 @@ ConnStateData::kick()
+@@ -977,28 +977,12 @@ ConnStateData::kick()
 * Attempt to parse a request from the request buffer.
 * If we've been fed a pipelined request it may already
 * be in our read buffer.
@@ -42,7 +42,7 @@ index f488fc4..69586df 100644
 /** \par
 * At this point we either have a parsed request (which we've
-@@ -1893,16 +1877,11 @@ ConnStateData::receivedFirstByte()
+@@ -1935,16 +1919,11 @@ ConnStateData::receivedFirstByte()
 resetReadTimeout(Config.Timeout.request);
 }
@@ -60,19 +60,19 @@
 {
 - bool parsed_req = false;
 -
-debugs(33, 5, clientConnection << ": attempting to parse");
+debugs(33, 5, HERE << clientConnection << ": attempting to parse");
 // Loop while we have read bytes that are not needed for producing the body
-@@ -1947,8 +1926,6 @@ ConnStateData::clientParseRequests()
+@@ -1989,8 +1968,6 @@ ConnStateData::clientParseRequests()
 processParsedRequest(context);
 - parsed_req = true; // XXX: do we really need to parse everything right NOW ?
 -
 if (context->mayUseConnection()) {
-debugs(33, 3, "Not parsing new requests, as this request may need the connection");
+debugs(33, 3, HERE << "Not parsing new requests, as this request may need the connection");
 break;
-@@ -1961,8 +1938,19 @@ ConnStateData::clientParseRequests()
+@@ -2003,8 +1980,19 @@ ConnStateData::clientParseRequests()
 }
 }
@@ -94,7 +94,7 @@ index f488fc4..69586df 100644
 }
 void
-@@ -1979,18 +1967,7 @@ ConnStateData::afterClientRead()
+@@ -2021,18 +2009,7 @@ ConnStateData::afterClientRead()
 if (pipeline.empty())
 fd_note(clientConnection->fd, "Reading next request");
@@ -114,7 +114,7 @@
 if (!isOpen())
 return;
-@@ -3775,7 +3752,7 @@ ConnStateData::notePinnedConnectionBecameIdle(PinnedIdleContext pic)
+@@ -3789,7 +3766,7 @@ ConnStateData::notePinnedConnectionBecameIdle(PinnedIdleContext pic)
 startPinnedConnectionMonitoring();
 if (pipeline.empty())
@@ -124,18 +124,18 @@
 /// Forward future client requests using the given server connection.
 diff --git a/src/client_side.h b/src/client_side.h
-index 6027b31..60b99b1 100644
+index 2793673..7c8d86b 100644
 --- a/src/client_side.h
 +++ b/src/client_side.h
-@@ -98,7 +98,6 @@ public:
-void doneWithControlMsg() override;
+@@ -93,7 +93,6 @@ public:
+virtual void doneWithControlMsg();
 /// Traffic parsing
 - bool clientParseRequests();
 void readNextRequest();
 /// try to make progress on a transaction or read more I/O
-@@ -443,6 +442,7 @@ private:
+@@ -422,6 +421,7 @@ private:
 void checkLogging();
@@ -144,7 +144,7 @@
 bool concurrentRequestQueueFilled() const;
 diff --git a/src/tests/stub_client_side.cc b/src/tests/stub_client_side.cc
-index 8c160e5..f49d5dc 100644
+index acf61c4..b1d82bf 100644
 --- a/src/tests/stub_client_side.cc
 +++ b/src/tests/stub_client_side.cc
 @@ -14,7 +14,7 @@

@@ -0,0 +1,367 @@
From 8d0ee420a4d91ac7fd97316338f1e28b4b060cbf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lubo=C5=A1=20Uhliarik?= <luhliari@redhat.com>
Date: Thu, 10 Oct 2024 19:26:27 +0200
Subject: [PATCH 1/6] Ignore whitespace chars after chunk-size
Previously (before the #1498 change), squid accepted TE-chunked replies
with whitespace after the chunk-size and missing chunk-ext data. After
that change, such replies were rejected.
It turned out that replies with such whitespace chars are pretty
common, and other web servers which can act as forward proxies (e.g.
nginx, httpd...) accept them.
This change allows proxying of chunked responses from an origin
server that puts whitespace between the chunk-size and the CRLF.
---
src/http/one/TeChunkedParser.cc | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index 9cce10fdc91..04753395e16 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,6 +125,7 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
+ tok.skipAll(CharacterSet::WSP); // Some servers send SP/TAB after chunk-size
parseChunkExtensions(tok); // a possibly empty chunk-ext list
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
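For context, the tolerated wire form is "chunk-size *( SP / HTAB ) CRLF". A minimal standalone sketch of such a parse (a hand-rolled illustration over std::string, not Squid's Tokenizer API):

#include <cctype>
#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <string>

// Parse "chunk-size *( SP / HTAB ) CRLF", returning the size and advancing
// pos past the CRLF. The SP/HTAB tolerance mirrors the skipAll(WSP) line
// added above; a strict RFC 9112 parser would reject the padded form.
static std::size_t parseChunkSize(const std::string &buf, std::size_t &pos)
{
    std::size_t size = 0;
    bool sawDigit = false;
    while (pos < buf.size() && std::isxdigit(static_cast<unsigned char>(buf[pos]))) {
        const char c = buf[pos++];
        size = size * 16 + (std::isdigit(static_cast<unsigned char>(c)) ?
                            c - '0' : std::tolower(static_cast<unsigned char>(c)) - 'a' + 10);
        sawDigit = true;
    }
    if (!sawDigit)
        throw std::runtime_error("no chunk-size");
    while (pos < buf.size() && (buf[pos] == ' ' || buf[pos] == '\t'))
        ++pos; // the tolerated whitespace between chunk-size and CRLF
    if (buf.compare(pos, 2, "\r\n") != 0)
        throw std::runtime_error("missing CRLF after chunk-size");
    pos += 2;
    return size;
}

int main()
{
    const std::string reply = "5 \r\nhello\r\n0\r\n\r\n"; // note the SP after "5"
    std::size_t pos = 0;
    const std::size_t n = parseChunkSize(reply, pos);
    std::cout << "chunk of " << n << " bytes: " << reply.substr(pos, n) << "\n";
    return 0;
}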
From 9c8d35f899035fa06021ab3fe6919f892c2f0c6b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lubo=C5=A1=20Uhliarik?= <luhliari@redhat.com>
Date: Fri, 11 Oct 2024 02:06:31 +0200
Subject: [PATCH 2/6] Added new argument to Http::One::ParseBws()
The new wsp_only argument to ParseBws() selects which set of whitespace
characters is skipped. If wsp_only is set to true, only SP and HTAB
chars are skipped.
Also reduced the number of ParseBws() calls.
---
src/http/one/Parser.cc | 4 ++--
src/http/one/Parser.h | 3 ++-
src/http/one/TeChunkedParser.cc | 13 +++++++++----
src/http/one/TeChunkedParser.h | 2 +-
4 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/src/http/one/Parser.cc b/src/http/one/Parser.cc
index b1908316a0b..01d7e3bc0e8 100644
--- a/src/http/one/Parser.cc
+++ b/src/http/one/Parser.cc
@@ -273,9 +273,9 @@ Http::One::ErrorLevel()
// BWS = *( SP / HTAB ) ; WhitespaceCharacters() may relax this RFC 7230 rule
void
-Http::One::ParseBws(Parser::Tokenizer &tok)
+Http::One::ParseBws(Parser::Tokenizer &tok, const bool wsp_only)
{
- const auto count = tok.skipAll(Parser::WhitespaceCharacters());
+ const auto count = tok.skipAll(wsp_only ? CharacterSet::WSP : Parser::WhitespaceCharacters());
if (tok.atEnd())
throw InsufficientInput(); // even if count is positive
diff --git a/src/http/one/Parser.h b/src/http/one/Parser.h
index d9a0ac8c273..08200371cd6 100644
--- a/src/http/one/Parser.h
+++ b/src/http/one/Parser.h
@@ -163,8 +163,9 @@ class Parser : public RefCountable
};
/// skips and, if needed, warns about RFC 7230 BWS ("bad" whitespace)
+/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimeter chars
/// \throws InsufficientInput when the end of BWS cannot be confirmed
-void ParseBws(Parser::Tokenizer &);
+void ParseBws(Parser::Tokenizer &, const bool wsp_only = false);
/// the right debugs() level for logging HTTP violation messages
int ErrorLevel();
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index 04753395e16..41e1e5ddaea 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,8 +125,11 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
- tok.skipAll(CharacterSet::WSP); // Some servers send SP/TAB after chunk-size
- parseChunkExtensions(tok); // a possibly empty chunk-ext list
+ // A possibly empty chunk-ext list. If no chunk-ext has been found,
+ // try to skip trailing BWS, because some servers send "chunk-size BWS CRLF".
+ if (!parseChunkExtensions(tok))
+ ParseBws(tok, true);
+
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
parsingStage_ = theChunkSize ? Http1::HTTP_PARSE_CHUNK : Http1::HTTP_PARSE_MIME;
@@ -140,20 +143,22 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
/// Parses the chunk-ext list (RFC 9112 section 7.1.1:
/// chunk-ext = *( BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ] )
-void
+bool
Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &callerTok)
{
+ bool foundChunkExt = false;
do {
auto tok = callerTok;
ParseBws(tok); // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
if (!tok.skip(';'))
- return; // reached the end of extensions (if any)
+ return foundChunkExt; // reached the end of extensions (if any)
parseOneChunkExtension(tok);
buf_ = tok.remaining(); // got one extension
callerTok = tok;
+ foundChunkExt = true;
} while (true);
}
diff --git a/src/http/one/TeChunkedParser.h b/src/http/one/TeChunkedParser.h
index 02eacd1bb89..8c5d4bb4cba 100644
--- a/src/http/one/TeChunkedParser.h
+++ b/src/http/one/TeChunkedParser.h
@@ -71,7 +71,7 @@ class TeChunkedParser : public Http1::Parser
private:
bool parseChunkSize(Tokenizer &tok);
bool parseChunkMetadataSuffix(Tokenizer &);
- void parseChunkExtensions(Tokenizer &);
+ bool parseChunkExtensions(Tokenizer &);
void parseOneChunkExtension(Tokenizer &);
bool parseChunkBody(Tokenizer &tok);
bool parseChunkEnd(Tokenizer &tok);
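The effect of wsp_only can be sketched with two stand-in character sets. Assumption: the relaxed Parser::WhitespaceCharacters() set is modeled here as SP/HTAB plus CR; the real set depends on relaxed_header_parser:

#include <cstddef>
#include <iostream>
#include <string>

static bool isWsp(char c) { return c == ' ' || c == '\t'; }     // CharacterSet::WSP stand-in
static bool isRelaxed(char c) { return isWsp(c) || c == '\r'; } // assumed relaxed set

static std::size_t skipAll(const std::string &buf, bool (*set)(char))
{
    std::size_t pos = 0;
    while (pos < buf.size() && set(buf[pos]))
        ++pos;
    return pos;
}

int main()
{
    const std::string tail = " \r\n"; // BWS followed by the terminating CRLF
    std::cout << "wsp_only=true  stops at index " << skipAll(tail, isWsp) << " (CR preserved)\n";
    std::cout << "wsp_only=false stops at index " << skipAll(tail, isRelaxed) << " (CR already consumed)\n";
    return 0;
}

With the relaxed set, the CR of the terminating CRLF can be consumed while skipping BWS, so the subsequent skipRequired("CRLF after [chunk-ext]") would fail; that is the case the strict variant avoids.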
From 81e67f97f9c386bdd0bb4a5e182395c46adb70ad Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lubo=C5=A1=20Uhliarik?= <luhliari@redhat.com>
Date: Fri, 11 Oct 2024 02:44:33 +0200
Subject: [PATCH 3/6] Fix typo in Parser.h
---
src/http/one/Parser.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/http/one/Parser.h b/src/http/one/Parser.h
index 08200371cd6..3ef4c5f7752 100644
--- a/src/http/one/Parser.h
+++ b/src/http/one/Parser.h
@@ -163,7 +163,7 @@ class Parser : public RefCountable
};
/// skips and, if needed, warns about RFC 7230 BWS ("bad" whitespace)
-/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimeter chars
+/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimiter chars
/// \throws InsufficientInput when the end of BWS cannot be confirmed
void ParseBws(Parser::Tokenizer &, const bool wsp_only = false);
From a0d4fe1794e605f8299a5c118c758a807453f016 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 10 Oct 2024 22:39:42 -0400
Subject: [PATCH 4/6] Bug 5449 is a regression of Bug 4492!
Both bugs deal with "chunk-size SP+ CRLF" use cases. Bug 4492 had _two_
spaces after chunk-size, which answers one of the PR review questions:
Should we skip just one space? No, we should not.
The lines moved around in many commits, but I believe this regression
was introduced in commit 951013d0 because that commit stopped consuming
partially parsed chunk-ext sequences. That consumption was wrong, but it
had a positive side effect -- fixing Bug 4492...
---
src/http/one/TeChunkedParser.cc | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index 41e1e5ddaea..aa4a840fdcf 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,10 +125,10 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
- // A possibly empty chunk-ext list. If no chunk-ext has been found,
- // try to skip trailing BWS, because some servers send "chunk-size BWS CRLF".
- if (!parseChunkExtensions(tok))
- ParseBws(tok, true);
+ // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
+ ParseBws(tok, true);
+
+ parseChunkExtensions(tok);
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
@@ -150,7 +150,7 @@ Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &callerTok)
do {
auto tok = callerTok;
- ParseBws(tok); // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
+ ParseBws(tok);
if (!tok.skip(';'))
return foundChunkExt; // reached the end of extensions (if any)
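The "skip just one space?" question above can be checked with a toy skipper (a hypothetical helper, not Squid code); the input mimics Bug 4492's two spaces after the chunk-size:

#include <cstddef>
#include <iostream>
#include <string>

// Skip at most `limit` SP/HTAB characters starting at pos;
// std::string::npos effectively means "skip all".
static std::size_t skipWsp(const std::string &buf, std::size_t pos, std::size_t limit)
{
    while (limit-- && pos < buf.size() && (buf[pos] == ' ' || buf[pos] == '\t'))
        ++pos;
    return pos;
}

int main()
{
    const std::string bug4492 = "1a  \r\n"; // two SP after chunk-size, as in Bug 4492
    const std::size_t one = skipWsp(bug4492, 2, 1);
    const std::size_t all = skipWsp(bug4492, 2, std::string::npos);
    std::cout << "skip one: " << (bug4492[one] == '\r' ? "at CR (ok)" : "still at SP (stuck)") << "\n";
    std::cout << "skip all: " << (bug4492[all] == '\r' ? "at CR (ok)" : "still at SP (stuck)") << "\n";
    return 0;
}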
From f837f5ff61301a17008f16ce1fb793c2abf19786 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 10 Oct 2024 23:06:42 -0400
Subject: [PATCH 5/6] fixup: Fewer conditionals/ifs and more explicit spelling
... to draw code reader attention when something unusual is going on.
---
src/http/one/Parser.cc | 22 ++++++++++++++++++----
src/http/one/Parser.h | 10 ++++++++--
src/http/one/TeChunkedParser.cc | 14 ++++++--------
src/http/one/TeChunkedParser.h | 2 +-
4 files changed, 33 insertions(+), 15 deletions(-)
diff --git a/src/http/one/Parser.cc b/src/http/one/Parser.cc
index 01d7e3bc0e8..d3937e5e96b 100644
--- a/src/http/one/Parser.cc
+++ b/src/http/one/Parser.cc
@@ -271,11 +271,12 @@ Http::One::ErrorLevel()
return Config.onoff.relaxed_header_parser < 0 ? DBG_IMPORTANT : 5;
}
-// BWS = *( SP / HTAB ) ; WhitespaceCharacters() may relax this RFC 7230 rule
-void
-Http::One::ParseBws(Parser::Tokenizer &tok, const bool wsp_only)
+/// common part of ParseBws() and ParseStrctBws()
+namespace Http::One {
+static void
+ParseBws_(Parser::Tokenizer &tok, const CharacterSet &bwsChars)
{
- const auto count = tok.skipAll(wsp_only ? CharacterSet::WSP : Parser::WhitespaceCharacters());
+ const auto count = tok.skipAll(bwsChars);
if (tok.atEnd())
throw InsufficientInput(); // even if count is positive
@@ -290,4 +291,17 @@ Http::One::ParseBws(Parser::Tokenizer &tok, const bool wsp_only)
// success: no more BWS characters expected
}
+} // namespace Http::One
+
+void
+Http::One::ParseBws(Parser::Tokenizer &tok)
+{
+ ParseBws_(tok, CharacterSet::WSP);
+}
+
+void
+Http::One::ParseStrictBws(Parser::Tokenizer &tok)
+{
+ ParseBws_(tok, Parser::WhitespaceCharacters());
+}
diff --git a/src/http/one/Parser.h b/src/http/one/Parser.h
index 3ef4c5f7752..49e399de546 100644
--- a/src/http/one/Parser.h
+++ b/src/http/one/Parser.h
@@ -163,9 +163,15 @@ class Parser : public RefCountable
};
/// skips and, if needed, warns about RFC 7230 BWS ("bad" whitespace)
-/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimiter chars
/// \throws InsufficientInput when the end of BWS cannot be confirmed
-void ParseBws(Parser::Tokenizer &, const bool wsp_only = false);
+/// \sa WhitespaceCharacters() for the definition of BWS characters
+/// \sa ParseStrictBws() that avoids WhitespaceCharacters() uncertainties
+void ParseBws(Parser::Tokenizer &);
+
+/// Like ParseBws() but only skips CharacterSet::WSP characters. This variation
+/// must be used if the next element may start with CR or any other character
+/// from RelaxedDelimiterCharacters().
+void ParseStrictBws(Parser::Tokenizer &);
/// the right debugs() level for logging HTTP violation messages
int ErrorLevel();
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index aa4a840fdcf..859471b8c77 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,11 +125,11 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
- // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
- ParseBws(tok, true);
-
- parseChunkExtensions(tok);
+ // Bug 4492: IBM_HTTP_Server sends SP after chunk-size.
+ // No ParseBws() here because it may consume CR required further below.
+ ParseStrictBws(tok);
+ parseChunkExtensions(tok); // a possibly empty chunk-ext list
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
parsingStage_ = theChunkSize ? Http1::HTTP_PARSE_CHUNK : Http1::HTTP_PARSE_MIME;
@@ -143,22 +143,20 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
/// Parses the chunk-ext list (RFC 9112 section 7.1.1:
/// chunk-ext = *( BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ] )
-bool
+void
Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &callerTok)
{
- bool foundChunkExt = false;
do {
auto tok = callerTok;
ParseBws(tok);
if (!tok.skip(';'))
- return foundChunkExt; // reached the end of extensions (if any)
+ return; // reached the end of extensions (if any)
parseOneChunkExtension(tok);
buf_ = tok.remaining(); // got one extension
callerTok = tok;
- foundChunkExt = true;
} while (true);
}
diff --git a/src/http/one/TeChunkedParser.h b/src/http/one/TeChunkedParser.h
index 8c5d4bb4cba..02eacd1bb89 100644
--- a/src/http/one/TeChunkedParser.h
+++ b/src/http/one/TeChunkedParser.h
@@ -71,7 +71,7 @@ class TeChunkedParser : public Http1::Parser
private:
bool parseChunkSize(Tokenizer &tok);
bool parseChunkMetadataSuffix(Tokenizer &);
- bool parseChunkExtensions(Tokenizer &);
+ void parseChunkExtensions(Tokenizer &);
void parseOneChunkExtension(Tokenizer &);
bool parseChunkBody(Tokenizer &tok);
bool parseChunkEnd(Tokenizer &tok);
From f79936a234e722adb2dd08f31cf6019d81ee712c Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 10 Oct 2024 23:31:08 -0400
Subject: [PATCH 6/6] fixup: Deadly typo
---
src/http/one/Parser.cc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/http/one/Parser.cc b/src/http/one/Parser.cc
index d3937e5e96b..7403a9163a2 100644
--- a/src/http/one/Parser.cc
+++ b/src/http/one/Parser.cc
@@ -296,12 +296,12 @@ ParseBws_(Parser::Tokenizer &tok, const CharacterSet &bwsChars)
void
Http::One::ParseBws(Parser::Tokenizer &tok)
{
- ParseBws_(tok, CharacterSet::WSP);
+ ParseBws_(tok, Parser::WhitespaceCharacters());
}
void
Http::One::ParseStrictBws(Parser::Tokenizer &tok)
{
- ParseBws_(tok, Parser::WhitespaceCharacters());
+ ParseBws_(tok, CharacterSet::WSP);
}
@ -0,0 +1,156 @@
commit c54122584d175cf1d292b239a5b70f2d1aa77c3a
Author: Tomas Korbar <tkorbar@redhat.com>
Date: Mon Dec 5 15:03:07 2022 +0100
Backport adding IP_BIND_ADDRESS_NO_PORT flag to outgoing connections
diff --git a/src/comm.cc b/src/comm.cc
index b4818f3..b18d175 100644
--- a/src/comm.cc
+++ b/src/comm.cc
@@ -59,6 +59,7 @@
*/
static IOCB commHalfClosedReader;
+static int comm_openex(int sock_type, int proto, Ip::Address &, int flags, const char *note);
static void comm_init_opened(const Comm::ConnectionPointer &conn, const char *note, struct addrinfo *AI);
static int comm_apply_flags(int new_socket, Ip::Address &addr, int flags, struct addrinfo *AI);
@@ -76,6 +77,7 @@ static EVH commHalfClosedCheck;
static void commPlanHalfClosedCheck();
static Comm::Flag commBind(int s, struct addrinfo &);
+static void commSetBindAddressNoPort(int);
static void commSetReuseAddr(int);
static void commSetNoLinger(int);
#ifdef TCP_NODELAY
@@ -202,6 +204,22 @@ comm_local_port(int fd)
return F->local_addr.port();
}
+/// sets the IP_BIND_ADDRESS_NO_PORT socket option to optimize ephemeral port
+/// reuse by outgoing TCP connections that must bind(2) to a source IP address
+static void
+commSetBindAddressNoPort(const int fd)
+{
+#if defined(IP_BIND_ADDRESS_NO_PORT)
+ int flag = 1;
+ if (setsockopt(fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, reinterpret_cast<char*>(&flag), sizeof(flag)) < 0) {
+ const auto savedErrno = errno;
+ debugs(50, DBG_IMPORTANT, "ERROR: setsockopt(IP_BIND_ADDRESS_NO_PORT) failure: " << xstrerr(savedErrno));
+ }
+#else
+ (void)fd;
+#endif
+}
+
static Comm::Flag
commBind(int s, struct addrinfo &inaddr)
{
@@ -228,6 +246,10 @@ comm_open(int sock_type,
int flags,
const char *note)
{
+ // assume zero-port callers do not need to know the assigned port right away
+ if (sock_type == SOCK_STREAM && addr.port() == 0 && ((flags & COMM_DOBIND) || !addr.isAnyAddr()))
+ flags |= COMM_DOBIND_PORT_LATER;
+
return comm_openex(sock_type, proto, addr, flags, note);
}
@@ -329,7 +351,7 @@ comm_set_transparent(int fd)
* Create a socket. Default is blocking, stream (TCP) socket. IO_TYPE
* is OR of flags specified in defines.h:COMM_*
*/
-int
+static int
comm_openex(int sock_type,
int proto,
Ip::Address &addr,
@@ -488,6 +510,9 @@ comm_apply_flags(int new_socket,
}
}
#endif
+ if ((flags & COMM_DOBIND_PORT_LATER))
+ commSetBindAddressNoPort(new_socket);
+
if (commBind(new_socket, *AI) != Comm::OK) {
comm_close(new_socket);
return -1;
diff --git a/src/comm.h b/src/comm.h
index 5a1a7c2..a9f33db 100644
--- a/src/comm.h
+++ b/src/comm.h
@@ -43,7 +43,6 @@ void comm_import_opened(const Comm::ConnectionPointer &, const char *note, struc
/**
* Open a port specially bound for listening or sending through a specific port.
- * This is a wrapper providing IPv4/IPv6 failover around comm_openex().
* Please use for all listening sockets and bind() outbound sockets.
*
* It will open a socket bound for:
@@ -59,7 +58,6 @@ void comm_import_opened(const Comm::ConnectionPointer &, const char *note, struc
int comm_open_listener(int sock_type, int proto, Ip::Address &addr, int flags, const char *note);
void comm_open_listener(int sock_type, int proto, Comm::ConnectionPointer &conn, const char *note);
-int comm_openex(int, int, Ip::Address &, int, const char *);
unsigned short comm_local_port(int fd);
int comm_udp_sendto(int sock, const Ip::Address &to, const void *buf, int buflen);
diff --git a/src/comm/ConnOpener.cc b/src/comm/ConnOpener.cc
index 19c1237..79fa2ed 100644
--- a/src/comm/ConnOpener.cc
+++ b/src/comm/ConnOpener.cc
@@ -285,7 +285,7 @@ Comm::ConnOpener::createFd()
if (callback_ == NULL || callback_->canceled())
return false;
- temporaryFd_ = comm_openex(SOCK_STREAM, IPPROTO_TCP, conn_->local, conn_->flags, host_);
+ temporaryFd_ = comm_open(SOCK_STREAM, IPPROTO_TCP, conn_->local, conn_->flags, host_);
if (temporaryFd_ < 0) {
sendAnswer(Comm::ERR_CONNECT, 0, "Comm::ConnOpener::createFd");
return false;
diff --git a/src/comm/Connection.h b/src/comm/Connection.h
index 40c2249..2641f4e 100644
--- a/src/comm/Connection.h
+++ b/src/comm/Connection.h
@@ -52,6 +52,8 @@ namespace Comm
#define COMM_REUSEPORT 0x40 //< needs SO_REUSEPORT
/// not registered with Comm and not owned by any connection-closing code
#define COMM_ORPHANED 0x40
+/// Internal Comm optimization: Keep the source port unassigned until connect(2)
+#define COMM_DOBIND_PORT_LATER 0x100
/**
* Store data about the physical and logical attributes of a connection.
diff --git a/src/ipc.cc b/src/ipc.cc
index 45cab52..42e11e6 100644
--- a/src/ipc.cc
+++ b/src/ipc.cc
@@ -95,12 +95,12 @@ ipcCreate(int type, const char *prog, const char *const args[], const char *name
} else void(0)
if (type == IPC_TCP_SOCKET) {
- crfd = cwfd = comm_open(SOCK_STREAM,
+ crfd = cwfd = comm_open_listener(SOCK_STREAM,
0,
local_addr,
COMM_NOCLOEXEC,
name);
- prfd = pwfd = comm_open(SOCK_STREAM,
+ prfd = pwfd = comm_open_listener(SOCK_STREAM,
0, /* protocol */
local_addr,
0, /* blocking */
diff --git a/src/tests/stub_comm.cc b/src/tests/stub_comm.cc
index a1d33d6..bf4bea6 100644
--- a/src/tests/stub_comm.cc
+++ b/src/tests/stub_comm.cc
@@ -48,7 +48,6 @@ int comm_open_uds(int sock_type, int proto, struct sockaddr_un* addr, int flags)
void comm_import_opened(const Comm::ConnectionPointer &, const char *note, struct addrinfo *AI) STUB
int comm_open_listener(int sock_type, int proto, Ip::Address &addr, int flags, const char *note) STUB_RETVAL(-1)
void comm_open_listener(int sock_type, int proto, Comm::ConnectionPointer &conn, const char *note) STUB
-int comm_openex(int, int, Ip::Address &, int, tos_t tos, nfmark_t nfmark, const char *) STUB_RETVAL(-1)
unsigned short comm_local_port(int fd) STUB_RETVAL(0)
int comm_udp_sendto(int sock, const Ip::Address &to, const void *buf, int buflen) STUB_RETVAL(-1)
void commCallCloseHandlers(int fd) STUB
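A minimal standalone sketch of the backported socket option (assumptions: Linux with IP_BIND_ADDRESS_NO_PORT available, and a loopback source address chosen purely for illustration):

#include <arpa/inet.h>
#include <cstdio>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    const int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

#if defined(IP_BIND_ADDRESS_NO_PORT)
    // Defer ephemeral source-port assignment from bind(2) to connect(2),
    // so many outgoing connections bound to one source IP do not exhaust
    // the ephemeral port range; this is what COMM_DOBIND_PORT_LATER enables.
    int flag = 1;
    if (setsockopt(fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, &flag, sizeof(flag)) < 0)
        std::perror("setsockopt(IP_BIND_ADDRESS_NO_PORT)");
#endif

    sockaddr_in src = {};
    src.sin_family = AF_INET;
    src.sin_port = htons(0);                        // port 0: assigned later
    inet_pton(AF_INET, "127.0.0.1", &src.sin_addr); // illustrative source IP
    if (bind(fd, reinterpret_cast<sockaddr *>(&src), sizeof(src)) < 0)
        std::perror("bind");
    // a connect(2) call here would pick the actual source port
    close(fd);
    return 0;
}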
@ -0,0 +1,113 @@
From a0a9e6dc69d0c7b9ba237702b4c5020abc7ad1f8 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Sat, 4 Nov 2023 00:30:42 +0000
Subject: [PATCH] Bug 5154: Do not open IPv6 sockets when IPv6 is disabled
(#1567)
... but allow basic IPv6 manipulations like getSockAddr().
Address.cc:663 getAddrInfo() assertion failed: false
Squid receives IPv6 addresses from traffic, configuration, or
hard-coded constants even when ./configured with --disable-ipv6 or when
IPv6 support was automatically disabled at startup after failing IPv6
tests. To handle IPv6 correctly, such Squids must support basic IPv6
operations like recognizing an IPv6 address in a request-target or
reporting an unsolicited IPv6 DNS record. At least for now, such Squids
must also correctly parse configuration-related IPv6 addresses.
All those activities rely on various low-level operations like filling
addrinfo structure with IP address information. Since 2012 commit
c5fbbc7, Ip::Address::getAddrInfo() was failing for IPv6 addresses when
Ip::EnableIpv6 was falsy. That change correctly recognized[^1] the need
for such Squids to handle IPv6, but to support basic operations, we need
to reject IPv6 addresses at a higher level and without asserting.
That high-level rejection work is ongoing, but initial attempts have
exposed difficult problems that will take time to address. For now, we
just avoid the assertion while protecting IPv6-disabled Squid from
listening on or opening connections to IPv6 addresses. Since Squid
already expects (and usually correctly handles) socket opening failures,
disabling those operations is better than failing in low-level IP
manipulation code.
The overall IPv6 posture of IPv6-disabled Squids that lack http_access
or other rules to deny IPv6 requests will change: This fix exposes more
of IPv6-disabled Squid code to IPv6 addresses. It is possible that such
exposure will make some IPv6 resources inside Squid (e.g., a previously
cached HTTP response) accessible to external requests. Squids will not
open or accept IPv6 connections but may forward requests with raw IPv6
targets to IPv4 cache_peers. Whether these and similar behavior changes
are going to be permanent is open for debate, but even if they are
temporary, they are arguably better than the corresponding assertions.
These changes do not affect IPv6-enabled Squids.
The assertion in IPv6-disabled Squid was reported by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/ipv6-assert.html where
it was filed as "Assertion on IPv6 Host Requests with --disable-ipv6".
[^1]: https://bugs.squid-cache.org/show_bug.cgi?id=3593#c1
---
src/comm.cc | 6 ++++++
src/ip/Address.cc | 2 +-
src/ip/Intercept.cc | 8 ++++++++
3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/src/comm.cc b/src/comm.cc
index 4659955b011..271ba04d4da 100644
--- a/src/comm.cc
+++ b/src/comm.cc
@@ -344,6 +344,12 @@ comm_openex(int sock_type,
/* Create socket for accepting new connections. */
++ statCounter.syscalls.sock.sockets;
+ if (!Ip::EnableIpv6 && addr.isIPv6()) {
+ debugs(50, 2, "refusing to open an IPv6 socket when IPv6 support is disabled: " << addr);
+ errno = ENOTSUP;
+ return -1;
+ }
+
/* Setup the socket addrinfo details for use */
addr.getAddrInfo(AI);
AI->ai_socktype = sock_type;
diff --git a/src/ip/Address.cc b/src/ip/Address.cc
index b6f810bfc25..ae6db37da5e 100644
--- a/src/ip/Address.cc
+++ b/src/ip/Address.cc
@@ -623,7 +623,7 @@ Ip::Address::getAddrInfo(struct addrinfo *&dst, int force) const
&& dst->ai_protocol == 0)
dst->ai_protocol = IPPROTO_UDP;
- if (force == AF_INET6 || (force == AF_UNSPEC && Ip::EnableIpv6 && isIPv6()) ) {
+ if (force == AF_INET6 || (force == AF_UNSPEC && isIPv6()) ) {
dst->ai_addr = (struct sockaddr*)new sockaddr_in6;
memset(dst->ai_addr,0,sizeof(struct sockaddr_in6));
diff --git a/src/ip/Intercept.cc b/src/ip/Intercept.cc
index 1a5e2d15af1..a8522efaac0 100644
--- a/src/ip/Intercept.cc
+++ b/src/ip/Intercept.cc
@@ -15,6 +15,7 @@
#include "comm/Connection.h"
#include "fde.h"
#include "ip/Intercept.h"
+#include "ip/tools.h"
#include "src/tools.h"
#include <cerrno>
@@ -430,6 +431,13 @@ Ip::Intercept::ProbeForTproxy(Ip::Address &test)
debugs(3, 3, "Detect TPROXY support on port " << test);
+ if (!Ip::EnableIpv6 && test.isIPv6() && !test.setIPv4()) {
+ debugs(3, DBG_CRITICAL, "Cannot use TPROXY for " << test << " because IPv6 support is disabled");
+ if (doneSuid)
+ leave_suid();
+ return false;
+ }
+
int tos = 1;
int tmp_sock = -1;
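The essence of the comm.cc guard can be sketched outside Squid (assumption: a local EnableIpv6 flag stands in for Ip::EnableIpv6 after the startup IPv6 tests):

#include <cerrno>
#include <cstdio>
#include <sys/socket.h>

static const bool EnableIpv6 = false; // emulates --disable-ipv6 / failed IPv6 tests

// Mirrors the guard added to comm_openex(): refuse to open a socket for an
// IPv6 address instead of asserting later, deep inside getAddrInfo().
static int openTcpSocket(const sockaddr_storage &addr)
{
    if (!EnableIpv6 && addr.ss_family == AF_INET6) {
        errno = ENOTSUP;
        return -1;
    }
    return socket(addr.ss_family, SOCK_STREAM, 0);
}

int main()
{
    sockaddr_storage v6 = {};
    v6.ss_family = AF_INET6;
    if (openTcpSocket(v6) < 0)
        std::perror("openTcpSocket(IPv6)"); // expected: Operation not supported
    return 0;
}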
@ -0,0 +1,117 @@
From 4d6dd3ddba5e850a42c86d8233735165a371c31c Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Sun, 1 Sep 2024 00:39:34 +0000
Subject: [PATCH] Bug 5405: Large uploads fill request buffer and die (#1887)
maybeMakeSpaceAvailable: request buffer full
ReadNow: ... size 0, retval 0, errno 0
terminateAll: 1/1 after ERR_CLIENT_GONE/WITH_CLIENT
%Ss=TCP_MISS_ABORTED
This bug is triggered by a combination of the following two conditions:
* HTTP client upload fills Squid request buffer faster than it is
drained by an origin server, cache_peer, or REQMOD service. The buffer
accumulates 576 KB (default 512 KB client_request_buffer_max_size + 64
KB internal "pipe" buffer).
* The affected server or service consumes a few bytes after the critical
accumulation is reached. In other words, the bug cannot be triggered
if nothing is consumed after the first condition above is met.
Comm::ReadNow() must not be called with a full buffer: Related
FD_READ_METHOD() code cannot distinguish "received EOF" from "had no
buffer space" outcomes. Server::readSomeData() tried to prevent such
calls, but the corresponding check had two problems:
* The check had an unsigned integer underflow bug[^1] that made it
ineffective when inBuf length exceeded Config.maxRequestBufferSize.
That length could exceed the limit due to reconfiguration and when
inBuf space size first grew outside of maybeMakeSpaceAvailable()
protections (e.g., during an inBuf.c_str() call) and then got filled
with newly read data. That growth started happening after 2020 commit
1dfbca06 optimized SBuf::cow() to merge leading and trailing space.
Prior to that commit, Bug 5405 could probably only affect Squid
reconfigurations that lower client_request_buffer_max_size.
* The check was separated from the ReadNow() call it was meant to
protect. While ConnStateData was waiting for the socket to become
ready for reading, various asynchronous events could alter inBuf or
Config.maxRequestBufferSize.
This change fixes both problems.
This change also fixes Squid Bug 5214.
[^1]: That underflow bug was probably introduced in 2015 commit 4d1376d7
while trying to emulate the original "do not read less than two bytes"
ConnStateData::In::maybeMakeSpaceAvailable() condition. That condition
itself looks like a leftover from manual zero-terminated input buffer
days that ended with 2014 commit e7287625. It is now removed.
---
diff --git a/src/servers/Server.cc b/src/servers/Server.cc
index 70fd10b..dd20619 100644
--- a/src/servers/Server.cc
+++ b/src/servers/Server.cc
@@ -83,16 +83,25 @@ Server::maybeMakeSpaceAvailable()
debugs(33, 4, "request buffer full: client_request_buffer_max_size=" << Config.maxRequestBufferSize);
}
+bool
+Server::mayBufferMoreRequestBytes() const
+{
+ // TODO: Account for bodyPipe buffering as well.
+ if (inBuf.length() >= Config.maxRequestBufferSize) {
+ debugs(33, 4, "no: " << inBuf.length() << '-' << Config.maxRequestBufferSize << '=' << (inBuf.length() - Config.maxRequestBufferSize));
+ return false;
+ }
+ debugs(33, 7, "yes: " << Config.maxRequestBufferSize << '-' << inBuf.length() << '=' << (Config.maxRequestBufferSize - inBuf.length()));
+ return true;
+}
+
void
Server::readSomeData()
{
if (reading())
return;
- debugs(33, 4, clientConnection << ": reading request...");
-
- // we can only read if there is more than 1 byte of space free
- if (Config.maxRequestBufferSize - inBuf.length() < 2)
+ if (!mayBufferMoreRequestBytes())
return;
typedef CommCbMemFunT<Server, CommIoCbParams> Dialer;
@@ -123,7 +132,16 @@ Server::doClientRead(const CommIoCbParams &io)
* Plus, it breaks our lame *HalfClosed() detection
*/
+ // mayBufferMoreRequestBytes() was true during readSomeData(), but variables
+ // like Config.maxRequestBufferSize may have changed since that check
+ if (!mayBufferMoreRequestBytes()) {
+ // XXX: If we avoid Comm::ReadNow(), we should not Comm::Read() again
+ // when the wait is over; resume these doClientRead() checks instead.
+ return; // wait for noteMoreBodySpaceAvailable() or a similar inBuf draining event
+ }
maybeMakeSpaceAvailable();
+ Assure(inBuf.spaceSize());
+
CommIoCbParams rd(this); // will be expanded with ReadNow results
rd.conn = io.conn;
switch (Comm::ReadNow(rd, inBuf)) {
diff --git a/src/servers/Server.h b/src/servers/Server.h
index ef105f5..6e549b3 100644
--- a/src/servers/Server.h
+++ b/src/servers/Server.h
@@ -119,6 +119,9 @@ protected:
/// abort any pending transactions and prevent new ones (by closing)
virtual void terminateAll(const Error &, const LogTagsErrors &) = 0;
+ /// whether client_request_buffer_max_size allows inBuf.length() increase
+ bool mayBufferMoreRequestBytes() const;
+
void doClientRead(const CommIoCbParams &io);
void clientWriteDone(const CommIoCbParams &io);
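The footnoted unsigned underflow is easy to reproduce in isolation (a sketch with sizes matching the 512 KB + 64 KB scenario described above):

#include <cstddef>
#include <iostream>

int main()
{
    // Both operands are unsigned, so once inBuf has grown past the limit
    // the subtraction wraps to a huge value instead of going negative.
    const std::size_t maxRequestBufferSize = 512 * 1024; // client_request_buffer_max_size
    const std::size_t inBufLength = maxRequestBufferSize + 64 * 1024; // 576 KB accumulated

    // Pre-fix check: "we can only read if there is more than 1 byte of space free"
    if (maxRequestBufferSize - inBufLength < 2) // underflows; never true here
        std::cout << "stop reading\n";
    else
        std::cout << "keep reading (wrong: buffer already exceeds the limit)\n";

    // Fixed check, as in mayBufferMoreRequestBytes():
    if (inBufLength >= maxRequestBufferSize)
        std::cout << "no more buffering allowed (correct)\n";
    return 0;
}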
@ -0,0 +1,25 @@
File: squid-5.5.tar.xz
Date: Wed 13 Apr 2022 08:45:42 UTC
Size: 2565732
MD5 : 83ccc2d86ca0966e3555a3b78f5afd14
SHA1: 42302bd9b8feff851a41420334cb8eaeab2806ab
Key : CD6DBF8EF3B17D3E <squid3@treenet.co.nz>
B068 84ED B779 C89B 044E 64E3 CD6D BF8E F3B1 7D3E
keyring = http://www.squid-cache.org/pgp.asc
keyserver = pool.sks-keyservers.net
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEEsGiE7bd5yJsETmTjzW2/jvOxfT4FAmJWjb4ACgkQzW2/jvOx
fT7t0A/9GjAdINfSP4gQyUr+Uvakz9O6fA9Jo3F30VafYimrSGm+VdGWntTsrOaP
VcsCdG3/Dvrhnqtu9+hwfKKQ61lmmUC7KVycx3whEUepQbZu5kd05csD7nwQ+AFe
7eJr0IwbRI4XdUhNW4AB52i/+hpHs/YSrSokumx5NVhwAUvT81TToUNzUjfKuXyy
U+w6GQ9kJbVW1UgFYZGZdJwCmD5Z7fNdUllKZhLj4I5GZ+5Zz5+lJP3ZBC6qavde
34hbpHbt+/lqz337eNoxwlyPNKPDiGIUEY9T4cdzA0BiLggTmlukDFErlYuHgCMX
BmQ9elJtdRaCD2YD+U1H9J+2wqt9O01gdyFU1V3RnNLZphgWur9X808rujuE46+Q
sxyV6SjeBh6Xs/I7wA9utX0pbVD+nLvna6Be49M1yAghBwTjiYN9fGC3ufj4St3k
PCvkTkBUOop3m4aBCRtUVO6w4Y/YmF71qAHIiSLe1i6xoztEDTVI0CA+vfrwwu2G
rFP5wuKsaYfBjkhQw4Jv6X30vnnOVqlxITGXcOnPXrHoD5KuYXv/Xsobqf8XsFdl
3qyXUe8lSI5idCg+Ajj9m0IqGWA50iFBs28Ca7GDacl9KApGn4O7kPLQY+7nN5cz
Nv3k8lYPh4KvRI1b2hcuoe3K63rEzty0e2vqG9zqxkpxOt20E/U=
=9xr/
-----END PGP SIGNATURE-----
@ -1,26 +0,0 @@
diff --git a/errors/aliases b/errors/aliases
index c256106..38c123a 100644
--- a/errors/aliases
+++ b/errors/aliases
@@ -14,8 +14,7 @@ da da-dk
de de-at de-ch de-de de-li de-lu
el el-gr
en en-au en-bz en-ca en-cn en-gb en-ie en-in en-jm en-nz en-ph en-sg en-tt en-uk en-us en-za en-zw
-es es-ar es-bo es-cl es-cu es-co es-do es-ec es-es es-pe es-pr es-py es-us es-uy es-ve es-xl spq
-es-mx es-bz es-cr es-gt es-hn es-ni es-pa es-sv
+es es-ar es-bo es-cl es-co es-cr es-do es-ec es-es es-gt es-hn es-mx es-ni es-pa es-pe es-pr es-py es-sv es-us es-uy es-ve es-xl
et et-ee
fa fa-fa fa-ir
fi fi-fi
diff --git a/errors/language.am b/errors/language.am
index a437d17..f2fe463 100644
--- a/errors/language.am
+++ b/errors/language.am
@@ -19,7 +19,6 @@ LANGUAGE_FILES = \
de.lang \
el.lang \
en.lang \
- es-mx.lang \
es.lang \
et.lang \
fa.lang \
@ -1,17 +0,0 @@
File: squid-6.10.tar.xz
Date: Sat Jun 8 02:53:29 PM UTC 2024
Size: 2558208
MD5 : 86deefa7282c4388be95260aa4d4cf6a
SHA1: 70e90865df0e4e9ba7765b622da40bda9bb8fc5d
Key : 29B4B1F7CE03D1B1DED22F3028F85029FEF6E865 <kinkie@squid-cache.org>
29B4 B1F7 CE03 D1B1 DED2 2F30 28F8 5029 FEF6 E865
sub cv25519 2021-05-15 [E]
keyring = http://www.squid-cache.org/pgp.asc
keyserver = pool.sks-keyservers.net
-----BEGIN PGP SIGNATURE-----
iHUEABYKAB0WIQQptLH3zgPRsd7SLzAo+FAp/vboZQUCZmRwewAKCRAo+FAp/vbo
ZZV0AP0WDdXJFarEEYCSXSv/zT1l0FrI8jLQCT3Rsp6nTbWxfwD/VYmUMDetPLPJ
GYHJNrRm7OceMQcsqhQIz6X71SR9AQs=
=4HPC
-----END PGP SIGNATURE-----
@ -1,16 +1,16 @@
%define __perl_requires %{SOURCE98} %define __perl_requires %{SOURCE98}
Name: squid Name: squid
Version: 6.10 Version: 5.5
Release: 1%{?dist} Release: 14%{?dist}.3
Summary: The Squid proxy caching server Summary: The Squid proxy caching server
Epoch: 7 Epoch: 7
# See CREDITS for breakdown of non GPLv2+ code # See CREDITS for breakdown of non GPLv2+ code
License: GPL-2.0-or-later AND (LGPL-2.0-or-later AND MIT AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSD-4-Clause-UC AND LicenseRef-Fedora-Public-Domain AND Beerware) License: GPLv2+ and (LGPLv2+ and MIT and BSD and Public Domain)
URL: http://www.squid-cache.org URL: http://www.squid-cache.org
Source0: http://www.squid-cache.org/Versions/v6/squid-%{version}.tar.xz Source0: http://www.squid-cache.org/Versions/v5/squid-%{version}.tar.xz
Source1: http://www.squid-cache.org/Versions/v6/squid-%{version}.tar.xz.asc Source1: http://www.squid-cache.org/Versions/v5/squid-%{version}.tar.xz.asc
Source2: http://www.squid-cache.org/pgp.asc Source2: http://www.squid-cache.org/pgp.asc
Source3: squid.logrotate Source3: squid.logrotate
Source4: squid.sysconfig Source4: squid.sysconfig
@ -25,19 +25,66 @@ Source98: perl-requires-squid.sh
# Upstream patches # Upstream patches
# Backported patches # Backported patches
# Patch101: patch # https://bugzilla.redhat.com/show_bug.cgi?id=2151188
Patch101: squid-5.5-ip-bind-address-no-port.patch
# Local patches # Local patches
# Applying upstream patches first makes it less likely that local patches # Applying upstream patches first makes it less likely that local patches
# will break upstream ones. # will break upstream ones.
Patch201: squid-6.1-config.patch Patch201: squid-4.0.11-config.patch
Patch202: squid-6.1-location.patch Patch202: squid-3.1.0.9-location.patch
Patch203: squid-6.1-perlpath.patch Patch203: squid-3.0.STABLE1-perlpath.patch
Patch204: squid-3.5.9-include-guards.patch
# revert this upstream patch - https://bugzilla.redhat.com/show_bug.cgi?id=1936422 # revert this upstream patch - https://bugzilla.redhat.com/show_bug.cgi?id=1936422
# workaround for #1934919 # workaround for #1934919
Patch204: squid-6.1-symlink-lang-err.patch Patch205: squid-5.0.5-symlink-lang-err.patch
# Upstream PR: https://github.com/squid-cache/squid/pull/1442 # https://bugzilla.redhat.com/show_bug.cgi?id=1953505
Patch205: squid-6.1-crash-half-closed.patch Patch206: squid-5.0.6-openssl3.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=1969322
Patch207: squid-5.0.6-active-ftp.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=1988122
Patch208: squid-5.1-test-store-cppsuite.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2231827
Patch209: squid-5.5-halfclosed.patch
# https://issues.redhat.com/browse/RHEL-30352
Patch210: squid-5.5-ipv6-crash.patch
# https://issues.redhat.com/browse/RHEL-12356
Patch211: squid-5.5-large-upload-buffer-dies.patch
# Security patches
# https://bugzilla.redhat.com/show_bug.cgi?id=2100721
Patch501: squid-5.5-CVE-2021-46784.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2129771
Patch502: squid-5.5-CVE-2022-41318.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2245910
Patch503: squid-5.5-CVE-2023-46846.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2245916
Patch504: squid-5.5-CVE-2023-46847.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2245919
Patch505: squid-5.5-CVE-2023-46848.patch
# https://issues.redhat.com/browse/RHEL-14802
Patch506: squid-5.5-CVE-2023-5824.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2248521
Patch507: squid-5.5-CVE-2023-46728.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2247567
Patch508: squid-5.5-CVE-2023-46724.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2252926
Patch509: squid-5.5-CVE-2023-49285.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2252923
Patch510: squid-5.5-CVE-2023-49286.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2254663
Patch511: squid-5.5-CVE-2023-50269.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2264309
Patch512: squid-5.5-CVE-2024-25617.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2268366
Patch513: squid-5.5-CVE-2024-25111.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2294353
Patch514: squid-5.5-CVE-2024-37894.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2260051
Patch515: squid-5.5-CVE-2024-23638.patch
# Regression caused by squid-5.5-CVE-2023-46846.patch
# Upstream PR: https://github.com/squid-cache/squid/pull/1914
Patch516: squid-5.5-ignore-wsp-after-chunk-size.patch
# cache_swap.sh # cache_swap.sh
Requires: bash gawk Requires: bash gawk
@ -55,8 +102,6 @@ BuildRequires: openssl-devel
BuildRequires: krb5-devel BuildRequires: krb5-devel
# time_quota requires TrivialDB # time_quota requires TrivialDB
BuildRequires: libtdb-devel BuildRequires: libtdb-devel
# ESI support requires Expat & libxml2
BuildRequires: expat-devel libxml2-devel
# TPROXY requires libcap, and also increases security somewhat # TPROXY requires libcap, and also increases security somewhat
BuildRequires: libcap-devel BuildRequires: libcap-devel
# eCAP support # eCAP support
@ -83,7 +128,7 @@ Conflicts: NetworkManager < 1.20
%description %description
Squid is a high-performance proxy caching server for Web clients, Squid is a high-performance proxy caching server for Web clients,
supporting FTP and HTTP data objects. Unlike traditional supporting FTP, gopher, and HTTP data objects. Unlike traditional
caching software, Squid handles all requests in a single, caching software, Squid handles all requests in a single,
non-blocking, I/O-driven process. Squid keeps meta data and especially non-blocking, I/O-driven process. Squid keeps meta data and especially
hot objects cached in RAM, caches DNS lookups, supports non-blocking hot objects cached in RAM, caches DNS lookups, supports non-blocking
@ -95,8 +140,43 @@ lookup program (dnsserver), a program for retrieving FTP data
%prep %prep
%{gpgverify} --keyring='%{SOURCE2}' --signature='%{SOURCE1}' --data='%{SOURCE0}' %{gpgverify} --keyring='%{SOURCE2}' --signature='%{SOURCE1}' --data='%{SOURCE0}'
%setup -q
# Upstream patches
# Backported patches
%patch101 -p1 -b .ip-bind-address-no-port
# Local patches
%patch201 -p1 -b .config
%patch202 -p1 -b .location
%patch203 -p1 -b .perlpath
%patch204 -p0 -b .include-guards
%patch205 -p1 -R -b .symlink-lang-err
%patch206 -p1 -b .openssl3
%patch207 -p1 -b .active-ftp
%patch208 -p1 -b .test-store-cpp
%patch209 -p1 -b .halfclosed
%patch210 -p1 -b .ipv6-crash
%patch211 -p1 -b .large-upload-buffer-dies
%patch501 -p1 -b .CVE-2021-46784
%patch502 -p1 -b .CVE-2022-41318
%patch503 -p1 -b .CVE-2023-46846
%patch504 -p1 -b .CVE-2023-46847
%patch505 -p1 -b .CVE-2023-46848
%patch506 -p1 -b .CVE-2023-5824
%patch507 -p1 -b .CVE-2023-46728
%patch508 -p1 -b .CVE-2023-46724
%patch509 -p1 -b .CVE-2023-49285
%patch510 -p1 -b .CVE-2023-49286
%patch511 -p1 -b .CVE-2023-50269
%patch512 -p1 -b .CVE-2024-25617
%patch513 -p1 -b .CVE-2024-25111
%patch514 -p1 -b .CVE-2024-37894
%patch515 -p1 -b .CVE-2024-23638
%patch516 -p1 -b .ignore-wsp-chunk-sz
%autosetup -p1
# https://bugzilla.redhat.com/show_bug.cgi?id=1679526 # https://bugzilla.redhat.com/show_bug.cgi?id=1679526
# Patch in the vendor documentation and used different location for documentation # Patch in the vendor documentation and used different location for documentation
@ -139,7 +219,7 @@ sed -i 's|@SYSCONFDIR@/squid.conf.documented|%{_pkgdocdir}/squid.conf.documented
--enable-storeio="aufs,diskd,ufs,rock" \ --enable-storeio="aufs,diskd,ufs,rock" \
--enable-diskio \ --enable-diskio \
--enable-wccpv2 \ --enable-wccpv2 \
--enable-esi \ --disable-esi \
--enable-ecap \ --enable-ecap \
--with-aio \ --with-aio \
--with-default-user="squid" \ --with-default-user="squid" \
@ -149,8 +229,7 @@ sed -i 's|@SYSCONFDIR@/squid.conf.documented|%{_pkgdocdir}/squid.conf.documented
--disable-arch-native \ --disable-arch-native \
--disable-security-cert-validators \ --disable-security-cert-validators \
--disable-strict-error-checking \ --disable-strict-error-checking \
--with-swapdir=%{_localstatedir}/spool/squid \ --with-swapdir=%{_localstatedir}/spool/squid
--enable-translation
# workaround to build squid v5 # workaround to build squid v5
mkdir -p src/icmp/tests mkdir -p src/icmp/tests
@ -324,111 +403,125 @@ fi
%changelog %changelog
* Mon Jul 01 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:6.10-1 * Thu Nov 07 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-14.3
- new version 6.10 - Disable ESI support
- Resolves: RHEL-45048 - squid: Out-of-bounds write error may lead to Denial of - Resolves: RHEL-65076 - CVE-2024-45802 squid: Denial of Service processing ESI
Service (CVE-2024-37894) response content
* Mon Jun 24 2024 Troy Dawson <tdawson@redhat.com> - 7:6.7-2
- Bump release for June 2024 mass rebuild
* Mon Feb 12 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:6.7-1
- new version 6.7
- switch to autosetup
- fix FTBFS when using gcc14
* Sat Jan 27 2024 Fedora Release Engineering <releng@fedoraproject.org> - 7:6.6-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
* Wed Dec 13 2023 Yaakov Selkowitz <yselkowi@redhat.com> - 7:6.6-1
- new version 6.6
* Tue Nov 07 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:6.5-1
- new version 6.5
* Tue Oct 24 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:6.4-1
- new version 6.4
* Thu Sep 14 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:6.3-2
- SPDX migration
* Tue Sep 05 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:6.3-1
- new version 6.3
* Wed Aug 16 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:6.2-1
- new version 6.2
* Fri Aug 04 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:6.1-3 * Wed Oct 23 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-14.2
- Fix "!commHasHalfClosedMonitor(fd)" assertion - Resolves: RHEL-64425 TCP_MISS_ABORTED/100 erros when uploading
* Sat Jul 22 2023 Fedora Release Engineering <releng@fedoraproject.org> - 7:6.1-2 * Mon Oct 14 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-14.1
- Rebuilt for https://fedoraproject.org/wiki/Fedora_39_Mass_Rebuild - Resolves: RHEL-62332 - (Regression) Transfer-encoding:chunked data is not sent
to the client in its complementary
* Tue Jul 11 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:6.1-1 * Mon Jul 01 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-14
- new version 6.1 - Resolves: RHEL-45057 - squid: Out-of-bounds write error may lead to Denial of
Service (CVE-2024-37894)
* Tue May 09 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:5.9-1 - Resolves: RHEL-22594 - squid: vulnerable to a Denial of Service attack against
- new version 5.9 Cache Manager error responses (CVE-2024-23638)
* Tue Feb 28 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:5.8-1 * Thu May 09 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-13
- new version 5.8 - Resolves: RHEL-30352 - squid v5 crashes with SIGABRT when ipv6 is disabled
at kernel level but it is asked to connect to an ipv6 address by a client
* Sat Jan 21 2023 Fedora Release Engineering <releng@fedoraproject.org> - 7:5.7-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_38_Mass_Rebuild * Tue Mar 19 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-12
- Resolves: RHEL-28530 - squid: Denial of Service in HTTP Chunked
* Mon Dec 05 2022 Tomas Korbar <tkorbar@redhat.com> - 7:5.7-3 Decoding (CVE-2024-25111)
- Backport adding IP_BIND_ADDRESS_NO_PORT flag to outgoing connections - Resolves: RHEL-26092 - squid: denial of service in HTTP header
parser (CVE-2024-25617)
* Wed Oct 12 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.7-2
- Provide a sysusers.d file to get user() and group() provides (#2134071) * Fri Feb 02 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-10
- Resolves: RHEL-19556 - squid: denial of service in HTTP request
* Tue Sep 06 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.7-1 parsing (CVE-2023-50269)
- new version 5.7
* Thu Feb 01 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-9
* Sat Jul 23 2022 Fedora Release Engineering <releng@fedoraproject.org> - 7:5.6-2 - Resolves: RHEL-18354 - squid: Buffer over-read in the HTTP Message processing
- Rebuilt for https://fedoraproject.org/wiki/Fedora_37_Mass_Rebuild feature (CVE-2023-49285)
- Resolves: RHEL-18345 - squid: Incorrect Check of Function Return Value In
* Mon Jun 27 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.6-1 Helper Process management (CVE-2023-49286)
- new version 5.6 - Resolves: RHEL-18146 - squid crashes in assertion when a parent peer exists
- Resolves: RHEL-18231 - squid: Denial of Service in SSL Certificate validation
* Wed Apr 20 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-1 (CVE-2023-46724)
- Resolves: RHEL-15912 - squid: NULL pointer dereference in the gopher protocol
code (CVE-2023-46728)
* Tue Dec 05 2023 Tomas Korbar <tkorbar@redhat.com> - 7:5.5-8
- Resolves: RHEL-14802 - squid: multiple issues in HTTP response caching
* Sun Nov 12 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-7
- Resolves: RHEL-14820 - squid: squid: denial of Service in FTP
- Resolves: RHEL-14809 - squid: squid: Denial of Service in HTTP Digest
Authentication
- Resolves: RHEL-14781 - squid: squid: Request/Response smuggling in HTTP/1.1
and ICAP
* Wed Aug 16 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-6
- Resolves: #2231827 - Crash with half_closed_client on
* Tue Dec 06 2022 Tomas Korbar <tkorbar@redhat.com> - 7:5.5-5
- Resolves: #2151188 - [RFE] Add the "IP_BIND_ADDRESS_NO_PORT"
flag to sockets created for outgoing connections in the squid source code.
* Mon Nov 07 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-4
- Resolves: #2095468 - [RFE] squid use systemd-sysusers
* Mon Nov 07 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-3
- Resolves: #2130253 - CVE-2022-41318 squid: buffer-over-read in SSPI and SMB
authentication
* Mon Jul 11 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-2
- Resolves: #2100785 - CVE-2021-46784 squid: DoS when processing gopher server
responses
* Tue May 31 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.5-1
- new version 5.5 - new version 5.5
- Resolves: #2053799 - squid-5.5 is available - Resolves: #2075727 - The memory usage of the squid process keeps increasing
* Wed Feb 09 2022 Luboš Uhliarik <luhliari@redhat.com> - 7:5.4-1 * Thu Oct 07 2021 Luboš Uhliarik <luhliari@redhat.com> - 7:5.2-1
- new version 5.4 - new version 5.2
- Resolves: #1934560 - squid: out-of-bounds read in WCCP protocol
- Resolves: #2011637 - Rebase squid to 5.2
* Sat Jan 22 2022 Fedora Release Engineering <releng@fedoraproject.org> - 7:5.2-2 * Wed Sep 15 2021 Luboš Uhliarik <luhliari@redhat.com> - 7:5.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_36_Mass_Rebuild - Resolves: #1988122 - Enable LTO build of squid for RHEL 9
* Tue Oct 05 2021 Luboš Uhliarik <luhliari@redhat.com> - 7:5.2-1 * Tue Aug 10 2021 Mohan Boddu <mboddu@redhat.com> - 7:5.1-2
- new version 5.2 (#2010109) - Rebuilt for IMA sigs, glibc 2.34, aarch64 flags
- Resolves: #1934559 - squid: out-of-bounds read in WCCP protocol Related: rhbz#1991688
* Tue Sep 14 2021 Sahana Prasad <sahana@redhat.com> - 7:5.1-2
- Rebuilt with OpenSSL 3.0.0
* Thu Aug 05 2021 Luboš Uhliarik <luhliari@redhat.com> - 7:5.1-1 * Thu Aug 05 2021 Luboš Uhliarik <luhliari@redhat.com> - 7:5.1-1
- new version 5.1 - new version 5.1
- Resolves: #1990517 - Rebase squid to 5.1
- Resolves: #1985231 - squid: FTBFS because of OpenSSL 3.0 preprocessor macro
changes
* Wed Jun 16 2021 Mohan Boddu <mboddu@redhat.com> - 7:5.0.6-4
- Rebuilt for RHEL 9 BETA for openssl 3.0
Related: rhbz#1971065
* Fri Jul 23 2021 Fedora Release Engineering <releng@fedoraproject.org> - 7:5.0.6-2 * Tue Jun 08 2021 Luboš Uhliarik <luhliari@redhat.com> - 7:5.0.6-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_35_Mass_Rebuild - Resolves: #1969322 - squid doesn't work with active ftp
* Mon May 17 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:5.0.6-2
- Resolves: #1953505 - squid: Port to OpenSSL 3.0
* Mon May 17 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:5.0.6-1 * Mon May 17 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:5.0.6-1
- new version 5.0.6 - new version 5.0.6
- Resolves: #1961253 - Rebase squid to 5.0.6
* Fri Apr 23 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:5.0.5-4 * Fri Apr 23 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:5.0.5-2
- Related: #1934919 - squid update attempts fail with file conflicts - new version 5.0.5
- Resolves: #1952896 - Rebase squid to >= 5.0.5
* Fri Mar 05 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:5.0.5-3 - Resolves: #1940412 - Remove libdb dependency from squid
- Resolves: #1934919 - squid update attempts fail with file conflicts
* Tue Mar 02 2021 Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl> - 7:5.0.5-2 * Fri Apr 16 2021 Mohan Boddu <mboddu@redhat.com> - 7:4.14-2
- Rebuilt for updated systemd-rpm-macros - Rebuilt for RHEL 9 BETA on Apr 15th 2021. Related: rhbz#1947937
See https://pagure.io/fesco/issue/2583.
* Wed Feb 10 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:5.0.5-1 * Wed Mar 31 2021 Lubos Uhliarik <luhliari@redhat.com> - 7:4.14-1
- new version 5.0.5 - new version 4.14
- Resolves: #1939927 - CVE-2020-25097 squid: improper input validation may allow
a trusted client to perform HTTP Request Smuggling
* Wed Jan 27 2021 Fedora Release Engineering <releng@fedoraproject.org> - 7:4.13-3 * Wed Jan 27 2021 Fedora Release Engineering <releng@fedoraproject.org> - 7:4.13-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_34_Mass_Rebuild - Rebuilt for https://fedoraproject.org/wiki/Fedora_34_Mass_Rebuild