import kmod-redhat-mlx5_core-5.0_0_dup8.2-2.el8_2

c8 imports/c8/kmod-redhat-mlx5_core-5.0_0_dup8.2-2.el8_2
CentOS Sources, committed by MSVSphere Packaging Team
commit 19094e80b6

.gitignore

@@ -0,0 +1 @@
SOURCES/mlx5_core-redhat-5.0_0_dup8.2.tar.bz2

@@ -0,0 +1 @@
c6b12c77b647c399d937311a20a55a2ba2d7fcc5 SOURCES/mlx5_core-redhat-5.0_0_dup8.2.tar.bz2

@@ -0,0 +1,58 @@
From d03ac6626a42264e3b6a0cea3ec19e8c7a83f326 Mon Sep 17 00:00:00 2001
From: Davide Caratti <dcaratti@redhat.com>
Date: Tue, 28 Jan 2020 09:13:33 -0500
Subject: [PATCH 001/312] [netdrv] mlx5e: allow TSO on VXLAN over VLAN
topologies
Message-id: <92832a2adaee9760b05b903f7b15c4b107dab620.1580148241.git.dcaratti@redhat.com>
Patchwork-id: 294141
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 netdrv] net/mlx5e: allow TSO on VXLAN over VLAN topologies
Bugzilla: 1780643
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: John Linville <linville@redhat.com>
RH-Acked-by: Paolo Abeni <pabeni@redhat.com>
RH-Acked-by: David S. Miller <davem@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1780643
Upstream Status: net-next.git commit a1718505d7f6
Brew: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=26037831
Tested: using a variant of the script used to verify bz1626213
Conflicts: none
commit a1718505d7f67ee0ab051322f1cbc7ac42b5da82
Author: Davide Caratti <dcaratti@redhat.com>
Date: Thu Jan 9 12:07:59 2020 +0100
net/mlx5e: allow TSO on VXLAN over VLAN topologies
since mlx5 hardware can segment correctly TSO packets on VXLAN over VLAN
topologies, CPU usage can improve significantly if we enable tunnel
offloads in dev->vlan_features, like it was done in the past with other
NIC drivers (e.g. mlx4, be2net and ixgbe).
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 00ef0cd3ca13..7447b84e2d44 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4855,6 +4855,8 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL |
NETIF_F_GSO_UDP_TUNNEL_CSUM;
netdev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
+ netdev->vlan_features |= NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM;
}
if (MLX5_CAP_ETH(mdev, tunnel_stateless_gre)) {
--
2.13.6

@@ -0,0 +1,75 @@
From 2c53f8c40495fbe39613f8cf3a800474846fa96b Mon Sep 17 00:00:00 2001
From: Petr Oros <poros@redhat.com>
Date: Mon, 24 Feb 2020 16:46:48 -0500
Subject: [PATCH 002/312] [netdrv] net: reject PTP periodic output requests
with unsupported flags
Message-id: <b9789c5e34985cfcd9226d3d80179cbbdd68abb3.1582559430.git.poros@redhat.com>
Patchwork-id: 295286
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 net PATCH 03/14] net: reject PTP periodic output requests with unsupported flags
Bugzilla: 1795192
RH-Acked-by: Neil Horman <nhorman@redhat.com>
RH-Acked-by: Prarit Bhargava <prarit@redhat.com>
RH-Acked-by: Corinna Vinschen <vinschen@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
Bugzilla: http://bugzilla.redhat.com/show_bug.cgi?id=1795192
Conflicts: \
- Unmerged path drivers/net/ethernet/microchip/lan743x_ptp.c
Upstream commit(s):
commit 7f9048f1df6f0c1c7a74a15c8b4ce033a753f274
Author: Jacob Keller <jacob.e.keller@intel.com>
Date: Thu Nov 14 10:44:56 2019 -0800
net: reject PTP periodic output requests with unsupported flags
Commit 823eb2a3c4c7 ("PTP: add support for one-shot output") introduced
a new flag for the PTP periodic output request ioctl. This flag is not
currently supported by any driver.
Fix all drivers which implement the periodic output request ioctl to
explicitly reject any request with flags they do not understand. This
ensures that the driver does not accidentally misinterpret the
PTP_PEROUT_ONE_SHOT flag, or any new flag introduced in the future.
This is important for forward compatibility: if a new flag is
introduced, the driver should reject requests to enable the flag until
the driver has actually been modified to support the flag in question.
Cc: Felipe Balbi <felipe.balbi@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Christopher Hall <christopher.s.hall@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Petr Oros <poros@redhat.com>
Signed-off-by: Timothy Redaelli <tredaelli@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
index 0059b290e095..cff6b60de304 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
@@ -290,6 +290,10 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
if (!MLX5_PPS_CAP(mdev))
return -EOPNOTSUPP;
+ /* Reject requests with unsupported flags */
+ if (rq->perout.flags)
+ return -EOPNOTSUPP;
+
if (rq->perout.index >= clock->ptp_info.n_pins)
return -EINVAL;
--
2.13.6

@@ -0,0 +1,78 @@
From 87d65423773d32028e88214dbbb13e147b0388ac Mon Sep 17 00:00:00 2001
From: Petr Oros <poros@redhat.com>
Date: Mon, 24 Feb 2020 16:46:52 -0500
Subject: [PATCH 003/312] [netdrv] mlx5: reject unsupported external timestamp
flags
Message-id: <37f4742ef0d140155bdf2a2761983f6b886c9289.1582559430.git.poros@redhat.com>
Patchwork-id: 295290
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 net PATCH 07/14] mlx5: reject unsupported external timestamp flags
Bugzilla: 1795192
RH-Acked-by: Neil Horman <nhorman@redhat.com>
RH-Acked-by: Prarit Bhargava <prarit@redhat.com>
RH-Acked-by: Corinna Vinschen <vinschen@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
Bugzilla: http://bugzilla.redhat.com/show_bug.cgi?id=1795192
Upstream commit(s):
commit 2e0645a00e25f7122cad6da57ce3cc855df49ddd
Author: Jacob Keller <jacob.e.keller@intel.com>
Date: Thu Nov 14 10:45:00 2019 -0800
mlx5: reject unsupported external timestamp flags
Fix the mlx5 core PTP support to explicitly reject any future flags that
get added to the external timestamp request ioctl.
In order to maintain currently functioning code, this patch accepts all
three current flags. This is because the PTP_RISING_EDGE and
PTP_FALLING_EDGE flags have unclear semantics and each driver seems to
have interpreted them slightly differently.
[ RC: I'm not 100% sure what this driver does, but if I'm not wrong it
follows the dp83640:
flags                                                 Meaning
----------------------------------------------------  --------------------------
PTP_ENABLE_FEATURE                                    Time stamp rising edge
PTP_ENABLE_FEATURE|PTP_RISING_EDGE                    Time stamp rising edge
PTP_ENABLE_FEATURE|PTP_FALLING_EDGE                   Time stamp falling edge
PTP_ENABLE_FEATURE|PTP_RISING_EDGE|PTP_FALLING_EDGE   Time stamp falling edge
]
Cc: Feras Daoud <ferasda@mellanox.com>
Cc: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Richard Cochran <richardcochran@gmail.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Petr Oros <poros@redhat.com>
Signed-off-by: Timothy Redaelli <tredaelli@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
index cff6b60de304..9a40f24e3193 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
@@ -236,6 +236,12 @@ static int mlx5_extts_configure(struct ptp_clock_info *ptp,
if (!MLX5_PPS_CAP(mdev))
return -EOPNOTSUPP;
+ /* Reject requests with unsupported flags */
+ if (rq->extts.flags & ~(PTP_ENABLE_FEATURE |
+ PTP_RISING_EDGE |
+ PTP_FALLING_EDGE))
+ return -EOPNOTSUPP;
+
if (rq->extts.index >= clock->ptp_info.n_pins)
return -EINVAL;
--
2.13.6
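
Note on the two PTP patches above: both follow the same defensive pattern, mask the request flags
against the set the driver actually implements and return -EOPNOTSUPP for anything else, so that a
future UAPI flag is rejected instead of being silently misinterpreted. A minimal user-space sketch of
that check is below; it is illustrative only (not driver code), with the flag bit values restated from
include/uapi/linux/ptp_clock.h.

#include <errno.h>
#include <stdio.h>

/* Flag bits as defined in include/uapi/linux/ptp_clock.h */
#define PTP_ENABLE_FEATURE (1 << 0)
#define PTP_RISING_EDGE    (1 << 1)
#define PTP_FALLING_EDGE   (1 << 2)

/* Accept only the flags the driver knows how to handle. */
static int validate_extts_flags(unsigned int flags)
{
	const unsigned int supported = PTP_ENABLE_FEATURE |
				       PTP_RISING_EDGE |
				       PTP_FALLING_EDGE;

	if (flags & ~supported)
		return -EOPNOTSUPP;	/* unknown/future flag requested */
	return 0;
}

int main(void)
{
	printf("known flags -> %d\n",
	       validate_extts_flags(PTP_ENABLE_FEATURE | PTP_RISING_EDGE));
	printf("future flag -> %d\n",
	       validate_extts_flags(PTP_ENABLE_FEATURE | (1 << 7)));
	return 0;
}

The perout case in patch 002 is the degenerate form of the same check: no flags are supported at all,
so any non-zero flags word is rejected.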

@@ -0,0 +1,103 @@
From 1ee524cc59988f1b56d8bc6f1f49ba56223852fe Mon Sep 17 00:00:00 2001
From: Ivan Vecera <ivecera@redhat.com>
Date: Fri, 27 Mar 2020 19:44:24 -0400
Subject: [PATCH 004/312] [netdrv] mlx5e: Reorder mirrer action parsing to
check for encap first
Message-id: <20200327194424.1643094-20-ivecera@redhat.com>
Patchwork-id: 298090
Patchwork-instance: patchwork
O-Subject: [RHEL-8.3 net PATCH 19/19] net/mlx5e: Reorder mirrer action parsing to check for encap first
Bugzilla: 1818074
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: John Linville <linville@redhat.com>
RH-Acked-by: Petr Oros <poros@redhat.com>
Bugzilla: http://bugzilla.redhat.com/show_bug.cgi?id=1818074
Upstream commit(s):
commit b6a4ac24c14be1247b0fd896737a01b8fa121318
Author: Vlad Buslov <vladbu@mellanox.com>
Date: Thu Nov 7 13:37:57 2019 +0200
net/mlx5e: Reorder mirrer action parsing to check for encap first
Mirred action parsing code in parse_tc_fdb_actions() first checks if
out_dev has same parent id, and only verifies that there is a pending encap
action that was parsed before. Recent change in vxlan module made function
netdev_port_same_parent_id() to return true when called for mlx5 eswitch
representor and vxlan device created explicitly on mlx5 representor
device (vxlan devices created with "external" flag without explicitly
specifying parent interface are not affected). With call to
netdev_port_same_parent_id() returning true, incorrect code path is chosen
and encap rules fail to offload because vxlan dev is not a valid eswitch
forwarding dev. Dmesg log of error:
[ 1784.389797] devices ens1f0_0 vxlan1 not on same switch HW, can't offload forwarding
In order to fix the issue, rearrange conditional in parse_tc_fdb_actions()
to check for pending encap action before checking if out_dev has the same
parent id.
Fixes: 0ce1822c2a08 ("vxlan: add adjacent link to limit depth level")
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Timothy Redaelli <tredaelli@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 28 ++++++++++++-------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index db960e3ea3cd..f06e99eb06b9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3270,7 +3270,20 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
MLX5_FLOW_CONTEXT_ACTION_COUNT;
- if (netdev_port_same_parent_id(priv->netdev, out_dev)) {
+ if (encap) {
+ parse_attr->mirred_ifindex[attr->out_count] =
+ out_dev->ifindex;
+ parse_attr->tun_info[attr->out_count] = dup_tun_info(info);
+ if (!parse_attr->tun_info[attr->out_count])
+ return -ENOMEM;
+ encap = false;
+ attr->dests[attr->out_count].flags |=
+ MLX5_ESW_DEST_ENCAP;
+ attr->out_count++;
+ /* attr->dests[].rep is resolved when we
+ * handle encap
+ */
+ } else if (netdev_port_same_parent_id(priv->netdev, out_dev)) {
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct net_device *uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH);
struct net_device *uplink_upper;
@@ -3312,19 +3325,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
attr->dests[attr->out_count].rep = rpriv->rep;
attr->dests[attr->out_count].mdev = out_priv->mdev;
attr->out_count++;
- } else if (encap) {
- parse_attr->mirred_ifindex[attr->out_count] =
- out_dev->ifindex;
- parse_attr->tun_info[attr->out_count] = dup_tun_info(info);
- if (!parse_attr->tun_info[attr->out_count])
- return -ENOMEM;
- encap = false;
- attr->dests[attr->out_count].flags |=
- MLX5_ESW_DEST_ENCAP;
- attr->out_count++;
- /* attr->dests[].rep is resolved when we
- * handle encap
- */
} else if (parse_attr->filter_dev != priv->netdev) {
/* All mlx5 devices are called to configure
* high level device filters. Therefore, the
--
2.13.6

@@ -0,0 +1,84 @@
From edf3630554bc462e0bee93faa5685e8e11a5a936 Mon Sep 17 00:00:00 2001
From: Jiri Benc <jbenc@redhat.com>
Date: Wed, 22 Apr 2020 18:18:00 -0400
Subject: [PATCH 005/312] [netdrv] net/mlx5e: Move the SW XSK code from NAPI
poll to a separate function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Message-id: <6bb2443d30349d894a710f787928942121ac29dc.1587578778.git.jbenc@redhat.com>
Patchwork-id: 304519
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 net 09/46] net/mlx5e: Move the SW XSK code from NAPI poll to a separate function
Bugzilla: 1819630
RH-Acked-by: Hangbin Liu <haliu@redhat.com>
RH-Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
RH-Acked-by: Ivan Vecera <ivecera@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1819630
commit 871aa189a69f7bbe6254459d17b78e1cce65c9ae
Author: Maxim Mikityanskiy <maximmi@mellanox.com>
Date: Wed Aug 14 09:27:22 2019 +0200
net/mlx5e: Move the SW XSK code from NAPI poll to a separate function
Two XSK tasks are performed during NAPI polling, that are not bound to
hardware interrupts: TXing packets and polling for frames in the Fill
Ring. They are special in a way that the hardware doesn't know about
these tasks, so it doesn't trigger interrupts if there is still some
work to be done, it's our driver's responsibility to ensure NAPI will be
rescheduled if needed.
Create a new function to handle these tasks and move the corresponding
code from mlx5e_napi_poll to the new function to improve modularity and
prepare for the changes in the following patch.
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: Timothy Redaelli <tredaelli@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
index 49b06b256c92..6d16dee38ede 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
@@ -81,6 +81,16 @@ void mlx5e_trigger_irq(struct mlx5e_icosq *sq)
mlx5e_notify_hw(wq, sq->pc, sq->uar_map, &nopwqe->ctrl);
}
+static bool mlx5e_napi_xsk_post(struct mlx5e_xdpsq *xsksq, struct mlx5e_rq *xskrq)
+{
+ bool busy_xsk = false;
+
+ busy_xsk |= mlx5e_xsk_tx(xsksq, MLX5E_TX_XSK_POLL_BUDGET);
+ busy_xsk |= xskrq->post_wqes(xskrq);
+
+ return busy_xsk;
+}
+
int mlx5e_napi_poll(struct napi_struct *napi, int budget)
{
struct mlx5e_channel *c = container_of(napi, struct mlx5e_channel,
@@ -122,8 +132,7 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
if (xsk_open) {
mlx5e_poll_ico_cq(&c->xskicosq.cq);
busy |= mlx5e_poll_xdpsq_cq(&xsksq->cq);
- busy_xsk |= mlx5e_xsk_tx(xsksq, MLX5E_TX_XSK_POLL_BUDGET);
- busy_xsk |= xskrq->post_wqes(xskrq);
+ busy_xsk |= mlx5e_napi_xsk_post(xsksq, xskrq);
}
busy |= busy_xsk;
--
2.13.6

@@ -0,0 +1,163 @@
From d1ac1b641ea39e946e94c155520c590a5a27e23a Mon Sep 17 00:00:00 2001
From: Jiri Benc <jbenc@redhat.com>
Date: Wed, 22 Apr 2020 18:18:11 -0400
Subject: [PATCH 006/312] [netdrv] mlx5e: Allow XSK frames smaller than a page
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Message-id: <5c0604430537e1022e4424f8683b5611f3ccceb3.1587578778.git.jbenc@redhat.com>
Patchwork-id: 304531
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 net 20/46] net/mlx5e: Allow XSK frames smaller than a page
Bugzilla: 1819630
RH-Acked-by: Hangbin Liu <haliu@redhat.com>
RH-Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
RH-Acked-by: Ivan Vecera <ivecera@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1819630
commit 282c0c798f8ec883c2ac2f1ce2dc06ef9421731c
Author: Maxim Mikityanskiy <maximmi@mellanox.com>
Date: Tue Aug 27 02:25:26 2019 +0000
net/mlx5e: Allow XSK frames smaller than a page
Relax the requirements to the XSK frame size to allow it to be smaller
than a page and even not a power of two. The current implementation can
work in this mode, both with Striding RQ and without it.
The code that checks `mtu + headroom <= XSK frame size` is modified
accordingly. Any frame size between 2048 and PAGE_SIZE is accepted.
Functions that worked with pages only now work with XSK frames, even if
their size is different from PAGE_SIZE.
With XSK queues, regardless of the frame size, Striding RQ uses the
stride size of PAGE_SIZE, and UMR MTTs are posted using starting
addresses of frames, but PAGE_SIZE as page size. MTU guarantees that no
packet data will overlap with other frames. UMR MTT size is made equal
to the stride size of the RQ, because UMEM frames may come in random
order, and we need to handle them one by one. PAGE_SIZE is just a power
of two that is bigger than any allowed XSK frame size, and also it
doesn't require making additional changes to the code.
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: Timothy Redaelli <tredaelli@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/en/params.c | 23 ++++++++++++++++++----
.../net/ethernet/mellanox/mlx5/core/en/params.h | 2 ++
.../net/ethernet/mellanox/mlx5/core/en/xsk/rx.c | 2 +-
.../net/ethernet/mellanox/mlx5/core/en/xsk/setup.c | 15 +++++++++-----
4 files changed, 32 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 79301d116667..eb2e1f2138e4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -25,18 +25,33 @@ u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
return headroom;
}
-u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
- struct mlx5e_xsk_param *xsk)
+u32 mlx5e_rx_get_min_frag_sz(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
u32 hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
u16 linear_rq_headroom = mlx5e_get_linear_rq_headroom(params, xsk);
- u32 frag_sz = linear_rq_headroom + hw_mtu;
+
+ return linear_rq_headroom + hw_mtu;
+}
+
+u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
+{
+ u32 frag_sz = mlx5e_rx_get_min_frag_sz(params, xsk);
/* AF_XDP doesn't build SKBs in place. */
if (!xsk)
frag_sz = MLX5_SKB_FRAG_SZ(frag_sz);
- /* XDP in mlx5e doesn't support multiple packets per page. */
+ /* XDP in mlx5e doesn't support multiple packets per page. AF_XDP is a
+ * special case. It can run with frames smaller than a page, as it
+ * doesn't allocate pages dynamically. However, here we pretend that
+ * fragments are page-sized: it allows to treat XSK frames like pages
+ * by redirecting alloc and free operations to XSK rings and by using
+ * the fact there are no multiple packets per "page" (which is a frame).
+ * The latter is important, because frames may come in a random order,
+ * and we will have trouble assemblying a real page of multiple frames.
+ */
if (mlx5e_rx_is_xdp(params, xsk))
frag_sz = max_t(u32, frag_sz, PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index 3a615d663d84..989d8f429438 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -76,6 +76,8 @@ static inline bool mlx5e_qid_validate(const struct mlx5e_profile *profile,
u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
struct mlx5e_xsk_param *xsk);
+u32 mlx5e_rx_get_min_frag_sz(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
struct mlx5e_xsk_param *xsk);
u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 6a55573ec8f2..3783776b6d70 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -104,7 +104,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
/* head_offset is not used in this function, because di->xsk.data and
* di->addr point directly to the necessary place. Furthermore, in the
- * current implementation, one page = one packet = one frame, so
+ * current implementation, UMR pages are mapped to XSK frames, so
* head_offset should always be 0.
*/
WARN_ON_ONCE(head_offset);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
index d3a173e88e24..81efd2fbc75d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
@@ -4,18 +4,23 @@
#include "setup.h"
#include "en/params.h"
+/* It matches XDP_UMEM_MIN_CHUNK_SIZE, but as this constant is private and may
+ * change unexpectedly, and mlx5e has a minimum valid stride size for striding
+ * RQ, keep this check in the driver.
+ */
+#define MLX5E_MIN_XSK_CHUNK_SIZE 2048
+
bool mlx5e_validate_xsk_param(struct mlx5e_params *params,
struct mlx5e_xsk_param *xsk,
struct mlx5_core_dev *mdev)
{
- /* AF_XDP doesn't support frames larger than PAGE_SIZE, and the current
- * mlx5e XDP implementation doesn't support multiple packets per page.
- */
- if (xsk->chunk_size != PAGE_SIZE)
+ /* AF_XDP doesn't support frames larger than PAGE_SIZE. */
+ if (xsk->chunk_size > PAGE_SIZE ||
+ xsk->chunk_size < MLX5E_MIN_XSK_CHUNK_SIZE)
return false;
/* Current MTU and XSK headroom don't allow packets to fit the frames. */
- if (mlx5e_rx_get_linear_frag_sz(params, xsk) > xsk->chunk_size)
+ if (mlx5e_rx_get_min_frag_sz(params, xsk) > xsk->chunk_size)
return false;
/* frag_sz is different for regular and XSK RQs, so ensure that linear
--
2.13.6
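
The new validation rule in the patch above reduces to: an XSK chunk may be any size between 2048
bytes and PAGE_SIZE, provided the RX headroom plus the hardware MTU still fits in one chunk. A
stand-alone sketch of that condition follows; PAGE_SIZE, headroom and MTU values are example numbers
chosen for illustration, not what the driver computes.

#include <stdbool.h>
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE  4096u   /* assumed 4K pages for this example */
#define MIN_XSK_CHUNK_SIZE 2048u   /* mirrors MLX5E_MIN_XSK_CHUNK_SIZE */

/* chunk must lie in [2048, PAGE_SIZE] and hold headroom + MTU worth of data */
static bool xsk_chunk_ok(unsigned int chunk, unsigned int headroom,
			 unsigned int hw_mtu)
{
	if (chunk < MIN_XSK_CHUNK_SIZE || chunk > EXAMPLE_PAGE_SIZE)
		return false;
	return headroom + hw_mtu <= chunk;
}

int main(void)
{
	/* 3072-byte frame, 256B headroom, 1514-byte frame size -> accepted */
	printf("%d\n", xsk_chunk_ok(3072, 256, 1514));
	/* 2048-byte frame with a jumbo 9014-byte frame size -> rejected */
	printf("%d\n", xsk_chunk_ok(2048, 256, 9014));
	return 0;
}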

@@ -0,0 +1,55 @@
From a0952a05dcb2a18564f90d1181591f7682cc9728 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:24 -0400
Subject: [PATCH 007/312] [netdrv] net: Use skb accessors in network drivers
Message-id: <20200510145245.10054-2-ahleihel@redhat.com>
Patchwork-id: 306543
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 01/82] net: Use skb accessors in network drivers
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
Conflicts:
- Take mlx5 changes only.
commit d7840976e3915669382c62ddd1700960f348328e
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date: Mon Jul 22 20:08:25 2019 -0700
net: Use skb accessors in network drivers
In preparation for unifying the skb_frag and bio_vec, use the fine
accessors which already exist and use skb_frag_t instead of
struct skb_frag_struct.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 79f891c627da..5be0bad6d359 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -211,7 +211,7 @@ mlx5e_txwqe_build_dsegs(struct mlx5e_txqsq *sq, struct sk_buff *skb,
}
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
int fsz = skb_frag_size(frag);
dma_addr = skb_frag_dma_map(sq->pdev, frag, 0, fsz,
--
2.13.6

@@ -0,0 +1,123 @@
From bd1ba9688ed45fe25f151e33657b2c50c0b4f424 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:25 -0400
Subject: [PATCH 008/312] [netdrv] net/mlx5e: xsk: dynamically allocate
mlx5e_channel_param
Message-id: <20200510145245.10054-3-ahleihel@redhat.com>
Patchwork-id: 306542
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 02/82] net/mlx5e: xsk: dynamically allocate mlx5e_channel_param
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 658688ce6c936254c34ea1f31549ec62439574aa
Author: Arnd Bergmann <arnd@arndb.de>
Date: Tue Jul 23 12:02:26 2019 +0000
net/mlx5e: xsk: dynamically allocate mlx5e_channel_param
The structure is too large to put on the stack, resulting in a
warning on 32-bit ARM:
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c:59:5: error: stack frame size of 1344 bytes in function
'mlx5e_open_xsk' [-Werror,-Wframe-larger-than=]
Use kvzalloc() instead.
Fixes: a038e9794541 ("net/mlx5e: Add XSK zero-copy support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/en/xsk/setup.c | 27 ++++++++++++++--------
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
index 81efd2fbc75d..79060ee60c98 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
@@ -65,24 +65,28 @@ int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params,
struct mlx5e_xsk_param *xsk, struct xdp_umem *umem,
struct mlx5e_channel *c)
{
- struct mlx5e_channel_param cparam = {};
+ struct mlx5e_channel_param *cparam;
struct dim_cq_moder icocq_moder = {};
int err;
if (!mlx5e_validate_xsk_param(params, xsk, priv->mdev))
return -EINVAL;
- mlx5e_build_xsk_cparam(priv, params, xsk, &cparam);
+ cparam = kvzalloc(sizeof(*cparam), GFP_KERNEL);
+ if (!cparam)
+ return -ENOMEM;
- err = mlx5e_open_cq(c, params->rx_cq_moderation, &cparam.rx_cq, &c->xskrq.cq);
+ mlx5e_build_xsk_cparam(priv, params, xsk, cparam);
+
+ err = mlx5e_open_cq(c, params->rx_cq_moderation, &cparam->rx_cq, &c->xskrq.cq);
if (unlikely(err))
- return err;
+ goto err_free_cparam;
- err = mlx5e_open_rq(c, params, &cparam.rq, xsk, umem, &c->xskrq);
+ err = mlx5e_open_rq(c, params, &cparam->rq, xsk, umem, &c->xskrq);
if (unlikely(err))
goto err_close_rx_cq;
- err = mlx5e_open_cq(c, params->tx_cq_moderation, &cparam.tx_cq, &c->xsksq.cq);
+ err = mlx5e_open_cq(c, params->tx_cq_moderation, &cparam->tx_cq, &c->xsksq.cq);
if (unlikely(err))
goto err_close_rq;
@@ -92,21 +96,23 @@ int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params,
* is disabled and then reenabled, but the SQ continues receiving CQEs
* from the old UMEM.
*/
- err = mlx5e_open_xdpsq(c, params, &cparam.xdp_sq, umem, &c->xsksq, true);
+ err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, umem, &c->xsksq, true);
if (unlikely(err))
goto err_close_tx_cq;
- err = mlx5e_open_cq(c, icocq_moder, &cparam.icosq_cq, &c->xskicosq.cq);
+ err = mlx5e_open_cq(c, icocq_moder, &cparam->icosq_cq, &c->xskicosq.cq);
if (unlikely(err))
goto err_close_sq;
/* Create a dedicated SQ for posting NOPs whenever we need an IRQ to be
* triggered and NAPI to be called on the correct CPU.
*/
- err = mlx5e_open_icosq(c, params, &cparam.icosq, &c->xskicosq);
+ err = mlx5e_open_icosq(c, params, &cparam->icosq, &c->xskicosq);
if (unlikely(err))
goto err_close_icocq;
+ kvfree(cparam);
+
spin_lock_init(&c->xskicosq_lock);
set_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
@@ -128,6 +134,9 @@ int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params,
err_close_rx_cq:
mlx5e_close_cq(&c->xskrq.cq);
+err_free_cparam:
+ kvfree(cparam);
+
return err;
}
--
2.13.6

@@ -0,0 +1,276 @@
From ef56ac3b60e0e366983a421b51afc0e980c7cb1d Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:29 -0400
Subject: [PATCH 009/312] [netdrv] net/mlx5: E-Switch, add ingress rate support
Message-id: <20200510145245.10054-7-ahleihel@redhat.com>
Patchwork-id: 306545
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 06/82] net/mlx5: E-Switch, add ingress rate support
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit fcb64c0f5640e629bd77c2cb088f9fd70ff5bde7
Author: Eli Cohen <eli@mellanox.com>
Date: Wed May 8 11:44:56 2019 +0300
net/mlx5: E-Switch, add ingress rate support
Use the scheduling elements to implement ingress rate limiter on an
eswitch ports ingress traffic. Since the ingress of eswitch port is the
egress of VF port, we control eswitch ingress by controlling VF egress.
Configuration is done using the ports' representor net devices.
Please note that burst size configuration is not supported by devices
ConnectX-5 and earlier generations.
Configuration examples:
tc:
tc filter add dev enp59s0f0_0 root protocol ip matchall action police rate 1mbit burst 20k
ovs:
ovs-vsctl set interface eth0 ingress_policing_rate=1000
Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 19 ++++
drivers/net/ethernet/mellanox/mlx5/core/en_rep.h | 1 +
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 100 ++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_tc.h | 7 ++
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 16 ++++
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 2 +
6 files changed, 145 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index c8ebd93ad5ac..66c8c2ace4b9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -1179,6 +1179,23 @@ mlx5e_rep_setup_tc_cls_flower(struct mlx5e_priv *priv,
}
}
+static
+int mlx5e_rep_setup_tc_cls_matchall(struct mlx5e_priv *priv,
+ struct tc_cls_matchall_offload *ma)
+{
+ switch (ma->command) {
+ case TC_CLSMATCHALL_REPLACE:
+ return mlx5e_tc_configure_matchall(priv, ma);
+ case TC_CLSMATCHALL_DESTROY:
+ return mlx5e_tc_delete_matchall(priv, ma);
+ case TC_CLSMATCHALL_STATS:
+ mlx5e_tc_stats_matchall(priv, ma);
+ return 0;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data,
void *cb_priv)
{
@@ -1188,6 +1205,8 @@ static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data,
switch (type) {
case TC_SETUP_CLSFLOWER:
return mlx5e_rep_setup_tc_cls_flower(priv, type_data, flags);
+ case TC_SETUP_CLSMATCHALL:
+ return mlx5e_rep_setup_tc_cls_matchall(priv, type_data);
default:
return -EOPNOTSUPP;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
index fcc5e52023ef..c8f3bbdc1ffb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
@@ -90,6 +90,7 @@ struct mlx5e_rep_priv {
struct mlx5_flow_handle *vport_rx_rule;
struct list_head vport_sqs_list;
struct mlx5_rep_uplink_priv uplink_priv; /* valid for uplink rep */
+ struct rtnl_link_stats64 prev_vf_vport_stats;
struct devlink_port dl_port;
};
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index f06e99eb06b9..1f76974dc946 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3932,6 +3932,106 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
return err;
}
+static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ struct mlx5_eswitch *esw;
+ u16 vport_num;
+ u32 rate_mbps;
+ int err;
+
+ esw = priv->mdev->priv.eswitch;
+ /* rate is given in bytes/sec.
+ * First convert to bits/sec and then round to the nearest mbit/secs.
+ * mbit means million bits.
+ * Moreover, if rate is non zero we choose to configure to a minimum of
+ * 1 mbit/sec.
+ */
+ rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0;
+ vport_num = rpriv->rep->vport;
+
+ err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps);
+ if (err)
+ NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
+
+ return err;
+}
+
+static int scan_tc_matchall_fdb_actions(struct mlx5e_priv *priv,
+ struct flow_action *flow_action,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ const struct flow_action_entry *act;
+ int err;
+ int i;
+
+ if (!flow_action_has_entries(flow_action)) {
+ NL_SET_ERR_MSG_MOD(extack, "matchall called with no action");
+ return -EINVAL;
+ }
+
+ if (!flow_offload_has_one_action(flow_action)) {
+ NL_SET_ERR_MSG_MOD(extack, "matchall policing support only a single action");
+ return -EOPNOTSUPP;
+ }
+
+ flow_action_for_each(i, act, flow_action) {
+ switch (act->id) {
+ case FLOW_ACTION_POLICE:
+ err = apply_police_params(priv, act->police.rate_bytes_ps, extack);
+ if (err)
+ return err;
+
+ rpriv->prev_vf_vport_stats = priv->stats.vf_vport;
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(extack, "mlx5 supports only police action for matchall");
+ return -EOPNOTSUPP;
+ }
+ }
+
+ return 0;
+}
+
+int mlx5e_tc_configure_matchall(struct mlx5e_priv *priv,
+ struct tc_cls_matchall_offload *ma)
+{
+ struct netlink_ext_ack *extack = ma->common.extack;
+ int prio = TC_H_MAJ(ma->common.prio) >> 16;
+
+ if (prio != 1) {
+ NL_SET_ERR_MSG_MOD(extack, "only priority 1 is supported");
+ return -EINVAL;
+ }
+
+ return scan_tc_matchall_fdb_actions(priv, &ma->rule->action, extack);
+}
+
+int mlx5e_tc_delete_matchall(struct mlx5e_priv *priv,
+ struct tc_cls_matchall_offload *ma)
+{
+ struct netlink_ext_ack *extack = ma->common.extack;
+
+ return apply_police_params(priv, 0, extack);
+}
+
+void mlx5e_tc_stats_matchall(struct mlx5e_priv *priv,
+ struct tc_cls_matchall_offload *ma)
+{
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ struct rtnl_link_stats64 cur_stats;
+ u64 dbytes;
+ u64 dpkts;
+
+ cur_stats = priv->stats.vf_vport;
+ dpkts = cur_stats.rx_packets - rpriv->prev_vf_vport_stats.rx_packets;
+ dbytes = cur_stats.rx_bytes - rpriv->prev_vf_vport_stats.rx_bytes;
+ rpriv->prev_vf_vport_stats = cur_stats;
+ flow_stats_update(&ma->stats, dpkts, dbytes, jiffies);
+}
+
static void mlx5e_tc_hairpin_update_dead_peer(struct mlx5e_priv *priv,
struct mlx5e_priv *peer_priv)
{
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index 876a78a09dd6..924c6ef86a14 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -63,6 +63,13 @@ int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv,
int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
struct flow_cls_offload *f, unsigned long flags);
+int mlx5e_tc_configure_matchall(struct mlx5e_priv *priv,
+ struct tc_cls_matchall_offload *f);
+int mlx5e_tc_delete_matchall(struct mlx5e_priv *priv,
+ struct tc_cls_matchall_offload *f);
+void mlx5e_tc_stats_matchall(struct mlx5e_priv *priv,
+ struct tc_cls_matchall_offload *ma);
+
struct mlx5e_encap_entry;
void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
struct mlx5e_encap_entry *e,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 691f5e27e389..386e82850ed5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1580,6 +1580,22 @@ static int esw_vport_qos_config(struct mlx5_eswitch *esw,
return 0;
}
+int mlx5_esw_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num,
+ u32 rate_mbps)
+{
+ u32 ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
+ struct mlx5_vport *vport;
+
+ vport = mlx5_eswitch_get_vport(esw, vport_num);
+ MLX5_SET(scheduling_context, ctx, max_average_bw, rate_mbps);
+
+ return mlx5_modify_scheduling_element_cmd(esw->dev,
+ SCHEDULING_HIERARCHY_E_SWITCH,
+ ctx,
+ vport->qos.esw_tsar_ix,
+ MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW);
+}
+
static void node_guid_gen_from_mac(u64 *node_guid, u8 mac[ETH_ALEN])
{
((u8 *)node_guid)[7] = mac[0];
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 1747b6616e66..436c633407d6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -263,6 +263,8 @@ void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
+int mlx5_esw_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num,
+ u32 rate_mbps);
/* E-Switch API */
int mlx5_eswitch_init(struct mlx5_core_dev *dev);
--
2.13.6
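
The heart of the matchall/police offload added above is the unit conversion in apply_police_params():
the policer rate arrives in bytes per second, while the scheduling element takes Mbit/s, rounded to
the nearest megabit with a floor of 1 Mbit/s for any non-zero rate. A small sketch of that arithmetic,
with sample rates chosen to match the tc example in the commit message (sketch only, not driver code):

#include <stdio.h>

/* bytes/s -> Mbit/s, rounded, but never below 1 for a non-zero rate */
static unsigned int rate_to_mbps(unsigned long long rate_bytes_ps)
{
	unsigned long long mbps;

	if (!rate_bytes_ps)
		return 0;	/* zero rate means "remove the limit" */

	mbps = (rate_bytes_ps * 8 + 500000) / 1000000;
	return mbps ? (unsigned int)mbps : 1;
}

int main(void)
{
	/* "police rate 1mbit" == 125000 bytes/s -> 1 Mbit/s */
	printf("%u\n", rate_to_mbps(125000));
	/* a 100 kbit/s request (12500 bytes/s) is clamped up to 1 Mbit/s */
	printf("%u\n", rate_to_mbps(12500));
	return 0;
}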

@@ -0,0 +1,56 @@
From 20db6bb321f335b527ccf7befb50c50696e37ebf Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:30 -0400
Subject: [PATCH 010/312] [netdrv] net/mlx5e: Tx, Strict the room needed for SQ
edge NOPs
Message-id: <20200510145245.10054-8-ahleihel@redhat.com>
Patchwork-id: 306547
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 07/82] net/mlx5e: Tx, Strict the room needed for SQ edge NOPs
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 68865419ba1bf502a5bd279a500deda64000249d
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Thu Jul 11 11:20:22 2019 +0300
net/mlx5e: Tx, Strict the room needed for SQ edge NOPs
We use NOPs to populate the WQ fragment edge if the WQE does not fit
in frag, to avoid WQEs crossing a page boundary (or wrap-around the WQ).
The upper bound on the needed number of NOPs is one WQEBB less than
the largest possible WQE, for otherwise the WQE would certainly fit.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index b495e6a976a1..a7a2cd415e69 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -6,7 +6,7 @@
#include "en.h"
-#define MLX5E_SQ_NOPS_ROOM MLX5_SEND_WQE_MAX_WQEBBS
+#define MLX5E_SQ_NOPS_ROOM (MLX5_SEND_WQE_MAX_WQEBBS - 1)
#define MLX5E_SQ_STOP_ROOM (MLX5_SEND_WQE_MAX_WQEBBS +\
MLX5E_SQ_NOPS_ROOM)
--
2.13.6
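
The reasoning behind shaving one WQEBB off the NOP reservation: NOPs are only needed when the next
WQE does not fit in the remaining contiguous fragment, and in that case the leftover space is at most
one WQEBB smaller than the largest WQE, so at most MLX5_SEND_WQE_MAX_WQEBBS - 1 NOPs are ever posted.
A toy exhaustive check of that bound, assuming a maximum WQE of 16 WQEBBs (value used here purely for
illustration):

#include <stdio.h>

#define MAX_WQE_WQEBBS 16u  /* assumed value of MLX5_SEND_WQE_MAX_WQEBBS */

/* NOPs needed to pad the fragment edge before posting a WQE of 'wqebbs' */
static unsigned int nops_needed(unsigned int contig_room, unsigned int wqebbs)
{
	return contig_room < wqebbs ? contig_room : 0;
}

int main(void)
{
	unsigned int worst = 0;

	/* try every leftover-room / WQE-size combination that needs padding */
	for (unsigned int room = 0; room < MAX_WQE_WQEBBS; room++)
		for (unsigned int w = 1; w <= MAX_WQE_WQEBBS; w++)
			if (nops_needed(room, w) > worst)
				worst = nops_needed(room, w);

	/* prints 15: one WQEBB less than the largest WQE */
	printf("worst-case NOPs = %u\n", worst);
	return 0;
}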

@@ -0,0 +1,328 @@
From eee2fd0e4f3d4d9f833a2eec6169c8c46c9388c2 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:31 -0400
Subject: [PATCH 011/312] [netdrv] net/mlx5e: XDP, Close TX MPWQE session when
no room for inline packet left
Message-id: <20200510145245.10054-9-ahleihel@redhat.com>
Patchwork-id: 306548
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 08/82] net/mlx5e: XDP, Close TX MPWQE session when no room for inline packet left
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 6c085a8aab5183d8658c9a692bcfda3e24195b7a
Author: Shay Agroskin <shayag@mellanox.com>
Date: Sun May 12 18:28:27 2019 +0300
net/mlx5e: XDP, Close TX MPWQE session when no room for inline packet left
In MPWQE mode, when transmitting packets with XDP, a packet that is smaller
than a certain size (set to 256 bytes) would be sent inline within its WQE
TX descriptor (mem-copied), in case the hardware tx queue is congested
beyond a pre-defined water-mark.
If a MPWQE cannot contain an additional inline packet, we close this
MPWQE session, and send the packet inlined within the next MPWQE.
To save some MPWQE session close+open operations, we don't open MPWQE
sessions that are contiguously smaller than certain size (set to the
HW MPWQE maximum size). If there isn't enough contiguous room in the
send queue, we fill it with NOPs and wrap the send queue index around.
This way, qualified packets are always sent inline.
Perf tests:
Tested packet rate for UDP 64Byte multi-stream
over two dual port ConnectX-5 100Gbps NICs.
CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
XDP_TX:
With 24 channels:
| ------ | bounced packets | inlined packets | inline ratio |
| before | 113.6Mpps       | 96.3Mpps        | 84%          |
| after  | 115Mpps         | 99.5Mpps        | 86%          |
With one channel:
| ------ | bounced packets | inlined packets | inline ratio |
| before | 6.7Mpps         | 0pps            | 0%           |
| after  | 6.8Mpps         | 0pps            | 0%           |
As we can see, there is improvement in both inline ratio and overall
packet rate for 24 channels. Also, we see no degradation for the
one-channel case.
Signed-off-by: Shay Agroskin <shayag@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 2 -
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 32 ++++---------
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h | 53 ++++++++++++++++++----
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 6 +++
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h | 3 ++
5 files changed, 63 insertions(+), 33 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 3b77b43db748..bc2c38faadc8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -488,8 +488,6 @@ struct mlx5e_xdp_mpwqe {
struct mlx5e_tx_wqe *wqe;
u8 ds_count;
u8 pkt_count;
- u8 max_ds_count;
- u8 complete;
u8 inline_on;
};
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index b0b982cf69bb..8cb98326531f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -179,34 +179,22 @@ static void mlx5e_xdp_mpwqe_session_start(struct mlx5e_xdpsq *sq)
struct mlx5e_xdp_mpwqe *session = &sq->mpwqe;
struct mlx5e_xdpsq_stats *stats = sq->stats;
struct mlx5_wq_cyc *wq = &sq->wq;
- u8 wqebbs;
- u16 pi;
+ u16 pi, contig_wqebbs;
+
+ pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ contig_wqebbs = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
+
+ if (unlikely(contig_wqebbs < MLX5_SEND_WQE_MAX_WQEBBS))
+ mlx5e_fill_xdpsq_frag_edge(sq, wq, pi, contig_wqebbs);
mlx5e_xdpsq_fetch_wqe(sq, &session->wqe);
prefetchw(session->wqe->data);
session->ds_count = MLX5E_XDP_TX_EMPTY_DS_COUNT;
session->pkt_count = 0;
- session->complete = 0;
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
-/* The mult of MLX5_SEND_WQE_MAX_WQEBBS * MLX5_SEND_WQEBB_NUM_DS
- * (16 * 4 == 64) does not fit in the 6-bit DS field of Ctrl Segment.
- * We use a bound lower that MLX5_SEND_WQE_MAX_WQEBBS to let a
- * full-session WQE be cache-aligned.
- */
-#if L1_CACHE_BYTES < 128
-#define MLX5E_XDP_MPW_MAX_WQEBBS (MLX5_SEND_WQE_MAX_WQEBBS - 1)
-#else
-#define MLX5E_XDP_MPW_MAX_WQEBBS (MLX5_SEND_WQE_MAX_WQEBBS - 2)
-#endif
-
- wqebbs = min_t(u16, mlx5_wq_cyc_get_contig_wqebbs(wq, pi),
- MLX5E_XDP_MPW_MAX_WQEBBS);
-
- session->max_ds_count = MLX5_SEND_WQEBB_NUM_DS * wqebbs;
-
mlx5e_xdp_update_inline_state(sq);
stats->mpwqe++;
@@ -244,7 +232,7 @@ static int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq *sq)
{
if (unlikely(!sq->mpwqe.wqe)) {
if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc,
- MLX5_SEND_WQE_MAX_WQEBBS))) {
+ MLX5E_XDPSQ_STOP_ROOM))) {
/* SQ is full, ring doorbell */
mlx5e_xmit_xdp_doorbell(sq);
sq->stats->full++;
@@ -285,8 +273,8 @@ static bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
mlx5e_xdp_mpwqe_add_dseg(sq, xdptxd, stats);
- if (unlikely(session->complete ||
- session->ds_count == session->max_ds_count))
+ if (unlikely(mlx5e_xdp_no_room_for_inline_pkt(session) ||
+ session->ds_count == MLX5E_XDP_MPW_MAX_NUM_DS))
mlx5e_xdp_mpwqe_complete(sq);
mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index d5b0d55d434b..c52f72062b33 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -40,6 +40,26 @@
(sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS)
#define MLX5E_XDP_TX_DS_COUNT (MLX5E_XDP_TX_EMPTY_DS_COUNT + 1 /* SG DS */)
+#define MLX5E_XDPSQ_STOP_ROOM (MLX5E_SQ_STOP_ROOM)
+
+#define MLX5E_XDP_INLINE_WQE_SZ_THRSD (256 - sizeof(struct mlx5_wqe_inline_seg))
+#define MLX5E_XDP_INLINE_WQE_MAX_DS_CNT \
+ DIV_ROUND_UP(MLX5E_XDP_INLINE_WQE_SZ_THRSD, MLX5_SEND_WQE_DS)
+
+/* The mult of MLX5_SEND_WQE_MAX_WQEBBS * MLX5_SEND_WQEBB_NUM_DS
+ * (16 * 4 == 64) does not fit in the 6-bit DS field of Ctrl Segment.
+ * We use a bound lower that MLX5_SEND_WQE_MAX_WQEBBS to let a
+ * full-session WQE be cache-aligned.
+ */
+#if L1_CACHE_BYTES < 128
+#define MLX5E_XDP_MPW_MAX_WQEBBS (MLX5_SEND_WQE_MAX_WQEBBS - 1)
+#else
+#define MLX5E_XDP_MPW_MAX_WQEBBS (MLX5_SEND_WQE_MAX_WQEBBS - 2)
+#endif
+
+#define MLX5E_XDP_MPW_MAX_NUM_DS \
+ (MLX5E_XDP_MPW_MAX_WQEBBS * MLX5_SEND_WQEBB_NUM_DS)
+
struct mlx5e_xsk_param;
int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
@@ -110,6 +130,30 @@ static inline void mlx5e_xdp_update_inline_state(struct mlx5e_xdpsq *sq)
session->inline_on = 1;
}
+static inline bool
+mlx5e_xdp_no_room_for_inline_pkt(struct mlx5e_xdp_mpwqe *session)
+{
+ return session->inline_on &&
+ session->ds_count + MLX5E_XDP_INLINE_WQE_MAX_DS_CNT > MLX5E_XDP_MPW_MAX_NUM_DS;
+}
+
+static inline void
+mlx5e_fill_xdpsq_frag_edge(struct mlx5e_xdpsq *sq, struct mlx5_wq_cyc *wq,
+ u16 pi, u16 nnops)
+{
+ struct mlx5e_xdp_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
+
+ edge_wi = wi + nnops;
+ /* fill sq frag edge with nops to avoid wqe wrapping two pages */
+ for (; wi < edge_wi; wi++) {
+ wi->num_wqebbs = 1;
+ wi->num_pkts = 0;
+ mlx5e_post_nop(wq, sq->sqn, &sq->pc);
+ }
+
+ sq->stats->nops += nnops;
+}
+
static inline void
mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq,
struct mlx5e_xdp_xmit_data *xdptxd,
@@ -122,20 +166,12 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq,
session->pkt_count++;
-#define MLX5E_XDP_INLINE_WQE_SZ_THRSD (256 - sizeof(struct mlx5_wqe_inline_seg))
-
if (session->inline_on && dma_len <= MLX5E_XDP_INLINE_WQE_SZ_THRSD) {
struct mlx5_wqe_inline_seg *inline_dseg =
(struct mlx5_wqe_inline_seg *)dseg;
u16 ds_len = sizeof(*inline_dseg) + dma_len;
u16 ds_cnt = DIV_ROUND_UP(ds_len, MLX5_SEND_WQE_DS);
- if (unlikely(session->ds_count + ds_cnt > session->max_ds_count)) {
- /* Not enough space for inline wqe, send with memory pointer */
- session->complete = true;
- goto no_inline;
- }
-
inline_dseg->byte_count = cpu_to_be32(dma_len | MLX5_INLINE_SEG);
memcpy(inline_dseg->data, xdptxd->data, dma_len);
@@ -144,7 +180,6 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq,
return;
}
-no_inline:
dseg->addr = cpu_to_be64(xdptxd->dma_addr);
dseg->byte_count = cpu_to_be32(dma_len);
dseg->lkey = sq->mkey_be;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index b4f5ae30dae2..3d993e2e7bea 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -126,6 +126,7 @@ static const struct counter_desc sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_xmit) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_mpwqe) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_inlnw) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_nops) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_full) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_err) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_tx_cqe) },
@@ -142,6 +143,7 @@ static const struct counter_desc sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_xmit) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_mpwqe) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_inlnw) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_nops) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_full) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_err) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_cqes) },
@@ -252,6 +254,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
s->rx_xdp_tx_xmit += xdpsq_stats->xmit;
s->rx_xdp_tx_mpwqe += xdpsq_stats->mpwqe;
s->rx_xdp_tx_inlnw += xdpsq_stats->inlnw;
+ s->rx_xdp_tx_nops += xdpsq_stats->nops;
s->rx_xdp_tx_full += xdpsq_stats->full;
s->rx_xdp_tx_err += xdpsq_stats->err;
s->rx_xdp_tx_cqe += xdpsq_stats->cqes;
@@ -279,6 +282,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
s->tx_xdp_xmit += xdpsq_red_stats->xmit;
s->tx_xdp_mpwqe += xdpsq_red_stats->mpwqe;
s->tx_xdp_inlnw += xdpsq_red_stats->inlnw;
+ s->tx_xdp_nops += xdpsq_red_stats->nops;
s->tx_xdp_full += xdpsq_red_stats->full;
s->tx_xdp_err += xdpsq_red_stats->err;
s->tx_xdp_cqes += xdpsq_red_stats->cqes;
@@ -1517,6 +1521,7 @@ static const struct counter_desc rq_xdpsq_stats_desc[] = {
{ MLX5E_DECLARE_RQ_XDPSQ_STAT(struct mlx5e_xdpsq_stats, xmit) },
{ MLX5E_DECLARE_RQ_XDPSQ_STAT(struct mlx5e_xdpsq_stats, mpwqe) },
{ MLX5E_DECLARE_RQ_XDPSQ_STAT(struct mlx5e_xdpsq_stats, inlnw) },
+ { MLX5E_DECLARE_RQ_XDPSQ_STAT(struct mlx5e_xdpsq_stats, nops) },
{ MLX5E_DECLARE_RQ_XDPSQ_STAT(struct mlx5e_xdpsq_stats, full) },
{ MLX5E_DECLARE_RQ_XDPSQ_STAT(struct mlx5e_xdpsq_stats, err) },
{ MLX5E_DECLARE_RQ_XDPSQ_STAT(struct mlx5e_xdpsq_stats, cqes) },
@@ -1526,6 +1531,7 @@ static const struct counter_desc xdpsq_stats_desc[] = {
{ MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, xmit) },
{ MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, mpwqe) },
{ MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, inlnw) },
+ { MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, nops) },
{ MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, full) },
{ MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, err) },
{ MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, cqes) },
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index 0f9fa22a955e..a4a43613d026 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -132,6 +132,7 @@ struct mlx5e_sw_stats {
u64 rx_xdp_tx_xmit;
u64 rx_xdp_tx_mpwqe;
u64 rx_xdp_tx_inlnw;
+ u64 rx_xdp_tx_nops;
u64 rx_xdp_tx_full;
u64 rx_xdp_tx_err;
u64 rx_xdp_tx_cqe;
@@ -148,6 +149,7 @@ struct mlx5e_sw_stats {
u64 tx_xdp_xmit;
u64 tx_xdp_mpwqe;
u64 tx_xdp_inlnw;
+ u64 tx_xdp_nops;
u64 tx_xdp_full;
u64 tx_xdp_err;
u64 tx_xdp_cqes;
@@ -341,6 +343,7 @@ struct mlx5e_xdpsq_stats {
u64 xmit;
u64 mpwqe;
u64 inlnw;
+ u64 nops;
u64 full;
u64 err;
/* dirtied @completion */
--
2.13.6
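
The DS bound this patch moves into xdp.h comes from the arithmetic quoted in its comment: each WQEBB
carries 4 data segments, so a full 16-WQEBB WQE would need 64 DS entries, which cannot be encoded in
the 6-bit DS count of the control segment (maximum 63); hence the cap of MAX_WQEBBS - 1 WQEBBs (or
- 2 on 128-byte cache lines, to keep a full session cache-aligned). A quick restatement of those
numbers for illustration, using the mlx5 sizes of a 64-byte WQEBB and a 16-byte data segment:

#include <stdio.h>

#define WQEBB_SIZE     64u   /* bytes per WQE basic block (illustrative) */
#define DS_SIZE        16u   /* bytes per data segment (illustrative) */
#define MAX_WQE_WQEBBS 16u   /* largest WQE, in WQEBBs */

int main(void)
{
	unsigned int ds_per_wqebb = WQEBB_SIZE / DS_SIZE;          /* 4 */
	unsigned int full_wqe_ds  = MAX_WQE_WQEBBS * ds_per_wqebb; /* 64 */
	unsigned int ds_field_max = (1u << 6) - 1;                 /* 63 */

	printf("DS per full-size WQE: %u (field max %u)\n",
	       full_wqe_ds, ds_field_max);
	/* so the MPWQE session is capped at (16 - 1) * 4 = 60 DS entries */
	printf("capped session DS:    %u\n",
	       (MAX_WQE_WQEBBS - 1) * ds_per_wqebb);
	return 0;
}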

@@ -0,0 +1,90 @@
From b881433dfe615d066de735fd8b7e49db22fd4460 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:32 -0400
Subject: [PATCH 012/312] [netdrv] net/mlx5e: XDP, Slight enhancement for WQE
fetch function
Message-id: <20200510145245.10054-10-ahleihel@redhat.com>
Patchwork-id: 306549
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 09/82] net/mlx5e: XDP, Slight enhancement for WQE fetch function
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 7cf6f811b72aced0c48e1065fe059d604ef6363d
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Sun Jul 14 17:50:51 2019 +0300
net/mlx5e: XDP, Slight enhancement for WQE fetch function
Instead of passing an output param, let function return the
WQE pointer.
In addition, pass &pi so it gets its value in the function,
and save the redundant assignment that comes after it.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 4 +---
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h | 13 ++++++++-----
2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 8cb98326531f..1ed5c33e022f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -187,14 +187,12 @@ static void mlx5e_xdp_mpwqe_session_start(struct mlx5e_xdpsq *sq)
if (unlikely(contig_wqebbs < MLX5_SEND_WQE_MAX_WQEBBS))
mlx5e_fill_xdpsq_frag_edge(sq, wq, pi, contig_wqebbs);
- mlx5e_xdpsq_fetch_wqe(sq, &session->wqe);
+ session->wqe = mlx5e_xdpsq_fetch_wqe(sq, &pi);
prefetchw(session->wqe->data);
session->ds_count = MLX5E_XDP_TX_EMPTY_DS_COUNT;
session->pkt_count = 0;
- pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
-
mlx5e_xdp_update_inline_state(sq);
stats->mpwqe++;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index c52f72062b33..d7587f40ecae 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -186,14 +186,17 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq,
session->ds_count++;
}
-static inline void mlx5e_xdpsq_fetch_wqe(struct mlx5e_xdpsq *sq,
- struct mlx5e_tx_wqe **wqe)
+static inline struct mlx5e_tx_wqe *
+mlx5e_xdpsq_fetch_wqe(struct mlx5e_xdpsq *sq, u16 *pi)
{
struct mlx5_wq_cyc *wq = &sq->wq;
- u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ struct mlx5e_tx_wqe *wqe;
- *wqe = mlx5_wq_cyc_get_wqe(wq, pi);
- memset(*wqe, 0, sizeof(**wqe));
+ *pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
+ memset(wqe, 0, sizeof(*wqe));
+
+ return wqe;
}
static inline void
--
2.13.6

@@ -0,0 +1,222 @@
From a1c13dde2d8edd63949ada1ee41cd9d88b328aaa Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:33 -0400
Subject: [PATCH 013/312] [netdrv] net/mlx5e: Tx, Soften inline mode VLAN
dependencies
Message-id: <20200510145245.10054-11-ahleihel@redhat.com>
Patchwork-id: 306551
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 10/82] net/mlx5e: Tx, Soften inline mode VLAN dependencies
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit b431302e92f00b7acd5617a4d289f8006394bfc2
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Mon Jul 1 12:08:08 2019 +0300
net/mlx5e: Tx, Soften inline mode VLAN dependencies
If capable, use zero inline mode in TX WQE for non-VLAN packets.
For VLAN ones, keep the enforcement of at least L2 inline mode,
unless the WQE VLAN insertion offload cap is on.
Performance:
Tested single core packet rate of 64Bytes.
NIC: ConnectX-5
CPU: Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
pktgen:
Before: 12.46 Mpps
After: 14.65 Mpps (+17.5%)
XDP_TX:
The MPWQE flow is not affected, as it already has this optimization.
So we test with priv-flag xdp_tx_mpwqe: off.
Before: 9.90 Mpps
After: 10.20 Mpps (+3%)
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Tested-by: Noam Stolero <noams@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 22 ++++++++++++++++++++--
.../net/ethernet/mellanox/mlx5/core/en_common.c | 12 ------------
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 4 +++-
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 7 ++++---
drivers/net/ethernet/mellanox/mlx5/core/vport.c | 7 ++++---
7 files changed, 33 insertions(+), 23 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index bc2c38faadc8..84575c0bcca6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -364,6 +364,7 @@ enum {
MLX5E_SQ_STATE_IPSEC,
MLX5E_SQ_STATE_AM,
MLX5E_SQ_STATE_TLS,
+ MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE,
};
struct mlx5e_sq_wqe_info {
@@ -1151,7 +1152,6 @@ void mlx5e_build_rq_params(struct mlx5_core_dev *mdev,
struct mlx5e_params *params);
void mlx5e_build_rss_params(struct mlx5e_rss_params *rss_params,
u16 num_channels);
-u8 mlx5e_params_calculate_tx_min_inline(struct mlx5_core_dev *mdev);
void mlx5e_rx_dim_work(struct work_struct *work);
void mlx5e_tx_dim_work(struct work_struct *work);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index a7a2cd415e69..182d5c5664eb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -117,9 +117,27 @@ mlx5e_notify_hw(struct mlx5_wq_cyc *wq, u16 pc, void __iomem *uar_map,
mlx5_write64((__be32 *)ctrl, uar_map);
}
-static inline bool mlx5e_transport_inline_tx_wqe(struct mlx5e_tx_wqe *wqe)
+static inline bool mlx5e_transport_inline_tx_wqe(struct mlx5_wqe_ctrl_seg *cseg)
{
- return !!wqe->ctrl.tisn;
+ return cseg && !!cseg->tisn;
+}
+
+static inline u8
+mlx5e_tx_wqe_inline_mode(struct mlx5e_txqsq *sq, struct mlx5_wqe_ctrl_seg *cseg,
+ struct sk_buff *skb)
+{
+ u8 mode;
+
+ if (mlx5e_transport_inline_tx_wqe(cseg))
+ return MLX5_INLINE_MODE_TCP_UDP;
+
+ mode = sq->min_inline_mode;
+
+ if (skb_vlan_tag_present(skb) &&
+ test_bit(MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE, &sq->state))
+ mode = max_t(u8, MLX5_INLINE_MODE_L2, mode);
+
+ return mode;
}
static inline void mlx5e_cq_arm(struct mlx5e_cq *cq)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
index 1539cf3de5dc..f7890e0ce96c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
@@ -180,15 +180,3 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
return err;
}
-
-u8 mlx5e_params_calculate_tx_min_inline(struct mlx5_core_dev *mdev)
-{
- u8 min_inline_mode;
-
- mlx5_query_min_inline(mdev, &min_inline_mode);
- if (min_inline_mode == MLX5_INLINE_MODE_NONE &&
- !MLX5_CAP_ETH(mdev, wqe_vlan_insert))
- min_inline_mode = MLX5_INLINE_MODE_L2;
-
- return min_inline_mode;
-}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
index 8dd31b5c740c..01f2918063af 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
@@ -1101,7 +1101,7 @@ void mlx5e_dcbnl_delete_app(struct mlx5e_priv *priv)
static void mlx5e_trust_update_tx_min_inline_mode(struct mlx5e_priv *priv,
struct mlx5e_params *params)
{
- params->tx_min_inline_mode = mlx5e_params_calculate_tx_min_inline(priv->mdev);
+ mlx5_query_min_inline(priv->mdev, &params->tx_min_inline_mode);
if (priv->dcbx_dp.trust_state == MLX5_QPTS_TRUST_DSCP &&
params->tx_min_inline_mode == MLX5_INLINE_MODE_L2)
params->tx_min_inline_mode = MLX5_INLINE_MODE_IP;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 7447b84e2d44..5be38cf34551 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1121,6 +1121,8 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
sq->stats = &c->priv->channel_stats[c->ix].sq[tc];
sq->stop_room = MLX5E_SQ_STOP_ROOM;
INIT_WORK(&sq->recover_work, mlx5e_tx_err_cqe_work);
+ if (!MLX5_CAP_ETH(mdev, wqe_vlan_insert))
+ set_bit(MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE, &sq->state);
if (MLX5_IPSEC_DEV(c->priv->mdev))
set_bit(MLX5E_SQ_STATE_IPSEC, &sq->state);
if (mlx5_accel_is_tls_device(c->priv->mdev)) {
@@ -4772,7 +4774,7 @@ void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
mlx5e_set_tx_cq_mode_params(params, MLX5_CQ_PERIOD_MODE_START_FROM_EQE);
/* TX inline */
- params->tx_min_inline_mode = mlx5e_params_calculate_tx_min_inline(mdev);
+ mlx5_query_min_inline(mdev, &params->tx_min_inline_mode);
/* RSS */
mlx5e_build_rss_params(rss_params, params->num_channels);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 5be0bad6d359..9cc22b62d73d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -293,8 +293,7 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
num_bytes = skb->len + (skb_shinfo(skb)->gso_segs - 1) * ihs;
stats->packets += skb_shinfo(skb)->gso_segs;
} else {
- u8 mode = mlx5e_transport_inline_tx_wqe(wqe) ?
- MLX5_INLINE_MODE_TCP_UDP : sq->min_inline_mode;
+ u8 mode = mlx5e_tx_wqe_inline_mode(sq, &wqe->ctrl, skb);
opcode = MLX5_OPCODE_SEND;
mss = 0;
@@ -612,9 +611,11 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
num_bytes = skb->len + (skb_shinfo(skb)->gso_segs - 1) * ihs;
stats->packets += skb_shinfo(skb)->gso_segs;
} else {
+ u8 mode = mlx5e_tx_wqe_inline_mode(sq, NULL, skb);
+
opcode = MLX5_OPCODE_SEND;
mss = 0;
- ihs = mlx5e_calc_min_inline(sq->min_inline_mode, skb);
+ ihs = mlx5e_calc_min_inline(mode, skb);
num_bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
stats->packets++;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
index c912d82ca64b..30f7848a6f88 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
@@ -122,12 +122,13 @@ void mlx5_query_min_inline(struct mlx5_core_dev *mdev,
u8 *min_inline_mode)
{
switch (MLX5_CAP_ETH(mdev, wqe_inline_mode)) {
+ case MLX5_CAP_INLINE_MODE_VPORT_CONTEXT:
+ if (!mlx5_query_nic_vport_min_inline(mdev, 0, min_inline_mode))
+ break;
+ /* fall through */
case MLX5_CAP_INLINE_MODE_L2:
*min_inline_mode = MLX5_INLINE_MODE_L2;
break;
- case MLX5_CAP_INLINE_MODE_VPORT_CONTEXT:
- mlx5_query_nic_vport_min_inline(mdev, 0, min_inline_mode);
- break;
case MLX5_CAP_INLINE_MODE_NOT_REQUIRED:
*min_inline_mode = MLX5_INLINE_MODE_NONE;
break;
--
2.13.6
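As a rough stand-alone sketch (not taken from the driver), the inline-mode
decision introduced by this patch can be pictured as the C fragment below;
the enum values and boolean flags are simplified placeholders for the
MLX5_INLINE_MODE_* constants and the new SQ state bit.

#include <stdbool.h>

enum inline_mode {
        INLINE_MODE_NONE    = 0,
        INLINE_MODE_L2      = 1,
        INLINE_MODE_TCP_UDP = 3,
};

/* Pick the minimal inline mode for one TX descriptor. */
enum inline_mode tx_inline_mode(enum inline_mode sq_min_inline,
                                bool transport_inline,
                                bool vlan_tagged,
                                bool vlan_needs_l2_inline)
{
        if (transport_inline)
                return INLINE_MODE_TCP_UDP;

        /* VLAN packets still need at least L2 inline unless the HW can
         * insert the VLAN tag itself (wqe_vlan_insert capability).
         */
        if (vlan_tagged && vlan_needs_l2_inline &&
            sq_min_inline < INLINE_MODE_L2)
                return INLINE_MODE_L2;

        return sq_min_inline;
}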

@ -0,0 +1,88 @@
From e31d980ab9849683588324b04f8596e901b3721e Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:34 -0400
Subject: [PATCH 014/312] [netdrv] net/mlx5e: Rx, checksum handling refactoring
Message-id: <20200510145245.10054-12-ahleihel@redhat.com>
Patchwork-id: 306554
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 11/82] net/mlx5e: Rx, checksum handling refactoring
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 8c7698d5caa7852bebae0cf7402b7d3a1f30423b
Author: Saeed Mahameed <saeedm@mellanox.com>
Date: Fri May 3 15:12:46 2019 -0700
net/mlx5e: Rx, checksum handling refactoring
Move the vlan checksum fixup flow into mlx5e_skb_padding_csum(), which is
supposed to fix up the SKB checksum if needed, and rename
mlx5e_skb_padding_csum() to mlx5e_skb_csum_fixup().
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 26 +++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 6518d1101de0..a22b3a3db253 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -860,13 +860,24 @@ tail_padding_csum(struct sk_buff *skb, int offset,
}
static void
-mlx5e_skb_padding_csum(struct sk_buff *skb, int network_depth, __be16 proto,
- struct mlx5e_rq_stats *stats)
+mlx5e_skb_csum_fixup(struct sk_buff *skb, int network_depth, __be16 proto,
+ struct mlx5e_rq_stats *stats)
{
struct ipv6hdr *ip6;
struct iphdr *ip4;
int pkt_len;
+ /* Fixup vlan headers, if any */
+ if (network_depth > ETH_HLEN)
+ /* CQE csum is calculated from the IP header and does
+ * not cover VLAN headers (if present). This will add
+ * the checksum manually.
+ */
+ skb->csum = csum_partial(skb->data + ETH_HLEN,
+ network_depth - ETH_HLEN,
+ skb->csum);
+
+ /* Fixup tail padding, if any */
switch (proto) {
case htons(ETH_P_IP):
ip4 = (struct iphdr *)(skb->data + network_depth);
@@ -932,16 +943,7 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
return; /* CQE csum covers all received bytes */
/* csum might need some fixups ...*/
- if (network_depth > ETH_HLEN)
- /* CQE csum is calculated from the IP header and does
- * not cover VLAN headers (if present). This will add
- * the checksum manually.
- */
- skb->csum = csum_partial(skb->data + ETH_HLEN,
- network_depth - ETH_HLEN,
- skb->csum);
-
- mlx5e_skb_padding_csum(skb, network_depth, proto, stats);
+ mlx5e_skb_csum_fixup(skb, network_depth, proto, stats);
return;
}
--
2.13.6
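A stand-alone sketch of the idea behind the VLAN part of the fixup (not the
kernel code): the CQE checksum starts at the IP header, so any bytes sitting
between the Ethernet header and the IP header (VLAN tags) must be folded into
the checksum by software. csum16() below is a plain RFC 1071 one's-complement
sum used here in place of the kernel's csum_partial().

#include <stdint.h>
#include <stddef.h>

#define ETH_HLEN 14

/* One's-complement 16-bit sum over buf, folded into sum (RFC 1071). */
uint32_t csum16(const uint8_t *buf, size_t len, uint32_t sum)
{
        size_t i;

        for (i = 0; i + 1 < len; i += 2)
                sum += ((uint32_t)buf[i] << 8) | buf[i + 1];
        if (len & 1)
                sum += (uint32_t)buf[len - 1] << 8;
        while (sum >> 16)
                sum = (sum & 0xffff) + (sum >> 16);
        return sum;
}

/* Conceptually, the VLAN fixup amounts to:
 *
 *         if (network_depth > ETH_HLEN)
 *                 csum = csum16(pkt + ETH_HLEN,
 *                               network_depth - ETH_HLEN, csum);
 *
 * i.e. add back the VLAN header bytes that the hardware skipped.
 */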

@ -0,0 +1,103 @@
From ac0e05eab5ead240e977ff6b629bfddf78c5c2c6 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:35 -0400
Subject: [PATCH 015/312] [netdrv] net/mlx5e: Set tx reporter only on
successful creation
Message-id: <20200510145245.10054-13-ahleihel@redhat.com>
Patchwork-id: 306553
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 12/82] net/mlx5e: Set tx reporter only on successful creation
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit baf6dfdb10e9695637d72429159fd26fc36d30c3
Author: Aya Levin <ayal@mellanox.com>
Date: Mon Jun 24 19:34:42 2019 +0300
net/mlx5e: Set tx reporter only on successful creation
When tx reporter creation fails, don't set the reporter's pointer.
Creating a reporter is not mandatory for driver load, so avoid leaving
a garbage/error pointer behind.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c | 14 ++++++++------
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 +-
2 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index 2b3d2292b8c5..d9116e77ef68 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -116,7 +116,7 @@ static int mlx5_tx_health_report(struct devlink_health_reporter *tx_reporter,
char *err_str,
struct mlx5e_tx_err_ctx *err_ctx)
{
- if (IS_ERR_OR_NULL(tx_reporter)) {
+ if (!tx_reporter) {
netdev_err(err_ctx->sq->channel->netdev, err_str);
return err_ctx->recover(err_ctx->sq);
}
@@ -288,25 +288,27 @@ static const struct devlink_health_reporter_ops mlx5_tx_reporter_ops = {
int mlx5e_tx_reporter_create(struct mlx5e_priv *priv)
{
+ struct devlink_health_reporter *reporter;
struct mlx5_core_dev *mdev = priv->mdev;
struct devlink *devlink = priv_to_devlink(mdev);
- priv->tx_reporter =
+ reporter =
devlink_health_reporter_create(devlink, &mlx5_tx_reporter_ops,
MLX5_REPORTER_TX_GRACEFUL_PERIOD,
true, priv);
- if (IS_ERR(priv->tx_reporter)) {
+ if (IS_ERR(reporter)) {
netdev_warn(priv->netdev,
"Failed to create tx reporter, err = %ld\n",
- PTR_ERR(priv->tx_reporter));
- return PTR_ERR(priv->tx_reporter);
+ PTR_ERR(reporter));
+ return PTR_ERR(reporter);
}
+ priv->tx_reporter = reporter;
return 0;
}
void mlx5e_tx_reporter_destroy(struct mlx5e_priv *priv)
{
- if (IS_ERR_OR_NULL(priv->tx_reporter))
+ if (!priv->tx_reporter)
return;
devlink_health_reporter_destroy(priv->tx_reporter);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 5be38cf34551..9ffcfa017d4f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -2323,7 +2323,7 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
goto err_close_channels;
}
- if (!IS_ERR_OR_NULL(priv->tx_reporter))
+ if (priv->tx_reporter)
devlink_health_reporter_state_update(priv->tx_reporter,
DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
--
2.13.6
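For illustration (hypothetical types, not the devlink API): the pattern adopted
here is to build the object in a local variable and publish it into the
long-lived structure only after the error check, so the structure never ends
up holding an error pointer.

struct reporter;

struct priv {
        struct reporter *tx_reporter;   /* NULL until successfully created */
};

/* Assumed helper; stands in for devlink_health_reporter_create(). */
extern struct reporter *reporter_create(void);

int tx_reporter_create(struct priv *priv)
{
        struct reporter *reporter = reporter_create();

        if (!reporter)
                return -1;              /* priv->tx_reporter stays NULL */

        priv->tx_reporter = reporter;   /* publish only on success */
        return 0;
}

void tx_reporter_destroy(struct priv *priv)
{
        if (!priv->tx_reporter)         /* a plain NULL check now suffices */
                return;
        /* ... destroy the reporter and clear priv->tx_reporter ... */
}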

@ -0,0 +1,65 @@
From a9b583f090f95ac9fe24ef0906c897f216014da3 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:36 -0400
Subject: [PATCH 016/312] [netdrv] net/mlx5e: TX reporter cleanup
Message-id: <20200510145245.10054-14-ahleihel@redhat.com>
Patchwork-id: 306552
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 13/82] net/mlx5e: TX reporter cleanup
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit c9e6c7209a9a26a0281b311c6880b9e2382ad635
Author: Aya Levin <ayal@mellanox.com>
Date: Mon Jun 24 20:33:52 2019 +0300
net/mlx5e: TX reporter cleanup
Remove redundant include files.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h | 1 -
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c | 1 -
2 files changed, 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h
index e78e92753d73..ed7a3881d2c5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h
@@ -4,7 +4,6 @@
#ifndef __MLX5E_EN_REPORTER_H
#define __MLX5E_EN_REPORTER_H
-#include <linux/mlx5/driver.h>
#include "en.h"
int mlx5e_tx_reporter_create(struct mlx5e_priv *priv);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index d9116e77ef68..817c6ea7e349 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -1,7 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2019 Mellanox Technologies. */
-#include <net/devlink.h>
#include "reporter.h"
#include "lib/eq.h"
--
2.13.6

@ -0,0 +1,63 @@
From e8913d1bb5b7a35a1ddc3d58fb18ec240b2d2110 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:37 -0400
Subject: [PATCH 017/312] [netdrv] net/mlx5e: Allow dropping specific tunnel
packets
Message-id: <20200510145245.10054-15-ahleihel@redhat.com>
Patchwork-id: 306555
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 14/82] net/mlx5e: Allow dropping specific tunnel packets
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 6830b468259b45e3b73070474b8cec9388aa8c11
Author: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Date: Thu Aug 1 16:40:59 2019 +0800
net/mlx5e: Allow dropping specific tunnel packets
In some cases we don't want to allow specific tunnel packets to reach
the host, so that they cannot drive CPU usage up (e.g. network attacks),
while other tunnel packets that are not matched in hardware are still
sent to the host.
$ tc filter add dev vxlan_sys_4789 \
protocol ip chain 0 parent ffff: prio 1 handle 1 \
flower dst_ip 1.1.1.100 ip_proto tcp dst_port 80 \
enc_dst_ip 2.2.2.100 enc_key_id 100 enc_dst_port 4789 \
action tunnel_key unset pipe action drop
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 1f76974dc946..d7d2151d1ef3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2715,7 +2715,8 @@ static bool actions_match_supported(struct mlx5e_priv *priv,
if (flow_flag_test(flow, EGRESS) &&
!((actions & MLX5_FLOW_CONTEXT_ACTION_DECAP) ||
- (actions & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP)))
+ (actions & MLX5_FLOW_CONTEXT_ACTION_VLAN_POP) ||
+ (actions & MLX5_FLOW_CONTEXT_ACTION_DROP)))
return false;
if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
--
2.13.6
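A compact stand-alone sketch of the relaxed egress check (flag values are
illustrative, not the MLX5_FLOW_CONTEXT_ACTION_* bits): an egress rule is now
also acceptable when its terminal action is a drop.

#include <stdbool.h>

#define ACT_DECAP    (1u << 0)
#define ACT_VLAN_POP (1u << 1)
#define ACT_DROP     (1u << 2)

bool egress_actions_supported(unsigned int actions)
{
        /* Before this change only DECAP or VLAN_POP qualified;
         * DROP is the newly allowed alternative.
         */
        return (actions & (ACT_DECAP | ACT_VLAN_POP | ACT_DROP)) != 0;
}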

@ -0,0 +1,436 @@
From fffd2ca5a253c4a49aa53caa87e833bd0d56e78a Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:39 -0400
Subject: [PATCH 018/312] [netdrv] mlx5: no need to check return value of
debugfs_create functions
Message-id: <20200510145245.10054-17-ahleihel@redhat.com>
Patchwork-id: 306556
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 16/82] mlx5: no need to check return value of debugfs_create functions
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 9f818c8a7388ad1a5c60ace50be6f658c058a5f2
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date: Sat Aug 10 12:17:18 2019 +0200
mlx5: no need to check return value of debugfs_create functions
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
This cleans up a lot of unneeded code and logic around the debugfs
files, making all of this much simpler and easier to understand as we
don't need to keep the dentries saved anymore.
Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: netdev@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 51 ++---------
drivers/net/ethernet/mellanox/mlx5/core/debugfs.c | 102 ++-------------------
drivers/net/ethernet/mellanox/mlx5/core/eq.c | 11 +--
drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/main.c | 7 +-
.../net/ethernet/mellanox/mlx5/core/mlx5_core.h | 2 +-
include/linux/mlx5/driver.h | 4 +-
7 files changed, 24 insertions(+), 155 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 8da5a1cd87af..4b7ca04ae25e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -1368,49 +1368,19 @@ static void clean_debug_files(struct mlx5_core_dev *dev)
debugfs_remove_recursive(dbg->dbg_root);
}
-static int create_debugfs_files(struct mlx5_core_dev *dev)
+static void create_debugfs_files(struct mlx5_core_dev *dev)
{
struct mlx5_cmd_debug *dbg = &dev->cmd.dbg;
- int err = -ENOMEM;
-
- if (!mlx5_debugfs_root)
- return 0;
dbg->dbg_root = debugfs_create_dir("cmd", dev->priv.dbg_root);
- if (!dbg->dbg_root)
- return err;
-
- dbg->dbg_in = debugfs_create_file("in", 0400, dbg->dbg_root,
- dev, &dfops);
- if (!dbg->dbg_in)
- goto err_dbg;
- dbg->dbg_out = debugfs_create_file("out", 0200, dbg->dbg_root,
- dev, &dfops);
- if (!dbg->dbg_out)
- goto err_dbg;
-
- dbg->dbg_outlen = debugfs_create_file("out_len", 0600, dbg->dbg_root,
- dev, &olfops);
- if (!dbg->dbg_outlen)
- goto err_dbg;
-
- dbg->dbg_status = debugfs_create_u8("status", 0600, dbg->dbg_root,
- &dbg->status);
- if (!dbg->dbg_status)
- goto err_dbg;
-
- dbg->dbg_run = debugfs_create_file("run", 0200, dbg->dbg_root, dev, &fops);
- if (!dbg->dbg_run)
- goto err_dbg;
+ debugfs_create_file("in", 0400, dbg->dbg_root, dev, &dfops);
+ debugfs_create_file("out", 0200, dbg->dbg_root, dev, &dfops);
+ debugfs_create_file("out_len", 0600, dbg->dbg_root, dev, &olfops);
+ debugfs_create_u8("status", 0600, dbg->dbg_root, &dbg->status);
+ debugfs_create_file("run", 0200, dbg->dbg_root, dev, &fops);
mlx5_cmdif_debugfs_init(dev);
-
- return 0;
-
-err_dbg:
- clean_debug_files(dev);
- return err;
}
static void mlx5_cmd_change_mod(struct mlx5_core_dev *dev, int mode)
@@ -2007,17 +1977,10 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
goto err_cache;
}
- err = create_debugfs_files(dev);
- if (err) {
- err = -ENOMEM;
- goto err_wq;
- }
+ create_debugfs_files(dev);
return 0;
-err_wq:
- destroy_workqueue(cmd->wq);
-
err_cache:
destroy_msg_cache(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
index a11e22d0b0cc..04854e5fbcd7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
@@ -92,8 +92,6 @@ EXPORT_SYMBOL(mlx5_debugfs_root);
void mlx5_register_debugfs(void)
{
mlx5_debugfs_root = debugfs_create_dir("mlx5", NULL);
- if (IS_ERR_OR_NULL(mlx5_debugfs_root))
- mlx5_debugfs_root = NULL;
}
void mlx5_unregister_debugfs(void)
@@ -101,45 +99,25 @@ void mlx5_unregister_debugfs(void)
debugfs_remove(mlx5_debugfs_root);
}
-int mlx5_qp_debugfs_init(struct mlx5_core_dev *dev)
+void mlx5_qp_debugfs_init(struct mlx5_core_dev *dev)
{
- if (!mlx5_debugfs_root)
- return 0;
-
atomic_set(&dev->num_qps, 0);
dev->priv.qp_debugfs = debugfs_create_dir("QPs", dev->priv.dbg_root);
- if (!dev->priv.qp_debugfs)
- return -ENOMEM;
-
- return 0;
}
void mlx5_qp_debugfs_cleanup(struct mlx5_core_dev *dev)
{
- if (!mlx5_debugfs_root)
- return;
-
debugfs_remove_recursive(dev->priv.qp_debugfs);
}
-int mlx5_eq_debugfs_init(struct mlx5_core_dev *dev)
+void mlx5_eq_debugfs_init(struct mlx5_core_dev *dev)
{
- if (!mlx5_debugfs_root)
- return 0;
-
dev->priv.eq_debugfs = debugfs_create_dir("EQs", dev->priv.dbg_root);
- if (!dev->priv.eq_debugfs)
- return -ENOMEM;
-
- return 0;
}
void mlx5_eq_debugfs_cleanup(struct mlx5_core_dev *dev)
{
- if (!mlx5_debugfs_root)
- return;
-
debugfs_remove_recursive(dev->priv.eq_debugfs);
}
@@ -183,85 +161,41 @@ static const struct file_operations stats_fops = {
.write = average_write,
};
-int mlx5_cmdif_debugfs_init(struct mlx5_core_dev *dev)
+void mlx5_cmdif_debugfs_init(struct mlx5_core_dev *dev)
{
struct mlx5_cmd_stats *stats;
struct dentry **cmd;
const char *namep;
- int err;
int i;
- if (!mlx5_debugfs_root)
- return 0;
-
cmd = &dev->priv.cmdif_debugfs;
*cmd = debugfs_create_dir("commands", dev->priv.dbg_root);
- if (!*cmd)
- return -ENOMEM;
for (i = 0; i < ARRAY_SIZE(dev->cmd.stats); i++) {
stats = &dev->cmd.stats[i];
namep = mlx5_command_str(i);
if (strcmp(namep, "unknown command opcode")) {
stats->root = debugfs_create_dir(namep, *cmd);
- if (!stats->root) {
- mlx5_core_warn(dev, "failed adding command %d\n",
- i);
- err = -ENOMEM;
- goto out;
- }
-
- stats->avg = debugfs_create_file("average", 0400,
- stats->root, stats,
- &stats_fops);
- if (!stats->avg) {
- mlx5_core_warn(dev, "failed creating debugfs file\n");
- err = -ENOMEM;
- goto out;
- }
-
- stats->count = debugfs_create_u64("n", 0400,
- stats->root,
- &stats->n);
- if (!stats->count) {
- mlx5_core_warn(dev, "failed creating debugfs file\n");
- err = -ENOMEM;
- goto out;
- }
+
+ debugfs_create_file("average", 0400, stats->root, stats,
+ &stats_fops);
+ debugfs_create_u64("n", 0400, stats->root, &stats->n);
}
}
-
- return 0;
-out:
- debugfs_remove_recursive(dev->priv.cmdif_debugfs);
- return err;
}
void mlx5_cmdif_debugfs_cleanup(struct mlx5_core_dev *dev)
{
- if (!mlx5_debugfs_root)
- return;
-
debugfs_remove_recursive(dev->priv.cmdif_debugfs);
}
-int mlx5_cq_debugfs_init(struct mlx5_core_dev *dev)
+void mlx5_cq_debugfs_init(struct mlx5_core_dev *dev)
{
- if (!mlx5_debugfs_root)
- return 0;
-
dev->priv.cq_debugfs = debugfs_create_dir("CQs", dev->priv.dbg_root);
- if (!dev->priv.cq_debugfs)
- return -ENOMEM;
-
- return 0;
}
void mlx5_cq_debugfs_cleanup(struct mlx5_core_dev *dev)
{
- if (!mlx5_debugfs_root)
- return;
-
debugfs_remove_recursive(dev->priv.cq_debugfs);
}
@@ -484,7 +418,6 @@ static int add_res_tree(struct mlx5_core_dev *dev, enum dbg_rsc_type type,
{
struct mlx5_rsc_debug *d;
char resn[32];
- int err;
int i;
d = kzalloc(struct_size(d, fields, nfile), GFP_KERNEL);
@@ -496,30 +429,15 @@ static int add_res_tree(struct mlx5_core_dev *dev, enum dbg_rsc_type type,
d->type = type;
sprintf(resn, "0x%x", rsn);
d->root = debugfs_create_dir(resn, root);
- if (!d->root) {
- err = -ENOMEM;
- goto out_free;
- }
for (i = 0; i < nfile; i++) {
d->fields[i].i = i;
- d->fields[i].dent = debugfs_create_file(field[i], 0400,
- d->root, &d->fields[i],
- &fops);
- if (!d->fields[i].dent) {
- err = -ENOMEM;
- goto out_rem;
- }
+ debugfs_create_file(field[i], 0400, d->root, &d->fields[i],
+ &fops);
}
*dbg = d;
return 0;
-out_rem:
- debugfs_remove_recursive(d->root);
-
-out_free:
- kfree(d);
- return err;
}
static void rem_res_tree(struct mlx5_rsc_debug *d)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 2df9aaa421c6..09d4c64b6e73 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -411,7 +411,7 @@ void mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq)
int mlx5_eq_table_init(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *eq_table;
- int i, err;
+ int i;
eq_table = kvzalloc(sizeof(*eq_table), GFP_KERNEL);
if (!eq_table)
@@ -419,9 +419,7 @@ int mlx5_eq_table_init(struct mlx5_core_dev *dev)
dev->priv.eq_table = eq_table;
- err = mlx5_eq_debugfs_init(dev);
- if (err)
- goto kvfree_eq_table;
+ mlx5_eq_debugfs_init(dev);
mutex_init(&eq_table->lock);
for (i = 0; i < MLX5_EVENT_TYPE_MAX; i++)
@@ -429,11 +427,6 @@ int mlx5_eq_table_init(struct mlx5_core_dev *dev)
eq_table->irq_table = dev->priv.irq_table;
return 0;
-
-kvfree_eq_table:
- kvfree(eq_table);
- dev->priv.eq_table = NULL;
- return err;
}
void mlx5_eq_table_cleanup(struct mlx5_core_dev *dev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
index 3dfab91ae5f2..4be4d2d36218 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
@@ -87,7 +87,7 @@ void mlx5_eq_synchronize_cmd_irq(struct mlx5_core_dev *dev);
int mlx5_debug_eq_add(struct mlx5_core_dev *dev, struct mlx5_eq *eq);
void mlx5_debug_eq_remove(struct mlx5_core_dev *dev, struct mlx5_eq *eq);
-int mlx5_eq_debugfs_init(struct mlx5_core_dev *dev);
+void mlx5_eq_debugfs_init(struct mlx5_core_dev *dev);
void mlx5_eq_debugfs_cleanup(struct mlx5_core_dev *dev);
/* This function should only be called after mlx5_cmd_force_teardown_hca */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index a5bae398a9e7..568d973725b6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -855,11 +855,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
goto err_eq_cleanup;
}
- err = mlx5_cq_debugfs_init(dev);
- if (err) {
- mlx5_core_err(dev, "failed to initialize cq debugfs\n");
- goto err_events_cleanup;
- }
+ mlx5_cq_debugfs_init(dev);
mlx5_init_qp_table(dev);
@@ -924,7 +920,6 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
mlx5_cleanup_mkey_table(dev);
mlx5_cleanup_qp_table(dev);
mlx5_cq_debugfs_cleanup(dev);
-err_events_cleanup:
mlx5_events_cleanup(dev);
err_eq_cleanup:
mlx5_eq_table_cleanup(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index bbcf4ee40ad5..b100489dc85c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -146,7 +146,7 @@ u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev,
void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev);
void mlx5_cmd_flush(struct mlx5_core_dev *dev);
-int mlx5_cq_debugfs_init(struct mlx5_core_dev *dev);
+void mlx5_cq_debugfs_init(struct mlx5_core_dev *dev);
void mlx5_cq_debugfs_cleanup(struct mlx5_core_dev *dev);
int mlx5_query_pcam_reg(struct mlx5_core_dev *dev, u32 *pcam, u8 feature_group,
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 2b8b0ef2e425..904d864f7259 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -982,7 +982,7 @@ int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
int mlx5_core_attach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
int mlx5_core_detach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
-int mlx5_qp_debugfs_init(struct mlx5_core_dev *dev);
+void mlx5_qp_debugfs_init(struct mlx5_core_dev *dev);
void mlx5_qp_debugfs_cleanup(struct mlx5_core_dev *dev);
int mlx5_core_access_reg(struct mlx5_core_dev *dev, void *data_in,
int size_in, void *data_out, int size_out,
@@ -994,7 +994,7 @@ int mlx5_db_alloc_node(struct mlx5_core_dev *dev, struct mlx5_db *db,
void mlx5_db_free(struct mlx5_core_dev *dev, struct mlx5_db *db);
const char *mlx5_command_str(int command);
-int mlx5_cmdif_debugfs_init(struct mlx5_core_dev *dev);
+void mlx5_cmdif_debugfs_init(struct mlx5_core_dev *dev);
void mlx5_cmdif_debugfs_cleanup(struct mlx5_core_dev *dev);
int mlx5_core_create_psv(struct mlx5_core_dev *dev, u32 pdn,
int npsvs, u32 *sig_index);
--
2.13.6
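A rough kernel-style sketch of the simplified pattern (driver names are
placeholders; only the debugfs calls themselves are the real API): the driver
keeps just the directory dentry it needs for debugfs_remove_recursive() and
ignores the return values of the individual create helpers, since debugfs is
designed so callers never have to branch on them.

#include <linux/debugfs.h>

static struct dentry *mydrv_dbg_root;
static u8 mydrv_level;

void mydrv_debugfs_init(void *data, const struct file_operations *fops)
{
        /* No error checks: debugfs quietly degrades if anything fails. */
        mydrv_dbg_root = debugfs_create_dir("mydrv", NULL);
        debugfs_create_file("state", 0400, mydrv_dbg_root, data, fops);
        debugfs_create_u8("level", 0600, mydrv_dbg_root, &mydrv_level);
}

void mydrv_debugfs_exit(void)
{
        debugfs_remove_recursive(mydrv_dbg_root);
}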

@ -0,0 +1,53 @@
From 371f9058e2e9ac34a7db38b39dbf6f64593c0905 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:40 -0400
Subject: [PATCH 019/312] [netdrv] net/mlx5: Use debug message instead of warn
Message-id: <20200510145245.10054-18-ahleihel@redhat.com>
Patchwork-id: 306559
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 17/82] net/mlx5: Use debug message instead of warn
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 647d58a989b3b0b788c721a08394aec825e3438c
Author: Yishai Hadas <yishaih@mellanox.com>
Date: Thu Aug 8 11:43:55 2019 +0300
net/mlx5: Use debug message instead of warn
As a QP may be created by DEVX, it can be valid not to find the rsn in
the mlx5 core tree; change the message level to debug.
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/qp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
index b8ba74de9555..f0f3abe331da 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
@@ -162,7 +162,7 @@ static int rsc_event_notifier(struct notifier_block *nb,
common = mlx5_get_rsc(table, rsn);
if (!common) {
- mlx5_core_warn(dev, "Async event for bogus resource 0x%x\n", rsn);
+ mlx5_core_dbg(dev, "Async event for unknown resource 0x%x\n", rsn);
return NOTIFY_OK;
}
--
2.13.6

@ -0,0 +1,75 @@
From b1e3c3ee5f0ae27994321ff5513aba666bcc5813 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:41 -0400
Subject: [PATCH 020/312] [netdrv] net/mlx5: Add XRQ legacy commands opcodes
Message-id: <20200510145245.10054-19-ahleihel@redhat.com>
Patchwork-id: 306558
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 18/82] net/mlx5: Add XRQ legacy commands opcodes
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit b1635ee6120cbeb3de6ab270432b2a2b839c9c56
Author: Yishai Hadas <yishaih@mellanox.com>
Date: Thu Aug 8 11:43:56 2019 +0300
net/mlx5: Add XRQ legacy commands opcodes
Add XRQ legacy commands opcodes, will be used via the DEVX interface.
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 4 ++++
include/linux/mlx5/mlx5_ifc.h | 2 ++
2 files changed, 6 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 4b7ca04ae25e..8242f96ab931 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -446,6 +446,8 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_CREATE_UMEM:
case MLX5_CMD_OP_DESTROY_UMEM:
case MLX5_CMD_OP_ALLOC_MEMIC:
+ case MLX5_CMD_OP_MODIFY_XRQ:
+ case MLX5_CMD_OP_RELEASE_XRQ_ERROR:
*status = MLX5_DRIVER_STATUS_ABORTED;
*synd = MLX5_DRIVER_SYND;
return -EIO;
@@ -637,6 +639,8 @@ const char *mlx5_command_str(int command)
MLX5_COMMAND_STR_CASE(DESTROY_UCTX);
MLX5_COMMAND_STR_CASE(CREATE_UMEM);
MLX5_COMMAND_STR_CASE(DESTROY_UMEM);
+ MLX5_COMMAND_STR_CASE(RELEASE_XRQ_ERROR);
+ MLX5_COMMAND_STR_CASE(MODIFY_XRQ);
default: return "unknown command opcode";
}
}
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 7e6149895d87..03cb1cf0e285 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -172,6 +172,8 @@ enum {
MLX5_CMD_OP_QUERY_XRQ_DC_PARAMS_ENTRY = 0x725,
MLX5_CMD_OP_SET_XRQ_DC_PARAMS_ENTRY = 0x726,
MLX5_CMD_OP_QUERY_XRQ_ERROR_PARAMS = 0x727,
+ MLX5_CMD_OP_RELEASE_XRQ_ERROR = 0x729,
+ MLX5_CMD_OP_MODIFY_XRQ = 0x72a,
MLX5_CMD_OP_QUERY_ESW_FUNCTIONS = 0x740,
MLX5_CMD_OP_QUERY_VPORT_STATE = 0x750,
MLX5_CMD_OP_MODIFY_VPORT_STATE = 0x751,
--
2.13.6

@ -0,0 +1,89 @@
From ab6346c0e2832d07aeba3f097ac796d14d198930 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:45 -0400
Subject: [PATCH 021/312] [netdrv] net/mlx5e: Rename reporter header file
Message-id: <20200510145245.10054-23-ahleihel@redhat.com>
Patchwork-id: 306565
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 22/82] net/mlx5e: Rename reporter header file
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
commit 4edc17fdfdf15c2971d15cbfa4d6f2f5f537ee5e
Author: Aya Levin <ayal@mellanox.com>
Date: Mon Jul 1 14:53:34 2019 +0300
net/mlx5e: Rename reporter header file
Rename reporter.h -> health.h so patches in the set can use it for
health related functionality.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/{reporter.h => health.h} | 4 ++--
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
rename drivers/net/ethernet/mellanox/mlx5/core/en/{reporter.h => health.h} (84%)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
similarity index 84%
rename from drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h
rename to drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index ed7a3881d2c5..cee840e40a05 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -1,8 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2019 Mellanox Technologies. */
-#ifndef __MLX5E_EN_REPORTER_H
-#define __MLX5E_EN_REPORTER_H
+#ifndef __MLX5E_EN_HEALTH_H
+#define __MLX5E_EN_HEALTH_H
#include "en.h"
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index 817c6ea7e349..9ff19d69619f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2019 Mellanox Technologies. */
-#include "reporter.h"
+#include "health.h"
#include "lib/eq.h"
#define MLX5E_TX_REPORTER_PER_SQ_MAX_LEN 256
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 9ffcfa017d4f..118ad4717bfd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -56,7 +56,7 @@
#include "en/xdp.h"
#include "lib/eq.h"
#include "en/monitor_stats.h"
-#include "en/reporter.h"
+#include "en/health.h"
#include "en/params.h"
#include "en/xsk/umem.h"
#include "en/xsk/setup.h"
--
2.13.6

@ -0,0 +1,149 @@
From 287e3c4357bff248a4b5228fd39588cc7d43c860 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:46 -0400
Subject: [PATCH 022/312] [netdrv] net/mlx5e: Change naming convention for
reporter's functions
Message-id: <20200510145245.10054-24-ahleihel@redhat.com>
Patchwork-id: 306563
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 23/82] net/mlx5e: Change naming convention for reporter's functions
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/en_main.c
Context diff due to already backported commit
3c14562663c6 ("net/mlx5e: Expose new function for TIS destroy loop")
---> We now call mlx5e_destroy_tises instead of the for loop.
commit 06293ae4fa0a1b62bf3bb8add8f9bbe8815b0aba
Author: Aya Levin <ayal@mellanox.com>
Date: Mon Jul 1 15:51:51 2019 +0300
net/mlx5e: Change naming convention for reporter's functions
Change from mlx5e_tx_reporter_* to mlx5e_reporter_tx_*. In the following
patches in the set an rx reporter is added; the new naming convention is
more uniform.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/health.h | 8 ++++----
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c | 8 ++++----
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 8 ++++----
3 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index cee840e40a05..c7a5a149011e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -6,9 +6,9 @@
#include "en.h"
-int mlx5e_tx_reporter_create(struct mlx5e_priv *priv);
-void mlx5e_tx_reporter_destroy(struct mlx5e_priv *priv);
-void mlx5e_tx_reporter_err_cqe(struct mlx5e_txqsq *sq);
-int mlx5e_tx_reporter_timeout(struct mlx5e_txqsq *sq);
+int mlx5e_reporter_tx_create(struct mlx5e_priv *priv);
+void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
+void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
+int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq);
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index 9ff19d69619f..62b95f62e4dc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -123,7 +123,7 @@ static int mlx5_tx_health_report(struct devlink_health_reporter *tx_reporter,
return devlink_health_report(tx_reporter, err_str, err_ctx);
}
-void mlx5e_tx_reporter_err_cqe(struct mlx5e_txqsq *sq)
+void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq)
{
char err_str[MLX5E_TX_REPORTER_PER_SQ_MAX_LEN];
struct mlx5e_tx_err_ctx err_ctx = {0};
@@ -156,7 +156,7 @@ static int mlx5e_tx_reporter_timeout_recover(struct mlx5e_txqsq *sq)
return 0;
}
-int mlx5e_tx_reporter_timeout(struct mlx5e_txqsq *sq)
+int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq)
{
char err_str[MLX5E_TX_REPORTER_PER_SQ_MAX_LEN];
struct mlx5e_tx_err_ctx err_ctx;
@@ -285,7 +285,7 @@ static const struct devlink_health_reporter_ops mlx5_tx_reporter_ops = {
#define MLX5_REPORTER_TX_GRACEFUL_PERIOD 500
-int mlx5e_tx_reporter_create(struct mlx5e_priv *priv)
+int mlx5e_reporter_tx_create(struct mlx5e_priv *priv)
{
struct devlink_health_reporter *reporter;
struct mlx5_core_dev *mdev = priv->mdev;
@@ -305,7 +305,7 @@ int mlx5e_tx_reporter_create(struct mlx5e_priv *priv)
return 0;
}
-void mlx5e_tx_reporter_destroy(struct mlx5e_priv *priv)
+void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv)
{
if (!priv->tx_reporter)
return;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 118ad4717bfd..49f5dbab2b8e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1373,7 +1373,7 @@ static void mlx5e_tx_err_cqe_work(struct work_struct *recover_work)
struct mlx5e_txqsq *sq = container_of(recover_work, struct mlx5e_txqsq,
recover_work);
- mlx5e_tx_reporter_err_cqe(sq);
+ mlx5e_reporter_tx_err_cqe(sq);
}
int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
@@ -3210,7 +3210,7 @@ int mlx5e_create_tises(struct mlx5e_priv *priv)
static void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv)
{
- mlx5e_tx_reporter_destroy(priv);
+ mlx5e_reporter_tx_destroy(priv);
mlx5e_destroy_tises(priv);
}
@@ -4283,7 +4283,7 @@ static void mlx5e_tx_timeout_work(struct work_struct *work)
if (!netif_xmit_stopped(dev_queue))
continue;
- if (mlx5e_tx_reporter_timeout(sq))
+ if (mlx5e_reporter_tx_timeout(sq))
report_failed = true;
}
@@ -5080,7 +5080,7 @@ static int mlx5e_init_nic_tx(struct mlx5e_priv *priv)
#ifdef CONFIG_MLX5_CORE_EN_DCB
mlx5e_dcbnl_initialize(priv);
#endif
- mlx5e_tx_reporter_create(priv);
+ mlx5e_reporter_tx_create(priv);
return 0;
}
--
2.13.6

@ -0,0 +1,399 @@
From 6258b703b584c06c8f63788431a978bd4db8bb97 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:47 -0400
Subject: [PATCH 023/312] [netdrv] net/mlx5e: Generalize tx reporter's
functionality
Message-id: <20200510145245.10054-25-ahleihel@redhat.com>
Patchwork-id: 306564
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 24/82] net/mlx5e: Generalize tx reporter's functionality
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
Context diff due to already merged commit:
e7a981050a7f ("devlink: propagate extack down to health reporter ops")
---> Function mlx5e_tx_reporter_recover now also takes an extack parameter.
commit c50de4af1d635fab3a5c8bd358f55623c01f7ee5
Author: Aya Levin <ayal@mellanox.com>
Date: Mon Jul 1 15:08:13 2019 +0300
net/mlx5e: Generalize tx reporter's functionality
Prepare for code sharing with rx reporter, which is added in the
following patches in the set. Introduce a generic error_ctx for
agnostic recovery despatch.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/Makefile | 5 +-
.../net/ethernet/mellanox/mlx5/core/en/health.c | 82 ++++++++++++
.../net/ethernet/mellanox/mlx5/core/en/health.h | 14 +++
.../ethernet/mellanox/mlx5/core/en/reporter_tx.c | 140 ++++++---------------
4 files changed, 137 insertions(+), 104 deletions(-)
create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/health.c
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 35079e1f1f6f..4369dfd04a34 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -23,8 +23,9 @@ mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
#
mlx5_core-$(CONFIG_MLX5_CORE_EN) += en_main.o en_common.o en_fs.o en_ethtool.o \
en_tx.o en_rx.o en_dim.o en_txrx.o en/xdp.o en_stats.o \
- en_selftest.o en/port.o en/monitor_stats.o en/reporter_tx.o \
- en/params.o en/xsk/umem.o en/xsk/setup.o en/xsk/rx.o en/xsk/tx.o
+ en_selftest.o en/port.o en/monitor_stats.o en/health.o \
+ en/reporter_tx.o en/params.o en/xsk/umem.o en/xsk/setup.o \
+ en/xsk/rx.o en/xsk/tx.o
#
# Netdev extra
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
new file mode 100644
index 000000000000..fc3112921bd3
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
@@ -0,0 +1,82 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Mellanox Technologies.
+
+#include "health.h"
+#include "lib/eq.h"
+
+int mlx5e_health_sq_to_ready(struct mlx5e_channel *channel, u32 sqn)
+{
+ struct mlx5_core_dev *mdev = channel->mdev;
+ struct net_device *dev = channel->netdev;
+ struct mlx5e_modify_sq_param msp = {};
+ int err;
+
+ msp.curr_state = MLX5_SQC_STATE_ERR;
+ msp.next_state = MLX5_SQC_STATE_RST;
+
+ err = mlx5e_modify_sq(mdev, sqn, &msp);
+ if (err) {
+ netdev_err(dev, "Failed to move sq 0x%x to reset\n", sqn);
+ return err;
+ }
+
+ memset(&msp, 0, sizeof(msp));
+ msp.curr_state = MLX5_SQC_STATE_RST;
+ msp.next_state = MLX5_SQC_STATE_RDY;
+
+ err = mlx5e_modify_sq(mdev, sqn, &msp);
+ if (err) {
+ netdev_err(dev, "Failed to move sq 0x%x to ready\n", sqn);
+ return err;
+ }
+
+ return 0;
+}
+
+int mlx5e_health_recover_channels(struct mlx5e_priv *priv)
+{
+ int err = 0;
+
+ rtnl_lock();
+ mutex_lock(&priv->state_lock);
+
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+ goto out;
+
+ err = mlx5e_safe_reopen_channels(priv);
+
+out:
+ mutex_unlock(&priv->state_lock);
+ rtnl_unlock();
+
+ return err;
+}
+
+int mlx5e_health_channel_eq_recover(struct mlx5_eq_comp *eq, struct mlx5e_channel *channel)
+{
+ u32 eqe_count;
+
+ netdev_err(channel->netdev, "EQ 0x%x: Cons = 0x%x, irqn = 0x%x\n",
+ eq->core.eqn, eq->core.cons_index, eq->core.irqn);
+
+ eqe_count = mlx5_eq_poll_irq_disabled(eq);
+ if (!eqe_count)
+ return -EIO;
+
+ netdev_err(channel->netdev, "Recovered %d eqes on EQ 0x%x\n",
+ eqe_count, eq->core.eqn);
+
+ channel->stats->eq_rearm++;
+ return 0;
+}
+
+int mlx5e_health_report(struct mlx5e_priv *priv,
+ struct devlink_health_reporter *reporter, char *err_str,
+ struct mlx5e_err_ctx *err_ctx)
+{
+ if (!reporter) {
+ netdev_err(priv->netdev, err_str);
+ return err_ctx->recover(&err_ctx->ctx);
+ }
+ return devlink_health_report(reporter, err_str, err_ctx);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index c7a5a149011e..386bda6104aa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -11,4 +11,18 @@ void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq);
+#define MLX5E_REPORTER_PER_Q_MAX_LEN 256
+
+struct mlx5e_err_ctx {
+ int (*recover)(void *ctx);
+ void *ctx;
+};
+
+int mlx5e_health_sq_to_ready(struct mlx5e_channel *channel, u32 sqn);
+int mlx5e_health_channel_eq_recover(struct mlx5_eq_comp *eq, struct mlx5e_channel *channel);
+int mlx5e_health_recover_channels(struct mlx5e_priv *priv);
+int mlx5e_health_report(struct mlx5e_priv *priv,
+ struct devlink_health_reporter *reporter, char *err_str,
+ struct mlx5e_err_ctx *err_ctx);
+
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index 62b95f62e4dc..6f9f42ab3005 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -2,14 +2,6 @@
/* Copyright (c) 2019 Mellanox Technologies. */
#include "health.h"
-#include "lib/eq.h"
-
-#define MLX5E_TX_REPORTER_PER_SQ_MAX_LEN 256
-
-struct mlx5e_tx_err_ctx {
- int (*recover)(struct mlx5e_txqsq *sq);
- struct mlx5e_txqsq *sq;
-};
static int mlx5e_wait_for_sq_flush(struct mlx5e_txqsq *sq)
{
@@ -39,41 +31,20 @@ static void mlx5e_reset_txqsq_cc_pc(struct mlx5e_txqsq *sq)
sq->pc = 0;
}
-static int mlx5e_sq_to_ready(struct mlx5e_txqsq *sq, int curr_state)
+static int mlx5e_tx_reporter_err_cqe_recover(void *ctx)
{
- struct mlx5_core_dev *mdev = sq->channel->mdev;
- struct net_device *dev = sq->channel->netdev;
- struct mlx5e_modify_sq_param msp = {0};
+ struct mlx5_core_dev *mdev;
+ struct net_device *dev;
+ struct mlx5e_txqsq *sq;
+ u8 state;
int err;
- msp.curr_state = curr_state;
- msp.next_state = MLX5_SQC_STATE_RST;
-
- err = mlx5e_modify_sq(mdev, sq->sqn, &msp);
- if (err) {
- netdev_err(dev, "Failed to move sq 0x%x to reset\n", sq->sqn);
- return err;
- }
-
- memset(&msp, 0, sizeof(msp));
- msp.curr_state = MLX5_SQC_STATE_RST;
- msp.next_state = MLX5_SQC_STATE_RDY;
-
- err = mlx5e_modify_sq(mdev, sq->sqn, &msp);
- if (err) {
- netdev_err(dev, "Failed to move sq 0x%x to ready\n", sq->sqn);
- return err;
- }
-
- return 0;
-}
+ sq = ctx;
+ mdev = sq->channel->mdev;
+ dev = sq->channel->netdev;
-static int mlx5e_tx_reporter_err_cqe_recover(struct mlx5e_txqsq *sq)
-{
- struct mlx5_core_dev *mdev = sq->channel->mdev;
- struct net_device *dev = sq->channel->netdev;
- u8 state;
- int err;
+ if (!test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state))
+ return 0;
err = mlx5_core_query_sq_state(mdev, sq->sqn, &state);
if (err) {
@@ -96,7 +67,7 @@ static int mlx5e_tx_reporter_err_cqe_recover(struct mlx5e_txqsq *sq)
* pending WQEs. SQ can safely reset the SQ.
*/
- err = mlx5e_sq_to_ready(sq, state);
+ err = mlx5e_health_sq_to_ready(sq->channel, sq->sqn);
if (err)
goto out;
@@ -111,102 +82,66 @@ static int mlx5e_tx_reporter_err_cqe_recover(struct mlx5e_txqsq *sq)
return err;
}
-static int mlx5_tx_health_report(struct devlink_health_reporter *tx_reporter,
- char *err_str,
- struct mlx5e_tx_err_ctx *err_ctx)
-{
- if (!tx_reporter) {
- netdev_err(err_ctx->sq->channel->netdev, err_str);
- return err_ctx->recover(err_ctx->sq);
- }
-
- return devlink_health_report(tx_reporter, err_str, err_ctx);
-}
-
void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq)
{
- char err_str[MLX5E_TX_REPORTER_PER_SQ_MAX_LEN];
- struct mlx5e_tx_err_ctx err_ctx = {0};
+ struct mlx5e_priv *priv = sq->channel->priv;
+ char err_str[MLX5E_REPORTER_PER_Q_MAX_LEN];
+ struct mlx5e_err_ctx err_ctx = {0};
- err_ctx.sq = sq;
- err_ctx.recover = mlx5e_tx_reporter_err_cqe_recover;
+ err_ctx.ctx = sq;
+ err_ctx.recover = mlx5e_tx_reporter_err_cqe_recover;
sprintf(err_str, "ERR CQE on SQ: 0x%x", sq->sqn);
- mlx5_tx_health_report(sq->channel->priv->tx_reporter, err_str,
- &err_ctx);
+ mlx5e_health_report(priv, priv->tx_reporter, err_str, &err_ctx);
}
-static int mlx5e_tx_reporter_timeout_recover(struct mlx5e_txqsq *sq)
+static int mlx5e_tx_reporter_timeout_recover(void *ctx)
{
- struct mlx5_eq_comp *eq = sq->cq.mcq.eq;
- u32 eqe_count;
-
- netdev_err(sq->channel->netdev, "EQ 0x%x: Cons = 0x%x, irqn = 0x%x\n",
- eq->core.eqn, eq->core.cons_index, eq->core.irqn);
+ struct mlx5_eq_comp *eq;
+ struct mlx5e_txqsq *sq;
+ int err;
- eqe_count = mlx5_eq_poll_irq_disabled(eq);
- if (!eqe_count) {
+ sq = ctx;
+ eq = sq->cq.mcq.eq;
+ err = mlx5e_health_channel_eq_recover(eq, sq->channel);
+ if (err)
clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
- return -EIO;
- }
- netdev_err(sq->channel->netdev, "Recover %d eqes on EQ 0x%x\n",
- eqe_count, eq->core.eqn);
- sq->channel->stats->eq_rearm++;
- return 0;
+ return err;
}
int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq)
{
- char err_str[MLX5E_TX_REPORTER_PER_SQ_MAX_LEN];
- struct mlx5e_tx_err_ctx err_ctx;
+ struct mlx5e_priv *priv = sq->channel->priv;
+ char err_str[MLX5E_REPORTER_PER_Q_MAX_LEN];
+ struct mlx5e_err_ctx err_ctx;
- err_ctx.sq = sq;
- err_ctx.recover = mlx5e_tx_reporter_timeout_recover;
+ err_ctx.ctx = sq;
+ err_ctx.recover = mlx5e_tx_reporter_timeout_recover;
sprintf(err_str,
"TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 0x%x, usecs since last trans: %u\n",
sq->channel->ix, sq->sqn, sq->cq.mcq.cqn, sq->cc, sq->pc,
jiffies_to_usecs(jiffies - sq->txq->trans_start));
- return mlx5_tx_health_report(sq->channel->priv->tx_reporter, err_str,
- &err_ctx);
+ return mlx5e_health_report(priv, priv->tx_reporter, err_str, &err_ctx);
}
/* state lock cannot be grabbed within this function.
* It can cause a dead lock or a read-after-free.
*/
-static int mlx5e_tx_reporter_recover_from_ctx(struct mlx5e_tx_err_ctx *err_ctx)
-{
- return err_ctx->recover(err_ctx->sq);
-}
-
-static int mlx5e_tx_reporter_recover_all(struct mlx5e_priv *priv)
+static int mlx5e_tx_reporter_recover_from_ctx(struct mlx5e_err_ctx *err_ctx)
{
- int err = 0;
-
- rtnl_lock();
- mutex_lock(&priv->state_lock);
-
- if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
- goto out;
-
- err = mlx5e_safe_reopen_channels(priv);
-
-out:
- mutex_unlock(&priv->state_lock);
- rtnl_unlock();
-
- return err;
+ return err_ctx->recover(err_ctx->ctx);
}
static int mlx5e_tx_reporter_recover(struct devlink_health_reporter *reporter,
void *context)
{
struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter);
- struct mlx5e_tx_err_ctx *err_ctx = context;
+ struct mlx5e_err_ctx *err_ctx = context;
return err_ctx ? mlx5e_tx_reporter_recover_from_ctx(err_ctx) :
- mlx5e_tx_reporter_recover_all(priv);
+ mlx5e_health_recover_channels(priv);
}
static int
@@ -289,8 +224,9 @@ int mlx5e_reporter_tx_create(struct mlx5e_priv *priv)
{
struct devlink_health_reporter *reporter;
struct mlx5_core_dev *mdev = priv->mdev;
- struct devlink *devlink = priv_to_devlink(mdev);
+ struct devlink *devlink;
+ devlink = priv_to_devlink(mdev);
reporter =
devlink_health_reporter_create(devlink, &mlx5_tx_reporter_ops,
MLX5_REPORTER_TX_GRACEFUL_PERIOD,
--
2.13.6
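
A minimal sketch of the generic error-context pattern that the hunks above introduce, assuming the struct layout in en/health.h matches the usage shown in the diff; example_recover() and example_report() are hypothetical placeholders, not functions from the patch:

struct mlx5e_err_ctx {
	int (*recover)(void *ctx);	/* queue-type-specific recovery */
	void *ctx;			/* opaque pointer to the affected queue */
};

/* Hypothetical recovery callback: cast the opaque context back to the
 * queue it describes and bring it back to a usable state, as
 * mlx5e_tx_reporter_err_cqe_recover() does above.
 */
static int example_recover(void *ctx)
{
	struct mlx5e_txqsq *sq = ctx;

	return mlx5e_health_sq_to_ready(sq->channel, sq->sqn);
}

/* A reporter entry point fills the context and hands it to the shared
 * health code, which either handles the error locally or forwards it to
 * devlink_health_report().
 */
static void example_report(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq)
{
	struct mlx5e_err_ctx err_ctx = {};

	err_ctx.ctx = sq;
	err_ctx.recover = example_recover;
	mlx5e_health_report(priv, priv->tx_reporter, "ERR CQE on SQ", &err_ctx);
}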

@ -0,0 +1,89 @@
From 0da733b2b08fa6c3c9036b1b45ed8fcbc50727ef Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:48 -0400
Subject: [PATCH 024/312] [netdrv] net/mlx5e: Extend tx diagnose function
Message-id: <20200510145245.10054-26-ahleihel@redhat.com>
Patchwork-id: 306566
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 25/82] net/mlx5e: Extend tx diagnose function
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
commit dd921fd24179e51fc8d8d7bd7978f369da5ba34a
Author: Aya Levin <ayal@mellanox.com>
Date: Mon Jun 24 21:41:21 2019 +0300
net/mlx5e: Extend tx diagnose function
The following patches in the set enhance the diagnostics info of tx
reporter. Therefore, it is better to pass a pointer to the SQ for
further data extraction.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/en/reporter_tx.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index 6f9f42ab3005..b9429ff8d9c4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -146,15 +146,22 @@ static int mlx5e_tx_reporter_recover(struct devlink_health_reporter *reporter,
static int
mlx5e_tx_reporter_build_diagnose_output(struct devlink_fmsg *fmsg,
- u32 sqn, u8 state, bool stopped)
+ struct mlx5e_txqsq *sq)
{
+ struct mlx5e_priv *priv = sq->channel->priv;
+ bool stopped = netif_xmit_stopped(sq->txq);
+ u8 state;
int err;
+ err = mlx5_core_query_sq_state(priv->mdev, sq->sqn, &state);
+ if (err)
+ return err;
+
err = devlink_fmsg_obj_nest_start(fmsg);
if (err)
return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "sqn", sqn);
+ err = devlink_fmsg_u32_pair_put(fmsg, "sqn", sq->sqn);
if (err)
return err;
@@ -191,15 +198,8 @@ static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter,
for (i = 0; i < priv->channels.num * priv->channels.params.num_tc;
i++) {
struct mlx5e_txqsq *sq = priv->txq2sq[i];
- u8 state;
-
- err = mlx5_core_query_sq_state(priv->mdev, sq->sqn, &state);
- if (err)
- goto unlock;
- err = mlx5e_tx_reporter_build_diagnose_output(fmsg, sq->sqn,
- state,
- netif_xmit_stopped(sq->txq));
+ err = mlx5e_tx_reporter_build_diagnose_output(fmsg, sq);
if (err)
goto unlock;
}
--
2.13.6

@ -0,0 +1,270 @@
From 22b79810283de893e445fec4710fd5645cf90237 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:49 -0400
Subject: [PATCH 025/312] [netdrv] net/mlx5e: Extend tx reporter diagnostics
output
Message-id: <20200510145245.10054-27-ahleihel@redhat.com>
Patchwork-id: 306567
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 26/82] net/mlx5e: Extend tx reporter diagnostics output
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
commit 2d708887a4b1cb142c3179b3b1030dab047467b6
Author: Aya Levin <ayal@mellanox.com>
Date: Sun Jun 30 11:34:15 2019 +0300
net/mlx5e: Extend tx reporter diagnostics output
Enhance tx reporter's diagnostics output to include: information common
to all SQs: SQ size, SQ stride size.
In addition add channel ix, tc, txq ix, cc and pc.
$ devlink health diagnose pci/0000:00:0b.0 reporter tx
Common config:
SQ:
stride size: 64 size: 1024
SQs:
channel ix: 0 tc: 0 txq ix: 0 sqn: 4307 HW state: 1 stopped: false cc: 0 pc: 0
channel ix: 1 tc: 0 txq ix: 1 sqn: 4312 HW state: 1 stopped: false cc: 0 pc: 0
channel ix: 2 tc: 0 txq ix: 2 sqn: 4317 HW state: 1 stopped: false cc: 0 pc: 0
channel ix: 3 tc: 0 txq ix: 3 sqn: 4322 HW state: 1 stopped: false cc: 0 pc: 0
$ devlink health diagnose pci/0000:00:0b.0 reporter tx -jp
{
"Common config": {
"SQ": {
"stride size": 64,
"size": 1024
}
},
"SQs": [ {
"channel ix": 0,
"tc": 0,
"txq ix": 0,
"sqn": 4307,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0
},{
"channel ix": 1,
"tc": 0,
"txq ix": 1,
"sqn": 4312,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0
},{
"channel ix": 2,
"tc": 0,
"txq ix": 2,
"sqn": 4317,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0
},{
"channel ix": 3,
"tc": 0,
"txq ix": 3,
"sqn": 4322,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0
} ]
}
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/en/health.c | 30 ++++++++++
.../net/ethernet/mellanox/mlx5/core/en/health.h | 3 +
.../ethernet/mellanox/mlx5/core/en/reporter_tx.c | 69 +++++++++++++++++++---
3 files changed, 94 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
index fc3112921bd3..dab563f07157 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
@@ -4,6 +4,36 @@
#include "health.h"
#include "lib/eq.h"
+int mlx5e_reporter_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name)
+{
+ int err;
+
+ err = devlink_fmsg_pair_nest_start(fmsg, name);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_obj_nest_start(fmsg);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg)
+{
+ int err;
+
+ err = devlink_fmsg_obj_nest_end(fmsg);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_pair_nest_end(fmsg);
+ if (err)
+ return err;
+
+ return 0;
+}
+
int mlx5e_health_sq_to_ready(struct mlx5e_channel *channel, u32 sqn)
{
struct mlx5_core_dev *mdev = channel->mdev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index 386bda6104aa..112771ad516c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -11,6 +11,9 @@ void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq);
+int mlx5e_reporter_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name);
+int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg);
+
#define MLX5E_REPORTER_PER_Q_MAX_LEN 256
struct mlx5e_err_ctx {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index b9429ff8d9c4..a5d0fcbb85af 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -146,7 +146,7 @@ static int mlx5e_tx_reporter_recover(struct devlink_health_reporter *reporter,
static int
mlx5e_tx_reporter_build_diagnose_output(struct devlink_fmsg *fmsg,
- struct mlx5e_txqsq *sq)
+ struct mlx5e_txqsq *sq, int tc)
{
struct mlx5e_priv *priv = sq->channel->priv;
bool stopped = netif_xmit_stopped(sq->txq);
@@ -161,6 +161,18 @@ mlx5e_tx_reporter_build_diagnose_output(struct devlink_fmsg *fmsg,
if (err)
return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "channel ix", sq->ch_ix);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "tc", tc);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "txq ix", sq->txq_ix);
+ if (err)
+ return err;
+
err = devlink_fmsg_u32_pair_put(fmsg, "sqn", sq->sqn);
if (err)
return err;
@@ -173,6 +185,14 @@ mlx5e_tx_reporter_build_diagnose_output(struct devlink_fmsg *fmsg,
if (err)
return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "cc", sq->cc);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "pc", sq->pc);
+ if (err)
+ return err;
+
err = devlink_fmsg_obj_nest_end(fmsg);
if (err)
return err;
@@ -184,24 +204,57 @@ static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg)
{
struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter);
- int i, err = 0;
+ struct mlx5e_txqsq *generic_sq = priv->txq2sq[0];
+ u32 sq_stride, sq_sz;
+
+ int i, tc, err = 0;
mutex_lock(&priv->state_lock);
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
goto unlock;
+ sq_sz = mlx5_wq_cyc_get_size(&generic_sq->wq);
+ sq_stride = MLX5_SEND_WQE_BB;
+
+ err = mlx5e_reporter_named_obj_nest_start(fmsg, "Common Config");
+ if (err)
+ goto unlock;
+
+ err = mlx5e_reporter_named_obj_nest_start(fmsg, "SQ");
+ if (err)
+ goto unlock;
+
+ err = devlink_fmsg_u64_pair_put(fmsg, "stride size", sq_stride);
+ if (err)
+ goto unlock;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "size", sq_sz);
+ if (err)
+ goto unlock;
+
+ err = mlx5e_reporter_named_obj_nest_end(fmsg);
+ if (err)
+ goto unlock;
+
+ err = mlx5e_reporter_named_obj_nest_end(fmsg);
+ if (err)
+ goto unlock;
+
err = devlink_fmsg_arr_pair_nest_start(fmsg, "SQs");
if (err)
goto unlock;
- for (i = 0; i < priv->channels.num * priv->channels.params.num_tc;
- i++) {
- struct mlx5e_txqsq *sq = priv->txq2sq[i];
+ for (i = 0; i < priv->channels.num; i++) {
+ struct mlx5e_channel *c = priv->channels.c[i];
+
+ for (tc = 0; tc < priv->channels.params.num_tc; tc++) {
+ struct mlx5e_txqsq *sq = &c->sq[tc];
- err = mlx5e_tx_reporter_build_diagnose_output(fmsg, sq);
- if (err)
- goto unlock;
+ err = mlx5e_tx_reporter_build_diagnose_output(fmsg, sq, tc);
+ if (err)
+ goto unlock;
+ }
}
err = devlink_fmsg_arr_pair_nest_end(fmsg);
if (err)
--
2.13.6
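
A compact sketch of how the named-object helpers added in en/health.c compose the nested "Common Config"/"SQ" section of the diagnose output shown above; the helper and devlink_fmsg calls are the ones used in the diff, while example_fill_common_config() and the literal values are illustrative only:

static int example_fill_common_config(struct devlink_fmsg *fmsg)
{
	int err;

	/* opens "Common Config": { ... */
	err = mlx5e_reporter_named_obj_nest_start(fmsg, "Common Config");
	if (err)
		return err;

	/* nested "SQ": { "stride size": 64, "size": 1024 } */
	err = mlx5e_reporter_named_obj_nest_start(fmsg, "SQ");
	if (err)
		return err;
	err = devlink_fmsg_u64_pair_put(fmsg, "stride size", 64);
	if (err)
		return err;
	err = devlink_fmsg_u32_pair_put(fmsg, "size", 1024);
	if (err)
		return err;
	err = mlx5e_reporter_named_obj_nest_end(fmsg);
	if (err)
		return err;

	/* ... } closes "Common Config" */
	return mlx5e_reporter_named_obj_nest_end(fmsg);
}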

@ -0,0 +1,273 @@
From 53141c2d2ece30134507bf0342288ed1340a8d83 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:50 -0400
Subject: [PATCH 026/312] [netdrv] net/mlx5e: Add cq info to tx reporter
diagnose
Message-id: <20200510145245.10054-28-ahleihel@redhat.com>
Patchwork-id: 306568
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 27/82] net/mlx5e: Add cq info to tx reporter diagnose
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
commit 2bf09e60ae5ef68c2282f97baf37b7dbd9cc1d48
Author: Aya Levin <ayal@mellanox.com>
Date: Sun Jun 30 15:08:00 2019 +0300
net/mlx5e: Add cq info to tx reporter diagnose
Add cq information to general diagnose output: CQ size and stride size.
Per SQ add information about the related CQ: cqn and CQ's HW status.
$ devlink health diagnose pci/0000:00:0b.0 reporter tx
Common Config:
SQ:
stride size: 64 size: 1024
CQ:
stride size: 64 size: 1024
SQs:
channel ix: 0 tc: 0 txq ix: 0 sqn: 4307 HW state: 1 stopped: false cc: 0 pc: 0
CQ:
cqn: 1030 HW status: 0
channel ix: 1 tc: 0 txq ix: 1 sqn: 4312 HW state: 1 stopped: false cc: 0 pc: 0
CQ:
cqn: 1034 HW status: 0
channel ix: 2 tc: 0 txq ix: 2 sqn: 4317 HW state: 1 stopped: false cc: 0 pc: 0
CQ:
cqn: 1038 HW status: 0
channel ix: 3 tc: 0 txq ix: 3 sqn: 4322 HW state: 1 stopped: false cc: 0 pc: 0
CQ:
cqn: 1042 HW status: 0
$ devlink health diagnose pci/0000:00:0b.0 reporter tx -jp
{
"Common Config": {
"SQ": {
"stride size": 64,
"size": 1024
},
"CQ": {
"stride size": 64,
"size": 1024
}
},
"SQs": [ {
"channel ix": 0,
"tc": 0,
"txq ix": 0,
"sqn": 4307,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0,
"CQ": {
"cqn": 1030,
"HW status": 0
}
},{
"channel ix": 1,
"tc": 0,
"txq ix": 1,
"sqn": 4312,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0,
"CQ": {
"cqn": 1034,
"HW status": 0
}
},{
"channel ix": 2,
"tc": 0,
"txq ix": 2,
"sqn": 4317,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0,
"CQ": {
"cqn": 1038,
"HW status": 0
}
},{
"channel ix": 3,
"tc": 0,
"txq ix": 3,
"sqn": 4322,
"HW state": 1,
"stopped": false,
"cc": 0,
"pc": 0,
"CQ": {
"cqn": 1042,
"HW status": 0
}
} ]
}
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/en/health.c | 62 ++++++++++++++++++++++
.../net/ethernet/mellanox/mlx5/core/en/health.h | 2 +
.../ethernet/mellanox/mlx5/core/en/reporter_tx.c | 8 +++
drivers/net/ethernet/mellanox/mlx5/core/wq.c | 5 ++
drivers/net/ethernet/mellanox/mlx5/core/wq.h | 1 +
5 files changed, 78 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
index dab563f07157..ffd9a7a165a2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
@@ -34,6 +34,68 @@ int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg)
return 0;
}
+int mlx5e_reporter_cq_diagnose(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg)
+{
+ struct mlx5e_priv *priv = cq->channel->priv;
+ u32 out[MLX5_ST_SZ_DW(query_cq_out)] = {};
+ u8 hw_status;
+ void *cqc;
+ int err;
+
+ err = mlx5_core_query_cq(priv->mdev, &cq->mcq, out, sizeof(out));
+ if (err)
+ return err;
+
+ cqc = MLX5_ADDR_OF(query_cq_out, out, cq_context);
+ hw_status = MLX5_GET(cqc, cqc, status);
+
+ err = mlx5e_reporter_named_obj_nest_start(fmsg, "CQ");
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "cqn", cq->mcq.cqn);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u8_pair_put(fmsg, "HW status", hw_status);
+ if (err)
+ return err;
+
+ err = mlx5e_reporter_named_obj_nest_end(fmsg);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+int mlx5e_reporter_cq_common_diagnose(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg)
+{
+ u8 cq_log_stride;
+ u32 cq_sz;
+ int err;
+
+ cq_sz = mlx5_cqwq_get_size(&cq->wq);
+ cq_log_stride = mlx5_cqwq_get_log_stride_size(&cq->wq);
+
+ err = mlx5e_reporter_named_obj_nest_start(fmsg, "CQ");
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u64_pair_put(fmsg, "stride size", BIT(cq_log_stride));
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "size", cq_sz);
+ if (err)
+ return err;
+
+ err = mlx5e_reporter_named_obj_nest_end(fmsg);
+ if (err)
+ return err;
+
+ return 0;
+}
+
int mlx5e_health_sq_to_ready(struct mlx5e_channel *channel, u32 sqn)
{
struct mlx5_core_dev *mdev = channel->mdev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index 112771ad516c..6725d417aaf5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -11,6 +11,8 @@ void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq);
+int mlx5e_reporter_cq_diagnose(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
+int mlx5e_reporter_cq_common_diagnose(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
int mlx5e_reporter_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name);
int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index a5d0fcbb85af..bfed558637c2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -193,6 +193,10 @@ mlx5e_tx_reporter_build_diagnose_output(struct devlink_fmsg *fmsg,
if (err)
return err;
+ err = mlx5e_reporter_cq_diagnose(&sq->cq, fmsg);
+ if (err)
+ return err;
+
err = devlink_fmsg_obj_nest_end(fmsg);
if (err)
return err;
@@ -233,6 +237,10 @@ static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter,
if (err)
goto unlock;
+ err = mlx5e_reporter_cq_common_diagnose(&generic_sq->cq, fmsg);
+ if (err)
+ goto unlock;
+
err = mlx5e_reporter_named_obj_nest_end(fmsg);
if (err)
goto unlock;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
index 953cc8efba69..dd2315ce4441 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
@@ -44,6 +44,11 @@ u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
return wq->fbc.sz_m1 + 1;
}
+u8 mlx5_cqwq_get_log_stride_size(struct mlx5_cqwq *wq)
+{
+ return wq->fbc.log_stride;
+}
+
u32 mlx5_wq_ll_get_size(struct mlx5_wq_ll *wq)
{
return (u32)wq->fbc.sz_m1 + 1;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
index f1ec58c9e9e3..55791f71a778 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
@@ -89,6 +89,7 @@ int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *cqc, struct mlx5_cqwq *wq,
struct mlx5_wq_ctrl *wq_ctrl);
u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq);
+u8 mlx5_cqwq_get_log_stride_size(struct mlx5_cqwq *wq);
int mlx5_wq_ll_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *wqc, struct mlx5_wq_ll *wq,
--
2.13.6

@ -0,0 +1,142 @@
From 713b69f0ad280204ad68ebe2cd6e185e213182f0 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:51 -0400
Subject: [PATCH 027/312] [netdrv] net/mlx5e: Add helper functions for
reporter's basics
Message-id: <20200510145245.10054-29-ahleihel@redhat.com>
Patchwork-id: 306569
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 28/82] net/mlx5e: Add helper functions for reporter's basics
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/en_main.c
Context diff due to already backported commit
3c14562663c6 ("net/mlx5e: Expose new function for TIS destroy loop")
---> In function mlx5e_cleanup_nic_tx, we now call mlx5e_destroy_tises
instead of the for loop.
Also, in function mlx5e_nic_init we no longer call mlx5e_build_tc2txq_maps.
commit 11af6a6d09e9a90e05f4a21564232b30c6c25d69
Author: Aya Levin <ayal@mellanox.com>
Date: Thu Jul 11 17:17:36 2019 +0300
net/mlx5e: Add helper functions for reporter's basics
Introduce helper functions for create and destroy reporters and update
channels. In the following patch, rx reporter is added and it will use
these helpers too.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/health.c | 17 +++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en/health.h | 4 ++++
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 +++------
3 files changed, 24 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
index ffd9a7a165a2..c11d0162eaf8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
@@ -96,6 +96,23 @@ int mlx5e_reporter_cq_common_diagnose(struct mlx5e_cq *cq, struct devlink_fmsg *
return 0;
}
+int mlx5e_health_create_reporters(struct mlx5e_priv *priv)
+{
+ return mlx5e_reporter_tx_create(priv);
+}
+
+void mlx5e_health_destroy_reporters(struct mlx5e_priv *priv)
+{
+ mlx5e_reporter_tx_destroy(priv);
+}
+
+void mlx5e_health_channels_update(struct mlx5e_priv *priv)
+{
+ if (priv->tx_reporter)
+ devlink_health_reporter_state_update(priv->tx_reporter,
+ DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
+}
+
int mlx5e_health_sq_to_ready(struct mlx5e_channel *channel, u32 sqn)
{
struct mlx5_core_dev *mdev = channel->mdev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index 6725d417aaf5..b2c0ccc79b22 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -29,5 +29,9 @@ int mlx5e_health_recover_channels(struct mlx5e_priv *priv);
int mlx5e_health_report(struct mlx5e_priv *priv,
struct devlink_health_reporter *reporter, char *err_str,
struct mlx5e_err_ctx *err_ctx);
+int mlx5e_health_create_reporters(struct mlx5e_priv *priv);
+void mlx5e_health_destroy_reporters(struct mlx5e_priv *priv);
+void mlx5e_health_channels_update(struct mlx5e_priv *priv);
+
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 49f5dbab2b8e..908b88891325 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -2323,10 +2323,7 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
goto err_close_channels;
}
- if (priv->tx_reporter)
- devlink_health_reporter_state_update(priv->tx_reporter,
- DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
-
+ mlx5e_health_channels_update(priv);
kvfree(cparam);
return 0;
@@ -3210,7 +3207,6 @@ int mlx5e_create_tises(struct mlx5e_priv *priv)
static void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv)
{
- mlx5e_reporter_tx_destroy(priv);
mlx5e_destroy_tises(priv);
}
@@ -4972,12 +4968,14 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
if (err)
mlx5_core_err(mdev, "TLS initialization failed, %d\n", err);
mlx5e_build_nic_netdev(netdev);
+ mlx5e_health_create_reporters(priv);
return 0;
}
static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
{
+ mlx5e_health_destroy_reporters(priv);
mlx5e_tls_cleanup(priv);
mlx5e_ipsec_cleanup(priv);
mlx5e_netdev_cleanup(priv->netdev, priv);
@@ -5080,7 +5078,6 @@ static int mlx5e_init_nic_tx(struct mlx5e_priv *priv)
#ifdef CONFIG_MLX5_CORE_EN_DCB
mlx5e_dcbnl_initialize(priv);
#endif
- mlx5e_reporter_tx_create(priv);
return 0;
}
--
2.13.6

@ -0,0 +1,481 @@
From f89402f33560dd8e1f4cfb6a5d2b849e9fff7f47 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:52 -0400
Subject: [PATCH 028/312] [netdrv] net/mlx5e: Add support to rx reporter
diagnose
Message-id: <20200510145245.10054-30-ahleihel@redhat.com>
Patchwork-id: 306570
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 29/82] net/mlx5e: Add support to rx reporter diagnose
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
Adapt mlx5e_rx_reporter_diagnose parameters to current API due to already
backported commit:
e7a981050a7f ("devlink: propagate extack down to health reporter ops")
---> .diagnose callback now expects to get extack as well.
commit 9032e7192eac8e657b52cf1c89fe730308b72c2a
Author: Aya Levin <ayal@mellanox.com>
Date: Tue Jun 25 16:26:46 2019 +0300
net/mlx5e: Add support to rx reporter diagnose
Add rx reporter, which supports diagnose call-back. Diagnostics output
include: information common to all RQs: RQ type, RQ size, RQ stride
size, CQ size and CQ stride size. In addition advertise information per
RQ and its related icosq and attached CQ.
$ devlink health diagnose pci/0000:00:0b.0 reporter rx
Common config:
RQ:
type: 2 stride size: 2048 size: 8
CQ:
stride size: 64 size: 1024
RQs:
channel ix: 0 rqn: 4308 HW state: 1 SW state: 3 posted WQEs: 7 cc: 7 ICOSQ HW state: 1
CQ:
cqn: 1032 HW status: 0
channel ix: 1 rqn: 4313 HW state: 1 SW state: 3 posted WQEs: 7 cc: 7 ICOSQ HW state: 1
CQ:
cqn: 1036 HW status: 0
channel ix: 2 rqn: 4318 HW state: 1 SW state: 3 posted WQEs: 7 cc: 7 ICOSQ HW state: 1
CQ:
cqn: 1040 HW status: 0
channel ix: 3 rqn: 4323 HW state: 1 SW state: 3 posted WQEs: 7 cc: 7 ICOSQ HW state: 1
CQ:
cqn: 1044 HW status: 0
$ devlink health diagnose pci/0000:00:0b.0 reporter rx -jp
{
"Common config": {
"RQ": {
"type": 2,
"stride size": 2048,
"size": 8
},
"CQ": {
"stride size": 64,
"size": 1024
}
},
"RQs": [ {
"channel ix": 0,
"rqn": 4308,
"HW state": 1,
"SW state": 3,
"posted WQEs": 7,
"cc": 7,
"ICOSQ HW state": 1,
"CQ": {
"cqn": 1032,
"HW status": 0
}
},{
"channel ix": 1,
"rqn": 4313,
"HW state": 1,
"SW state": 3,
"posted WQEs": 7,
"cc": 7,
"ICOSQ HW state": 1,
"CQ": {
"cqn": 1036,
"HW status": 0
}
},{
"channel ix": 2,
"rqn": 4318,
"HW state": 1,
"SW state": 3,
"posted WQEs": 7,
"cc": 7,
"ICOSQ HW state": 1,
"CQ": {
"cqn": 1040,
"HW status": 0
}
},{
"channel ix": 3,
"rqn": 4323,
"HW state": 1,
"SW state": 3,
"posted WQEs": 7,
"cc": 7,
"ICOSQ HW state": 1,
"CQ": {
"cqn": 1044,
"HW status": 0
}
} ]
}
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/Makefile | 4 +-
drivers/net/ethernet/mellanox/mlx5/core/en.h | 21 +++
.../net/ethernet/mellanox/mlx5/core/en/health.c | 16 +-
.../net/ethernet/mellanox/mlx5/core/en/health.h | 3 +
.../ethernet/mellanox/mlx5/core/en/reporter_rx.c | 197 +++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 20 ---
6 files changed, 238 insertions(+), 23 deletions(-)
create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 4369dfd04a34..bd2074d5eb87 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -24,8 +24,8 @@ mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
mlx5_core-$(CONFIG_MLX5_CORE_EN) += en_main.o en_common.o en_fs.o en_ethtool.o \
en_tx.o en_rx.o en_dim.o en_txrx.o en/xdp.o en_stats.o \
en_selftest.o en/port.o en/monitor_stats.o en/health.o \
- en/reporter_tx.o en/params.o en/xsk/umem.o en/xsk/setup.o \
- en/xsk/rx.o en/xsk/tx.o
+ en/reporter_tx.o en/reporter_rx.o en/params.o en/xsk/umem.o \
+ en/xsk/setup.o en/xsk/rx.o en/xsk/tx.o
#
# Netdev extra
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 84575c0bcca6..3ba2dec04137 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -855,6 +855,7 @@ struct mlx5e_priv {
struct mlx5e_tls *tls;
#endif
struct devlink_health_reporter *tx_reporter;
+ struct devlink_health_reporter *rx_reporter;
struct mlx5e_xsk xsk;
};
@@ -899,6 +900,26 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget);
int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq);
+static inline u32 mlx5e_rqwq_get_size(struct mlx5e_rq *rq)
+{
+ switch (rq->wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ return mlx5_wq_ll_get_size(&rq->mpwqe.wq);
+ default:
+ return mlx5_wq_cyc_get_size(&rq->wqe.wq);
+ }
+}
+
+static inline u32 mlx5e_rqwq_get_cur_sz(struct mlx5e_rq *rq)
+{
+ switch (rq->wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ return rq->mpwqe.wq.cur_sz;
+ default:
+ return rq->wqe.wq.cur_sz;
+ }
+}
+
bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev);
bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev,
struct mlx5e_params *params);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
index c11d0162eaf8..1d6b58860da6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
@@ -98,11 +98,22 @@ int mlx5e_reporter_cq_common_diagnose(struct mlx5e_cq *cq, struct devlink_fmsg *
int mlx5e_health_create_reporters(struct mlx5e_priv *priv)
{
- return mlx5e_reporter_tx_create(priv);
+ int err;
+
+ err = mlx5e_reporter_tx_create(priv);
+ if (err)
+ return err;
+
+ err = mlx5e_reporter_rx_create(priv);
+ if (err)
+ return err;
+
+ return 0;
}
void mlx5e_health_destroy_reporters(struct mlx5e_priv *priv)
{
+ mlx5e_reporter_rx_destroy(priv);
mlx5e_reporter_tx_destroy(priv);
}
@@ -111,6 +122,9 @@ void mlx5e_health_channels_update(struct mlx5e_priv *priv)
if (priv->tx_reporter)
devlink_health_reporter_state_update(priv->tx_reporter,
DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
+ if (priv->rx_reporter)
+ devlink_health_reporter_state_update(priv->rx_reporter,
+ DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
}
int mlx5e_health_sq_to_ready(struct mlx5e_channel *channel, u32 sqn)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index b2c0ccc79b22..a751c5316baf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -16,6 +16,9 @@ int mlx5e_reporter_cq_common_diagnose(struct mlx5e_cq *cq, struct devlink_fmsg *
int mlx5e_reporter_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name);
int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg);
+int mlx5e_reporter_rx_create(struct mlx5e_priv *priv);
+void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv);
+
#define MLX5E_REPORTER_PER_Q_MAX_LEN 256
struct mlx5e_err_ctx {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
new file mode 100644
index 000000000000..7cd767f0b8c7
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Mellanox Technologies.
+
+#include "health.h"
+#include "params.h"
+
+static int mlx5e_query_rq_state(struct mlx5_core_dev *dev, u32 rqn, u8 *state)
+{
+ int outlen = MLX5_ST_SZ_BYTES(query_rq_out);
+ void *out;
+ void *rqc;
+ int err;
+
+ out = kvzalloc(outlen, GFP_KERNEL);
+ if (!out)
+ return -ENOMEM;
+
+ err = mlx5_core_query_rq(dev, rqn, out);
+ if (err)
+ goto out;
+
+ rqc = MLX5_ADDR_OF(query_rq_out, out, rq_context);
+ *state = MLX5_GET(rqc, rqc, state);
+
+out:
+ kvfree(out);
+ return err;
+}
+
+static int mlx5e_rx_reporter_build_diagnose_output(struct mlx5e_rq *rq,
+ struct devlink_fmsg *fmsg)
+{
+ struct mlx5e_priv *priv = rq->channel->priv;
+ struct mlx5e_params *params;
+ struct mlx5e_icosq *icosq;
+ u8 icosq_hw_state;
+ int wqes_sz;
+ u8 hw_state;
+ u16 wq_head;
+ int err;
+
+ params = &priv->channels.params;
+ icosq = &rq->channel->icosq;
+ err = mlx5e_query_rq_state(priv->mdev, rq->rqn, &hw_state);
+ if (err)
+ return err;
+
+ err = mlx5_core_query_sq_state(priv->mdev, icosq->sqn, &icosq_hw_state);
+ if (err)
+ return err;
+
+ wqes_sz = mlx5e_rqwq_get_cur_sz(rq);
+ wq_head = params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ ?
+ rq->mpwqe.wq.head : mlx5_wq_cyc_get_head(&rq->wqe.wq);
+
+ err = devlink_fmsg_obj_nest_start(fmsg);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "channel ix", rq->channel->ix);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "rqn", rq->rqn);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u8_pair_put(fmsg, "HW state", hw_state);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u8_pair_put(fmsg, "SW state", rq->state);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "posted WQEs", wqes_sz);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "cc", wq_head);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u8_pair_put(fmsg, "ICOSQ HW state", icosq_hw_state);
+ if (err)
+ return err;
+
+ err = mlx5e_reporter_cq_diagnose(&rq->cq, fmsg);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_obj_nest_end(fmsg);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int mlx5e_rx_reporter_diagnose(struct devlink_health_reporter *reporter,
+ struct devlink_fmsg *fmsg,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter);
+ struct mlx5e_params *params = &priv->channels.params;
+ struct mlx5e_rq *generic_rq;
+ u32 rq_stride, rq_sz;
+ int i, err = 0;
+
+ mutex_lock(&priv->state_lock);
+
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+ goto unlock;
+
+ generic_rq = &priv->channels.c[0]->rq;
+ rq_sz = mlx5e_rqwq_get_size(generic_rq);
+ rq_stride = BIT(mlx5e_mpwqe_get_log_stride_size(priv->mdev, params, NULL));
+
+ err = mlx5e_reporter_named_obj_nest_start(fmsg, "Common config");
+ if (err)
+ goto unlock;
+
+ err = mlx5e_reporter_named_obj_nest_start(fmsg, "RQ");
+ if (err)
+ goto unlock;
+
+ err = devlink_fmsg_u8_pair_put(fmsg, "type", params->rq_wq_type);
+ if (err)
+ goto unlock;
+
+ err = devlink_fmsg_u64_pair_put(fmsg, "stride size", rq_stride);
+ if (err)
+ goto unlock;
+
+ err = devlink_fmsg_u32_pair_put(fmsg, "size", rq_sz);
+ if (err)
+ goto unlock;
+
+ err = mlx5e_reporter_named_obj_nest_end(fmsg);
+ if (err)
+ goto unlock;
+
+ err = mlx5e_reporter_cq_common_diagnose(&generic_rq->cq, fmsg);
+ if (err)
+ goto unlock;
+
+ err = mlx5e_reporter_named_obj_nest_end(fmsg);
+ if (err)
+ goto unlock;
+
+ err = devlink_fmsg_arr_pair_nest_start(fmsg, "RQs");
+ if (err)
+ goto unlock;
+
+ for (i = 0; i < priv->channels.num; i++) {
+ struct mlx5e_rq *rq = &priv->channels.c[i]->rq;
+
+ err = mlx5e_rx_reporter_build_diagnose_output(rq, fmsg);
+ if (err)
+ goto unlock;
+ }
+ err = devlink_fmsg_arr_pair_nest_end(fmsg);
+ if (err)
+ goto unlock;
+unlock:
+ mutex_unlock(&priv->state_lock);
+ return err;
+}
+
+static const struct devlink_health_reporter_ops mlx5_rx_reporter_ops = {
+ .name = "rx",
+ .diagnose = mlx5e_rx_reporter_diagnose,
+};
+
+int mlx5e_reporter_rx_create(struct mlx5e_priv *priv)
+{
+ struct devlink *devlink = priv_to_devlink(priv->mdev);
+ struct devlink_health_reporter *reporter;
+
+ reporter = devlink_health_reporter_create(devlink,
+ &mlx5_rx_reporter_ops,
+ 0, false, priv);
+ if (IS_ERR(reporter)) {
+ netdev_warn(priv->netdev, "Failed to create rx reporter, err = %ld\n",
+ PTR_ERR(reporter));
+ return PTR_ERR(reporter);
+ }
+ priv->rx_reporter = reporter;
+ return 0;
+}
+
+void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv)
+{
+ if (!priv->rx_reporter)
+ return;
+
+ devlink_health_reporter_destroy(priv->rx_reporter);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 908b88891325..d78f60bc86ff 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -238,26 +238,6 @@ static inline void mlx5e_build_umr_wqe(struct mlx5e_rq *rq,
ucseg->mkey_mask = cpu_to_be64(MLX5_MKEY_MASK_FREE);
}
-static u32 mlx5e_rqwq_get_size(struct mlx5e_rq *rq)
-{
- switch (rq->wq_type) {
- case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
- return mlx5_wq_ll_get_size(&rq->mpwqe.wq);
- default:
- return mlx5_wq_cyc_get_size(&rq->wqe.wq);
- }
-}
-
-static u32 mlx5e_rqwq_get_cur_sz(struct mlx5e_rq *rq)
-{
- switch (rq->wq_type) {
- case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
- return rq->mpwqe.wq.cur_sz;
- default:
- return rq->wqe.wq.cur_sz;
- }
-}
-
static int mlx5e_rq_alloc_mpwqe_info(struct mlx5e_rq *rq,
struct mlx5e_channel *c)
{
--
2.13.6

@ -0,0 +1,159 @@
From 3f66afcb58cbade919b064a2eee38d35bd9c64ad Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:53 -0400
Subject: [PATCH 029/312] [netdrv] net/mlx5e: Split open/close ICOSQ into
stages
Message-id: <20200510145245.10054-31-ahleihel@redhat.com>
Patchwork-id: 306572
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 30/82] net/mlx5e: Split open/close ICOSQ into stages
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/en_main.c
drivers/net/ethernet/mellanox/mlx5/core/en.h
Take a couple of hunks from this commit to fix incremental build:
be5323c8379f ("net/mlx5e: Report and recover from CQE error on ICOSQ")
---> Expose function mlx5e_(de)activate_icosq to be used in other files.
commit 9d18b5144a0a850e722e7c3d7b700eb1fba7b7e2
Author: Aya Levin <ayal@mellanox.com>
Date: Tue Jul 2 15:47:29 2019 +0300
net/mlx5e: Split open/close ICOSQ into stages
Align ICOSQ open/close behaviour with RQ and SQ. Split open flow into
open and activate where open handles creation and activate enables the
queue. Do a symmetric thing in close flow: split into close and
deactivate.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 2 ++
.../net/ethernet/mellanox/mlx5/core/en/xsk/setup.c | 2 ++
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c | 7 +++++++
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 19 +++++++++++++++----
4 files changed, 26 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 3ba2dec04137..21926cb209f9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -1037,6 +1037,8 @@ void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params,
void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params);
void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev,
struct mlx5e_params *params);
+void mlx5e_activate_icosq(struct mlx5e_icosq *icosq);
+void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq);
int mlx5e_modify_sq(struct mlx5_core_dev *mdev, u32 sqn,
struct mlx5e_modify_sq_param *p);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
index 79060ee60c98..c28cbae42331 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
@@ -156,6 +156,7 @@ void mlx5e_close_xsk(struct mlx5e_channel *c)
void mlx5e_activate_xsk(struct mlx5e_channel *c)
{
+ mlx5e_activate_icosq(&c->xskicosq);
set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
/* TX queue is created active. */
@@ -168,6 +169,7 @@ void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
{
mlx5e_deactivate_rq(&c->xskrq);
/* TX queue is disabled on close. */
+ mlx5e_deactivate_icosq(&c->xskicosq);
}
static int mlx5e_redirect_xsk_rqt(struct mlx5e_priv *priv, u16 ix, u32 rqn)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
index 19ae0e28fead..03abb8cb96be 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
@@ -26,6 +26,13 @@ int mlx5e_xsk_async_xmit(struct net_device *dev, u32 qid)
return -ENXIO;
if (!napi_if_scheduled_mark_missed(&c->napi)) {
+ /* To avoid WQE overrun, don't post a NOP if XSKICOSQ is not
+ * active and not polled by NAPI. Return 0, because the upcoming
+ * activate will trigger the IRQ for us.
+ */
+ if (unlikely(!test_bit(MLX5E_SQ_STATE_ENABLED, &c->xskicosq.state)))
+ return 0;
+
spin_lock(&c->xskicosq_lock);
mlx5e_trigger_irq(&c->xskicosq);
spin_unlock(&c->xskicosq_lock);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index d78f60bc86ff..7dde1be49f35 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1369,7 +1369,6 @@ int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
csp.cqn = sq->cq.mcq.cqn;
csp.wq_ctrl = &sq->wq_ctrl;
csp.min_inline_mode = params->tx_min_inline_mode;
- set_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
err = mlx5e_create_sq_rdy(c->mdev, param, &csp, &sq->sqn);
if (err)
goto err_free_icosq;
@@ -1382,12 +1381,22 @@ int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
return err;
}
-void mlx5e_close_icosq(struct mlx5e_icosq *sq)
+void mlx5e_activate_icosq(struct mlx5e_icosq *icosq)
{
- struct mlx5e_channel *c = sq->channel;
+ set_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
+}
- clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
+{
+ struct mlx5e_channel *c = icosq->channel;
+
+ clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
napi_synchronize(&c->napi);
+}
+
+void mlx5e_close_icosq(struct mlx5e_icosq *sq)
+{
+ struct mlx5e_channel *c = sq->channel;
mlx5e_destroy_sq(c->mdev, sq->sqn);
mlx5e_free_icosq(sq);
@@ -1971,6 +1980,7 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
for (tc = 0; tc < c->num_tc; tc++)
mlx5e_activate_txqsq(&c->sq[tc]);
+ mlx5e_activate_icosq(&c->icosq);
mlx5e_activate_rq(&c->rq);
netif_set_xps_queue(c->netdev, c->xps_cpumask, c->ix);
@@ -1986,6 +1996,7 @@ static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
mlx5e_deactivate_xsk(c);
mlx5e_deactivate_rq(&c->rq);
+ mlx5e_deactivate_icosq(&c->icosq);
for (tc = 0; tc < c->num_tc; tc++)
mlx5e_deactivate_txqsq(&c->sq[tc]);
}
--
2.13.6
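
A short sketch of the resulting activation order inside a channel, mirroring the mlx5e_activate_channel()/mlx5e_deactivate_channel() hunks above; example_activate_channel() and example_deactivate_channel() are illustrative wrappers, not functions added by the patch:

static void example_activate_channel(struct mlx5e_channel *c)
{
	/* the ICOSQ is enabled before the RQ, matching the hunk above */
	mlx5e_activate_icosq(&c->icosq);
	mlx5e_activate_rq(&c->rq);
}

static void example_deactivate_channel(struct mlx5e_channel *c)
{
	/* teardown runs in the reverse order */
	mlx5e_deactivate_rq(&c->rq);
	mlx5e_deactivate_icosq(&c->icosq);
}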

@ -0,0 +1,360 @@
From beae62dd1772b395964f8e73f82c202f1ad346d9 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:54 -0400
Subject: [PATCH 030/312] [netdrv] net/mlx5e: Report and recover from CQE error
on ICOSQ
Message-id: <20200510145245.10054-32-ahleihel@redhat.com>
Patchwork-id: 306571
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 31/82] net/mlx5e: Report and recover from CQE error on ICOSQ
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/en_main.c
- drivers/net/ethernet/mellanox/mlx5/core/en.h
Dropped hunks that were previously applied for fixing incremental build.
- drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
Adapt mlx5e_rx_reporter_recover parameters to current API due to already
backported commit:
e7a981050a7f ("devlink: propagate extack down to health reporter ops")
---> .recover callback now expects to get extack as well.
commit be5323c8379f488f1de53206edeaf80fc20d7686
Author: Aya Levin <ayal@mellanox.com>
Date: Tue Jun 25 17:44:28 2019 +0300
net/mlx5e: Report and recover from CQE error on ICOSQ
Add support for report and recovery from error on completion on ICOSQ.
Deactivate RQ and flush, then deactivate ICOSQ. Set the queue back to
ready state (firmware) and reset the ICOSQ and the RQ (software
resources). Finally, activate the ICOSQ and the RQ.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 6 ++
.../net/ethernet/mellanox/mlx5/core/en/health.h | 1 +
.../ethernet/mellanox/mlx5/core/en/reporter_rx.c | 110 ++++++++++++++++++++-
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 18 +++-
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 2 +
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 3 +
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h | 2 +
7 files changed, 137 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 21926cb209f9..f0ba350579ae 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -559,6 +559,8 @@ struct mlx5e_icosq {
/* control path */
struct mlx5_wq_ctrl wq_ctrl;
struct mlx5e_channel *channel;
+
+ struct work_struct recover_work;
} ____cacheline_aligned_in_smp;
struct mlx5e_wqe_frag_info {
@@ -1037,6 +1039,10 @@ void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params,
void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params);
void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev,
struct mlx5e_params *params);
+int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state);
+void mlx5e_activate_rq(struct mlx5e_rq *rq);
+void mlx5e_deactivate_rq(struct mlx5e_rq *rq);
+void mlx5e_free_rx_descs(struct mlx5e_rq *rq);
void mlx5e_activate_icosq(struct mlx5e_icosq *icosq);
void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index a751c5316baf..8acd9dc520cf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -18,6 +18,7 @@ int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg);
int mlx5e_reporter_rx_create(struct mlx5e_priv *priv);
void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv);
+void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq);
#define MLX5E_REPORTER_PER_Q_MAX_LEN 256
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
index 7cd767f0b8c7..661de567ca6c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -27,6 +27,110 @@ static int mlx5e_query_rq_state(struct mlx5_core_dev *dev, u32 rqn, u8 *state)
return err;
}
+static int mlx5e_wait_for_icosq_flush(struct mlx5e_icosq *icosq)
+{
+ unsigned long exp_time = jiffies + msecs_to_jiffies(2000);
+
+ while (time_before(jiffies, exp_time)) {
+ if (icosq->cc == icosq->pc)
+ return 0;
+
+ msleep(20);
+ }
+
+ netdev_err(icosq->channel->netdev,
+ "Wait for ICOSQ 0x%x flush timeout (cc = 0x%x, pc = 0x%x)\n",
+ icosq->sqn, icosq->cc, icosq->pc);
+
+ return -ETIMEDOUT;
+}
+
+static void mlx5e_reset_icosq_cc_pc(struct mlx5e_icosq *icosq)
+{
+ WARN_ONCE(icosq->cc != icosq->pc, "ICOSQ 0x%x: cc (0x%x) != pc (0x%x)\n",
+ icosq->sqn, icosq->cc, icosq->pc);
+ icosq->cc = 0;
+ icosq->pc = 0;
+}
+
+static int mlx5e_rx_reporter_err_icosq_cqe_recover(void *ctx)
+{
+ struct mlx5_core_dev *mdev;
+ struct mlx5e_icosq *icosq;
+ struct net_device *dev;
+ struct mlx5e_rq *rq;
+ u8 state;
+ int err;
+
+ icosq = ctx;
+ rq = &icosq->channel->rq;
+ mdev = icosq->channel->mdev;
+ dev = icosq->channel->netdev;
+ err = mlx5_core_query_sq_state(mdev, icosq->sqn, &state);
+ if (err) {
+ netdev_err(dev, "Failed to query ICOSQ 0x%x state. err = %d\n",
+ icosq->sqn, err);
+ goto out;
+ }
+
+ if (state != MLX5_SQC_STATE_ERR)
+ goto out;
+
+ mlx5e_deactivate_rq(rq);
+ err = mlx5e_wait_for_icosq_flush(icosq);
+ if (err)
+ goto out;
+
+ mlx5e_deactivate_icosq(icosq);
+
+ /* At this point, both the rq and the icosq are disabled */
+
+ err = mlx5e_health_sq_to_ready(icosq->channel, icosq->sqn);
+ if (err)
+ goto out;
+
+ mlx5e_reset_icosq_cc_pc(icosq);
+ mlx5e_free_rx_descs(rq);
+ clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state);
+ mlx5e_activate_icosq(icosq);
+ mlx5e_activate_rq(rq);
+
+ rq->stats->recover++;
+ return 0;
+out:
+ clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state);
+ return err;
+}
+
+void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq)
+{
+ struct mlx5e_priv *priv = icosq->channel->priv;
+ char err_str[MLX5E_REPORTER_PER_Q_MAX_LEN];
+ struct mlx5e_err_ctx err_ctx = {};
+
+ err_ctx.ctx = icosq;
+ err_ctx.recover = mlx5e_rx_reporter_err_icosq_cqe_recover;
+ sprintf(err_str, "ERR CQE on ICOSQ: 0x%x", icosq->sqn);
+
+ mlx5e_health_report(priv, priv->rx_reporter, err_str, &err_ctx);
+}
+
+static int mlx5e_rx_reporter_recover_from_ctx(struct mlx5e_err_ctx *err_ctx)
+{
+ return err_ctx->recover(err_ctx->ctx);
+}
+
+static int mlx5e_rx_reporter_recover(struct devlink_health_reporter *reporter,
+ void *context,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter);
+ struct mlx5e_err_ctx *err_ctx = context;
+
+ return err_ctx ? mlx5e_rx_reporter_recover_from_ctx(err_ctx) :
+ mlx5e_health_recover_channels(priv);
+}
+
static int mlx5e_rx_reporter_build_diagnose_output(struct mlx5e_rq *rq,
struct devlink_fmsg *fmsg)
{
@@ -168,9 +272,12 @@ static int mlx5e_rx_reporter_diagnose(struct devlink_health_reporter *reporter,
static const struct devlink_health_reporter_ops mlx5_rx_reporter_ops = {
.name = "rx",
+ .recover = mlx5e_rx_reporter_recover,
.diagnose = mlx5e_rx_reporter_diagnose,
};
+#define MLX5E_REPORTER_RX_GRACEFUL_PERIOD 500
+
int mlx5e_reporter_rx_create(struct mlx5e_priv *priv)
{
struct devlink *devlink = priv_to_devlink(priv->mdev);
@@ -178,7 +285,8 @@ int mlx5e_reporter_rx_create(struct mlx5e_priv *priv)
reporter = devlink_health_reporter_create(devlink,
&mlx5_rx_reporter_ops,
- 0, false, priv);
+ MLX5E_REPORTER_RX_GRACEFUL_PERIOD,
+ true, priv);
if (IS_ERR(reporter)) {
netdev_warn(priv->netdev, "Failed to create rx reporter, err = %ld\n",
PTR_ERR(reporter));
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 7dde1be49f35..430fb04ea96f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -691,8 +691,7 @@ static int mlx5e_create_rq(struct mlx5e_rq *rq,
return err;
}
-static int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state,
- int next_state)
+int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state)
{
struct mlx5_core_dev *mdev = rq->mdev;
@@ -803,7 +802,7 @@ int mlx5e_wait_for_min_rx_wqes(struct mlx5e_rq *rq, int wait_time)
return -ETIMEDOUT;
}
-static void mlx5e_free_rx_descs(struct mlx5e_rq *rq)
+void mlx5e_free_rx_descs(struct mlx5e_rq *rq)
{
__be16 wqe_ix_be;
u16 wqe_ix;
@@ -882,7 +881,7 @@ int mlx5e_open_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
return err;
}
-static void mlx5e_activate_rq(struct mlx5e_rq *rq)
+void mlx5e_activate_rq(struct mlx5e_rq *rq)
{
set_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
mlx5e_trigger_irq(&rq->channel->icosq);
@@ -897,6 +896,7 @@ void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
void mlx5e_close_rq(struct mlx5e_rq *rq)
{
cancel_work_sync(&rq->dim.work);
+ cancel_work_sync(&rq->channel->icosq.recover_work);
mlx5e_destroy_rq(rq);
mlx5e_free_rx_descs(rq);
mlx5e_free_rq(rq);
@@ -1013,6 +1013,14 @@ static int mlx5e_alloc_icosq_db(struct mlx5e_icosq *sq, int numa)
return 0;
}
+static void mlx5e_icosq_err_cqe_work(struct work_struct *recover_work)
+{
+ struct mlx5e_icosq *sq = container_of(recover_work, struct mlx5e_icosq,
+ recover_work);
+
+ mlx5e_reporter_icosq_cqe_err(sq);
+}
+
static int mlx5e_alloc_icosq(struct mlx5e_channel *c,
struct mlx5e_sq_param *param,
struct mlx5e_icosq *sq)
@@ -1035,6 +1043,8 @@ static int mlx5e_alloc_icosq(struct mlx5e_channel *c,
if (err)
goto err_sq_wq_destroy;
+ INIT_WORK(&sq->recover_work, mlx5e_icosq_err_cqe_work);
+
return 0;
err_sq_wq_destroy:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index a22b3a3db253..ce4d357188df 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -616,6 +616,8 @@ void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_REQ)) {
netdev_WARN_ONCE(cq->channel->netdev,
"Bad OP in ICOSQ CQE: 0x%x\n", get_cqe_opcode(cqe));
+ if (!test_and_set_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state))
+ queue_work(cq->channel->priv->wq, &sq->recover_work);
break;
}
do {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 3d993e2e7bea..79b3ec005f43 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -161,6 +161,7 @@ static const struct counter_desc sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cache_waive) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_congst_umr) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_recover) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_events) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_poll) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_arm) },
@@ -272,6 +273,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
s->rx_cache_waive += rq_stats->cache_waive;
s->rx_congst_umr += rq_stats->congst_umr;
s->rx_arfs_err += rq_stats->arfs_err;
+ s->rx_recover += rq_stats->recover;
s->ch_events += ch_stats->events;
s->ch_poll += ch_stats->poll;
s->ch_arm += ch_stats->arm;
@@ -1484,6 +1486,7 @@ static const struct counter_desc rq_stats_desc[] = {
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cache_waive) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, congst_umr) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_err) },
+ { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, recover) },
};
static const struct counter_desc sq_stats_desc[] = {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index a4a43613d026..ab1c3366ff7d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -167,6 +167,7 @@ struct mlx5e_sw_stats {
u64 rx_cache_waive;
u64 rx_congst_umr;
u64 rx_arfs_err;
+ u64 rx_recover;
u64 ch_events;
u64 ch_poll;
u64 ch_arm;
@@ -302,6 +303,7 @@ struct mlx5e_rq_stats {
u64 cache_waive;
u64 congst_umr;
u64 arfs_err;
+ u64 recover;
};
struct mlx5e_sq_stats {
--
2.13.6
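
With a .recover callback now registered on the rx reporter, the recovery flow can also be driven from user space through the standard devlink health commands; a hedged usage example, reusing the PCI address from the commit-message examples above:

$ devlink health show pci/0000:00:0b.0 reporter rx
$ devlink health diagnose pci/0000:00:0b.0 reporter rx
$ devlink health recover pci/0000:00:0b.0 reporter rx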

@ -0,0 +1,114 @@
From ac9174fc02907c3b322b1cba4fe37b73ae29e71b Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:55 -0400
Subject: [PATCH 031/312] [netdrv] net/mlx5e: Report and recover from rx
timeout
Message-id: <20200510145245.10054-33-ahleihel@redhat.com>
Patchwork-id: 306573
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 32/82] net/mlx5e: Report and recover from rx timeout
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
commit 32c57fb26863b48982e33aa95f3b5b23f24b1feb
Author: Aya Levin <ayal@mellanox.com>
Date: Tue Jun 25 21:42:27 2019 +0300
net/mlx5e: Report and recover from rx timeout
Add support for reporting and recovering from an RX timeout. On driver open
we post a NOP work request on the RX channels to trigger NAPI and fill up
the RX rings. In case NAPI wasn't scheduled due to a lost interrupt,
perform EQ recovery.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/en/health.h | 1 +
.../ethernet/mellanox/mlx5/core/en/reporter_rx.c | 32 ++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 1 +
3 files changed, 34 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index 8acd9dc520cf..b4a2d9be17d6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -19,6 +19,7 @@ int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg);
int mlx5e_reporter_rx_create(struct mlx5e_priv *priv);
void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq);
+void mlx5e_reporter_rx_timeout(struct mlx5e_rq *rq);
#define MLX5E_REPORTER_PER_Q_MAX_LEN 256
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
index 661de567ca6c..4e933db759b2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -115,6 +115,38 @@ void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq)
mlx5e_health_report(priv, priv->rx_reporter, err_str, &err_ctx);
}
+static int mlx5e_rx_reporter_timeout_recover(void *ctx)
+{
+ struct mlx5e_icosq *icosq;
+ struct mlx5_eq_comp *eq;
+ struct mlx5e_rq *rq;
+ int err;
+
+ rq = ctx;
+ icosq = &rq->channel->icosq;
+ eq = rq->cq.mcq.eq;
+ err = mlx5e_health_channel_eq_recover(eq, rq->channel);
+ if (err)
+ clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
+
+ return err;
+}
+
+void mlx5e_reporter_rx_timeout(struct mlx5e_rq *rq)
+{
+ struct mlx5e_icosq *icosq = &rq->channel->icosq;
+ struct mlx5e_priv *priv = rq->channel->priv;
+ char err_str[MLX5E_REPORTER_PER_Q_MAX_LEN];
+ struct mlx5e_err_ctx err_ctx = {};
+
+ err_ctx.ctx = rq;
+ err_ctx.recover = mlx5e_rx_reporter_timeout_recover;
+ sprintf(err_str, "RX timeout on channel: %d, ICOSQ: 0x%x RQ: 0x%x, CQ: 0x%x\n",
+ icosq->channel->ix, icosq->sqn, rq->rqn, rq->cq.mcq.cqn);
+
+ mlx5e_health_report(priv, priv->rx_reporter, err_str, &err_ctx);
+}
+
static int mlx5e_rx_reporter_recover_from_ctx(struct mlx5e_err_ctx *err_ctx)
{
return err_ctx->recover(err_ctx->ctx);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 430fb04ea96f..c3eba55e8a21 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -799,6 +799,7 @@ int mlx5e_wait_for_min_rx_wqes(struct mlx5e_rq *rq, int wait_time)
netdev_warn(c->netdev, "Failed to get min RX wqes on Channel[%d] RQN[0x%x] wq cur_sz(%d) min_rx_wqes(%d)\n",
c->ix, rq->rqn, mlx5e_rqwq_get_cur_sz(rq), min_wqes);
+ mlx5e_reporter_rx_timeout(rq);
return -ETIMEDOUT;
}
--
2.13.6
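
The patch above follows the driver's generic report-and-recover pattern: package the failing queue and a recover callback into an error context, then hand both to the health reporter, which logs the message and drives recovery. A minimal standalone sketch of that pattern (all my_* names are made up, not the driver's real helpers):

#include <stdio.h>

struct my_err_ctx {
	void *obj;                  /* queue that timed out             */
	int (*recover)(void *obj);  /* how to bring it back to service  */
};

static int my_timeout_recover(void *obj)
{
	(void)obj;
	/* the real driver re-checks the completion EQ and recovers it here */
	return 0;
}

static void my_health_report(const char *msg, struct my_err_ctx *ctx)
{
	printf("health: %s\n", msg);
	ctx->recover(ctx->obj);     /* the health layer drives recovery */
}

int main(void)
{
	int fake_rq = 0;
	struct my_err_ctx ctx = { .obj = &fake_rq, .recover = my_timeout_recover };

	my_health_report("RX timeout on channel 0", &ctx);
	return 0;
}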

@ -0,0 +1,181 @@
From f0e7d22454ff73e1c1ecb37f5ff11a8a5eedbf74 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:56 -0400
Subject: [PATCH 032/312] [netdrv] net/mlx5e: RX, Handle CQE with error at the
earliest stage
Message-id: <20200510145245.10054-34-ahleihel@redhat.com>
Patchwork-id: 306574
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 33/82] net/mlx5e: RX, Handle CQE with error at the earliest stage
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 0a35ab3e138296cfe192628520e4d5f3ff23e730
Author: Saeed Mahameed <saeedm@mellanox.com>
Date: Fri Jun 14 15:21:15 2019 -0700
net/mlx5e: RX, Handle CQE with error at the earliest stage
To be aligned with the MPWQE handlers, handle an RX WQE with error for
legacy RQs in the top RX handlers, just before calling skb_from_cqe().
CQE error handling is now invoked at the same stage regardless of the RQ
type or netdev mode (NIC, representor, IPoIB, etc.).
This will be useful for a downstream patch that improves error CQE
handling.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/en/health.h | 2 +
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 49 ++++++++++++----------
2 files changed, 30 insertions(+), 21 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index b4a2d9be17d6..52e9ca37cf46 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -6,6 +6,8 @@
#include "en.h"
+#define MLX5E_RX_ERR_CQE(cqe) (get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)
+
int mlx5e_reporter_tx_create(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index ce4d357188df..1c3da221ee69 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -49,6 +49,7 @@
#include "lib/clock.h"
#include "en/xdp.h"
#include "en/xsk/rx.h"
+#include "en/health.h"
static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
{
@@ -1070,11 +1071,6 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
prefetchw(va); /* xdp_frame data area */
prefetch(data);
- if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) {
- rq->stats->wqe_err++;
- return NULL;
- }
-
rcu_read_lock();
consumed = mlx5e_xdp_handle(rq, di, va, &rx_headroom, &cqe_bcnt, false);
rcu_read_unlock();
@@ -1102,11 +1098,6 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
u16 byte_cnt = cqe_bcnt - headlen;
struct sk_buff *skb;
- if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) {
- rq->stats->wqe_err++;
- return NULL;
- }
-
/* XDP is not supported in this configuration, as incoming packets
* might spread among multiple pages.
*/
@@ -1152,6 +1143,11 @@ void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi = get_frag(rq, ci);
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
+ if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
+ rq->stats->wqe_err++;
+ goto free_wqe;
+ }
+
skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
mlx5e_skb_from_cqe_linear,
mlx5e_skb_from_cqe_nonlinear,
@@ -1193,6 +1189,11 @@ void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi = get_frag(rq, ci);
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
+ if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
+ rq->stats->wqe_err++;
+ goto free_wqe;
+ }
+
skb = rq->wqe.skb_from_cqe(rq, cqe, wi, cqe_bcnt);
if (!skb) {
/* probably for XDP */
@@ -1327,7 +1328,7 @@ void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi->consumed_strides += cstrides;
- if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) {
+ if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
rq->stats->wqe_err++;
goto mpwrq_cqe_out;
}
@@ -1506,6 +1507,11 @@ void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi = get_frag(rq, ci);
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
+ if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
+ rq->stats->wqe_err++;
+ goto wq_free_wqe;
+ }
+
skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
mlx5e_skb_from_cqe_linear,
mlx5e_skb_from_cqe_nonlinear,
@@ -1541,26 +1547,27 @@ void mlx5e_ipsec_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi = get_frag(rq, ci);
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
+ if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
+ rq->stats->wqe_err++;
+ goto wq_free_wqe;
+ }
+
skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
mlx5e_skb_from_cqe_linear,
mlx5e_skb_from_cqe_nonlinear,
rq, cqe, wi, cqe_bcnt);
- if (unlikely(!skb)) {
- /* a DROP, save the page-reuse checks */
- mlx5e_free_rx_wqe(rq, wi, true);
- goto wq_cyc_pop;
- }
+ if (unlikely(!skb)) /* a DROP, save the page-reuse checks */
+ goto wq_free_wqe;
+
skb = mlx5e_ipsec_handle_rx_skb(rq->netdev, skb, &cqe_bcnt);
- if (unlikely(!skb)) {
- mlx5e_free_rx_wqe(rq, wi, true);
- goto wq_cyc_pop;
- }
+ if (unlikely(!skb))
+ goto wq_free_wqe;
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
napi_gro_receive(rq->cq.napi, skb);
+wq_free_wqe:
mlx5e_free_rx_wqe(rq, wi, true);
-wq_cyc_pop:
mlx5_wq_cyc_pop(wq);
}
--
2.13.6
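
The gist of the refactor is to test the completion for an error opcode once, at the top of every RX handler, instead of duplicating the check inside each skb_from_cqe() variant. A standalone sketch of that shape (the opcode values and names below are illustrative, not the hardware's):

#include <stdio.h>

enum { CQE_RESP_SEND = 2, CQE_RESP_ERR = 14 };

#define RX_ERR_CQE(op) ((op) != CQE_RESP_SEND)

struct rq_stats {
	unsigned long wqe_err;
};

static void handle_rx_cqe(struct rq_stats *stats, int opcode)
{
	if (RX_ERR_CQE(opcode)) {   /* early, common error check */
		stats->wqe_err++;
		return;             /* free the WQE and move on  */
	}
	/* ...build the skb and pass it up the stack... */
}

int main(void)
{
	struct rq_stats stats = { 0 };

	handle_rx_cqe(&stats, CQE_RESP_ERR);
	handle_rx_cqe(&stats, CQE_RESP_SEND);
	printf("wqe_err=%lu\n", stats.wqe_err);
	return 0;
}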

@ -0,0 +1,247 @@
From e945df9ee0cc44e01807d66995d4aa0e458a52aa Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:57 -0400
Subject: [PATCH 033/312] [netdrv] net/mlx5e: Report and recover from CQE with
error on RQ
Message-id: <20200510145245.10054-35-ahleihel@redhat.com>
Patchwork-id: 306575
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 34/82] net/mlx5e: Report and recover from CQE with error on RQ
Bugzilla: 1790198 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Bugzilla: http://bugzilla.redhat.com/1790198
Upstream: v5.4-rc1
commit 8276ea1353a4968a212f04ddf16659223e5408d9
Author: Aya Levin <ayal@mellanox.com>
Date: Wed Jun 26 23:21:40 2019 +0300
net/mlx5e: Report and recover from CQE with error on RQ
Add support for reporting and recovering from a completion error on the RQ
by setting the queue back to the ready state. Handle only errors with a
syndrome indicating that the RQ might have entered the error state and can
be recovered.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 3 +
.../net/ethernet/mellanox/mlx5/core/en/health.h | 9 +++
.../ethernet/mellanox/mlx5/core/en/reporter_rx.c | 69 ++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 +++
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 11 ++++
5 files changed, 101 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index f0ba350579ae..ada39a3f83a9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -308,6 +308,7 @@ struct mlx5e_dcbx_dp {
enum {
MLX5E_RQ_STATE_ENABLED,
+ MLX5E_RQ_STATE_RECOVERING,
MLX5E_RQ_STATE_AM,
MLX5E_RQ_STATE_NO_CSUM_COMPLETE,
MLX5E_RQ_STATE_CSUM_FULL, /* cqe_csum_full hw bit is set */
@@ -680,6 +681,8 @@ struct mlx5e_rq {
struct zero_copy_allocator zca;
struct xdp_umem *umem;
+ struct work_struct recover_work;
+
/* control */
struct mlx5_wq_ctrl wq_ctrl;
__be32 mkey_be;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index 52e9ca37cf46..d3693fa547ac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -8,6 +8,14 @@
#define MLX5E_RX_ERR_CQE(cqe) (get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)
+static inline bool cqe_syndrome_needs_recover(u8 syndrome)
+{
+ return syndrome == MLX5_CQE_SYNDROME_LOCAL_LENGTH_ERR ||
+ syndrome == MLX5_CQE_SYNDROME_LOCAL_QP_OP_ERR ||
+ syndrome == MLX5_CQE_SYNDROME_LOCAL_PROT_ERR ||
+ syndrome == MLX5_CQE_SYNDROME_WR_FLUSH_ERR;
+}
+
int mlx5e_reporter_tx_create(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
@@ -21,6 +29,7 @@ int mlx5e_reporter_named_obj_nest_end(struct devlink_fmsg *fmsg);
int mlx5e_reporter_rx_create(struct mlx5e_priv *priv);
void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv);
void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq);
+void mlx5e_reporter_rq_cqe_err(struct mlx5e_rq *rq);
void mlx5e_reporter_rx_timeout(struct mlx5e_rq *rq);
#define MLX5E_REPORTER_PER_Q_MAX_LEN 256
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
index 4e933db759b2..6c72b592315b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -115,6 +115,75 @@ void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq)
mlx5e_health_report(priv, priv->rx_reporter, err_str, &err_ctx);
}
+static int mlx5e_rq_to_ready(struct mlx5e_rq *rq, int curr_state)
+{
+ struct net_device *dev = rq->netdev;
+ int err;
+
+ err = mlx5e_modify_rq_state(rq, curr_state, MLX5_RQC_STATE_RST);
+ if (err) {
+ netdev_err(dev, "Failed to move rq 0x%x to reset\n", rq->rqn);
+ return err;
+ }
+ err = mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY);
+ if (err) {
+ netdev_err(dev, "Failed to move rq 0x%x to ready\n", rq->rqn);
+ return err;
+ }
+
+ return 0;
+}
+
+static int mlx5e_rx_reporter_err_rq_cqe_recover(void *ctx)
+{
+ struct mlx5_core_dev *mdev;
+ struct net_device *dev;
+ struct mlx5e_rq *rq;
+ u8 state;
+ int err;
+
+ rq = ctx;
+ mdev = rq->mdev;
+ dev = rq->netdev;
+ err = mlx5e_query_rq_state(mdev, rq->rqn, &state);
+ if (err) {
+ netdev_err(dev, "Failed to query RQ 0x%x state. err = %d\n",
+ rq->rqn, err);
+ goto out;
+ }
+
+ if (state != MLX5_RQC_STATE_ERR)
+ goto out;
+
+ mlx5e_deactivate_rq(rq);
+ mlx5e_free_rx_descs(rq);
+
+ err = mlx5e_rq_to_ready(rq, MLX5_RQC_STATE_ERR);
+ if (err)
+ goto out;
+
+ clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
+ mlx5e_activate_rq(rq);
+ rq->stats->recover++;
+ return 0;
+out:
+ clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
+ return err;
+}
+
+void mlx5e_reporter_rq_cqe_err(struct mlx5e_rq *rq)
+{
+ struct mlx5e_priv *priv = rq->channel->priv;
+ char err_str[MLX5E_REPORTER_PER_Q_MAX_LEN];
+ struct mlx5e_err_ctx err_ctx = {};
+
+ err_ctx.ctx = rq;
+ err_ctx.recover = mlx5e_rx_reporter_err_rq_cqe_recover;
+ sprintf(err_str, "ERR CQE on RQ: 0x%x", rq->rqn);
+
+ mlx5e_health_report(priv, priv->rx_reporter, err_str, &err_ctx);
+}
+
static int mlx5e_rx_reporter_timeout_recover(void *ctx)
{
struct mlx5e_icosq *icosq;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index c3eba55e8a21..13c1151bf60c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -353,6 +353,13 @@ static void mlx5e_free_di_list(struct mlx5e_rq *rq)
kvfree(rq->wqe.di);
}
+static void mlx5e_rq_err_cqe_work(struct work_struct *recover_work)
+{
+ struct mlx5e_rq *rq = container_of(recover_work, struct mlx5e_rq, recover_work);
+
+ mlx5e_reporter_rq_cqe_err(rq);
+}
+
static int mlx5e_alloc_rq(struct mlx5e_channel *c,
struct mlx5e_params *params,
struct mlx5e_xsk_param *xsk,
@@ -389,6 +396,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
rq->stats = &c->priv->channel_stats[c->ix].xskrq;
else
rq->stats = &c->priv->channel_stats[c->ix].rq;
+ INIT_WORK(&rq->recover_work, mlx5e_rq_err_cqe_work);
rq->xdp_prog = params->xdp_prog ? bpf_prog_inc(params->xdp_prog) : NULL;
if (IS_ERR(rq->xdp_prog)) {
@@ -898,6 +906,7 @@ void mlx5e_close_rq(struct mlx5e_rq *rq)
{
cancel_work_sync(&rq->dim.work);
cancel_work_sync(&rq->channel->icosq.recover_work);
+ cancel_work_sync(&rq->recover_work);
mlx5e_destroy_rq(rq);
mlx5e_free_rx_descs(rq);
mlx5e_free_rq(rq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 1c3da221ee69..64d6ecbece80 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1131,6 +1131,15 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
return skb;
}
+static void trigger_report(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+{
+ struct mlx5_err_cqe *err_cqe = (struct mlx5_err_cqe *)cqe;
+
+ if (cqe_syndrome_needs_recover(err_cqe->syndrome) &&
+ !test_and_set_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state))
+ queue_work(rq->channel->priv->wq, &rq->recover_work);
+}
+
void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
{
struct mlx5_wq_cyc *wq = &rq->wqe.wq;
@@ -1144,6 +1153,7 @@ void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
+ trigger_report(rq, cqe);
rq->stats->wqe_err++;
goto free_wqe;
}
@@ -1329,6 +1339,7 @@ void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi->consumed_strides += cstrides;
if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
+ trigger_report(rq, cqe);
rq->stats->wqe_err++;
goto mpwrq_cqe_out;
}
--
2.13.6
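
Three pieces of the patch are worth noting: only a short list of CQE syndromes schedules recovery, a RECOVERING bit deduplicates reports until recovery completes, and recovery itself moves the RQ through ERR -> RST -> RDY. A minimal standalone sketch of that flow, with made-up names and syndrome values:

#include <stdbool.h>
#include <stdio.h>

enum rq_state { RQ_RDY, RQ_ERR, RQ_RST };

struct rq {
	enum rq_state state;
	bool recovering;        /* stands in for test_and_set_bit()   */
	unsigned long recover;  /* mirrors the new rx_recover counter */
};

static bool syndrome_needs_recover(int syndrome)
{
	/* only a few syndromes are treated as recoverable */
	return syndrome == 1 || syndrome == 2;
}

static void rq_to_ready(struct rq *rq)
{
	rq->state = RQ_RST;     /* first move the queue ERR -> RST */
	rq->state = RQ_RDY;     /* then RST -> RDY                 */
	rq->recovering = false;
	rq->recover++;
}

static void on_err_cqe(struct rq *rq, int syndrome)
{
	if (syndrome_needs_recover(syndrome) && !rq->recovering) {
		rq->recovering = true;  /* queue_work() in the real driver */
		rq_to_ready(rq);
	}
}

int main(void)
{
	struct rq rq = { .state = RQ_ERR };

	on_err_cqe(&rq, 1);
	printf("state=%d recover=%lu\n", rq.state, rq.recover);
	return 0;
}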

@ -0,0 +1,87 @@
From 34060a4ab8c1af0bac3e6a229edce9e92ddeeb43 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:58 -0400
Subject: [PATCH 034/312] [netdrv] net/mlx5: Improve functions documentation
Message-id: <20200510145245.10054-36-ahleihel@redhat.com>
Patchwork-id: 306576
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 35/82] net/mlx5: Improve functions documentation
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 866ff8f22380a49d665ed72521704844bba6de08
Author: Saeed Mahameed <saeedm@mellanox.com>
Date: Thu Aug 15 19:46:09 2019 +0000
net/mlx5: Improve functions documentation
Fix the documentation of mlx5_eq_enable/disable to clean up compiler warnings.
drivers/net/ethernet/mellanox/mlx5/core//eq.c:334:
warning: Function parameter or member 'dev' not described in 'mlx5_eq_enable'
warning: Function parameter or member 'eq' not described in 'mlx5_eq_enable'
warning: Function parameter or member 'nb' not described in 'mlx5_eq_enable'
drivers/net/ethernet/mellanox/mlx5/core//eq.c:355:
warning: Function parameter or member 'dev' not described in 'mlx5_eq_disable'
warning: Function parameter or member 'eq' not described in 'mlx5_eq_disable'
warning: Function parameter or member 'nb' not described in 'mlx5_eq_disable'
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eq.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 09d4c64b6e73..580c71cb9dfa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -324,10 +324,13 @@ create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
/**
* mlx5_eq_enable - Enable EQ for receiving EQEs
- * @dev - Device which owns the eq
- * @eq - EQ to enable
- * @nb - notifier call block
- * mlx5_eq_enable - must be called after EQ is created in device.
+ * @dev : Device which owns the eq
+ * @eq : EQ to enable
+ * @nb : Notifier call block
+ *
+ * Must be called after EQ is created in device.
+ *
+ * @return: 0 if no error
*/
int mlx5_eq_enable(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
struct notifier_block *nb)
@@ -344,11 +347,12 @@ int mlx5_eq_enable(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
EXPORT_SYMBOL(mlx5_eq_enable);
/**
- * mlx5_eq_disable - Enable EQ for receiving EQEs
- * @dev - Device which owns the eq
- * @eq - EQ to disable
- * @nb - notifier call block
- * mlx5_eq_disable - must be called before EQ is destroyed.
+ * mlx5_eq_disable - Disable EQ for receiving EQEs
+ * @dev : Device which owns the eq
+ * @eq : EQ to disable
+ * @nb : Notifier call block
+ *
+ * Must be called before EQ is destroyed.
*/
void mlx5_eq_disable(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
struct notifier_block *nb)
--
2.13.6
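
For reference, the kernel-doc layout these warnings point at uses "@param: description" lines and a "Return:" section; a minimal example with hypothetical names (not part of this patch):

/**
 * my_eq_enable - Enable an EQ for receiving EQEs
 * @dev: Device which owns the EQ
 * @eq: EQ to enable
 * @nb: Notifier call block
 *
 * Must be called after the EQ is created in the device.
 *
 * Return: 0 on success, a negative errno otherwise.
 */
int my_eq_enable(struct my_dev *dev, struct my_eq *eq, struct notifier_block *nb);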

@ -0,0 +1,56 @@
From c3fc6a1251852a166487548deb89993b88d2ca87 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:51:59 -0400
Subject: [PATCH 035/312] [include] net/mlx5: Expose IP-in-IP capability bit
Message-id: <20200510145245.10054-37-ahleihel@redhat.com>
Patchwork-id: 306578
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 36/82] net/mlx5: Expose IP-in-IP capability bit
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit caa1854735449d7afac6781679621fb9142fe810
Author: Aya Levin <ayal@mellanox.com>
Date: Thu Aug 15 19:46:14 2019 +0000
net/mlx5: Expose IP-in-IP capability bit
Expose the FW indication that it supports stateless offloads for IP-over-IP
tunneled packets. The following offloads are supported for the inner
packets: RSS, RX & TX checksum offloads, LSO and flow steering.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
include/linux/mlx5/mlx5_ifc.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 03cb1cf0e285..77c354384ce5 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -860,7 +860,9 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
u8 swp_csum[0x1];
u8 swp_lso[0x1];
u8 cqe_checksum_full[0x1];
- u8 reserved_at_24[0xc];
+ u8 reserved_at_24[0x5];
+ u8 tunnel_stateless_ip_over_ip[0x1];
+ u8 reserved_at_2a[0x6];
u8 max_vxlan_udp_ports[0x8];
u8 reserved_at_38[0x6];
u8 max_geneve_opt_len[0x1];
--
2.13.6
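
A consumer of the new bit would typically gate feature setup on it through the existing capability accessor; an illustrative check is sketched below (the surrounding function is hypothetical, only the MLX5_CAP_ETH() accessor and the field name come from the mlx5_ifc layout):

/* Hypothetical helper; checks the capability bit added by this patch. */
static bool my_ipinip_offload_supported(struct mlx5_core_dev *mdev)
{
	return MLX5_CAP_ETH(mdev, tunnel_stateless_ip_over_ip);
}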

@ -0,0 +1,249 @@
From a051b0b47d1666c94db81514fc8a5798ba552851 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:00 -0400
Subject: [PATCH 036/312] [netdrv] net/mlx5: Add per-namespace flow table
default miss action support
Message-id: <20200510145245.10054-38-ahleihel@redhat.com>
Patchwork-id: 306577
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 37/82] net/mlx5: Add per-namespace flow table default miss action support
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
Context diff due to already backported commit:
20f7b37ffc7d ("net/mlx5e: Introduce root ft concept for representors netdevs")
---> We have FS_CHAINING_CAPS instead of empty element.
commit f66ad830b11406cdff84e7d8656a0a9e34b0b606
Author: Mark Zhang <markz@mellanox.com>
Date: Mon Aug 19 14:36:24 2019 +0300
net/mlx5: Add per-namespace flow table default miss action support
Currently, all the namespaces under the same steering domain share the same
default table miss action; however, in some situations (e.g., RDMA RX)
different actions are required. This patch adds a per-namespace default
table miss action instead of using the miss action of the steering domain.
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c | 4 +-
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 73 +++++++++++++----------
drivers/net/ethernet/mellanox/mlx5/core/fs_core.h | 8 +++
3 files changed, 53 insertions(+), 32 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
index a848272a60a1..3c816e81f8d9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
@@ -226,7 +226,7 @@ static int mlx5_cmd_create_flow_table(struct mlx5_flow_root_namespace *ns,
} else {
MLX5_SET(create_flow_table_in, in,
flow_table_context.table_miss_action,
- ns->def_miss_action);
+ ft->def_miss_action);
}
break;
@@ -306,7 +306,7 @@ static int mlx5_cmd_modify_flow_table(struct mlx5_flow_root_namespace *ns,
} else {
MLX5_SET(modify_flow_table_in, in,
flow_table_context.table_miss_action,
- ns->def_miss_action);
+ ft->def_miss_action);
}
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 26d0333080e4..5ebd74d078f2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -60,7 +60,8 @@
ADD_PRIO(num_prios_val, 0, num_levels_val, {},\
__VA_ARGS__)\
-#define ADD_NS(...) {.type = FS_TYPE_NAMESPACE,\
+#define ADD_NS(def_miss_act, ...) {.type = FS_TYPE_NAMESPACE, \
+ .def_miss_action = def_miss_act,\
.children = (struct init_tree_node[]) {__VA_ARGS__},\
.ar_size = INIT_TREE_NODE_ARRAY_SIZE(__VA_ARGS__) \
}
@@ -131,33 +132,41 @@ static struct init_tree_node {
int num_leaf_prios;
int prio;
int num_levels;
+ enum mlx5_flow_table_miss_action def_miss_action;
} root_fs = {
.type = FS_TYPE_NAMESPACE,
.ar_size = 7,
- .children = (struct init_tree_node[]) {
- ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0,
- FS_CHAINING_CAPS,
- ADD_NS(ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS,
- BY_PASS_PRIO_NUM_LEVELS))),
- ADD_PRIO(0, LAG_MIN_LEVEL, 0,
- FS_CHAINING_CAPS,
- ADD_NS(ADD_MULTIPLE_PRIO(LAG_NUM_PRIOS,
- LAG_PRIO_NUM_LEVELS))),
- ADD_PRIO(0, OFFLOADS_MIN_LEVEL, 0, FS_CHAINING_CAPS,
- ADD_NS(ADD_MULTIPLE_PRIO(OFFLOADS_NUM_PRIOS, OFFLOADS_MAX_FT))),
- ADD_PRIO(0, ETHTOOL_MIN_LEVEL, 0,
- FS_CHAINING_CAPS,
- ADD_NS(ADD_MULTIPLE_PRIO(ETHTOOL_NUM_PRIOS,
- ETHTOOL_PRIO_NUM_LEVELS))),
- ADD_PRIO(0, KERNEL_MIN_LEVEL, 0, {},
- ADD_NS(ADD_MULTIPLE_PRIO(KERNEL_NIC_TC_NUM_PRIOS, KERNEL_NIC_TC_NUM_LEVELS),
- ADD_MULTIPLE_PRIO(KERNEL_NIC_NUM_PRIOS,
- KERNEL_NIC_PRIO_NUM_LEVELS))),
- ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0,
- FS_CHAINING_CAPS,
- ADD_NS(ADD_MULTIPLE_PRIO(LEFTOVERS_NUM_PRIOS, LEFTOVERS_NUM_LEVELS))),
- ADD_PRIO(0, ANCHOR_MIN_LEVEL, 0, {},
- ADD_NS(ADD_MULTIPLE_PRIO(ANCHOR_NUM_PRIOS, ANCHOR_NUM_LEVELS))),
+ .children = (struct init_tree_node[]){
+ ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0, FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS,
+ BY_PASS_PRIO_NUM_LEVELS))),
+ ADD_PRIO(0, LAG_MIN_LEVEL, 0, FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(LAG_NUM_PRIOS,
+ LAG_PRIO_NUM_LEVELS))),
+ ADD_PRIO(0, OFFLOADS_MIN_LEVEL, 0, FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(OFFLOADS_NUM_PRIOS,
+ OFFLOADS_MAX_FT))),
+ ADD_PRIO(0, ETHTOOL_MIN_LEVEL, 0, FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(ETHTOOL_NUM_PRIOS,
+ ETHTOOL_PRIO_NUM_LEVELS))),
+ ADD_PRIO(0, KERNEL_MIN_LEVEL, 0, {},
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(KERNEL_NIC_TC_NUM_PRIOS,
+ KERNEL_NIC_TC_NUM_LEVELS),
+ ADD_MULTIPLE_PRIO(KERNEL_NIC_NUM_PRIOS,
+ KERNEL_NIC_PRIO_NUM_LEVELS))),
+ ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0, FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(LEFTOVERS_NUM_PRIOS,
+ LEFTOVERS_NUM_LEVELS))),
+ ADD_PRIO(0, ANCHOR_MIN_LEVEL, 0, {},
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(ANCHOR_NUM_PRIOS,
+ ANCHOR_NUM_LEVELS))),
}
};
@@ -167,7 +176,8 @@ static struct init_tree_node egress_root_fs = {
.children = (struct init_tree_node[]) {
ADD_PRIO(0, MLX5_BY_PASS_NUM_PRIOS, 0,
FS_CHAINING_CAPS_EGRESS,
- ADD_NS(ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS,
BY_PASS_PRIO_NUM_LEVELS))),
}
};
@@ -1014,6 +1024,7 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
tree_init_node(&ft->node, del_hw_flow_table, del_sw_flow_table);
log_table_sz = ft->max_fte ? ilog2(ft->max_fte) : 0;
next_ft = find_next_chained_ft(fs_prio);
+ ft->def_miss_action = ns->def_miss_action;
err = root->cmds->create_flow_table(root, ft, log_table_sz, next_ft);
if (err)
goto free_ft;
@@ -2159,7 +2170,8 @@ static struct mlx5_flow_namespace *fs_init_namespace(struct mlx5_flow_namespace
return ns;
}
-static struct mlx5_flow_namespace *fs_create_namespace(struct fs_prio *prio)
+static struct mlx5_flow_namespace *fs_create_namespace(struct fs_prio *prio,
+ int def_miss_act)
{
struct mlx5_flow_namespace *ns;
@@ -2168,6 +2180,7 @@ static struct mlx5_flow_namespace *fs_create_namespace(struct fs_prio *prio)
return ERR_PTR(-ENOMEM);
fs_init_namespace(ns);
+ ns->def_miss_action = def_miss_act;
tree_init_node(&ns->node, NULL, del_sw_ns);
tree_add_node(&ns->node, &prio->node);
list_add_tail(&ns->node.list, &prio->node.children);
@@ -2234,7 +2247,7 @@ static int init_root_tree_recursive(struct mlx5_flow_steering *steering,
base = &fs_prio->node;
} else if (init_node->type == FS_TYPE_NAMESPACE) {
fs_get_obj(fs_prio, fs_parent_node);
- fs_ns = fs_create_namespace(fs_prio);
+ fs_ns = fs_create_namespace(fs_prio, init_node->def_miss_action);
if (IS_ERR(fs_ns))
return PTR_ERR(fs_ns);
base = &fs_ns->node;
@@ -2504,7 +2517,7 @@ static int init_rdma_rx_root_ns(struct mlx5_flow_steering *steering)
if (!steering->rdma_rx_root_ns)
return -ENOMEM;
- steering->rdma_rx_root_ns->def_miss_action =
+ steering->rdma_rx_root_ns->ns.def_miss_action =
MLX5_FLOW_TABLE_MISS_ACTION_SWITCH_DOMAIN;
/* Create single prio */
@@ -2547,7 +2560,7 @@ static int init_fdb_root_ns(struct mlx5_flow_steering *steering)
}
for (chain = 0; chain <= FDB_MAX_CHAIN; chain++) {
- ns = fs_create_namespace(maj_prio);
+ ns = fs_create_namespace(maj_prio, MLX5_FLOW_TABLE_MISS_ACTION_DEF);
if (IS_ERR(ns)) {
err = PTR_ERR(ns);
goto out_err;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index 51e1bdb49ff8..c6221ccbdddf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -171,6 +171,9 @@ struct mlx5_flow_table {
struct list_head fwd_rules;
u32 flags;
struct rhltable fgs_hash;
+#ifndef __GENKSYMS__
+ enum mlx5_flow_table_miss_action def_miss_action;
+#endif
};
struct mlx5_ft_underlay_qp {
@@ -218,6 +221,9 @@ struct fs_prio {
struct mlx5_flow_namespace {
/* parent == NULL => root ns */
struct fs_node node;
+#ifndef __GENKSYMS__
+ enum mlx5_flow_table_miss_action def_miss_action;
+#endif
};
struct mlx5_flow_group_mask {
@@ -249,7 +255,9 @@ struct mlx5_flow_root_namespace {
struct mutex chain_lock;
struct list_head underlay_qpns;
const struct mlx5_flow_cmds *cmds;
+#ifdef __GENKSYMS__
enum mlx5_flow_table_miss_action def_miss_action;
+#endif
};
int mlx5_init_fc_stats(struct mlx5_core_dev *dev);
--
2.13.6
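
The core idea is simply that each namespace now carries its own default miss action, and a flow table created under it inherits that value at creation time instead of taking the single per-domain value. A minimal standalone sketch of that inheritance (all names here are hypothetical):

#include <stdio.h>

enum miss_action { MISS_ACTION_DEF, MISS_ACTION_SWITCH_DOMAIN };

struct flow_namespace {
	enum miss_action def_miss_action;
};

struct flow_table {
	enum miss_action def_miss_action;
};

static void create_table(const struct flow_namespace *ns, struct flow_table *ft)
{
	ft->def_miss_action = ns->def_miss_action;  /* inherit from the namespace */
}

int main(void)
{
	struct flow_namespace bypass   = { MISS_ACTION_DEF };
	struct flow_namespace loopback = { MISS_ACTION_SWITCH_DOMAIN };
	struct flow_table ft1, ft2;

	create_table(&bypass, &ft1);
	create_table(&loopback, &ft2);
	printf("ft1 miss=%d ft2 miss=%d\n", ft1.def_miss_action, ft2.def_miss_action);
	return 0;
}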

@ -0,0 +1,161 @@
From d5e6f312b0c92828a91b795274a9fece4b45f953 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:01 -0400
Subject: [PATCH 037/312] [netdrv] net/mlx5: Create bypass and loopback flow
steering namespaces for RDMA RX
Message-id: <20200510145245.10054-39-ahleihel@redhat.com>
Patchwork-id: 306579
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 38/82] net/mlx5: Create bypass and loopback flow steering namespaces for RDMA RX
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit e6806e9a63a759e445383915bb9d2ec85a90aebf
Author: Mark Zhang <markz@mellanox.com>
Date: Mon Aug 19 14:36:25 2019 +0300
net/mlx5: Create bypass and loopback flow steering namespaces for RDMA RX
Use different namespaces for bypass and switchdev loopback because they
have different priorities and default table miss action requirement:
1. bypass: with multiple priorities support, and
MLX5_FLOW_TABLE_MISS_ACTION_DEF as the default table miss action;
2. switchdev loopback: with single priority support, and
MLX5_FLOW_TABLE_MISS_ACTION_SWITCH_DOMAIN as the default table miss
action.
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 49 ++++++++++++++++++-----
drivers/net/ethernet/mellanox/mlx5/core/rdma.c | 2 +-
include/linux/mlx5/fs.h | 3 ++
3 files changed, 43 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 5ebd74d078f2..495396f42153 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -182,6 +182,26 @@ static struct init_tree_node egress_root_fs = {
}
};
+#define RDMA_RX_BYPASS_PRIO 0
+#define RDMA_RX_KERNEL_PRIO 1
+static struct init_tree_node rdma_rx_root_fs = {
+ .type = FS_TYPE_NAMESPACE,
+ .ar_size = 2,
+ .children = (struct init_tree_node[]) {
+ [RDMA_RX_BYPASS_PRIO] =
+ ADD_PRIO(0, MLX5_BY_PASS_NUM_REGULAR_PRIOS, 0,
+ FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF,
+ ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_REGULAR_PRIOS,
+ BY_PASS_PRIO_NUM_LEVELS))),
+ [RDMA_RX_KERNEL_PRIO] =
+ ADD_PRIO(0, MLX5_BY_PASS_NUM_REGULAR_PRIOS + 1, 0,
+ FS_CHAINING_CAPS,
+ ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_SWITCH_DOMAIN,
+ ADD_MULTIPLE_PRIO(1, 1))),
+ }
+};
+
enum fs_i_lock_class {
FS_LOCK_GRANDPARENT,
FS_LOCK_PARENT,
@@ -2071,16 +2091,18 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
if (steering->sniffer_tx_root_ns)
return &steering->sniffer_tx_root_ns->ns;
return NULL;
- case MLX5_FLOW_NAMESPACE_RDMA_RX:
- if (steering->rdma_rx_root_ns)
- return &steering->rdma_rx_root_ns->ns;
- return NULL;
default:
break;
}
if (type == MLX5_FLOW_NAMESPACE_EGRESS) {
root_ns = steering->egress_root_ns;
+ } else if (type == MLX5_FLOW_NAMESPACE_RDMA_RX) {
+ root_ns = steering->rdma_rx_root_ns;
+ prio = RDMA_RX_BYPASS_PRIO;
+ } else if (type == MLX5_FLOW_NAMESPACE_RDMA_RX_KERNEL) {
+ root_ns = steering->rdma_rx_root_ns;
+ prio = RDMA_RX_KERNEL_PRIO;
} else { /* Must be NIC RX */
root_ns = steering->root_ns;
prio = type;
@@ -2511,18 +2533,25 @@ static int init_sniffer_rx_root_ns(struct mlx5_flow_steering *steering)
static int init_rdma_rx_root_ns(struct mlx5_flow_steering *steering)
{
- struct fs_prio *prio;
+ int err;
steering->rdma_rx_root_ns = create_root_ns(steering, FS_FT_RDMA_RX);
if (!steering->rdma_rx_root_ns)
return -ENOMEM;
- steering->rdma_rx_root_ns->ns.def_miss_action =
- MLX5_FLOW_TABLE_MISS_ACTION_SWITCH_DOMAIN;
+ err = init_root_tree(steering, &rdma_rx_root_fs,
+ &steering->rdma_rx_root_ns->ns.node);
+ if (err)
+ goto out_err;
- /* Create single prio */
- prio = fs_create_prio(&steering->rdma_rx_root_ns->ns, 0, 1);
- return PTR_ERR_OR_ZERO(prio);
+ set_prio_attrs(steering->rdma_rx_root_ns);
+
+ return 0;
+
+out_err:
+ cleanup_root_ns(steering->rdma_rx_root_ns);
+ steering->rdma_rx_root_ns = NULL;
+ return err;
}
static int init_fdb_root_ns(struct mlx5_flow_steering *steering)
{
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
index c43f7dc43cea..0fc7de4aa572 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
@@ -48,7 +48,7 @@ static int mlx5_rdma_enable_roce_steering(struct mlx5_core_dev *dev)
return -ENOMEM;
}
- ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_RDMA_RX);
+ ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_RDMA_RX_KERNEL);
if (!ns) {
mlx5_core_err(dev, "Failed to get RDMA RX namespace");
err = -EOPNOTSUPP;
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
index a008e9b63b78..948cba3389ff 100644
--- a/include/linux/mlx5/fs.h
+++ b/include/linux/mlx5/fs.h
@@ -75,6 +75,9 @@ enum mlx5_flow_namespace_type {
MLX5_FLOW_NAMESPACE_SNIFFER_TX,
MLX5_FLOW_NAMESPACE_EGRESS,
MLX5_FLOW_NAMESPACE_RDMA_RX,
+#ifndef __GENKSYMS__
+ MLX5_FLOW_NAMESPACE_RDMA_RX_KERNEL,
+#endif
};
enum {
--
2.13.6

@ -0,0 +1,262 @@
From c4bef68d1ee7d83b186a264f290c8fdbf47abdae Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:04 -0400
Subject: [PATCH 038/312] [netdrv] net/mlx5e: Add tc flower tracepoints
Message-id: <20200510145245.10054-42-ahleihel@redhat.com>
Patchwork-id: 306582
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 41/82] net/mlx5e: Add tc flower tracepoints
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
Conflicts:
- Documentation/networking/device_drivers/mellanox/mlx5.rst
Drop changes to doc file that doesn't exist in RHEL-8 tree.
commit 7a978759b4e0e7a2ad3f10cbf9077915a85ec956
Author: Dmytro Linkin <dmitrolin@mellanox.com>
Date: Thu Jun 27 10:55:02 2019 +0000
net/mlx5e: Add tc flower tracepoints
Implemented the following tracepoints:
1. Configure flower (mlx5e_configure_flower)
2. Delete flower (mlx5e_delete_flower)
3. Stats flower (mlx5e_stats_flower)
Usage example:
># cd /sys/kernel/debug/tracing
># echo mlx5:mlx5e_configure_flower >> set_event
># cat trace
...
tc-6535 [019] ...1 2672.404466: mlx5e_configure_flower: cookie=0000000067874a55 actions= REDIRECT
Added corresponding documentation in
Documentation/networking/device-driver/mellanox/mlx5.rst
Signed-off-by: Dmytro Linkin <dmitrolin@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/Makefile | 2 +-
.../mellanox/mlx5/core/diag/en_tc_tracepoint.c | 58 +++++++++++++++
.../mellanox/mlx5/core/diag/en_tc_tracepoint.h | 83 ++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 4 ++
4 files changed, 146 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.c
create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index bd2074d5eb87..3ac94d97cc24 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -35,7 +35,7 @@ mlx5_core-$(CONFIG_MLX5_EN_RXNFC) += en_fs_ethtool.o
mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o en/port_buffer.o
mlx5_core-$(CONFIG_MLX5_ESWITCH) += en_rep.o en_tc.o en/tc_tun.o lib/port_tun.o lag_mp.o \
lib/geneve.o en/tc_tun_vxlan.o en/tc_tun_gre.o \
- en/tc_tun_geneve.o
+ en/tc_tun_geneve.o diag/en_tc_tracepoint.o
#
# Core extra
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.c
new file mode 100644
index 000000000000..c5dc6c50fa87
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#define CREATE_TRACE_POINTS
+#include "en_tc_tracepoint.h"
+
+void put_ids_to_array(int *ids,
+ const struct flow_action_entry *entries,
+ unsigned int num)
+{
+ unsigned int i;
+
+ for (i = 0; i < num; i++)
+ ids[i] = entries[i].id;
+}
+
+#define NAME_SIZE 16
+
+static const char FLOWACT2STR[NUM_FLOW_ACTIONS][NAME_SIZE] = {
+ [FLOW_ACTION_ACCEPT] = "ACCEPT",
+ [FLOW_ACTION_DROP] = "DROP",
+ [FLOW_ACTION_TRAP] = "TRAP",
+ [FLOW_ACTION_GOTO] = "GOTO",
+ [FLOW_ACTION_REDIRECT] = "REDIRECT",
+ [FLOW_ACTION_MIRRED] = "MIRRED",
+ [FLOW_ACTION_VLAN_PUSH] = "VLAN_PUSH",
+ [FLOW_ACTION_VLAN_POP] = "VLAN_POP",
+ [FLOW_ACTION_VLAN_MANGLE] = "VLAN_MANGLE",
+ [FLOW_ACTION_TUNNEL_ENCAP] = "TUNNEL_ENCAP",
+ [FLOW_ACTION_TUNNEL_DECAP] = "TUNNEL_DECAP",
+ [FLOW_ACTION_MANGLE] = "MANGLE",
+ [FLOW_ACTION_ADD] = "ADD",
+ [FLOW_ACTION_CSUM] = "CSUM",
+ [FLOW_ACTION_MARK] = "MARK",
+ [FLOW_ACTION_WAKE] = "WAKE",
+ [FLOW_ACTION_QUEUE] = "QUEUE",
+ [FLOW_ACTION_SAMPLE] = "SAMPLE",
+ [FLOW_ACTION_POLICE] = "POLICE",
+ [FLOW_ACTION_CT] = "CT",
+};
+
+const char *parse_action(struct trace_seq *p,
+ int *ids,
+ unsigned int num)
+{
+ const char *ret = trace_seq_buffer_ptr(p);
+ unsigned int i;
+
+ for (i = 0; i < num; i++) {
+ if (ids[i] < NUM_FLOW_ACTIONS)
+ trace_seq_printf(p, "%s ", FLOWACT2STR[ids[i]]);
+ else
+ trace_seq_printf(p, "UNKNOWN ");
+ }
+
+ trace_seq_putc(p, 0);
+ return ret;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h
new file mode 100644
index 000000000000..a362100fe6d3
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM mlx5
+
+#if !defined(_MLX5_TC_TP_) || defined(TRACE_HEADER_MULTI_READ)
+#define _MLX5_TC_TP_
+
+#include <linux/tracepoint.h>
+#include <linux/trace_seq.h>
+#include <net/flow_offload.h>
+
+#define __parse_action(ids, num) parse_action(p, ids, num)
+
+void put_ids_to_array(int *ids,
+ const struct flow_action_entry *entries,
+ unsigned int num);
+
+const char *parse_action(struct trace_seq *p,
+ int *ids,
+ unsigned int num);
+
+DECLARE_EVENT_CLASS(mlx5e_flower_template,
+ TP_PROTO(const struct flow_cls_offload *f),
+ TP_ARGS(f),
+ TP_STRUCT__entry(__field(void *, cookie)
+ __field(unsigned int, num)
+ __dynamic_array(int, ids, f->rule ?
+ f->rule->action.num_entries : 0)
+ ),
+ TP_fast_assign(__entry->cookie = (void *)f->cookie;
+ __entry->num = (f->rule ?
+ f->rule->action.num_entries : 0);
+ if (__entry->num)
+ put_ids_to_array(__get_dynamic_array(ids),
+ f->rule->action.entries,
+ f->rule->action.num_entries);
+ ),
+ TP_printk("cookie=%p actions= %s\n",
+ __entry->cookie, __entry->num ?
+ __parse_action(__get_dynamic_array(ids),
+ __entry->num) : "NULL"
+ )
+);
+
+DEFINE_EVENT(mlx5e_flower_template, mlx5e_configure_flower,
+ TP_PROTO(const struct flow_cls_offload *f),
+ TP_ARGS(f)
+ );
+
+DEFINE_EVENT(mlx5e_flower_template, mlx5e_delete_flower,
+ TP_PROTO(const struct flow_cls_offload *f),
+ TP_ARGS(f)
+ );
+
+TRACE_EVENT(mlx5e_stats_flower,
+ TP_PROTO(const struct flow_cls_offload *f),
+ TP_ARGS(f),
+ TP_STRUCT__entry(__field(void *, cookie)
+ __field(u64, bytes)
+ __field(u64, packets)
+ __field(u64, lastused)
+ ),
+ TP_fast_assign(__entry->cookie = (void *)f->cookie;
+ __entry->bytes = f->stats.bytes;
+ __entry->packets = f->stats.pkts;
+ __entry->lastused = f->stats.lastused;
+ ),
+ TP_printk("cookie=%p bytes=%llu packets=%llu lastused=%llu\n",
+ __entry->cookie, __entry->bytes,
+ __entry->packets, __entry->lastused
+ )
+);
+
+#endif /* _MLX5_TC_TP_ */
+
+/* This part must be outside protection */
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ./diag
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE en_tc_tracepoint
+#include <trace/define_trace.h>
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index d7d2151d1ef3..8d0cf434d16c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -56,6 +56,7 @@
#include "en/tc_tun.h"
#include "lib/devcom.h"
#include "lib/geneve.h"
+#include "diag/en_tc_tracepoint.h"
struct mlx5_nic_flow_attr {
u32 action;
@@ -3810,6 +3811,7 @@ int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv,
goto out;
}
+ trace_mlx5e_configure_flower(f);
err = mlx5e_tc_add_flow(priv, f, flags, dev, &flow);
if (err)
goto out;
@@ -3859,6 +3861,7 @@ int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv,
rhashtable_remove_fast(tc_ht, &flow->node, tc_ht_params);
rcu_read_unlock();
+ trace_mlx5e_delete_flower(f);
mlx5e_flow_put(priv, flow);
return 0;
@@ -3928,6 +3931,7 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
out:
flow_stats_update(&f->stats, bytes, packets, lastuse);
+ trace_mlx5e_stats_flower(f);
errout:
mlx5e_flow_put(priv, flow);
return err;
--
2.13.6

@ -0,0 +1,118 @@
From e2600e33bb83fcfb5ee3505f069d5c469e1633ef Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:05 -0400
Subject: [PATCH 039/312] [netdrv] net/mlx5e: Add trace point for neigh used
value update
Message-id: <20200510145245.10054-43-ahleihel@redhat.com>
Patchwork-id: 306583
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 42/82] net/mlx5e: Add trace point for neigh used value update
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
Conflicts:
- Documentation/networking/device_drivers/mellanox/mlx5.rst
Drop changes to doc file that doesn't exist.
commit c786fe596bede275f887f212eebee74490043b84
Author: Vlad Buslov <vladbu@mellanox.com>
Date: Tue Jun 25 22:33:15 2019 +0300
net/mlx5e: Add trace point for neigh used value update
Allow tracing the result of the neigh used-value update task that is
executed periodically on a workqueue.
Usage example:
># cd /sys/kernel/debug/tracing
># echo mlx5:mlx5e_tc_update_neigh_used_value >> set_event
># cat trace
...
kworker/u48:4-8806 [009] ...1 55117.882428: mlx5e_tc_update_neigh_used_value:
netdev: ens1f0 IPv4: 1.1.1.10 IPv6: ::ffff:1.1.1.10 neigh_used=1
Added corresponding documentation in
Documentation/networking/device-driver/mellanox/mlx5.rst
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Dmytro Linkin <dmitrolin@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../mellanox/mlx5/core/diag/en_tc_tracepoint.h | 31 ++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 2 ++
2 files changed, 33 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h
index a362100fe6d3..d4e6cfaaade3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_tc_tracepoint.h
@@ -10,6 +10,7 @@
#include <linux/tracepoint.h>
#include <linux/trace_seq.h>
#include <net/flow_offload.h>
+#include "en_rep.h"
#define __parse_action(ids, num) parse_action(p, ids, num)
@@ -73,6 +74,36 @@ TRACE_EVENT(mlx5e_stats_flower,
)
);
+TRACE_EVENT(mlx5e_tc_update_neigh_used_value,
+ TP_PROTO(const struct mlx5e_neigh_hash_entry *nhe, bool neigh_used),
+ TP_ARGS(nhe, neigh_used),
+ TP_STRUCT__entry(__string(devname, nhe->m_neigh.dev->name)
+ __array(u8, v4, 4)
+ __array(u8, v6, 16)
+ __field(bool, neigh_used)
+ ),
+ TP_fast_assign(const struct mlx5e_neigh *mn = &nhe->m_neigh;
+ struct in6_addr *pin6;
+ __be32 *p32;
+
+ __assign_str(devname, mn->dev->name);
+ __entry->neigh_used = neigh_used;
+
+ p32 = (__be32 *)__entry->v4;
+ pin6 = (struct in6_addr *)__entry->v6;
+ if (mn->family == AF_INET) {
+ *p32 = mn->dst_ip.v4;
+ ipv6_addr_set_v4mapped(*p32, pin6);
+ } else if (mn->family == AF_INET6) {
+ *pin6 = mn->dst_ip.v6;
+ }
+ ),
+ TP_printk("netdev: %s IPv4: %pI4 IPv6: %pI6c neigh_used=%d\n",
+ __get_str(devname), __entry->v4, __entry->v6,
+ __entry->neigh_used
+ )
+);
+
#endif /* _MLX5_TC_TP_ */
/* This part must be outside protection */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 8d0cf434d16c..31d71e1f0545 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1536,6 +1536,8 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe)
}
}
+ trace_mlx5e_tc_update_neigh_used_value(nhe, neigh_used);
+
if (neigh_used) {
nhe->reported_lastuse = jiffies;
--
2.13.6

@ -0,0 +1,138 @@
From 94744255e69bab4bcd94627d5255f75bc71f09e0 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:06 -0400
Subject: [PATCH 040/312] [netdrv] net/mlx5e: Add trace point for neigh update
Message-id: <20200510145245.10054-44-ahleihel@redhat.com>
Patchwork-id: 306584
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 43/82] net/mlx5e: Add trace point for neigh update
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
Conflicts:
- Documentation/networking/device_drivers/mellanox/mlx5.rst
Drop changes to doc file that doesn't exist.
commit 5970882a2510e8bffaef518a82ea207798187a93
Author: Vlad Buslov <vladbu@mellanox.com>
Date: Tue Jun 25 22:40:20 2019 +0300
net/mlx5e: Add trace point for neigh update
Allow tracing the neigh state during the neigh update task, which is
executed on a workqueue and scheduled by a neigh state-change event.
Usage example:
># cd /sys/kernel/debug/tracing
># echo mlx5:mlx5e_rep_neigh_update >> set_event
># cat trace
...
kworker/u48:7-2221 [009] ...1 1475.387435: mlx5e_rep_neigh_update:
netdev: ens1f0 MAC: 24:8a:07:9a:17:9a IPv4: 1.1.1.10 IPv6: ::ffff:1.1.1.10 neigh_connected=1
Added corresponding documentation in
Documentation/networking/device-driver/mellanox/mlx5.rst
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Dmytro Linkin <dmitrolin@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../mellanox/mlx5/core/diag/en_rep_tracepoint.h | 54 ++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 4 ++
2 files changed, 58 insertions(+)
create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/diag/en_rep_tracepoint.h
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/en_rep_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_rep_tracepoint.h
new file mode 100644
index 000000000000..1177860a2ee4
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/en_rep_tracepoint.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM mlx5
+
+#if !defined(_MLX5_EN_REP_TP_) || defined(TRACE_HEADER_MULTI_READ)
+#define _MLX5_EN_REP_TP_
+
+#include <linux/tracepoint.h>
+#include <linux/trace_seq.h>
+#include "en_rep.h"
+
+TRACE_EVENT(mlx5e_rep_neigh_update,
+ TP_PROTO(const struct mlx5e_neigh_hash_entry *nhe, const u8 *ha,
+ bool neigh_connected),
+ TP_ARGS(nhe, ha, neigh_connected),
+ TP_STRUCT__entry(__string(devname, nhe->m_neigh.dev->name)
+ __array(u8, ha, ETH_ALEN)
+ __array(u8, v4, 4)
+ __array(u8, v6, 16)
+ __field(bool, neigh_connected)
+ ),
+ TP_fast_assign(const struct mlx5e_neigh *mn = &nhe->m_neigh;
+ struct in6_addr *pin6;
+ __be32 *p32;
+
+ __assign_str(devname, mn->dev->name);
+ __entry->neigh_connected = neigh_connected;
+ memcpy(__entry->ha, ha, ETH_ALEN);
+
+ p32 = (__be32 *)__entry->v4;
+ pin6 = (struct in6_addr *)__entry->v6;
+ if (mn->family == AF_INET) {
+ *p32 = mn->dst_ip.v4;
+ ipv6_addr_set_v4mapped(*p32, pin6);
+ } else if (mn->family == AF_INET6) {
+ *pin6 = mn->dst_ip.v6;
+ }
+ ),
+ TP_printk("netdev: %s MAC: %pM IPv4: %pI4 IPv6: %pI6c neigh_connected=%d\n",
+ __get_str(devname), __entry->ha,
+ __entry->v4, __entry->v6, __entry->neigh_connected
+ )
+);
+
+#endif /* _MLX5_EN_REP_TP_ */
+
+/* This part must be outside protection */
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ./diag
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE en_rep_tracepoint
+#include <trace/define_trace.h>
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 66c8c2ace4b9..037983a8f149 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -46,6 +46,8 @@
#include "en/tc_tun.h"
#include "fs_core.h"
#include "lib/port_tun.h"
+#define CREATE_TRACE_POINTS
+#include "diag/en_rep_tracepoint.h"
#define MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE \
max(0x7, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
@@ -633,6 +635,8 @@ static void mlx5e_rep_neigh_update(struct work_struct *work)
neigh_connected = (nud_state & NUD_VALID) && !dead;
+ trace_mlx5e_rep_neigh_update(nhe, ha, neigh_connected);
+
list_for_each_entry(e, &nhe->encap_list, encap_list) {
if (!mlx5e_encap_take(e))
continue;
--
2.13.6

@ -0,0 +1,159 @@
From c77866236e272b0520fba28dd04977c85c672167 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:07 -0400
Subject: [PATCH 041/312] [netdrv] net/mlx5: Add wrappers for HyperV PCIe
operations
Message-id: <20200510145245.10054-45-ahleihel@redhat.com>
Patchwork-id: 306586
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 44/82] net/mlx5: Add wrappers for HyperV PCIe operations
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 913d14e866573350de3adede3c90cefb81944b0c
Author: Eran Ben Elisha <eranbe@mellanox.com>
Date: Thu Aug 22 05:05:47 2019 +0000
net/mlx5: Add wrappers for HyperV PCIe operations
Add wrapper functions for HyperV PCIe read / write /
block_invalidate_register operations. This will be used as an
infrastructure in the downstream patch for software communication.
This will be enabled by default if CONFIG_PCI_HYPERV_INTERFACE is set.
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/Makefile | 1 +
drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c | 64 ++++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/lib/hv.h | 22 ++++++++
3 files changed, 87 insertions(+)
create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c
create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/lib/hv.h
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 3ac94d97cc24..d14a13557c0c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -45,6 +45,7 @@ mlx5_core-$(CONFIG_MLX5_ESWITCH) += eswitch.o eswitch_offloads.o eswitch_offlo
mlx5_core-$(CONFIG_MLX5_MPFS) += lib/mpfs.o
mlx5_core-$(CONFIG_VXLAN) += lib/vxlan.o
mlx5_core-$(CONFIG_PTP_1588_CLOCK) += lib/clock.o
+mlx5_core-$(CONFIG_PCI_HYPERV_INTERFACE) += lib/hv.o
#
# Ipoib netdev
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c
new file mode 100644
index 000000000000..cf08d02703fb
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2018 Mellanox Technologies
+
+#include <linux/hyperv.h>
+#include "mlx5_core.h"
+#include "lib/hv.h"
+
+static int mlx5_hv_config_common(struct mlx5_core_dev *dev, void *buf, int len,
+ int offset, bool read)
+{
+ int rc = -EOPNOTSUPP;
+ int bytes_returned;
+ int block_id;
+
+ if (offset % HV_CONFIG_BLOCK_SIZE_MAX || len % HV_CONFIG_BLOCK_SIZE_MAX)
+ return -EINVAL;
+
+ block_id = offset / HV_CONFIG_BLOCK_SIZE_MAX;
+
+ rc = read ?
+ hyperv_read_cfg_blk(dev->pdev, buf,
+ HV_CONFIG_BLOCK_SIZE_MAX, block_id,
+ &bytes_returned) :
+ hyperv_write_cfg_blk(dev->pdev, buf,
+ HV_CONFIG_BLOCK_SIZE_MAX, block_id);
+
+ /* Make sure len bytes were read successfully */
+ if (read)
+ rc |= !(len == bytes_returned);
+
+ if (rc) {
+ mlx5_core_err(dev, "Failed to %s hv config, err = %d, len = %d, offset = %d\n",
+ read ? "read" : "write", rc, len,
+ offset);
+ return rc;
+ }
+
+ return 0;
+}
+
+int mlx5_hv_read_config(struct mlx5_core_dev *dev, void *buf, int len,
+ int offset)
+{
+ return mlx5_hv_config_common(dev, buf, len, offset, true);
+}
+
+int mlx5_hv_write_config(struct mlx5_core_dev *dev, void *buf, int len,
+ int offset)
+{
+ return mlx5_hv_config_common(dev, buf, len, offset, false);
+}
+
+int mlx5_hv_register_invalidate(struct mlx5_core_dev *dev, void *context,
+ void (*block_invalidate)(void *context,
+ u64 block_mask))
+{
+ return hyperv_reg_block_invalidate(dev->pdev, context,
+ block_invalidate);
+}
+
+void mlx5_hv_unregister_invalidate(struct mlx5_core_dev *dev)
+{
+ hyperv_reg_block_invalidate(dev->pdev, NULL, NULL);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.h
new file mode 100644
index 000000000000..f9a45573f459
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __LIB_HV_H__
+#define __LIB_HV_H__
+
+#if IS_ENABLED(CONFIG_PCI_HYPERV_INTERFACE)
+
+#include <linux/hyperv.h>
+#include <linux/mlx5/driver.h>
+
+int mlx5_hv_read_config(struct mlx5_core_dev *dev, void *buf, int len,
+ int offset);
+int mlx5_hv_write_config(struct mlx5_core_dev *dev, void *buf, int len,
+ int offset);
+int mlx5_hv_register_invalidate(struct mlx5_core_dev *dev, void *context,
+ void (*block_invalidate)(void *context,
+ u64 block_mask));
+void mlx5_hv_unregister_invalidate(struct mlx5_core_dev *dev);
+#endif
+
+#endif /* __LIB_HV_H__ */
--
2.13.6

@ -0,0 +1,70 @@
From e64a826c128582b7af72680bd51b27f44803c829 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:11 -0400
Subject: [PATCH 042/312] [netdrv] net/mlx5: Fix return code in case of hyperv
wrong size read
Message-id: <20200510145245.10054-49-ahleihel@redhat.com>
Patchwork-id: 306590
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 48/82] net/mlx5: Fix return code in case of hyperv wrong size read
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 87cade2997c9210cfeb625957e44b865a89d0c13
Author: Eran Ben Elisha <eranbe@mellanox.com>
Date: Fri Aug 23 15:34:47 2019 +0300
net/mlx5: Fix return code in case of hyperv wrong size read
The return code could be non-deterministic in case of a wrong-size read.
With this patch, if such an error occurs, rc is set to -EIO.
In addition, mlx5_hv_config_common() supports reading only
HV_CONFIG_BLOCK_SIZE_MAX bytes at a time, so return an error early on
bad input.
Fixes: 913d14e86657 ("net/mlx5: Add wrappers for HyperV PCIe operations")
Reported-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
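For illustration, a hypothetical caller showing the contract after this fix:
each call transfers exactly one HV_CONFIG_BLOCK_SIZE_MAX-sized block at a
block-aligned offset, and a short read now surfaces as -EIO rather than an
undefined value (the helper name is made up for the example):

	static int example_read_block(struct mlx5_core_dev *dev, int block_id,
				      u8 *block)
	{
		/* len must be exactly HV_CONFIG_BLOCK_SIZE_MAX and offset a
		 * multiple of it, otherwise -EINVAL is returned.
		 */
		return mlx5_hv_read_config(dev, block, HV_CONFIG_BLOCK_SIZE_MAX,
					   block_id * HV_CONFIG_BLOCK_SIZE_MAX);
	}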
---
drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c
index cf08d02703fb..583dc7e2aca8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/hv.c
@@ -12,7 +12,7 @@ static int mlx5_hv_config_common(struct mlx5_core_dev *dev, void *buf, int len,
int bytes_returned;
int block_id;
- if (offset % HV_CONFIG_BLOCK_SIZE_MAX || len % HV_CONFIG_BLOCK_SIZE_MAX)
+ if (offset % HV_CONFIG_BLOCK_SIZE_MAX || len != HV_CONFIG_BLOCK_SIZE_MAX)
return -EINVAL;
block_id = offset / HV_CONFIG_BLOCK_SIZE_MAX;
@@ -25,8 +25,8 @@ static int mlx5_hv_config_common(struct mlx5_core_dev *dev, void *buf, int len,
HV_CONFIG_BLOCK_SIZE_MAX, block_id);
/* Make sure len bytes were read successfully */
- if (read)
- rc |= !(len == bytes_returned);
+ if (read && !rc && len != bytes_returned)
+ rc = -EIO;
if (rc) {
mlx5_core_err(dev, "Failed to %s hv config, err = %d, len = %d, offset = %d\n",
--
2.13.6

@ -0,0 +1,73 @@
From d8bf00ef12e6537c0c1c10982ce05a681526a0f5 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:13 -0400
Subject: [PATCH 043/312] [netdrv] net/mlx5: Set ODP capabilities for DC
transport to max
Message-id: <20200510145245.10054-51-ahleihel@redhat.com>
Patchwork-id: 306592
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 50/82] net/mlx5: Set ODP capabilities for DC transport to max
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 00679b631eddaa0aa0ceba719fcb1f60c65da5a3
Author: Michael Guralnik <michaelgur@mellanox.com>
Date: Mon Aug 19 15:08:13 2019 +0300
net/mlx5: Set ODP capabilities for DC transport to max
In mlx5_core initialization, query max ODP capabilities for DC transport
from FW and set as current capabilities.
Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
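For illustration, a hedged consumer-side sketch: once handle_hca_cap_odp() has
raised the current DC ODP capabilities to the firmware maximum, an upper layer
can test them through the current-capability accessor (MLX5_CAP_ODP() is
assumed to be that accessor here; the field names match the mlx5_ifc change
below):

	/* Hypothetical check in a ULP such as the IB driver. */
	if (MLX5_CAP_ODP(mdev, dc_odp_caps.send) &&
	    MLX5_CAP_ODP(mdev, dc_odp_caps.srq_receive))
		mlx5_core_dbg(mdev, "ODP usable for DC send and SRQ receive\n");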
---
drivers/net/ethernet/mellanox/mlx5/core/main.c | 6 ++++++
include/linux/mlx5/mlx5_ifc.h | 4 +++-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 568d973725b6..490bd80c586a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -495,6 +495,12 @@ static int handle_hca_cap_odp(struct mlx5_core_dev *dev)
ODP_CAP_SET_MAX(dev, xrc_odp_caps.write);
ODP_CAP_SET_MAX(dev, xrc_odp_caps.read);
ODP_CAP_SET_MAX(dev, xrc_odp_caps.atomic);
+ ODP_CAP_SET_MAX(dev, dc_odp_caps.srq_receive);
+ ODP_CAP_SET_MAX(dev, dc_odp_caps.send);
+ ODP_CAP_SET_MAX(dev, dc_odp_caps.receive);
+ ODP_CAP_SET_MAX(dev, dc_odp_caps.write);
+ ODP_CAP_SET_MAX(dev, dc_odp_caps.read);
+ ODP_CAP_SET_MAX(dev, dc_odp_caps.atomic);
if (do_set)
err = set_caps(dev, set_ctx, set_sz,
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 77c354384ce5..caa0bcd9dd0f 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1019,7 +1019,9 @@ struct mlx5_ifc_odp_cap_bits {
struct mlx5_ifc_odp_per_transport_service_cap_bits xrc_odp_caps;
- u8 reserved_at_100[0x700];
+ struct mlx5_ifc_odp_per_transport_service_cap_bits dc_odp_caps;
+
+ u8 reserved_at_120[0x6E0];
};
struct mlx5_ifc_calc_op {
--
2.13.6

@ -0,0 +1,91 @@
From d4eb0855820857638058bada0a1189f24b06010b Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:16 -0400
Subject: [PATCH 044/312] [netdrv] net/mlx5e: Change function's position to a
more fitting file
Message-id: <20200510145245.10054-54-ahleihel@redhat.com>
Patchwork-id: 306594
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 53/82] net/mlx5e: Change function's position to a more fitting file
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit a49e1f31ae155d64355d0cd0e0afa5b2bc8544cd
Author: Aya Levin <ayal@mellanox.com>
Date: Thu Aug 8 16:16:28 2019 +0300
net/mlx5e: Change function's position to a more fitting file
Move function which indicates whether tunnel inner flow table is
supported from en.h to en_fs.c. It fits better right after tunnel
protocol rules definitions.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 6 ------
drivers/net/ethernet/mellanox/mlx5/core/en/fs.h | 2 ++
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c | 6 ++++++
3 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index ada39a3f83a9..35cf78134737 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -1054,12 +1054,6 @@ int mlx5e_modify_sq(struct mlx5_core_dev *mdev, u32 sqn,
void mlx5e_activate_txqsq(struct mlx5e_txqsq *sq);
void mlx5e_tx_disable_queue(struct netdev_queue *txq);
-static inline bool mlx5e_tunnel_inner_ft_supported(struct mlx5_core_dev *mdev)
-{
- return (MLX5_CAP_ETH(mdev, tunnel_stateless_gre) &&
- MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ft_field_support.inner_ip_version));
-}
-
static inline bool mlx5_tx_swp_supported(struct mlx5_core_dev *mdev)
{
return MLX5_CAP_ETH(mdev, swp) &&
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
index ca2161b42c7f..5acd982ff228 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
@@ -98,6 +98,8 @@ enum mlx5e_tunnel_types {
MLX5E_NUM_TUNNEL_TT,
};
+bool mlx5e_tunnel_inner_ft_supported(struct mlx5_core_dev *mdev);
+
/* L3/L4 traffic type classifier */
struct mlx5e_ttc_table {
struct mlx5e_flow_table ft;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index 76cc10e44080..a8340e4fb0b9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -749,6 +749,12 @@ static struct mlx5e_etype_proto ttc_tunnel_rules[] = {
},
};
+bool mlx5e_tunnel_inner_ft_supported(struct mlx5_core_dev *mdev)
+{
+ return (MLX5_CAP_ETH(mdev, tunnel_stateless_gre) &&
+ MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ft_field_support.inner_ip_version));
+}
+
static u8 mlx5e_etype_to_ipv(u16 ethertype)
{
if (ethertype == ETH_P_IP)
--
2.13.6

@ -0,0 +1,135 @@
From 7afc70d063523d563f11360b6e8174d809efd3fc Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:17 -0400
Subject: [PATCH 045/312] [netdrv] net/mlx5e: Support RSS for IP-in-IP and IPv6
tunneled packets
Message-id: <20200510145245.10054-55-ahleihel@redhat.com>
Patchwork-id: 306595
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 54/82] net/mlx5e: Support RSS for IP-in-IP and IPv6 tunneled packets
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit a795d8db2a6d3c6f80e7002dd6357e6736dad1b6
Author: Aya Levin <ayal@mellanox.com>
Date: Mon Apr 29 17:45:52 2019 +0300
net/mlx5e: Support RSS for IP-in-IP and IPv6 tunneled packets
Add support for inner header RSS on IP-in-IP and IPv6 tunneled packets.
Add rules to the steering table regarding outer IP header, with
IPv4/6->IP-in-IP. Tunneled packets with protocol numbers: 0x4 (IP-in-IP)
and 0x29 (IPv6) are RSS-ed on the inner IP header.
Separate the FW dependencies between flow table inner IP capabilities and
GRE offload support, allowing this feature even when GRE offload is not
supported. Tested with multi-stream TCP traffic tunneled with IP-in-IP.
Verified that:
Without this patch, only a single RX ring was processing the traffic.
With this patch, multiple RX rings were processing the traffic.
Verified with and without GRE offload support.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/fs.h | 4 +++
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c | 46 ++++++++++++++++++++++++-
2 files changed, 49 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
index 5acd982ff228..5aae3a7a5497 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
@@ -95,6 +95,10 @@ struct mlx5e_tirc_config {
enum mlx5e_tunnel_types {
MLX5E_TT_IPV4_GRE,
MLX5E_TT_IPV6_GRE,
+ MLX5E_TT_IPV4_IPIP,
+ MLX5E_TT_IPV6_IPIP,
+ MLX5E_TT_IPV4_IPV6,
+ MLX5E_TT_IPV6_IPV6,
MLX5E_NUM_TUNNEL_TT,
};
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index a8340e4fb0b9..b99b17957543 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -747,11 +747,52 @@ static struct mlx5e_etype_proto ttc_tunnel_rules[] = {
.etype = ETH_P_IPV6,
.proto = IPPROTO_GRE,
},
+ [MLX5E_TT_IPV4_IPIP] = {
+ .etype = ETH_P_IP,
+ .proto = IPPROTO_IPIP,
+ },
+ [MLX5E_TT_IPV6_IPIP] = {
+ .etype = ETH_P_IPV6,
+ .proto = IPPROTO_IPIP,
+ },
+ [MLX5E_TT_IPV4_IPV6] = {
+ .etype = ETH_P_IP,
+ .proto = IPPROTO_IPV6,
+ },
+ [MLX5E_TT_IPV6_IPV6] = {
+ .etype = ETH_P_IPV6,
+ .proto = IPPROTO_IPV6,
+ },
+
};
+static bool mlx5e_tunnel_proto_supported(struct mlx5_core_dev *mdev, u8 proto_type)
+{
+ switch (proto_type) {
+ case IPPROTO_GRE:
+ return MLX5_CAP_ETH(mdev, tunnel_stateless_gre);
+ case IPPROTO_IPIP:
+ case IPPROTO_IPV6:
+ return MLX5_CAP_ETH(mdev, tunnel_stateless_ip_over_ip);
+ default:
+ return false;
+ }
+}
+
+static bool mlx5e_any_tunnel_proto_supported(struct mlx5_core_dev *mdev)
+{
+ int tt;
+
+ for (tt = 0; tt < MLX5E_NUM_TUNNEL_TT; tt++) {
+ if (mlx5e_tunnel_proto_supported(mdev, ttc_tunnel_rules[tt].proto))
+ return true;
+ }
+ return false;
+}
+
bool mlx5e_tunnel_inner_ft_supported(struct mlx5_core_dev *mdev)
{
- return (MLX5_CAP_ETH(mdev, tunnel_stateless_gre) &&
+ return (mlx5e_any_tunnel_proto_supported(mdev) &&
MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ft_field_support.inner_ip_version));
}
@@ -844,6 +885,9 @@ static int mlx5e_generate_ttc_table_rules(struct mlx5e_priv *priv,
dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dest.ft = params->inner_ttc->ft.t;
for (tt = 0; tt < MLX5E_NUM_TUNNEL_TT; tt++) {
+ if (!mlx5e_tunnel_proto_supported(priv->mdev,
+ ttc_tunnel_rules[tt].proto))
+ continue;
rules[tt] = mlx5e_generate_ttc_rule(priv, ft, &dest,
ttc_tunnel_rules[tt].etype,
ttc_tunnel_rules[tt].proto);
--
2.13.6

@ -0,0 +1,100 @@
From c83f03ec3f1e21f96aadd5a4a0eb912541c08bb5 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:18 -0400
Subject: [PATCH 046/312] [netdrv] net/mlx5e: Improve stateless offload
capability check
Message-id: <20200510145245.10054-56-ahleihel@redhat.com>
Patchwork-id: 306596
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 55/82] net/mlx5e: Improve stateless offload capability check
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit e3a53bc536fc279de2ace13b8d6d54b071afb722
Author: Marina Varshaver <marinav@mellanox.com>
Date: Tue Aug 20 03:36:29 2019 +0300
net/mlx5e: Improve stateless offload capability check
Use generic function for checking tunnel stateless offload capability
instead of separate macros.
Signed-off-by: Marina Varshaver <marinav@mellanox.com>
Reviewed-by: Aya Levin <ayal@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/fs.h | 3 +++
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c | 4 ++--
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 4 ++--
3 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
index 5aae3a7a5497..68d593074f6c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
@@ -238,5 +238,8 @@ void mlx5e_disable_cvlan_filter(struct mlx5e_priv *priv);
int mlx5e_create_flow_steering(struct mlx5e_priv *priv);
void mlx5e_destroy_flow_steering(struct mlx5e_priv *priv);
+bool mlx5e_tunnel_proto_supported(struct mlx5_core_dev *mdev, u8 proto_type);
+bool mlx5e_any_tunnel_proto_supported(struct mlx5_core_dev *mdev);
+
#endif /* __MLX5E_FLOW_STEER_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index b99b17957543..15b7f0f1427c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -766,7 +766,7 @@ static struct mlx5e_etype_proto ttc_tunnel_rules[] = {
};
-static bool mlx5e_tunnel_proto_supported(struct mlx5_core_dev *mdev, u8 proto_type)
+bool mlx5e_tunnel_proto_supported(struct mlx5_core_dev *mdev, u8 proto_type)
{
switch (proto_type) {
case IPPROTO_GRE:
@@ -779,7 +779,7 @@ static bool mlx5e_tunnel_proto_supported(struct mlx5_core_dev *mdev, u8 proto_ty
}
}
-static bool mlx5e_any_tunnel_proto_supported(struct mlx5_core_dev *mdev)
+bool mlx5e_any_tunnel_proto_supported(struct mlx5_core_dev *mdev)
{
int tt;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 13c1151bf60c..afe24002987d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4851,7 +4851,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
netdev->hw_features |= NETIF_F_HW_VLAN_STAG_TX;
if (mlx5_vxlan_allowed(mdev->vxlan) || mlx5_geneve_tx_allowed(mdev) ||
- MLX5_CAP_ETH(mdev, tunnel_stateless_gre)) {
+ mlx5e_any_tunnel_proto_supported(mdev)) {
netdev->hw_enc_features |= NETIF_F_HW_CSUM;
netdev->hw_enc_features |= NETIF_F_TSO;
netdev->hw_enc_features |= NETIF_F_TSO6;
@@ -4868,7 +4868,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
NETIF_F_GSO_UDP_TUNNEL_CSUM;
}
- if (MLX5_CAP_ETH(mdev, tunnel_stateless_gre)) {
+ if (mlx5e_tunnel_proto_supported(mdev, IPPROTO_GRE)) {
netdev->hw_features |= NETIF_F_GSO_GRE |
NETIF_F_GSO_GRE_CSUM;
netdev->hw_enc_features |= NETIF_F_GSO_GRE |
--
2.13.6

@ -0,0 +1,72 @@
From 4570a8510cd01423160448fa0d0362c1b605d07f Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:19 -0400
Subject: [PATCH 047/312] [netdrv] net/mlx5e: Support TSO and TX checksum
offloads for IP-in-IP tunnels
Message-id: <20200510145245.10054-57-ahleihel@redhat.com>
Patchwork-id: 306597
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 56/82] net/mlx5e: Support TSO and TX checksum offloads for IP-in-IP tunnels
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 25948b87dda284664edeb3b3dab689df0a7dc889
Author: Marina Varshaver <marinav@mellanox.com>
Date: Tue Aug 20 04:59:11 2019 +0300
net/mlx5e: Support TSO and TX checksum offloads for IP-in-IP
tunnels
Add TX offloads support for IP-in-IP tunneled packets by reporting
the needed netdev features.
Signed-off-by: Marina Varshaver <marinav@mellanox.com>
Signed-off-by: Avihu Hagag <avihuh@mellanox.com>
Reviewed-by: Aya Levin <ayal@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index afe24002987d..7d9a526c6017 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4225,6 +4225,8 @@ static netdev_features_t mlx5e_tunnel_features_check(struct mlx5e_priv *priv,
switch (proto) {
case IPPROTO_GRE:
+ case IPPROTO_IPIP:
+ case IPPROTO_IPV6:
return features;
case IPPROTO_UDP:
udph = udp_hdr(skb);
@@ -4877,6 +4879,15 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
NETIF_F_GSO_GRE_CSUM;
}
+ if (mlx5e_tunnel_proto_supported(mdev, IPPROTO_IPIP)) {
+ netdev->hw_features |= NETIF_F_GSO_IPXIP4 |
+ NETIF_F_GSO_IPXIP6;
+ netdev->hw_enc_features |= NETIF_F_GSO_IPXIP4 |
+ NETIF_F_GSO_IPXIP6;
+ netdev->gso_partial_features |= NETIF_F_GSO_IPXIP4 |
+ NETIF_F_GSO_IPXIP6;
+ }
+
netdev->hw_features |= NETIF_F_GSO_PARTIAL;
netdev->gso_partial_features |= NETIF_F_GSO_UDP_L4;
netdev->hw_features |= NETIF_F_GSO_UDP_L4;
--
2.13.6

@ -0,0 +1,60 @@
From 3315feb7c1bc069a18103195cb16ba3d37f78adf Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:20 -0400
Subject: [PATCH 048/312] [netdrv] net/mlx5e: Remove unlikely() from WARN*()
condition
Message-id: <20200510145245.10054-58-ahleihel@redhat.com>
Patchwork-id: 306598
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 57/82] net/mlx5e: Remove unlikely() from WARN*() condition
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 7cf92ccb85554c9550bc0a8e892f68f92985024c
Author: Denis Efremov <efremov@linux.com>
Date: Thu Aug 29 19:50:17 2019 +0300
net/mlx5e: Remove unlikely() from WARN*() condition
"unlikely(WARN_ON_ONCE(x))" is excessive. WARN_ON_ONCE() already uses
unlikely() internally.
Signed-off-by: Denis Efremov <efremov@linux.com>
Cc: Boris Pismenny <borisp@mellanox.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Joe Perches <joe@perches.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: netdev@vger.kernel.org
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
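A simplified sketch of why the outer unlikely() adds nothing; this is an
illustrative approximation, not the exact kernel definition of WARN_ON_ONCE():

	/* The condition is already evaluated under unlikely() inside the
	 * macro, and the macro's result is already the hinted expression.
	 */
	#define WARN_ON_ONCE_SKETCH(cond) ({				\
		bool __ret = (cond);					\
		if (unlikely(__ret))					\
			printk(KERN_WARNING "sketch: warning hit\n");	\
		unlikely(__ret);					\
	})

	/* Hence these two fragments give the compiler the same branch hint: */
	if (unlikely(WARN_ON_ONCE(tls_ctx->netdev != netdev)))
		goto err_out;
	if (WARN_ON_ONCE(tls_ctx->netdev != netdev))
		goto err_out;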
---
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 7833ddef0427..e5222d17df35 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -408,7 +408,7 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
goto out;
tls_ctx = tls_get_ctx(skb->sk);
- if (unlikely(WARN_ON_ONCE(tls_ctx->netdev != netdev)))
+ if (WARN_ON_ONCE(tls_ctx->netdev != netdev))
goto err_out;
priv_tx = mlx5e_get_ktls_tx_priv_ctx(tls_ctx);
--
2.13.6

@ -0,0 +1,58 @@
From 91eda209ba094c859befbe379805eac57bddd123 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:22 -0400
Subject: [PATCH 049/312] [netdrv] net/mlx5: Kconfig: Fix MLX5_CORE dependency
with PCI_HYPERV_INTERFACE
Message-id: <20200510145245.10054-60-ahleihel@redhat.com>
Patchwork-id: 306600
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 59/82] net/mlx5: Kconfig: Fix MLX5_CORE dependency with PCI_HYPERV_INTERFACE
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 4057a7652b74af25ba1197689fc144cdb766f423
Author: Mao Wenan <maowenan@huawei.com>
Date: Tue Aug 27 11:12:51 2019 +0800
net/mlx5: Kconfig: Fix MLX5_CORE dependency with PCI_HYPERV_INTERFACE
When MLX5_CORE=y and PCI_HYPERV_INTERFACE=m, the errors below are found:
drivers/net/ethernet/mellanox/mlx5/core/en_main.o: In function `mlx5e_nic_enable':
en_main.c:(.text+0xb649): undefined reference to `mlx5e_hv_vhca_stats_create'
drivers/net/ethernet/mellanox/mlx5/core/en_main.o: In function `mlx5e_nic_disable':
en_main.c:(.text+0xb8c4): undefined reference to `mlx5e_hv_vhca_stats_destroy'
Fix this by making MLX5_CORE imply PCI_HYPERV_INTERFACE.
Fixes: cef35af34d6d ("net/mlx5e: Add mlx5e HV VHCA stats agent")
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index 92a561176705..ae7c28ba9f5a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -9,6 +9,7 @@ config MLX5_CORE
imply PTP_1588_CLOCK
imply VXLAN
imply MLXFW
+ imply PCI_HYPERV_INTERFACE
default n
---help---
Core driver for low level functionality of the ConnectX-4 and
--
2.13.6

@ -0,0 +1,126 @@
From 157c8134fb32202e02e283e8c9be3fcaee9d2f66 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:23 -0400
Subject: [PATCH 050/312] [netdrv] net/mlx5e: Use ipv6_stub to avoid dependency
with ipv6 being a module
Message-id: <20200510145245.10054-61-ahleihel@redhat.com>
Patchwork-id: 306601
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 60/82] net/mlx5e: Use ipv6_stub to avoid dependency with ipv6 being a module
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 5cc3a8c66dd5ab18bacef5dd54ccdbae5182e003
Author: Saeed Mahameed <saeedm@mellanox.com>
Date: Tue Aug 27 14:06:23 2019 -0700
net/mlx5e: Use ipv6_stub to avoid dependency with ipv6 being a module
mlx5 depends on the IPv6 tristate since we use IPv6's nd_tbl directly;
alternatively, we can use ipv6_stub->nd_tbl and remove the dependency.
Reported-by: Walter Harms <wharms@bfs.de>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/Kconfig | 1 -
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 23 +++++++++++++----------
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 2 +-
3 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index ae7c28ba9f5a..361c783ec9b5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -32,7 +32,6 @@ config MLX5_FPGA
config MLX5_CORE_EN
bool "Mellanox 5th generation network adapters (ConnectX series) Ethernet support"
depends on NETDEVICES && ETHERNET && INET && PCI && MLX5_CORE
- depends on IPV6=y || IPV6=n || MLX5_CORE=m
select PAGE_POOL
select DIMLIB
default n
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 037983a8f149..2681bd39eab2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -38,6 +38,7 @@
#include <net/netevent.h>
#include <net/arp.h>
#include <net/devlink.h>
+#include <net/ipv6_stubs.h>
#include "eswitch.h"
#include "en.h"
@@ -475,16 +476,18 @@ void mlx5e_remove_sqs_fwd_rules(struct mlx5e_priv *priv)
mlx5e_sqs2vport_stop(esw, rep);
}
+static unsigned long mlx5e_rep_ipv6_interval(void)
+{
+ if (IS_ENABLED(CONFIG_IPV6) && ipv6_stub->nd_tbl)
+ return NEIGH_VAR(&ipv6_stub->nd_tbl->parms, DELAY_PROBE_TIME);
+
+ return ~0UL;
+}
+
static void mlx5e_rep_neigh_update_init_interval(struct mlx5e_rep_priv *rpriv)
{
-#if IS_ENABLED(CONFIG_IPV6)
- unsigned long ipv6_interval = NEIGH_VAR(&nd_tbl.parms,
- DELAY_PROBE_TIME);
-#else
- unsigned long ipv6_interval = ~0UL;
-#endif
- unsigned long ipv4_interval = NEIGH_VAR(&arp_tbl.parms,
- DELAY_PROBE_TIME);
+ unsigned long ipv4_interval = NEIGH_VAR(&arp_tbl.parms, DELAY_PROBE_TIME);
+ unsigned long ipv6_interval = mlx5e_rep_ipv6_interval();
struct net_device *netdev = rpriv->netdev;
struct mlx5e_priv *priv = netdev_priv(netdev);
@@ -893,7 +896,7 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
case NETEVENT_NEIGH_UPDATE:
n = ptr;
#if IS_ENABLED(CONFIG_IPV6)
- if (n->tbl != &nd_tbl && n->tbl != &arp_tbl)
+ if (n->tbl != ipv6_stub->nd_tbl && n->tbl != &arp_tbl)
#else
if (n->tbl != &arp_tbl)
#endif
@@ -920,7 +923,7 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
* done per device delay prob time parameter.
*/
#if IS_ENABLED(CONFIG_IPV6)
- if (!p->dev || (p->tbl != &nd_tbl && p->tbl != &arp_tbl))
+ if (!p->dev || (p->tbl != ipv6_stub->nd_tbl && p->tbl != &arp_tbl))
#else
if (!p->dev || p->tbl != &arp_tbl)
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 31d71e1f0545..9a49ae5ac4ce 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1494,7 +1494,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe)
tbl = &arp_tbl;
#if IS_ENABLED(CONFIG_IPV6)
else if (m_neigh->family == AF_INET6)
- tbl = &nd_tbl;
+ tbl = ipv6_stub->nd_tbl;
#endif
else
return;
--
2.13.6

@ -0,0 +1,57 @@
From b2d6822ecd353c4d82679d6eee081130b40eac66 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:24 -0400
Subject: [PATCH 051/312] [netdrv] net/mlx5: Use PTR_ERR_OR_ZERO rather than
its implementation
Message-id: <20200510145245.10054-62-ahleihel@redhat.com>
Patchwork-id: 306602
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 61/82] net/mlx5: Use PTR_ERR_OR_ZERO rather than its implementation
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit a2b7189be6b5dd697c333beb91f988dfc3ca87fb
Author: zhong jiang <zhongjiang@huawei.com>
Date: Tue Sep 3 14:56:10 2019 +0800
net/mlx5: Use PTR_ERR_OR_ZERO rather than its implementation
PTR_ERR_OR_ZERO() combines if (IS_ERR(...)) with PTR_ERR(). It is better
to use it directly, hence just replace the open-coded version.
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
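The two forms below are equivalent; PTR_ERR_OR_ZERO() (from <linux/err.h>)
simply folds the IS_ERR() test and the PTR_ERR() conversion into one call, as
the diff below shows for flow->rule[0]:

	/* open-coded */
	if (IS_ERR(flow->rule[0]))
		return PTR_ERR(flow->rule[0]);
	return 0;

	/* with the helper */
	return PTR_ERR_OR_ZERO(flow->rule[0]);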
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 9a49ae5ac4ce..ac372993c9d8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -988,10 +988,7 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
&flow_act, dest, dest_ix);
mutex_unlock(&priv->fs.tc.t_lock);
- if (IS_ERR(flow->rule[0]))
- return PTR_ERR(flow->rule[0]);
-
- return 0;
+ return PTR_ERR_OR_ZERO(flow->rule[0]);
}
static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv,
--
2.13.6

@ -0,0 +1,65 @@
From 877b42f26b6e9ec1f6377f186b0312d34bcd6aac Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:25 -0400
Subject: [PATCH 052/312] [netdrv] net/mlx5e: kTLS, Remove unused function
parameter
Message-id: <20200510145245.10054-63-ahleihel@redhat.com>
Patchwork-id: 306603
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 62/82] net/mlx5e: kTLS, Remove unused function parameter
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit fa9e01c89539ec1f4efde0adc1a69a527f5ecb1e
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Mon Sep 2 12:04:35 2019 +0300
net/mlx5e: kTLS, Remove unused function parameter
The SKB parameter is no longer used in tx_post_resync_dump();
remove it.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index e5222d17df35..d195366461c9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -256,8 +256,7 @@ struct mlx5e_dump_wqe {
};
static int
-tx_post_resync_dump(struct mlx5e_txqsq *sq, struct sk_buff *skb,
- skb_frag_t *frag, u32 tisn, bool first)
+tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool first)
{
struct mlx5_wqe_ctrl_seg *cseg;
struct mlx5_wqe_data_seg *dseg;
@@ -371,8 +370,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
tx_post_resync_params(sq, priv_tx, info.rcd_sn);
for (i = 0; i < info.nr_frags; i++)
- if (tx_post_resync_dump(sq, skb, info.frags[i],
- priv_tx->tisn, !i))
+ if (tx_post_resync_dump(sq, info.frags[i], priv_tx->tisn, !i))
goto err_out;
/* If no dump WQE was sent, we need to have a fence NOP WQE before the
--
2.13.6

@ -0,0 +1,52 @@
From ec079f9d2196ec46943d99aa88a0af28e02724aa Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:26 -0400
Subject: [PATCH 053/312] [netdrv] net/mlx5: DR, Remove useless set memory to
zero use memset()
Message-id: <20200510145245.10054-64-ahleihel@redhat.com>
Patchwork-id: 306604
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 63/82] net/mlx5: DR, Remove useless set memory to zero use memset()
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit f6a8cddfb50a5d530400f10c435f420b15962800
Author: Wei Yongjun <weiyongjun1@huawei.com>
Date: Thu Sep 5 09:53:26 2019 +0000
net/mlx5: DR, Remove useless set memory to zero use memset()
The memory returned by kzalloc() has already been set to zero, so
remove the useless memset(0).
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
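A minimal fragment of the pattern being cleaned up: kzalloc() is kmalloc()
plus zeroing, so a follow-up memset(..., 0, ...) on the same buffer is
redundant (buf and size are placeholder names, not the driver's actual ones):

	buf = kzalloc(size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* memset(buf, 0, size);   <- already zeroed by kzalloc(), drop it */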
---
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
index e6c6bf4a9578..c7f10d4f8f8d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
@@ -902,7 +902,6 @@ int mlx5dr_send_ring_alloc(struct mlx5dr_domain *dmn)
goto clean_qp;
}
- memset(dmn->send_ring->buf, 0, size);
dmn->send_ring->buf_size = size;
dmn->send_ring->mr = dr_reg_mr(dmn->mdev,
--
2.13.6

@ -0,0 +1,89 @@
From 162279e737c8768b4fc24255dd3786b7012d0945 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:27 -0400
Subject: [PATCH 054/312] [netdrv] net/mlx5: DR, Remove redundant dev_name
print from err log
Message-id: <20200510145245.10054-65-ahleihel@redhat.com>
Patchwork-id: 306605
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 64/82] net/mlx5: DR, Remove redundant dev_name print from err log
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
commit 63d67f3059291e24bd7a2fa3f5eb7395442e8f90
Author: Saeed Mahameed <saeedm@mellanox.com>
Date: Thu Sep 5 12:34:36 2019 -0700
net/mlx5: DR, Remove redundant dev_name print from err log
mlx5_core_err already prints the name of the device.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../net/ethernet/mellanox/mlx5/core/steering/dr_domain.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
index 791c3674aed1..a9da961d4d2f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
@@ -72,24 +72,21 @@ static int dr_domain_init_resources(struct mlx5dr_domain *dmn)
dmn->ste_icm_pool = mlx5dr_icm_pool_create(dmn, DR_ICM_TYPE_STE);
if (!dmn->ste_icm_pool) {
- mlx5dr_err(dmn, "Couldn't get icm memory for %s\n",
- dev_name(dmn->mdev->device));
+ mlx5dr_err(dmn, "Couldn't get icm memory\n");
ret = -ENOMEM;
goto clean_uar;
}
dmn->action_icm_pool = mlx5dr_icm_pool_create(dmn, DR_ICM_TYPE_MODIFY_ACTION);
if (!dmn->action_icm_pool) {
- mlx5dr_err(dmn, "Couldn't get action icm memory for %s\n",
- dev_name(dmn->mdev->device));
+ mlx5dr_err(dmn, "Couldn't get action icm memory\n");
ret = -ENOMEM;
goto free_ste_icm_pool;
}
ret = mlx5dr_send_ring_alloc(dmn);
if (ret) {
- mlx5dr_err(dmn, "Couldn't create send-ring for %s\n",
- dev_name(dmn->mdev->device));
+ mlx5dr_err(dmn, "Couldn't create send-ring\n");
goto free_action_icm_pool;
}
@@ -312,16 +309,14 @@ mlx5dr_domain_create(struct mlx5_core_dev *mdev, enum mlx5dr_domain_type type)
dmn->info.caps.log_icm_size);
if (!dmn->info.supp_sw_steering) {
- mlx5dr_err(dmn, "SW steering not supported for %s\n",
- dev_name(mdev->device));
+ mlx5dr_err(dmn, "SW steering is not supported\n");
goto uninit_caps;
}
/* Allocate resources */
ret = dr_domain_init_resources(dmn);
if (ret) {
- mlx5dr_err(dmn, "Failed init domain resources for %s\n",
- dev_name(mdev->device));
+ mlx5dr_err(dmn, "Failed init domain resources\n");
goto uninit_caps;
}
--
2.13.6

@ -0,0 +1,108 @@
From 04f6b2f616074ee8524c017915640770c17e365a Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:28 -0400
Subject: [PATCH 055/312] [netdrv] drivers: net: Fix Kconfig indentation
Message-id: <20200510145245.10054-66-ahleihel@redhat.com>
Patchwork-id: 306606
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 65/82] drivers: net: Fix Kconfig indentation
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc1
Conflicts:
- Take mlx5 changes only.
commit 02bc5eb990597796d8e8383d1b98e540af963bf1
Author: Krzysztof Kozlowski <krzk@kernel.org>
Date: Mon Sep 23 17:52:43 2019 +0200
drivers: net: Fix Kconfig indentation
Adjust indentation from spaces to tab (+optional two spaces) as in
coding style with command like:
$ sed -e 's/^ /\t/' -i */Kconfig
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Kalle Valo <kvalo@codeaurora.org>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/Kconfig | 36 ++++++++++++-------------
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index 361c783ec9b5..6919161c8f9b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -19,15 +19,15 @@ config MLX5_ACCEL
bool
config MLX5_FPGA
- bool "Mellanox Technologies Innova support"
- depends on MLX5_CORE
+ bool "Mellanox Technologies Innova support"
+ depends on MLX5_CORE
select MLX5_ACCEL
- ---help---
- Build support for the Innova family of network cards by Mellanox
- Technologies. Innova network cards are comprised of a ConnectX chip
- and an FPGA chip on one board. If you select this option, the
- mlx5_core driver will include the Innova FPGA core and allow building
- sandbox-specific client drivers.
+ ---help---
+ Build support for the Innova family of network cards by Mellanox
+ Technologies. Innova network cards are comprised of a ConnectX chip
+ and an FPGA chip on one board. If you select this option, the
+ mlx5_core driver will include the Innova FPGA core and allow building
+ sandbox-specific client drivers.
config MLX5_CORE_EN
bool "Mellanox 5th generation network adapters (ConnectX series) Ethernet support"
@@ -57,14 +57,14 @@ config MLX5_EN_RXNFC
API.
config MLX5_MPFS
- bool "Mellanox Technologies MLX5 MPFS support"
- depends on MLX5_CORE_EN
+ bool "Mellanox Technologies MLX5 MPFS support"
+ depends on MLX5_CORE_EN
default y
- ---help---
+ ---help---
Mellanox Technologies Ethernet Multi-Physical Function Switch (MPFS)
- support in ConnectX NIC. MPFs is required for when multi-PF configuration
- is enabled to allow passing user configured unicast MAC addresses to the
- requesting PF.
+ support in ConnectX NIC. MPFs is required for when multi-PF configuration
+ is enabled to allow passing user configured unicast MAC addresses to the
+ requesting PF.
config MLX5_ESWITCH
bool "Mellanox Technologies MLX5 SRIOV E-Switch support"
@@ -72,10 +72,10 @@ config MLX5_ESWITCH
default y
---help---
Mellanox Technologies Ethernet SRIOV E-Switch support in ConnectX NIC.
- E-Switch provides internal SRIOV packet steering and switching for the
- enabled VFs and PF in two available modes:
- Legacy SRIOV mode (L2 mac vlan steering based).
- Switchdev mode (eswitch offloads).
+ E-Switch provides internal SRIOV packet steering and switching for the
+ enabled VFs and PF in two available modes:
+ Legacy SRIOV mode (L2 mac vlan steering based).
+ Switchdev mode (eswitch offloads).
config MLX5_CORE_EN_DCB
bool "Data Center Bridging (DCB) Support"
--
2.13.6

@ -0,0 +1,156 @@
From 0472c2b0a8bf58396dc7434fd8d96ce8f765f845 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:31 -0400
Subject: [PATCH 056/312] [netdrv] net/mlx5e: kTLS, Release reference on DUMPed
fragments in shutdown flow
Message-id: <20200510145245.10054-69-ahleihel@redhat.com>
Patchwork-id: 306611
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 68/82] net/mlx5e: kTLS, Release reference on DUMPed fragments in shutdown flow
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit 2c559361389b452ca23494080d0c65ab812706c1
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Wed Sep 18 13:45:38 2019 +0300
net/mlx5e: kTLS, Release reference on DUMPed fragments in shutdown flow
A call to the kTLS completion handler was missing in the TXQSQ release
flow. Add it.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../ethernet/mellanox/mlx5/core/en_accel/ktls.h | 7 +++++-
.../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 11 +++++++--
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 28 ++++++++++++----------
3 files changed, 30 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
index b7298f9ee3d3..c4c128908b6e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
@@ -86,7 +86,7 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
struct mlx5e_tx_wqe **wqe, u16 *pi);
void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
struct mlx5e_tx_wqe_info *wi,
- struct mlx5e_sq_dma *dma);
+ u32 *dma_fifo_cc);
#else
@@ -94,6 +94,11 @@ static inline void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv)
{
}
+static inline void
+mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
+ struct mlx5e_tx_wqe_info *wi,
+ u32 *dma_fifo_cc) {}
+
#endif
#endif /* __MLX5E_TLS_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index d195366461c9..90c6ce530a18 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -303,9 +303,16 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
struct mlx5e_tx_wqe_info *wi,
- struct mlx5e_sq_dma *dma)
+ u32 *dma_fifo_cc)
{
- struct mlx5e_sq_stats *stats = sq->stats;
+ struct mlx5e_sq_stats *stats;
+ struct mlx5e_sq_dma *dma;
+
+ if (!wi->resync_dump_frag)
+ return;
+
+ dma = mlx5e_dma_get(sq, (*dma_fifo_cc)++);
+ stats = sq->stats;
mlx5e_tx_dma_unmap(sq->pdev, dma);
__skb_frag_unref(wi->resync_dump_frag);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 9cc22b62d73d..001752ace7f0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -483,14 +483,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
skb = wi->skb;
if (unlikely(!skb)) {
-#ifdef CONFIG_MLX5_EN_TLS
- if (wi->resync_dump_frag) {
- struct mlx5e_sq_dma *dma =
- mlx5e_dma_get(sq, dma_fifo_cc++);
-
- mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma);
- }
-#endif
+ mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc);
sqcc += wi->num_wqebbs;
continue;
}
@@ -546,29 +539,38 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq)
{
struct mlx5e_tx_wqe_info *wi;
struct sk_buff *skb;
+ u32 dma_fifo_cc;
+ u16 sqcc;
u16 ci;
int i;
- while (sq->cc != sq->pc) {
- ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->cc);
+ sqcc = sq->cc;
+ dma_fifo_cc = sq->dma_fifo_cc;
+
+ while (sqcc != sq->pc) {
+ ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc);
wi = &sq->db.wqe_info[ci];
skb = wi->skb;
if (!skb) {
- sq->cc += wi->num_wqebbs;
+ mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc);
+ sqcc += wi->num_wqebbs;
continue;
}
for (i = 0; i < wi->num_dma; i++) {
struct mlx5e_sq_dma *dma =
- mlx5e_dma_get(sq, sq->dma_fifo_cc++);
+ mlx5e_dma_get(sq, dma_fifo_cc++);
mlx5e_tx_dma_unmap(sq->pdev, dma);
}
dev_kfree_skb_any(skb);
- sq->cc += wi->num_wqebbs;
+ sqcc += wi->num_wqebbs;
}
+
+ sq->dma_fifo_cc = dma_fifo_cc;
+ sq->cc = sqcc;
}
#ifdef CONFIG_MLX5_CORE_IPOIB
--
2.13.6

@ -0,0 +1,133 @@
From 4c84687ee7bae8c9bd1722d4159ed004a09d817d Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:32 -0400
Subject: [PATCH 057/312] [netdrv] net/mlx5e: kTLS, Size of a Dump WQE is fixed
Message-id: <20200510145245.10054-70-ahleihel@redhat.com>
Patchwork-id: 306608
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 69/82] net/mlx5e: kTLS, Size of a Dump WQE is fixed
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit 9b1fef2f23c1141c9936debe633ff16e44c6137b
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Sun Sep 1 13:53:26 2019 +0300
net/mlx5e: kTLS, Size of a Dump WQE is fixed
There is no Eth segment, so there are no dynamic inline headers.
The size of a Dump WQE is fixed; use constants and remove the
unnecessary checks.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
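A short worked computation of why the Dump WQE size is a compile-time
constant, assuming the usual mlx5 layout of 16-byte ctrl and data segments and
a 64-byte basic block (MLX5_SEND_WQE_BB):

	/* sizeof(struct mlx5e_dump_wqe) = 16 (ctrl) + 16 (data) = 32 bytes
	 * MLX5E_KTLS_DUMP_WQEBBS = DIV_ROUND_UP(32, 64) = 1
	 *
	 * Every DUMP post therefore consumes exactly one WQEBB, so the
	 * TLS room reservation shrinks from MAX_SKB_FRAGS * 2 to
	 * MAX_SKB_FRAGS * 1 WQEBBs.
	 */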
---
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h | 9 ++++++++-
.../net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 17 +++--------------
3 files changed, 12 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 182d5c5664eb..25f9dda578ac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -23,7 +23,7 @@
#define MLX5E_SQ_TLS_ROOM \
(MLX5_SEND_WQE_MAX_WQEBBS + \
MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS + \
- MAX_SKB_FRAGS * MLX5E_KTLS_MAX_DUMP_WQEBBS)
+ MAX_SKB_FRAGS * MLX5E_KTLS_DUMP_WQEBBS)
#endif
#define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start))
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
index c4c128908b6e..eb692feba4a6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
@@ -21,7 +21,14 @@
MLX5_ST_SZ_BYTES(tls_progress_params))
#define MLX5E_KTLS_PROGRESS_WQEBBS \
(DIV_ROUND_UP(MLX5E_KTLS_PROGRESS_WQE_SZ, MLX5_SEND_WQE_BB))
-#define MLX5E_KTLS_MAX_DUMP_WQEBBS 2
+
+struct mlx5e_dump_wqe {
+ struct mlx5_wqe_ctrl_seg ctrl;
+ struct mlx5_wqe_data_seg data;
+};
+
+#define MLX5E_KTLS_DUMP_WQEBBS \
+ (DIV_ROUND_UP(sizeof(struct mlx5e_dump_wqe), MLX5_SEND_WQE_BB))
enum {
MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD = 0,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 90c6ce530a18..ac54767b7d86 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -250,11 +250,6 @@ tx_post_resync_params(struct mlx5e_txqsq *sq,
mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, skip_static_post, true);
}
-struct mlx5e_dump_wqe {
- struct mlx5_wqe_ctrl_seg ctrl;
- struct mlx5_wqe_data_seg data;
-};
-
static int
tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool first)
{
@@ -262,7 +257,6 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
struct mlx5_wqe_data_seg *dseg;
struct mlx5e_dump_wqe *wqe;
dma_addr_t dma_addr = 0;
- u8 num_wqebbs;
u16 ds_cnt;
int fsz;
u16 pi;
@@ -270,7 +264,6 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
wqe = mlx5e_sq_fetch_wqe(sq, sizeof(*wqe), &pi);
ds_cnt = sizeof(*wqe) / MLX5_SEND_WQE_DS;
- num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
cseg = &wqe->ctrl;
dseg = &wqe->data;
@@ -291,12 +284,8 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
dseg->byte_count = cpu_to_be32(fsz);
mlx5e_dma_push(sq, dma_addr, fsz, MLX5E_DMA_MAP_PAGE);
- tx_fill_wi(sq, pi, num_wqebbs, frag, fsz);
- sq->pc += num_wqebbs;
-
- WARN(num_wqebbs > MLX5E_KTLS_MAX_DUMP_WQEBBS,
- "unexpected DUMP num_wqebbs, %d > %d",
- num_wqebbs, MLX5E_KTLS_MAX_DUMP_WQEBBS);
+ tx_fill_wi(sq, pi, MLX5E_KTLS_DUMP_WQEBBS, frag, fsz);
+ sq->pc += MLX5E_KTLS_DUMP_WQEBBS;
return 0;
}
@@ -368,7 +357,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
stats->tls_ooo++;
num_wqebbs = MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS +
- (info.nr_frags ? info.nr_frags * MLX5E_KTLS_MAX_DUMP_WQEBBS : 1);
+ (info.nr_frags ? info.nr_frags * MLX5E_KTLS_DUMP_WQEBBS : 1);
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
if (unlikely(contig_wqebbs_room < num_wqebbs))
--
2.13.6

@ -0,0 +1,148 @@
From 48b3c320e5d5e9ca3cef28dbcef96f5a8dca4e7b Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:33 -0400
Subject: [PATCH 058/312] [netdrv] net/mlx5e: kTLS, Save only the frag page to
release at completion
Message-id: <20200510145245.10054-71-ahleihel@redhat.com>
Patchwork-id: 306609
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 70/82] net/mlx5e: kTLS, Save only the frag page to release at completion
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit f45da3716fb2fb09e301a1b6edf200ff343dc06e
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Wed Sep 18 13:50:32 2019 +0300
net/mlx5e: kTLS, Save only the frag page to release at completion
In TX resync flow where DUMP WQEs are posted, keep a pointer to
the fragment page to unref it upon completion, instead of saving
the whole fragment.
In addition, move it the end of the arguments list in tx_fill_wi().
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 2 +-
.../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 27 +++++++++++-----------
2 files changed, 14 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 35cf78134737..25bf9f026641 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -344,7 +344,7 @@ struct mlx5e_tx_wqe_info {
u8 num_wqebbs;
u8 num_dma;
#ifdef CONFIG_MLX5_EN_TLS
- skb_frag_t *resync_dump_frag;
+ struct page *resync_dump_frag_page;
#endif
};
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index ac54767b7d86..6dfb22d705b2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -108,16 +108,15 @@ build_progress_params(struct mlx5e_tx_wqe *wqe, u16 pc, u32 sqn,
}
static void tx_fill_wi(struct mlx5e_txqsq *sq,
- u16 pi, u8 num_wqebbs,
- skb_frag_t *resync_dump_frag,
- u32 num_bytes)
+ u16 pi, u8 num_wqebbs, u32 num_bytes,
+ struct page *page)
{
struct mlx5e_tx_wqe_info *wi = &sq->db.wqe_info[pi];
- wi->skb = NULL;
- wi->num_wqebbs = num_wqebbs;
- wi->resync_dump_frag = resync_dump_frag;
- wi->num_bytes = num_bytes;
+ memset(wi, 0, sizeof(*wi));
+ wi->num_wqebbs = num_wqebbs;
+ wi->num_bytes = num_bytes;
+ wi->resync_dump_frag_page = page;
}
void mlx5e_ktls_tx_offload_set_pending(struct mlx5e_ktls_offload_context_tx *priv_tx)
@@ -145,7 +144,7 @@ post_static_params(struct mlx5e_txqsq *sq,
umr_wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_STATIC_UMR_WQE_SZ, &pi);
build_static_params(umr_wqe, sq->pc, sq->sqn, priv_tx, fence);
- tx_fill_wi(sq, pi, MLX5E_KTLS_STATIC_WQEBBS, NULL, 0);
+ tx_fill_wi(sq, pi, MLX5E_KTLS_STATIC_WQEBBS, 0, NULL);
sq->pc += MLX5E_KTLS_STATIC_WQEBBS;
}
@@ -159,7 +158,7 @@ post_progress_params(struct mlx5e_txqsq *sq,
wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_PROGRESS_WQE_SZ, &pi);
build_progress_params(wqe, sq->pc, sq->sqn, priv_tx, fence);
- tx_fill_wi(sq, pi, MLX5E_KTLS_PROGRESS_WQEBBS, NULL, 0);
+ tx_fill_wi(sq, pi, MLX5E_KTLS_PROGRESS_WQEBBS, 0, NULL);
sq->pc += MLX5E_KTLS_PROGRESS_WQEBBS;
}
@@ -211,7 +210,7 @@ static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
while (remaining > 0) {
skb_frag_t *frag = &record->frags[i];
- __skb_frag_ref(frag);
+ get_page(skb_frag_page(frag));
remaining -= skb_frag_size(frag);
info->frags[i++] = frag;
}
@@ -284,7 +283,7 @@ tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool fir
dseg->byte_count = cpu_to_be32(fsz);
mlx5e_dma_push(sq, dma_addr, fsz, MLX5E_DMA_MAP_PAGE);
- tx_fill_wi(sq, pi, MLX5E_KTLS_DUMP_WQEBBS, frag, fsz);
+ tx_fill_wi(sq, pi, MLX5E_KTLS_DUMP_WQEBBS, fsz, skb_frag_page(frag));
sq->pc += MLX5E_KTLS_DUMP_WQEBBS;
return 0;
@@ -297,14 +296,14 @@ void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
struct mlx5e_sq_stats *stats;
struct mlx5e_sq_dma *dma;
- if (!wi->resync_dump_frag)
+ if (!wi->resync_dump_frag_page)
return;
dma = mlx5e_dma_get(sq, (*dma_fifo_cc)++);
stats = sq->stats;
mlx5e_tx_dma_unmap(sq->pdev, dma);
- __skb_frag_unref(wi->resync_dump_frag);
+ put_page(wi->resync_dump_frag_page);
stats->tls_dump_packets++;
stats->tls_dump_bytes += wi->num_bytes;
}
@@ -314,7 +313,7 @@ static void tx_post_fence_nop(struct mlx5e_txqsq *sq)
struct mlx5_wq_cyc *wq = &sq->wq;
u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
- tx_fill_wi(sq, pi, 1, NULL, 0);
+ tx_fill_wi(sq, pi, 1, 0, NULL);
mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc);
}
--
2.13.6
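
A minimal standalone C sketch of the bookkeeping change described in the patch
above: the per-WQE info keeps only the object the completion handler must
release (the page), and the struct is zeroed with one memset() so any field
added later starts out NULL/0. Types and names here are illustrative, not the
driver's:

#include <string.h>
#include <stdio.h>

struct page;                            /* opaque stand-in for the kernel's struct page */

struct wqe_info {
        unsigned char num_wqebbs;
        unsigned int num_bytes;
        struct page *resync_dump_frag_page;
        /* any other field starts out zero thanks to the memset() below */
};

static void fill_wi(struct wqe_info *wi, unsigned char num_wqebbs,
                    unsigned int num_bytes, struct page *page)
{
        memset(wi, 0, sizeof(*wi));
        wi->num_wqebbs = num_wqebbs;
        wi->num_bytes = num_bytes;
        wi->resync_dump_frag_page = page;   /* the only thing completion must release */
}

int main(void)
{
        struct wqe_info wi;

        fill_wi(&wi, 1, 0, NULL);           /* e.g. a fence NOP: nothing to release */
        printf("wqebbs=%u bytes=%u page=%p\n",
               wi.num_wqebbs, wi.num_bytes, (void *)wi.resync_dump_frag_page);
        return 0;
}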

@ -0,0 +1,79 @@
From caac2c9de56837381f547ae1c0d9d180f1a1546c Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:34 -0400
Subject: [PATCH 059/312] [netdrv] net/mlx5e: kTLS, Save by-value copy of the
record frags
Message-id: <20200510145245.10054-72-ahleihel@redhat.com>
Patchwork-id: 306613
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 71/82] net/mlx5e: kTLS, Save by-value copy of the record frags
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit 310d9b9d37220b590909e90e724fc5f346a98775
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Wed Sep 18 13:57:40 2019 +0300
net/mlx5e: kTLS, Save by-value copy of the record frags
Access the record fragments only under the TLS ctx lock.
In the resync flow, save a copy of them to be used when
preparing and posting the required DUMP WQEs.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 6dfb22d705b2..334808b1863b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -179,7 +179,7 @@ struct tx_sync_info {
u64 rcd_sn;
s32 sync_len;
int nr_frags;
- skb_frag_t *frags[MAX_SKB_FRAGS];
+ skb_frag_t frags[MAX_SKB_FRAGS];
};
static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
@@ -212,11 +212,11 @@ static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
get_page(skb_frag_page(frag));
remaining -= skb_frag_size(frag);
- info->frags[i++] = frag;
+ info->frags[i++] = *frag;
}
/* reduce the part which will be sent with the original SKB */
if (remaining < 0)
- skb_frag_size_add(info->frags[i - 1], remaining);
+ skb_frag_size_add(&info->frags[i - 1], remaining);
info->nr_frags = i;
out:
spin_unlock_irqrestore(&tx_ctx->lock, flags);
@@ -365,7 +365,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
tx_post_resync_params(sq, priv_tx, info.rcd_sn);
for (i = 0; i < info.nr_frags; i++)
- if (tx_post_resync_dump(sq, info.frags[i], priv_tx->tisn, !i))
+ if (tx_post_resync_dump(sq, &info.frags[i], priv_tx->tisn, !i))
goto err_out;
/* If no dump WQE was sent, we need to have a fence NOP WQE before the
--
2.13.6
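
The copy-under-lock pattern used by the patch above can be sketched in
standalone C: instead of keeping pointers into a structure that is only stable
while the lock is held, copy the small descriptors by value and work on the
copies after the lock is dropped. The types and helper names below are made up
for illustration:

#include <pthread.h>
#include <stdio.h>

struct frag { void *page; unsigned int off, size; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct frag shared[4];

static int snapshot(struct frag *out, int max)
{
        int n;

        pthread_mutex_lock(&lock);
        for (n = 0; n < max; n++)
                out[n] = shared[n];     /* by-value copy, like info->frags[i] = *frag */
        pthread_mutex_unlock(&lock);
        return n;
}

int main(void)
{
        struct frag local[4];
        int n = snapshot(local, 4);

        /* 'local' can now be used without holding the lock */
        printf("copied %d frag descriptors\n", n);
        return 0;
}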

@ -0,0 +1,78 @@
From dcc63af43b8f506960083abc7aa249415234c31b Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:35 -0400
Subject: [PATCH 060/312] [netdrv] net/mlx5e: kTLS, Fix page refcnt leak in TX
resync error flow
Message-id: <20200510145245.10054-73-ahleihel@redhat.com>
Patchwork-id: 306612
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 72/82] net/mlx5e: kTLS, Fix page refcnt leak in TX resync error flow
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit b61b24bd135a7775a2839863bd1d58a462a5f1e5
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Wed Sep 18 13:57:40 2019 +0300
net/mlx5e: kTLS, Fix page refcnt leak in TX resync error flow
All references for frag pages that are obtained in tx_sync_info_get()
should be released.
Release usually occurs in the corresponding CQE of the WQE.
In error flows, not all fragments have a WQE posted for them, hence
no matching CQE will be generated.
For these pages, release the reference in the error flow.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 334808b1863b..5f1d18fb644e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -329,7 +329,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
struct tx_sync_info info = {};
u16 contig_wqebbs_room, pi;
u8 num_wqebbs;
- int i;
+ int i = 0;
if (!tx_sync_info_get(priv_tx, seq, &info)) {
/* We might get here if a retransmission reaches the driver
@@ -364,7 +364,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
tx_post_resync_params(sq, priv_tx, info.rcd_sn);
- for (i = 0; i < info.nr_frags; i++)
+ for (; i < info.nr_frags; i++)
if (tx_post_resync_dump(sq, &info.frags[i], priv_tx->tisn, !i))
goto err_out;
@@ -377,6 +377,9 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
return skb;
err_out:
+ for (; i < info.nr_frags; i++)
+ put_page(skb_frag_page(&info.frags[i]));
+
dev_kfree_skb_any(skb);
return NULL;
}
--
2.13.6
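
A small user-space sketch of the error-flow reference accounting described
above, with made-up names (refcnt, post_dump_wqe) rather than the driver's:
one reference is taken per fragment up front, and on a mid-loop failure the
references of the fragments that never got a WQE posted (index i onwards) are
dropped, since no completion will ever do it for them:

#include <stdio.h>

#define NR_FRAGS 4

static int refcnt[NR_FRAGS];

static int post_dump_wqe(int idx)
{
        return idx == 2 ? -1 : 0;       /* pretend the third fragment fails to post */
}

int main(void)
{
        int i;

        for (i = 0; i < NR_FRAGS; i++)
                refcnt[i]++;            /* like get_page() in tx_sync_info_get() */

        for (i = 0; i < NR_FRAGS; i++)
                if (post_dump_wqe(i))
                        goto err_out;   /* frags i..NR_FRAGS-1 get no WQE, so no CQE */

        return 0;

err_out:
        for (; i < NR_FRAGS; i++) {
                refcnt[i]--;            /* like put_page() in the error flow above */
                printf("released ref for frag %d\n", i);
        }
        return 1;
}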

@ -0,0 +1,99 @@
From 16c3d368f72223cdfc308be9d40852d1d3cea81b Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:36 -0400
Subject: [PATCH 061/312] [netdrv] net/mlx5e: kTLS, Fix missing SQ edge fill
Message-id: <20200510145245.10054-74-ahleihel@redhat.com>
Patchwork-id: 306614
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 73/82] net/mlx5e: kTLS, Fix missing SQ edge fill
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit 700ec497424069fa4d8f3715759c4aaec016e840
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Mon Oct 7 13:59:11 2019 +0300
net/mlx5e: kTLS, Fix missing SQ edge fill
Before posting the context params WQEs, make sure there is enough
contiguous room for them, and fill frag edge if needed.
When posting only a nop, no need for room check, as it needs a single
WQEBB, meaning no contiguity issue.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 28 +++++++++++++++-------
1 file changed, 20 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 5f1d18fb644e..59e3f48470d9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -168,6 +168,14 @@ mlx5e_ktls_tx_post_param_wqes(struct mlx5e_txqsq *sq,
bool skip_static_post, bool fence_first_post)
{
bool progress_fence = skip_static_post || !fence_first_post;
+ struct mlx5_wq_cyc *wq = &sq->wq;
+ u16 contig_wqebbs_room, pi;
+
+ pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
+ if (unlikely(contig_wqebbs_room <
+ MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS))
+ mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
if (!skip_static_post)
post_static_params(sq, priv_tx, fence_first_post);
@@ -355,10 +363,20 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
stats->tls_ooo++;
- num_wqebbs = MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS +
- (info.nr_frags ? info.nr_frags * MLX5E_KTLS_DUMP_WQEBBS : 1);
+ tx_post_resync_params(sq, priv_tx, info.rcd_sn);
+
+ /* If no dump WQE was sent, we need to have a fence NOP WQE before the
+ * actual data xmit.
+ */
+ if (!info.nr_frags) {
+ tx_post_fence_nop(sq);
+ return skb;
+ }
+
+ num_wqebbs = info.nr_frags * MLX5E_KTLS_DUMP_WQEBBS;
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
+
if (unlikely(contig_wqebbs_room < num_wqebbs))
mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
@@ -368,12 +386,6 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
if (tx_post_resync_dump(sq, &info.frags[i], priv_tx->tisn, !i))
goto err_out;
- /* If no dump WQE was sent, we need to have a fence NOP WQE before the
- * actual data xmit.
- */
- if (!info.nr_frags)
- tx_post_fence_nop(sq);
-
return skb;
err_out:
--
2.13.6
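
The "fill to the edge" idea from the patch above can be illustrated with a
small cyclic-queue sketch: if the contiguous room before the wrap point is
smaller than the WQE about to be posted, pad the remainder with NOPs first so
the WQE never straddles the wrap. All names and sizes below are hypothetical:

#include <stdio.h>

#define WQ_SZ 16                        /* entries, power of two */

static unsigned int pc;                 /* free-running producer counter */

static unsigned int contig_room(unsigned int pi)
{
        return WQ_SZ - pi;              /* slots left before the wrap point */
}

static void post(unsigned int nbbs, const char *what)
{
        unsigned int pi = pc & (WQ_SZ - 1);

        if (contig_room(pi) < nbbs) {
                printf("fill edge with %u NOPs at pi=%u\n", contig_room(pi), pi);
                pc += contig_room(pi);  /* like mlx5e_fill_sq_frag_edge() */
                pi = pc & (WQ_SZ - 1);
        }
        printf("post %s: %u WQEBBs at pi=%u\n", what, nbbs, pi);
        pc += nbbs;
}

int main(void)
{
        pc = 14;                        /* two slots before the wrap point */
        post(3, "static + progress params");   /* forces an edge fill */
        return 0;
}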

@ -0,0 +1,195 @@
From 60eadaf04867375c4fc1dddc16aa6bd274efdc67 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:37 -0400
Subject: [PATCH 062/312] [netdrv] net/mlx5e: kTLS, Limit DUMP wqe size
Message-id: <20200510145245.10054-75-ahleihel@redhat.com>
Patchwork-id: 306616
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 74/82] net/mlx5e: kTLS, Limit DUMP wqe size
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit 84d1bb2b139e0184b1754aa1b5776186b475fce8
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Mon Oct 7 14:01:29 2019 +0300
net/mlx5e: kTLS, Limit DUMP wqe size
HW expects the data size in DUMP WQEs to be up to MTU.
Make sure they are in range.
We elevate the frag page refcount by 'n-1', in addition to the
one obtained in tx_sync_info_get(), having an overall of 'n'
    references. We bulk the increments by using a single page_ref_add()
    call, to optimize performance.
The refcounts are released one by one, by the corresponding completions.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 1 +
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 11 ++++---
.../ethernet/mellanox/mlx5/core/en_accel/ktls.h | 11 ++++++-
.../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 34 +++++++++++++++++++---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 7 ++++-
5 files changed, 52 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 25bf9f026641..319797f42105 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -409,6 +409,7 @@ struct mlx5e_txqsq {
struct device *pdev;
__be32 mkey_be;
unsigned long state;
+ unsigned int hw_mtu;
struct hwtstamp_config *tstamp;
struct mlx5_clock *clock;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 25f9dda578ac..7c8796d9743f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -15,15 +15,14 @@
#else
/* TLS offload requires additional stop_room for:
* - a resync SKB.
- * kTLS offload requires additional stop_room for:
- * - static params WQE,
- * - progress params WQE, and
- * - resync DUMP per frag.
+ * kTLS offload requires fixed additional stop_room for:
+ * - a static params WQE, and a progress params WQE.
+ * The additional MTU-depending room for the resync DUMP WQEs
+ * will be calculated and added in runtime.
*/
#define MLX5E_SQ_TLS_ROOM \
(MLX5_SEND_WQE_MAX_WQEBBS + \
- MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS + \
- MAX_SKB_FRAGS * MLX5E_KTLS_DUMP_WQEBBS)
+ MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS)
#endif
#define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start))
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
index eb692feba4a6..929966e6fbc4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
@@ -94,7 +94,16 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
struct mlx5e_tx_wqe_info *wi,
u32 *dma_fifo_cc);
-
+static inline u8
+mlx5e_ktls_dumps_num_wqebbs(struct mlx5e_txqsq *sq, unsigned int nfrags,
+ unsigned int sync_len)
+{
+ /* Given the MTU and sync_len, calculates an upper bound for the
+ * number of WQEBBs needed for the TX resync DUMP WQEs of a record.
+ */
+ return MLX5E_KTLS_DUMP_WQEBBS *
+ (nfrags + DIV_ROUND_UP(sync_len, sq->hw_mtu));
+}
#else
static inline void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 59e3f48470d9..e10b0bb696da 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -373,7 +373,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
return skb;
}
- num_wqebbs = info.nr_frags * MLX5E_KTLS_DUMP_WQEBBS;
+ num_wqebbs = mlx5e_ktls_dumps_num_wqebbs(sq, info.nr_frags, info.sync_len);
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
@@ -382,14 +382,40 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
tx_post_resync_params(sq, priv_tx, info.rcd_sn);
- for (; i < info.nr_frags; i++)
- if (tx_post_resync_dump(sq, &info.frags[i], priv_tx->tisn, !i))
- goto err_out;
+ for (; i < info.nr_frags; i++) {
+ unsigned int orig_fsz, frag_offset = 0, n = 0;
+ skb_frag_t *f = &info.frags[i];
+
+ orig_fsz = skb_frag_size(f);
+
+ do {
+ bool fence = !(i || frag_offset);
+ unsigned int fsz;
+
+ n++;
+ fsz = min_t(unsigned int, sq->hw_mtu, orig_fsz - frag_offset);
+ skb_frag_size_set(f, fsz);
+ if (tx_post_resync_dump(sq, f, priv_tx->tisn, fence)) {
+ page_ref_add(skb_frag_page(f), n - 1);
+ goto err_out;
+ }
+
+ skb_frag_off_add(f, fsz);
+ frag_offset += fsz;
+ } while (frag_offset < orig_fsz);
+
+ page_ref_add(skb_frag_page(f), n - 1);
+ }
return skb;
err_out:
for (; i < info.nr_frags; i++)
+ /* The put_page() here undoes the page ref obtained in tx_sync_info_get().
+ * Page refs obtained for the DUMP WQEs above (by page_ref_add) will be
+ * released only upon their completions (or in mlx5e_free_txqsq_descs,
+ * if channel closes).
+ */
put_page(skb_frag_page(&info.frags[i]));
dev_kfree_skb_any(skb);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 7d9a526c6017..7cd3ac6a23a8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1118,6 +1118,7 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
sq->txq_ix = txq_ix;
sq->uar_map = mdev->mlx5e_res.bfreg.map;
sq->min_inline_mode = params->tx_min_inline_mode;
+ sq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
sq->stats = &c->priv->channel_stats[c->ix].sq[tc];
sq->stop_room = MLX5E_SQ_STOP_ROOM;
INIT_WORK(&sq->recover_work, mlx5e_tx_err_cqe_work);
@@ -1125,10 +1126,14 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
set_bit(MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE, &sq->state);
if (MLX5_IPSEC_DEV(c->priv->mdev))
set_bit(MLX5E_SQ_STATE_IPSEC, &sq->state);
+#ifdef CONFIG_MLX5_EN_TLS
if (mlx5_accel_is_tls_device(c->priv->mdev)) {
set_bit(MLX5E_SQ_STATE_TLS, &sq->state);
- sq->stop_room += MLX5E_SQ_TLS_ROOM;
+ sq->stop_room += MLX5E_SQ_TLS_ROOM +
+ mlx5e_ktls_dumps_num_wqebbs(sq, MAX_SKB_FRAGS,
+ TLS_MAX_PAYLOAD_SIZE);
}
+#endif
param->wq.db_numa_node = cpu_to_node(c->cpu);
err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
--
2.13.6
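
A standalone C sketch of the sizing rule described above: each DUMP WQE
carries at most one MTU of data, so a record spanning nfrags fragments and
sync_len bytes needs at most DUMP_WQEBBS * (nfrags + DIV_ROUND_UP(sync_len, mtu))
WQEBBs, and a fragment of size fsz is split into ceil(fsz / mtu) MTU-sized
chunks, bumping the page refcount by (chunks - 1) in one call. Constants and
names are illustrative, not the driver's:

#include <stdio.h>

#define DUMP_WQEBBS 2
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static unsigned int dumps_num_wqebbs(unsigned int nfrags,
                                     unsigned int sync_len,
                                     unsigned int mtu)
{
        return DUMP_WQEBBS * (nfrags + DIV_ROUND_UP(sync_len, mtu));
}

int main(void)
{
        unsigned int mtu = 1500, frag_sz = 4096;
        unsigned int chunks = DIV_ROUND_UP(frag_sz, mtu);

        printf("upper bound: %u WQEBBs\n", dumps_num_wqebbs(3, 9000, mtu));
        printf("one %u-byte frag -> %u DUMP WQEs, page_ref_add(%u)\n",
               frag_sz, chunks, chunks - 1);
        return 0;
}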

@ -0,0 +1,66 @@
From d84c54a3976bc805815e5a4a85f711f483ab3157 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:38 -0400
Subject: [PATCH 063/312] [netdrv] net/mlx5e: kTLS, Remove unneeded cipher type
checks
Message-id: <20200510145245.10054-76-ahleihel@redhat.com>
Patchwork-id: 306617
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 75/82] net/mlx5e: kTLS, Remove unneeded cipher type checks
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit ecdc65a3ec5d45725355479d63c23a20f4582104
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Sun Oct 6 18:25:17 2019 +0300
net/mlx5e: kTLS, Remove unneeded cipher type checks
Cipher type is checked upon connection addition.
No need to recheck it per every TX resync invocation.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index e10b0bb696da..1bfeb558ff78 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -31,9 +31,6 @@ fill_static_params_ctx(void *ctx, struct mlx5e_ktls_offload_context_tx *priv_tx)
char *salt, *rec_seq;
u8 tls_version;
- if (WARN_ON(crypto_info->cipher_type != TLS_CIPHER_AES_GCM_128))
- return;
-
info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
EXTRACT_INFO_FIELDS;
@@ -243,9 +240,6 @@ tx_post_resync_params(struct mlx5e_txqsq *sq,
u16 rec_seq_sz;
char *rec_seq;
- if (WARN_ON(crypto_info->cipher_type != TLS_CIPHER_AES_GCM_128))
- return;
-
info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
rec_seq = info->rec_seq;
rec_seq_sz = sizeof(info->rec_seq);
--
2.13.6

@ -0,0 +1,107 @@
From 1bf2b8f0c26bc563683d7b063778bd6e532247f9 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:39 -0400
Subject: [PATCH 064/312] [netdrv] net/mlx5e: kTLS, Save a copy of the crypto
info
Message-id: <20200510145245.10054-77-ahleihel@redhat.com>
Patchwork-id: 306615
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 76/82] net/mlx5e: kTLS, Save a copy of the crypto info
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit af11a7a42454b17c77da5fa55b6b6325b11d60e5
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Sun Sep 22 14:05:24 2019 +0300
net/mlx5e: kTLS, Save a copy of the crypto info
Do not assume the crypto info is accessible during the
connection lifetime. Save a copy of it in the private
TX context.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 8 ++------
3 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
index d2ff74d52720..46725cd743a3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
@@ -38,7 +38,7 @@ static int mlx5e_ktls_add(struct net_device *netdev, struct sock *sk,
return -ENOMEM;
tx_priv->expected_seq = start_offload_tcp_sn;
- tx_priv->crypto_info = crypto_info;
+ tx_priv->crypto_info = *(struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
mlx5e_set_ktls_tx_priv_ctx(tls_ctx, tx_priv);
/* tc and underlay_qpn values are not in use for tls tis */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
index 929966e6fbc4..a3efa29a4629 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
@@ -44,7 +44,7 @@ enum {
struct mlx5e_ktls_offload_context_tx {
struct tls_offload_context_tx *tx_ctx;
- struct tls_crypto_info *crypto_info;
+ struct tls12_crypto_info_aes_gcm_128 crypto_info;
u32 expected_seq;
u32 tisn;
u32 key_id;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index 1bfeb558ff78..badc6fd26a14 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -24,14 +24,12 @@ enum {
static void
fill_static_params_ctx(void *ctx, struct mlx5e_ktls_offload_context_tx *priv_tx)
{
- struct tls_crypto_info *crypto_info = priv_tx->crypto_info;
- struct tls12_crypto_info_aes_gcm_128 *info;
+ struct tls12_crypto_info_aes_gcm_128 *info = &priv_tx->crypto_info;
char *initial_rn, *gcm_iv;
u16 salt_sz, rec_seq_sz;
char *salt, *rec_seq;
u8 tls_version;
- info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
EXTRACT_INFO_FIELDS;
gcm_iv = MLX5_ADDR_OF(tls_static_params, ctx, gcm_iv);
@@ -233,14 +231,12 @@ tx_post_resync_params(struct mlx5e_txqsq *sq,
struct mlx5e_ktls_offload_context_tx *priv_tx,
u64 rcd_sn)
{
- struct tls_crypto_info *crypto_info = priv_tx->crypto_info;
- struct tls12_crypto_info_aes_gcm_128 *info;
+ struct tls12_crypto_info_aes_gcm_128 *info = &priv_tx->crypto_info;
__be64 rn_be = cpu_to_be64(rcd_sn);
bool skip_static_post;
u16 rec_seq_sz;
char *rec_seq;
- info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
rec_seq = info->rec_seq;
rec_seq_sz = sizeof(info->rec_seq);
--
2.13.6

@ -0,0 +1,275 @@
From dc53981deab557df58bbed93789ad82b019d94b5 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:40 -0400
Subject: [PATCH 065/312] [netdrv] net/mlx5e: kTLS, Enhance TX resync flow
Message-id: <20200510145245.10054-78-ahleihel@redhat.com>
Patchwork-id: 306619
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 77/82] net/mlx5e: kTLS, Enhance TX resync flow
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit 46a3ea98074e2a7731ab9b84ec60fc18a2f909e5
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Thu Oct 3 10:48:10 2019 +0300
net/mlx5e: kTLS, Enhance TX resync flow
Once the kTLS TX resync function is called, it used to return
a binary value, for success or failure.
However, in case the TLS SKB is a retransmission of the connection
handshake, it initiates the resync flow (as the tcp seq check holds),
    while regular packet handling is expected.
In this patch, we identify this case and skip the resync operation
accordingly.
Counters:
- Add a counter (tls_skip_no_sync_data) to monitor this.
- Bump the dump counters up as they are used more frequently.
- Add a missing counter descriptor declaration for tls_resync_bytes
in sq_stats_desc.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 58 +++++++++++++---------
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 16 +++---
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h | 10 ++--
3 files changed, 51 insertions(+), 33 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
index badc6fd26a14..778dab1af8fc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -185,26 +185,33 @@ struct tx_sync_info {
skb_frag_t frags[MAX_SKB_FRAGS];
};
-static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
- u32 tcp_seq, struct tx_sync_info *info)
+enum mlx5e_ktls_sync_retval {
+ MLX5E_KTLS_SYNC_DONE,
+ MLX5E_KTLS_SYNC_FAIL,
+ MLX5E_KTLS_SYNC_SKIP_NO_DATA,
+};
+
+static enum mlx5e_ktls_sync_retval
+tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
+ u32 tcp_seq, struct tx_sync_info *info)
{
struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx;
+ enum mlx5e_ktls_sync_retval ret = MLX5E_KTLS_SYNC_DONE;
struct tls_record_info *record;
int remaining, i = 0;
unsigned long flags;
- bool ret = true;
spin_lock_irqsave(&tx_ctx->lock, flags);
record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn);
if (unlikely(!record)) {
- ret = false;
+ ret = MLX5E_KTLS_SYNC_FAIL;
goto out;
}
if (unlikely(tcp_seq < tls_record_start_seq(record))) {
- if (!tls_record_is_start_marker(record))
- ret = false;
+ ret = tls_record_is_start_marker(record) ?
+ MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL;
goto out;
}
@@ -316,20 +323,26 @@ static void tx_post_fence_nop(struct mlx5e_txqsq *sq)
mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc);
}
-static struct sk_buff *
+static enum mlx5e_ktls_sync_retval
mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
struct mlx5e_txqsq *sq,
- struct sk_buff *skb,
+ int datalen,
u32 seq)
{
struct mlx5e_sq_stats *stats = sq->stats;
struct mlx5_wq_cyc *wq = &sq->wq;
+ enum mlx5e_ktls_sync_retval ret;
struct tx_sync_info info = {};
u16 contig_wqebbs_room, pi;
u8 num_wqebbs;
int i = 0;
- if (!tx_sync_info_get(priv_tx, seq, &info)) {
+ ret = tx_sync_info_get(priv_tx, seq, &info);
+ if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) {
+ if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) {
+ stats->tls_skip_no_sync_data++;
+ return MLX5E_KTLS_SYNC_SKIP_NO_DATA;
+ }
/* We might get here if a retransmission reaches the driver
* after the relevant record is acked.
* It should be safe to drop the packet in this case
@@ -339,13 +352,8 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
}
if (unlikely(info.sync_len < 0)) {
- u32 payload;
- int headln;
-
- headln = skb_transport_offset(skb) + tcp_hdrlen(skb);
- payload = skb->len - headln;
- if (likely(payload <= -info.sync_len))
- return skb;
+ if (likely(datalen <= -info.sync_len))
+ return MLX5E_KTLS_SYNC_DONE;
stats->tls_drop_bypass_req++;
goto err_out;
@@ -360,7 +368,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
*/
if (!info.nr_frags) {
tx_post_fence_nop(sq);
- return skb;
+ return MLX5E_KTLS_SYNC_DONE;
}
num_wqebbs = mlx5e_ktls_dumps_num_wqebbs(sq, info.nr_frags, info.sync_len);
@@ -397,7 +405,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
page_ref_add(skb_frag_page(f), n - 1);
}
- return skb;
+ return MLX5E_KTLS_SYNC_DONE;
err_out:
for (; i < info.nr_frags; i++)
@@ -408,8 +416,7 @@ mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
*/
put_page(skb_frag_page(&info.frags[i]));
- dev_kfree_skb_any(skb);
- return NULL;
+ return MLX5E_KTLS_SYNC_FAIL;
}
struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
@@ -445,10 +452,15 @@ struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
seq = ntohl(tcp_hdr(skb)->seq);
if (unlikely(priv_tx->expected_seq != seq)) {
- skb = mlx5e_ktls_tx_handle_ooo(priv_tx, sq, skb, seq);
- if (unlikely(!skb))
+ enum mlx5e_ktls_sync_retval ret =
+ mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq);
+
+ if (likely(ret == MLX5E_KTLS_SYNC_DONE))
+ *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
+ else if (ret == MLX5E_KTLS_SYNC_FAIL)
+ goto err_out;
+ else /* ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA */
goto out;
- *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
}
priv_tx->expected_seq = seq + datalen;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 79b3ec005f43..23587f55fad7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -104,11 +104,12 @@ static const struct counter_desc sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_resync_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_skip_no_sync_data) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_no_sync_data) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_bypass_req) },
- { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) },
- { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) },
#endif
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_lro_packets) },
@@ -340,11 +341,12 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes;
s->tx_tls_ctx += sq_stats->tls_ctx;
s->tx_tls_ooo += sq_stats->tls_ooo;
+ s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes;
+ s->tx_tls_dump_packets += sq_stats->tls_dump_packets;
s->tx_tls_resync_bytes += sq_stats->tls_resync_bytes;
+ s->tx_tls_skip_no_sync_data += sq_stats->tls_skip_no_sync_data;
s->tx_tls_drop_no_sync_data += sq_stats->tls_drop_no_sync_data;
s->tx_tls_drop_bypass_req += sq_stats->tls_drop_bypass_req;
- s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes;
- s->tx_tls_dump_packets += sq_stats->tls_dump_packets;
#endif
s->tx_cqes += sq_stats->cqes;
}
@@ -1505,10 +1507,12 @@ static const struct counter_desc sq_stats_desc[] = {
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) },
- { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_no_sync_data) },
- { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_bypass_req) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_resync_bytes) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_skip_no_sync_data) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_no_sync_data) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_bypass_req) },
#endif
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_none) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, stopped) },
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index ab1c3366ff7d..092b39ffa32a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -180,11 +180,12 @@ struct mlx5e_sw_stats {
u64 tx_tls_encrypted_bytes;
u64 tx_tls_ctx;
u64 tx_tls_ooo;
+ u64 tx_tls_dump_packets;
+ u64 tx_tls_dump_bytes;
u64 tx_tls_resync_bytes;
+ u64 tx_tls_skip_no_sync_data;
u64 tx_tls_drop_no_sync_data;
u64 tx_tls_drop_bypass_req;
- u64 tx_tls_dump_packets;
- u64 tx_tls_dump_bytes;
#endif
u64 rx_xsk_packets;
@@ -324,11 +325,12 @@ struct mlx5e_sq_stats {
u64 tls_encrypted_bytes;
u64 tls_ctx;
u64 tls_ooo;
+ u64 tls_dump_packets;
+ u64 tls_dump_bytes;
u64 tls_resync_bytes;
+ u64 tls_skip_no_sync_data;
u64 tls_drop_no_sync_data;
u64 tls_drop_bypass_req;
- u64 tls_dump_packets;
- u64 tls_dump_bytes;
#endif
/* less likely accessed in data path */
u64 csum_none;
--
2.13.6
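
The tri-state return contract introduced by the patch above can be sketched in
plain C: the resync helper distinguishes "handled", "drop" and "skip, no sync
data", and the caller maps each value to a different path (fetch a WQE, drop
the skb, or count the skip and send as-is). The decision logic below is a
made-up stand-in, not the driver's:

#include <stdio.h>

enum sync_retval {
        SYNC_DONE,
        SYNC_FAIL,
        SYNC_SKIP_NO_DATA,
};

static enum sync_retval handle_ooo(unsigned int expected, unsigned int seq)
{
        if (seq < expected)
                return SYNC_SKIP_NO_DATA;  /* e.g. retransmitted handshake, no record data */
        if (seq > expected + 4096)
                return SYNC_FAIL;          /* no matching record, drop */
        return SYNC_DONE;
}

int main(void)
{
        switch (handle_ooo(1000, 500)) {
        case SYNC_DONE:
                printf("post resync params and transmit\n");
                break;
        case SYNC_FAIL:
                printf("drop the skb\n");
                break;
        case SYNC_SKIP_NO_DATA:
                printf("count tls_skip_no_sync_data and send as-is\n");
                break;
        }
        return 0;
}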

@ -0,0 +1,56 @@
From c667484c074aea8fe652eeb7a9e5e24438436a69 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:41 -0400
Subject: [PATCH 066/312] [netdrv] net/mlx5e: Remove incorrect match criteria
assignment line
Message-id: <20200510145245.10054-79-ahleihel@redhat.com>
Patchwork-id: 306620
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 78/82] net/mlx5e: Remove incorrect match criteria assignment line
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc6
commit 752d3dc06d6936d5a357a18b6b51d91c7e134e88
Author: Dmytro Linkin <dmitrolin@mellanox.com>
Date: Thu Aug 29 15:24:27 2019 +0000
net/mlx5e: Remove incorrect match criteria assignment line
    The driver has a function which enables match criteria for misc parameters
    depending on eswitch capabilities.
Fixes: 4f5d1beadc10 ("Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux")
Signed-off-by: Dmytro Linkin <dmitrolin@mellanox.com>
Reviewed-by: Jianbo Liu <jianbol@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 93501e3c8b28..fa3249964ee9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -285,7 +285,6 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
mlx5_eswitch_set_rule_source_port(esw, spec, attr);
- spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
if (attr->outer_match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
--
2.13.6

@ -0,0 +1,77 @@
From 356f9793df0411479e5b156d637c2c5bcce95935 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:43 -0400
Subject: [PATCH 067/312] [netdrv] mlx5: reject unsupported external timestamp
flags
Message-id: <20200510145245.10054-81-ahleihel@redhat.com>
Patchwork-id: 306621
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 80/82] mlx5: reject unsupported external timestamp flags
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4-rc8
commit 2e0645a00e25f7122cad6da57ce3cc855df49ddd
Author: Jacob Keller <jacob.e.keller@intel.com>
Date: Thu Nov 14 10:45:00 2019 -0800
mlx5: reject unsupported external timestamp flags
Fix the mlx5 core PTP support to explicitly reject any future flags that
get added to the external timestamp request ioctl.
In order to maintain currently functioning code, this patch accepts all
three current flags. This is because the PTP_RISING_EDGE and
PTP_FALLING_EDGE flags have unclear semantics and each driver seems to
have interpreted them slightly differently.
[ RC: I'm not 100% sure what this driver does, but if I'm not wrong it
follows the dp83640:
flags Meaning
---------------------------------------------------- --------------------------
PTP_ENABLE_FEATURE Time stamp rising edge
PTP_ENABLE_FEATURE|PTP_RISING_EDGE Time stamp rising edge
PTP_ENABLE_FEATURE|PTP_FALLING_EDGE Time stamp falling edge
PTP_ENABLE_FEATURE|PTP_RISING_EDGE|PTP_FALLING_EDGE Time stamp falling edge
]
Cc: Feras Daoud <ferasda@mellanox.com>
Cc: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Richard Cochran <richardcochran@gmail.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
index 9a40f24e3193..34190e888521 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
@@ -242,6 +242,12 @@ static int mlx5_extts_configure(struct ptp_clock_info *ptp,
PTP_FALLING_EDGE))
return -EOPNOTSUPP;
+ /* Reject requests with unsupported flags */
+ if (rq->extts.flags & ~(PTP_ENABLE_FEATURE |
+ PTP_RISING_EDGE |
+ PTP_FALLING_EDGE))
+ return -EOPNOTSUPP;
+
if (rq->extts.index >= clock->ptp_info.n_pins)
return -EINVAL;
--
2.13.6
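
A minimal sketch of the "reject unknown flags" pattern applied above: mask the
request flags against the set the driver understands and fail with
-EOPNOTSUPP if anything else is set, so flags added in the future are rejected
instead of silently ignored. The flag values below are defined locally for the
sketch:

#include <stdio.h>
#include <errno.h>

#define PTP_ENABLE_FEATURE (1 << 0)
#define PTP_RISING_EDGE    (1 << 1)
#define PTP_FALLING_EDGE   (1 << 2)
#define PTP_FUTURE_FLAG    (1 << 3)     /* not supported by this driver */

static int extts_configure(unsigned int flags)
{
        /* Reject requests with unsupported flags */
        if (flags & ~(PTP_ENABLE_FEATURE | PTP_RISING_EDGE | PTP_FALLING_EDGE))
                return -EOPNOTSUPP;
        return 0;
}

int main(void)
{
        printf("rising edge: %d\n",
               extts_configure(PTP_ENABLE_FEATURE | PTP_RISING_EDGE));
        printf("future flag: %d\n", extts_configure(PTP_FUTURE_FLAG));
        return 0;
}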

@ -0,0 +1,58 @@
From a5c0c1565d8c1a0284297a0a757bdbd9e4bace22 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:44 -0400
Subject: [PATCH 068/312] [netdrv] net/mlx5e: Fix ingress rate configuration
for representors
Message-id: <20200510145245.10054-82-ahleihel@redhat.com>
Patchwork-id: 306622
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 81/82] net/mlx5e: Fix ingress rate configuration for representors
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4
commit 7b83355f6df9ead2f8c4b06c105505a2999f5dc1
Author: Eli Cohen <eli@mellanox.com>
Date: Thu Nov 7 09:07:34 2019 +0200
net/mlx5e: Fix ingress rate configuration for representors
Current code uses the old method of prio encoding in
flow_cls_common_offload. Fix to follow the changes introduced in
commit ef01adae0e43 ("net: sched: use major priority number as hardware priority").
Fixes: fcb64c0f5640 ("net/mlx5: E-Switch, add ingress rate support")
Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index ac372993c9d8..ece33ff718a4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -4003,9 +4003,8 @@ int mlx5e_tc_configure_matchall(struct mlx5e_priv *priv,
struct tc_cls_matchall_offload *ma)
{
struct netlink_ext_ack *extack = ma->common.extack;
- int prio = TC_H_MAJ(ma->common.prio) >> 16;
- if (prio != 1) {
+ if (ma->common.prio != 1) {
NL_SET_ERR_MSG_MOD(extack, "only priority 1 is supported");
return -EINVAL;
}
--
2.13.6
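
The prio encoding change referenced above can be shown with a short example:
before the core change, drivers received the full 32-bit priority handle and
had to extract the major number themselves; afterwards, common.prio already
carries the major number, so comparing the raw value against 1 is correct. The
macro below mirrors the kernel's TC_H_MAJ definition; the rest is illustrative:

#include <stdio.h>

#define TC_H_MAJ_MASK 0xFFFF0000U
#define TC_H_MAJ(h)   ((h) & TC_H_MAJ_MASK)

int main(void)
{
        unsigned int old_style = 1 << 16;       /* prio 1 encoded as a handle */
        unsigned int new_style = 1;             /* prio 1, already the major number */

        printf("old: %u\n", TC_H_MAJ(old_style) >> 16);        /* 1 */
        printf("new: %u\n", new_style);                         /* 1 */
        return 0;
}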

@ -0,0 +1,61 @@
From 5e5a9d6b5e750e39e8f5bb8837d4f22dc2d9867a Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 14:52:45 -0400
Subject: [PATCH 069/312] [netdrv] net/mlx5e: Add missing capability bit check
for IP-in-IP
Message-id: <20200510145245.10054-83-ahleihel@redhat.com>
Patchwork-id: 306623
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789378 v2 82/82] net/mlx5e: Add missing capability bit check for IP-in-IP
Bugzilla: 1789378
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789378
Upstream: v5.4
commit 9c98f7ec01d78b5c12db97d1e5edb7022eefa398
Author: Marina Varshaver <marinav@mellanox.com>
Date: Tue Nov 19 18:52:13 2019 +0200
net/mlx5e: Add missing capability bit check for IP-in-IP
    A device that doesn't support IP-in-IP offloads has to filter csum and gso
    offload support, otherwise the kernel will conclude that the device is capable of
    offloading csum and gso for IP-in-IP tunnels, which might result in
    IP-in-IP tunnels not functioning.
Fixes: 25948b87dda2 ("net/mlx5e: Support TSO and TX checksum offloads for IP-in-IP")
Signed-off-by: Marina Varshaver <marinav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 7cd3ac6a23a8..2f337a70e157 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4230,9 +4230,12 @@ static netdev_features_t mlx5e_tunnel_features_check(struct mlx5e_priv *priv,
switch (proto) {
case IPPROTO_GRE:
+ return features;
case IPPROTO_IPIP:
case IPPROTO_IPV6:
- return features;
+ if (mlx5e_tunnel_proto_supported(priv->mdev, IPPROTO_IPIP))
+ return features;
+ break;
case IPPROTO_UDP:
udph = udp_hdr(skb);
port = be16_to_cpu(udph->dest);
--
2.13.6
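
A small sketch of the capability gating added above: when the device lacks the
IP-in-IP offload capability, the tunnel protocols fall through to the "strip
checksum/GSO features" path instead of returning the full feature set. The
feature bits, protocol constants and capability flag below are illustrative
only:

#include <stdio.h>

#define F_CSUM (1u << 0)
#define F_GSO  (1u << 1)

enum { PROTO_GRE, PROTO_IPIP, PROTO_UDP };

static unsigned int tunnel_features_check(unsigned int features, int proto,
                                          int has_ipip_cap)
{
        switch (proto) {
        case PROTO_GRE:
                return features;
        case PROTO_IPIP:
                if (has_ipip_cap)
                        return features;
                break;                  /* fall through to the filtered set */
        default:
                break;
        }
        return features & ~(F_CSUM | F_GSO);    /* drop csum/gso offloads */
}

int main(void)
{
        printf("IPIP w/o cap:  %#x\n",
               tunnel_features_check(F_CSUM | F_GSO, PROTO_IPIP, 0));
        printf("IPIP with cap: %#x\n",
               tunnel_features_check(F_CSUM | F_GSO, PROTO_IPIP, 1));
        return 0;
}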

@ -0,0 +1,54 @@
From b3ef775e164cb586b2967356b0a9c03582920495 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:29 -0400
Subject: [PATCH 070/312] [include] net/mlx5: Expose optimal performance
scatter entries capability
Message-id: <20200510150452.10307-5-ahleihel@redhat.com>
Patchwork-id: 306628
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 04/87] net/mlx5: Expose optimal performance scatter entries capability
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 7d47433cf74f942a414171867d89c08640cfef45
Author: Yamin Friedman <yaminf@mellanox.com>
Date: Mon Oct 7 16:59:31 2019 +0300
net/mlx5: Expose optimal performance scatter entries capability
Expose maximum scatter entries per RDMA READ for optimal performance.
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
include/linux/mlx5/mlx5_ifc.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index caa0bcd9dd0f..a77ca587c3cc 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1156,7 +1156,7 @@ struct mlx5_ifc_cmd_hca_cap_bits {
u8 log_max_srq[0x5];
u8 reserved_at_b0[0x10];
- u8 reserved_at_c0[0x8];
+ u8 max_sgl_for_optimized_performance[0x8];
u8 log_max_cq_sz[0x8];
u8 reserved_at_d0[0xb];
u8 log_max_cq[0x5];
--
2.13.6

@ -0,0 +1,66 @@
From 7ae07e19237187f7fa84def13d5538e1015c20c7 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:47 -0400
Subject: [PATCH 071/312] [netdrv] net: Fix misspellings of "configure" and
"configuration"
Message-id: <20200510150452.10307-23-ahleihel@redhat.com>
Patchwork-id: 306646
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 22/87] net: Fix misspellings of "configure" and "configuration"
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
Conflicts:
- Take mlx5 changes only.
- drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
Small context diff due to missing empty line in the comment section,
    apply the needed hunk as well as add back the missing empty line
to avoid more conflicts.
commit c199ce4f9dd896c716aece33e6750be34aea1151
Author: Geert Uytterhoeven <geert+renesas@glider.be>
Date: Thu Oct 24 17:22:01 2019 +0200
net: Fix misspellings of "configure" and "configuration"
Fix various misspellings of "configuration" and "configure".
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
index 1fc4641077fd..ae99fac08b53 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
@@ -177,12 +177,14 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
* @xoff: <input> xoff value
* @port_buffer: <output> port receive buffer configuration
* @change: <output>
- * Update buffer configuration based on pfc configuraiton and
+ *
+ * Update buffer configuration based on pfc configuration and
* priority to buffer mapping.
* Buffer's lossy bit is changed to:
* lossless if there is at least one PFC enabled priority
* mapped to this buffer lossy if all priorities mapped to
* this buffer are PFC disabled
+ *
* @return: 0 if no error,
* sets change to true if buffer configuration was modified.
*/
--
2.13.6

@ -0,0 +1,137 @@
From 68272c4fdf21f6aa6e587a2b4eb9e8ed14a7b7d6 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:49 -0400
Subject: [PATCH 072/312] [netdrv] net/mlx5: E-Switch, Rename egress config to
generic name
Message-id: <20200510150452.10307-25-ahleihel@redhat.com>
Patchwork-id: 306648
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 24/87] net/mlx5: E-Switch, Rename egress config to generic name
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 6d94e610e4b6a77007d50952d3c859d3e300c0ab
Author: Vu Pham <vuhuong@mellanox.com>
Date: Mon Oct 28 23:34:58 2019 +0000
net/mlx5: E-Switch, Rename egress config to generic name
Refactor vport egress config in offloads mode
Refactoring vport egress configuration in offloads mode that
includes egress prio tag configuration.
This makes code symmetric to ingress configuration.
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 50 +++++++++++-----------
1 file changed, 26 insertions(+), 24 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index fa3249964ee9..b41b0c868099 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1866,32 +1866,16 @@ static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
struct mlx5_flow_spec *spec;
int err = 0;
- if (!MLX5_CAP_GEN(esw->dev, prio_tag_required))
- return 0;
-
/* For prio tag mode, there is only 1 FTEs:
* 1) prio tag packets - pop the prio tag VLAN, allow
* Unmatched traffic is allowed by default
*/
-
- esw_vport_cleanup_egress_rules(esw, vport);
-
- err = esw_vport_enable_egress_acl(esw, vport);
- if (err) {
- mlx5_core_warn(esw->dev,
- "failed to enable egress acl (%d) on vport[%d]\n",
- err, vport->vport);
- return err;
- }
-
esw_debug(esw->dev,
"vport[%d] configure prio tag egress rules\n", vport->vport);
spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
- if (!spec) {
- err = -ENOMEM;
- goto out_no_mem;
- }
+ if (!spec)
+ return -ENOMEM;
/* prio tag vlan rule - pop it so VF receives untagged packets */
MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
@@ -1911,14 +1895,9 @@ static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
"vport[%d] configure egress pop prio tag vlan rule failed, err(%d)\n",
vport->vport, err);
vport->egress.allowed_vlan = NULL;
- goto out;
}
-out:
kvfree(spec);
-out_no_mem:
- if (err)
- esw_vport_cleanup_egress_rules(esw, vport);
return err;
}
@@ -1963,6 +1942,29 @@ static int esw_vport_ingress_common_config(struct mlx5_eswitch *esw,
return err;
}
+static int esw_vport_egress_config(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ int err;
+
+ if (!MLX5_CAP_GEN(esw->dev, prio_tag_required))
+ return 0;
+
+ esw_vport_cleanup_egress_rules(esw, vport);
+
+ err = esw_vport_enable_egress_acl(esw, vport);
+ if (err)
+ return err;
+
+ esw_debug(esw->dev, "vport(%d) configure egress rules\n", vport->vport);
+
+ err = esw_vport_egress_prio_tag_config(esw, vport);
+ if (err)
+ esw_vport_disable_egress_acl(esw, vport);
+
+ return err;
+}
+
static bool
esw_check_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
{
@@ -2010,7 +2012,7 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
goto err_ingress;
if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
- err = esw_vport_egress_prio_tag_config(esw, vport);
+ err = esw_vport_egress_config(esw, vport);
if (err)
goto err_egress;
}
--
2.13.6

@ -0,0 +1,66 @@
From f9d7ea58030ab80031731d50631b3f19503006f7 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:50 -0400
Subject: [PATCH 073/312] [netdrv] net/mlx5: E-Switch, Rename ingress acl
config in offloads mode
Message-id: <20200510150452.10307-26-ahleihel@redhat.com>
Patchwork-id: 306649
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 25/87] net/mlx5: E-Switch, Rename ingress acl config in offloads mode
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit b1a3380aa709082761c1dba89234ac16c19037c6
Author: Vu Pham <vuhuong@mellanox.com>
Date: Mon Oct 28 23:35:00 2019 +0000
net/mlx5: E-Switch, Rename ingress acl config in offloads mode
Changing the function name esw_ingress_acl_common_config() to
esw_ingress_acl_config() to be consistent with egress config
function naming in offloads mode.
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index b41b0c868099..9e64bdf17861 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1901,8 +1901,8 @@ static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
return err;
}
-static int esw_vport_ingress_common_config(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport)
+static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
{
int err;
@@ -2007,7 +2007,7 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
mlx5_esw_for_all_vports(esw, i, vport) {
- err = esw_vport_ingress_common_config(esw, vport);
+ err = esw_vport_ingress_config(esw, vport);
if (err)
goto err_ingress;
--
2.13.6

@ -0,0 +1,225 @@
From 3f285c020ca420cf7657c4a51da96573ae038f06 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:51 -0400
Subject: [PATCH 074/312] [netdrv] net/mlx5: E-switch, Introduce and use vlan
rule config helper
Message-id: <20200510150452.10307-27-ahleihel@redhat.com>
Patchwork-id: 306650
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 26/87] net/mlx5: E-switch, Introduce and use vlan rule config helper
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit fdde49e00b9d2041086568b52670043a8def96ff
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:03 2019 +0000
net/mlx5: E-switch, Introduce and use vlan rule config helper
Between legacy mode and switchdev mode, only two fields are changed,
vlan_tag and flow action.
    Hence, to avoid duplicate code between the two modes, introduce and use a
    helper function to configure the allowed VLAN rule.
    While at it, get rid of the duplicate debug message.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 68 ++++++++++++++--------
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 4 ++
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 54 ++++-------------
3 files changed, 58 insertions(+), 68 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 386e82850ed5..773246f8e9c4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1323,6 +1323,43 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
return err;
}
+int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport,
+ u16 vlan_id, u32 flow_action)
+{
+ struct mlx5_flow_act flow_act = {};
+ struct mlx5_flow_spec *spec;
+ int err = 0;
+
+ if (vport->egress.allowed_vlan)
+ return -EEXIST;
+
+ spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+ if (!spec)
+ return -ENOMEM;
+
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
+ MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid, vlan_id);
+
+ spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ flow_act.action = flow_action;
+ vport->egress.allowed_vlan =
+ mlx5_add_flow_rules(vport->egress.acl, spec,
+ &flow_act, NULL, 0);
+ if (IS_ERR(vport->egress.allowed_vlan)) {
+ err = PTR_ERR(vport->egress.allowed_vlan);
+ esw_warn(esw->dev,
+ "vport[%d] configure egress vlan rule failed, err(%d)\n",
+ vport->vport, err);
+ vport->egress.allowed_vlan = NULL;
+ }
+
+ kvfree(spec);
+ return err;
+}
+
static int esw_vport_egress_config(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
@@ -1353,34 +1390,17 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
"vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
vport->vport, vport->info.vlan, vport->info.qos);
- spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
- if (!spec) {
- err = -ENOMEM;
- goto out;
- }
-
/* Allowed vlan rule */
- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
- MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
- MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid, vport->info.vlan);
+ err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, vport->info.vlan,
+ MLX5_FLOW_CONTEXT_ACTION_ALLOW);
+ if (err)
+ return err;
- spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
- flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
- vport->egress.allowed_vlan =
- mlx5_add_flow_rules(vport->egress.acl, spec,
- &flow_act, NULL, 0);
- if (IS_ERR(vport->egress.allowed_vlan)) {
- err = PTR_ERR(vport->egress.allowed_vlan);
- esw_warn(esw->dev,
- "vport[%d] configure egress allowed vlan rule failed, err(%d)\n",
- vport->vport, err);
- vport->egress.allowed_vlan = NULL;
+ /* Drop others rule (star rule) */
+ spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+ if (!spec)
goto out;
- }
- /* Drop others rule (star rule) */
- memset(spec, 0, sizeof(*spec));
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
/* Attach egress drop flow counter */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 436c633407d6..0cba334270d9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -423,6 +423,10 @@ int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw,
int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
u16 vport, u16 vlan, u8 qos, u8 set_flags);
+int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport,
+ u16 vlan_id, u32 flow_action);
+
static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev,
u8 vlan_depth)
{
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 9e64bdf17861..657aeea3f879 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1859,48 +1859,6 @@ void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
}
}
-static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport)
-{
- struct mlx5_flow_act flow_act = {0};
- struct mlx5_flow_spec *spec;
- int err = 0;
-
- /* For prio tag mode, there is only 1 FTEs:
- * 1) prio tag packets - pop the prio tag VLAN, allow
- * Unmatched traffic is allowed by default
- */
- esw_debug(esw->dev,
- "vport[%d] configure prio tag egress rules\n", vport->vport);
-
- spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
- if (!spec)
- return -ENOMEM;
-
- /* prio tag vlan rule - pop it so VF receives untagged packets */
- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
- MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
- MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid, 0);
-
- spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
- flow_act.action = MLX5_FLOW_CONTEXT_ACTION_VLAN_POP |
- MLX5_FLOW_CONTEXT_ACTION_ALLOW;
- vport->egress.allowed_vlan =
- mlx5_add_flow_rules(vport->egress.acl, spec,
- &flow_act, NULL, 0);
- if (IS_ERR(vport->egress.allowed_vlan)) {
- err = PTR_ERR(vport->egress.allowed_vlan);
- esw_warn(esw->dev,
- "vport[%d] configure egress pop prio tag vlan rule failed, err(%d)\n",
- vport->vport, err);
- vport->egress.allowed_vlan = NULL;
- }
-
- kvfree(spec);
- return err;
-}
-
static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
@@ -1956,9 +1914,17 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
if (err)
return err;
- esw_debug(esw->dev, "vport(%d) configure egress rules\n", vport->vport);
+ /* For prio tag mode, there is only 1 FTEs:
+ * 1) prio tag packets - pop the prio tag VLAN, allow
+ * Unmatched traffic is allowed by default
+ */
+ esw_debug(esw->dev,
+ "vport[%d] configure prio tag egress rules\n", vport->vport);
- err = esw_vport_egress_prio_tag_config(esw, vport);
+ /* prio tag vlan rule - pop it so VF receives untagged packets */
+ err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, 0,
+ MLX5_FLOW_CONTEXT_ACTION_VLAN_POP |
+ MLX5_FLOW_CONTEXT_ACTION_ALLOW);
if (err)
esw_vport_disable_egress_acl(esw, vport);
--
2.13.6
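
The value of the new helper is easiest to see from its two call sites, which differ only
in the VLAN ID matched and the flow action requested. A minimal sketch assembled from the
hunks above (an illustrative fragment, not a standalone build):

        /* Legacy mode (eswitch.c): allow packets tagged with the vport VLAN. */
        err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, vport->info.vlan,
                                                    MLX5_FLOW_CONTEXT_ACTION_ALLOW);

        /* Offloads prio-tag mode (eswitch_offloads.c): match VID 0 and pop the
         * prio tag so the VF receives untagged packets, then allow.
         */
        err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, 0,
                                                    MLX5_FLOW_CONTEXT_ACTION_VLAN_POP |
                                                    MLX5_FLOW_CONTEXT_ACTION_ALLOW);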

@ -0,0 +1,123 @@
From e537fcf35c72c352a3428f5ec0978fd66002f11f Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:52 -0400
Subject: [PATCH 075/312] [netdrv] net/mlx5: Introduce and use
mlx5_esw_is_manager_vport()
Message-id: <20200510150452.10307-28-ahleihel@redhat.com>
Patchwork-id: 306652
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 27/87] net/mlx5: Introduce and use mlx5_esw_is_manager_vport()
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit ea2300e02a71207b11111a44cbe7185a94f78a72
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:05 2019 +0000
net/mlx5: Introduce and use mlx5_esw_is_manager_vport()
Currently esw_enable_vport() checks for vport number zero to decide whether
to enable drop counters, regardless of whether it runs on the ECPF or the PF,
while esw_disable_vport() does take that scenario into account.
To check for the manager vport consistently across the code, introduce and
use mlx5_esw_is_manager_vport(), which reports whether a given vport is the
eswitch manager vport.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 13 +++++++------
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 6 ++++++
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 773246f8e9c4..76e2d5cba48b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -496,7 +496,7 @@ static int esw_add_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
/* Skip mlx5_mpfs_add_mac for eswitch_managers,
* it is already done by its netdev in mlx5e_execute_l2_action
*/
- if (esw->manager_vport == vport)
+ if (mlx5_esw_is_manager_vport(esw, vport))
goto fdb_add;
err = mlx5_mpfs_add_mac(esw->dev, mac);
@@ -528,7 +528,7 @@ static int esw_del_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
/* Skip mlx5_mpfs_del_mac for eswitch managers,
* it is already done by its netdev in mlx5e_execute_l2_action
*/
- if (!vaddr->mpfs || esw->manager_vport == vport)
+ if (!vaddr->mpfs || mlx5_esw_is_manager_vport(esw, vport))
goto fdb_del;
err = mlx5_mpfs_del_mac(esw->dev, mac);
@@ -1634,7 +1634,7 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
u16 vport_num = vport->vport;
int flags;
- if (esw->manager_vport == vport_num)
+ if (mlx5_esw_is_manager_vport(esw, vport_num))
return;
mlx5_modify_vport_admin_state(esw->dev,
@@ -1708,7 +1708,8 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
/* Create steering drop counters for ingress and egress ACLs */
- if (vport_num && esw->mode == MLX5_ESWITCH_LEGACY)
+ if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
+ esw->mode == MLX5_ESWITCH_LEGACY)
esw_vport_create_drop_counters(vport);
/* Restore old vport configuration */
@@ -1726,7 +1727,7 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
/* Esw manager is trusted by default. Host PF (vport 0) is trusted as well
* in smartNIC as it's a vport group manager.
*/
- if (esw->manager_vport == vport_num ||
+ if (mlx5_esw_is_manager_vport(esw, vport_num) ||
(!vport_num && mlx5_core_is_ecpf(esw->dev)))
vport->info.trusted = true;
@@ -1761,7 +1762,7 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
esw_vport_change_handle_locked(vport);
vport->enabled_events = 0;
esw_vport_disable_qos(esw, vport);
- if (esw->manager_vport != vport_num &&
+ if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
esw->mode == MLX5_ESWITCH_LEGACY) {
mlx5_modify_vport_admin_state(esw->dev,
MLX5_VPORT_STATE_OP_MOD_ESW_VPORT,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 0cba334270d9..a90af41d8220 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -468,6 +468,12 @@ static inline u16 mlx5_eswitch_manager_vport(struct mlx5_core_dev *dev)
/* TODO: This mlx5e_tc function shouldn't be called by eswitch */
void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw);
+static inline bool
+mlx5_esw_is_manager_vport(const struct mlx5_eswitch *esw, u16 vport_num)
+{
+ return esw->manager_vport == vport_num;
+}
+
static inline u16 mlx5_eswitch_first_host_vport_num(struct mlx5_core_dev *dev)
{
return mlx5_core_is_ecpf_esw_manager(dev) ?
--
2.13.6

@ -0,0 +1,142 @@
From 18c6aef4724e84bf5304789fc51ce44c76cccd72 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:53 -0400
Subject: [PATCH 076/312] [netdrv] net/mlx5: Move metdata fields under offloads
structure
Message-id: <20200510150452.10307-29-ahleihel@redhat.com>
Patchwork-id: 306651
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 28/87] net/mlx5: Move metdata fields under offloads structure
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit d68316b5a1046b489097c5e5e24139548b79971f
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:10 2019 +0000
net/mlx5: Move metdata fields under offloads structure
Metadata fields are offload mode specific.
To improve code readability, move metadata under offloads structure.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 8 ++++++
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 33 +++++++++++-----------
2 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index a90af41d8220..f21d528057fa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -69,11 +69,19 @@ struct vport_ingress {
struct mlx5_flow_group *allow_spoofchk_only_grp;
struct mlx5_flow_group *allow_untagged_only_grp;
struct mlx5_flow_group *drop_grp;
+#ifdef __GENKSYMS__
struct mlx5_modify_hdr *modify_metadata;
struct mlx5_flow_handle *modify_metadata_rule;
+#endif
struct mlx5_flow_handle *allow_rule;
struct mlx5_flow_handle *drop_rule;
struct mlx5_fc *drop_counter;
+#ifndef __GENKSYMS__
+ struct {
+ struct mlx5_modify_hdr *modify_metadata;
+ struct mlx5_flow_handle *modify_metadata_rule;
+ } offloads;
+#endif
};
struct vport_egress {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 657aeea3f879..00d126fa6e02 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1780,9 +1780,9 @@ static int esw_vport_ingress_prio_tag_config(struct mlx5_eswitch *esw,
flow_act.vlan[0].vid = 0;
flow_act.vlan[0].prio = 0;
- if (vport->ingress.modify_metadata_rule) {
+ if (vport->ingress.offloads.modify_metadata_rule) {
flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
- flow_act.modify_hdr = vport->ingress.modify_metadata;
+ flow_act.modify_hdr = vport->ingress.offloads.modify_metadata;
}
vport->ingress.allow_rule =
@@ -1818,11 +1818,11 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
MLX5_SET(set_action_in, action, data,
mlx5_eswitch_get_vport_metadata_for_match(esw, vport->vport));
- vport->ingress.modify_metadata =
+ vport->ingress.offloads.modify_metadata =
mlx5_modify_header_alloc(esw->dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
1, action);
- if (IS_ERR(vport->ingress.modify_metadata)) {
- err = PTR_ERR(vport->ingress.modify_metadata);
+ if (IS_ERR(vport->ingress.offloads.modify_metadata)) {
+ err = PTR_ERR(vport->ingress.offloads.modify_metadata);
esw_warn(esw->dev,
"failed to alloc modify header for vport %d ingress acl (%d)\n",
vport->vport, err);
@@ -1830,32 +1830,33 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
}
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_MOD_HDR | MLX5_FLOW_CONTEXT_ACTION_ALLOW;
- flow_act.modify_hdr = vport->ingress.modify_metadata;
- vport->ingress.modify_metadata_rule = mlx5_add_flow_rules(vport->ingress.acl,
- &spec, &flow_act, NULL, 0);
- if (IS_ERR(vport->ingress.modify_metadata_rule)) {
- err = PTR_ERR(vport->ingress.modify_metadata_rule);
+ flow_act.modify_hdr = vport->ingress.offloads.modify_metadata;
+ vport->ingress.offloads.modify_metadata_rule =
+ mlx5_add_flow_rules(vport->ingress.acl,
+ &spec, &flow_act, NULL, 0);
+ if (IS_ERR(vport->ingress.offloads.modify_metadata_rule)) {
+ err = PTR_ERR(vport->ingress.offloads.modify_metadata_rule);
esw_warn(esw->dev,
"failed to add setting metadata rule for vport %d ingress acl, err(%d)\n",
vport->vport, err);
- vport->ingress.modify_metadata_rule = NULL;
+ vport->ingress.offloads.modify_metadata_rule = NULL;
goto out;
}
out:
if (err)
- mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata);
+ mlx5_modify_header_dealloc(esw->dev, vport->ingress.offloads.modify_metadata);
return err;
}
void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
- if (vport->ingress.modify_metadata_rule) {
- mlx5_del_flow_rules(vport->ingress.modify_metadata_rule);
- mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata);
+ if (vport->ingress.offloads.modify_metadata_rule) {
+ mlx5_del_flow_rules(vport->ingress.offloads.modify_metadata_rule);
+ mlx5_modify_header_dealloc(esw->dev, vport->ingress.offloads.modify_metadata);
- vport->ingress.modify_metadata_rule = NULL;
+ vport->ingress.offloads.modify_metadata_rule = NULL;
}
}
--
2.13.6
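
One detail of the eswitch.h hunk is RHEL-specific rather than upstream: the old field
positions are kept inside an #ifdef __GENKSYMS__ block. The assumed intent is the usual
kABI-preservation technique, where genksyms computes symbol CRCs against the old
declaration while the compiler builds the module with the new layout. A generic sketch of
the pattern, using a hypothetical struct purely for illustration:

struct example_ingress_state {
        struct mlx5_flow_handle *allow_rule;        /* unchanged fields stay put */
#ifdef __GENKSYMS__
        /* old layout: seen only by genksyms when computing kABI CRCs */
        struct mlx5_modify_hdr *modify_metadata;
        struct mlx5_flow_handle *modify_metadata_rule;
#else
        /* new layout: what the compiled module actually uses */
        struct {
                struct mlx5_modify_hdr *modify_metadata;
                struct mlx5_flow_handle *modify_metadata_rule;
        } offloads;
#endif
};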

@ -0,0 +1,275 @@
From 51391126c3b108d32bcfbd30f7bce65ae5049097 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:54 -0400
Subject: [PATCH 077/312] [netdrv] net/mlx5: Move legacy drop counter and rule
under legacy structure
Message-id: <20200510150452.10307-30-ahleihel@redhat.com>
Patchwork-id: 306653
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 29/87] net/mlx5: Move legacy drop counter and rule under legacy structure
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 853b53520c9d11db7652e3603665b0ad475741a5
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:11 2019 +0000
net/mlx5: Move legacy drop counter and rule under legacy structure
To improve code readability, move the legacy drop counters and drop rule
under the legacy structure.
While at it,
(a) prefix drop flow counters helper with legacy_.
(b) nullify the rule pointers only if they were valid.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 82 ++++++++++++-----------
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 13 ++++
2 files changed, 55 insertions(+), 40 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 76e2d5cba48b..54b5f290ab9d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1035,14 +1035,15 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
- if (!IS_ERR_OR_NULL(vport->egress.allowed_vlan))
+ if (!IS_ERR_OR_NULL(vport->egress.allowed_vlan)) {
mlx5_del_flow_rules(vport->egress.allowed_vlan);
+ vport->egress.allowed_vlan = NULL;
+ }
- if (!IS_ERR_OR_NULL(vport->egress.drop_rule))
- mlx5_del_flow_rules(vport->egress.drop_rule);
-
- vport->egress.allowed_vlan = NULL;
- vport->egress.drop_rule = NULL;
+ if (!IS_ERR_OR_NULL(vport->egress.legacy.drop_rule)) {
+ mlx5_del_flow_rules(vport->egress.legacy.drop_rule);
+ vport->egress.legacy.drop_rule = NULL;
+ }
}
void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
@@ -1197,14 +1198,15 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
- if (!IS_ERR_OR_NULL(vport->ingress.drop_rule))
- mlx5_del_flow_rules(vport->ingress.drop_rule);
+ if (!IS_ERR_OR_NULL(vport->ingress.legacy.drop_rule)) {
+ mlx5_del_flow_rules(vport->ingress.legacy.drop_rule);
+ vport->ingress.legacy.drop_rule = NULL;
+ }
- if (!IS_ERR_OR_NULL(vport->ingress.allow_rule))
+ if (!IS_ERR_OR_NULL(vport->ingress.allow_rule)) {
mlx5_del_flow_rules(vport->ingress.allow_rule);
-
- vport->ingress.drop_rule = NULL;
- vport->ingress.allow_rule = NULL;
+ vport->ingress.allow_rule = NULL;
+ }
esw_vport_del_ingress_acl_modify_metadata(esw, vport);
}
@@ -1233,7 +1235,7 @@ void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
- struct mlx5_fc *counter = vport->ingress.drop_counter;
+ struct mlx5_fc *counter = vport->ingress.legacy.drop_counter;
struct mlx5_flow_destination drop_ctr_dst = {0};
struct mlx5_flow_destination *dst = NULL;
struct mlx5_flow_act flow_act = {0};
@@ -1304,15 +1306,15 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
dst = &drop_ctr_dst;
dest_num++;
}
- vport->ingress.drop_rule =
+ vport->ingress.legacy.drop_rule =
mlx5_add_flow_rules(vport->ingress.acl, spec,
&flow_act, dst, dest_num);
- if (IS_ERR(vport->ingress.drop_rule)) {
- err = PTR_ERR(vport->ingress.drop_rule);
+ if (IS_ERR(vport->ingress.legacy.drop_rule)) {
+ err = PTR_ERR(vport->ingress.legacy.drop_rule);
esw_warn(esw->dev,
"vport[%d] configure ingress drop rule, err(%d)\n",
vport->vport, err);
- vport->ingress.drop_rule = NULL;
+ vport->ingress.legacy.drop_rule = NULL;
goto out;
}
@@ -1363,7 +1365,7 @@ int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
static int esw_vport_egress_config(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
- struct mlx5_fc *counter = vport->egress.drop_counter;
+ struct mlx5_fc *counter = vport->egress.legacy.drop_counter;
struct mlx5_flow_destination drop_ctr_dst = {0};
struct mlx5_flow_destination *dst = NULL;
struct mlx5_flow_act flow_act = {0};
@@ -1411,15 +1413,15 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
dst = &drop_ctr_dst;
dest_num++;
}
- vport->egress.drop_rule =
+ vport->egress.legacy.drop_rule =
mlx5_add_flow_rules(vport->egress.acl, spec,
&flow_act, dst, dest_num);
- if (IS_ERR(vport->egress.drop_rule)) {
- err = PTR_ERR(vport->egress.drop_rule);
+ if (IS_ERR(vport->egress.legacy.drop_rule)) {
+ err = PTR_ERR(vport->egress.legacy.drop_rule);
esw_warn(esw->dev,
"vport[%d] configure egress drop rule failed, err(%d)\n",
vport->vport, err);
- vport->egress.drop_rule = NULL;
+ vport->egress.legacy.drop_rule = NULL;
}
out:
kvfree(spec);
@@ -1662,39 +1664,39 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
}
}
-static void esw_vport_create_drop_counters(struct mlx5_vport *vport)
+static void esw_legacy_vport_create_drop_counters(struct mlx5_vport *vport)
{
struct mlx5_core_dev *dev = vport->dev;
if (MLX5_CAP_ESW_INGRESS_ACL(dev, flow_counter)) {
- vport->ingress.drop_counter = mlx5_fc_create(dev, false);
- if (IS_ERR(vport->ingress.drop_counter)) {
+ vport->ingress.legacy.drop_counter = mlx5_fc_create(dev, false);
+ if (IS_ERR(vport->ingress.legacy.drop_counter)) {
esw_warn(dev,
"vport[%d] configure ingress drop rule counter failed\n",
vport->vport);
- vport->ingress.drop_counter = NULL;
+ vport->ingress.legacy.drop_counter = NULL;
}
}
if (MLX5_CAP_ESW_EGRESS_ACL(dev, flow_counter)) {
- vport->egress.drop_counter = mlx5_fc_create(dev, false);
- if (IS_ERR(vport->egress.drop_counter)) {
+ vport->egress.legacy.drop_counter = mlx5_fc_create(dev, false);
+ if (IS_ERR(vport->egress.legacy.drop_counter)) {
esw_warn(dev,
"vport[%d] configure egress drop rule counter failed\n",
vport->vport);
- vport->egress.drop_counter = NULL;
+ vport->egress.legacy.drop_counter = NULL;
}
}
}
-static void esw_vport_destroy_drop_counters(struct mlx5_vport *vport)
+static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
{
struct mlx5_core_dev *dev = vport->dev;
- if (vport->ingress.drop_counter)
- mlx5_fc_destroy(dev, vport->ingress.drop_counter);
- if (vport->egress.drop_counter)
- mlx5_fc_destroy(dev, vport->egress.drop_counter);
+ if (vport->ingress.legacy.drop_counter)
+ mlx5_fc_destroy(dev, vport->ingress.legacy.drop_counter);
+ if (vport->egress.legacy.drop_counter)
+ mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
}
static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
@@ -1710,7 +1712,7 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
/* Create steering drop counters for ingress and egress ACLs */
if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
esw->mode == MLX5_ESWITCH_LEGACY)
- esw_vport_create_drop_counters(vport);
+ esw_legacy_vport_create_drop_counters(vport);
/* Restore old vport configuration */
esw_apply_vport_conf(esw, vport);
@@ -1770,7 +1772,7 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
MLX5_VPORT_ADMIN_STATE_DOWN);
esw_vport_disable_egress_acl(esw, vport);
esw_vport_disable_ingress_acl(esw, vport);
- esw_vport_destroy_drop_counters(vport);
+ esw_legacy_vport_destroy_drop_counters(vport);
}
esw->enabled_vports--;
mutex_unlock(&esw->state_lock);
@@ -2503,12 +2505,12 @@ static int mlx5_eswitch_query_vport_drop_stats(struct mlx5_core_dev *dev,
if (!vport->enabled || esw->mode != MLX5_ESWITCH_LEGACY)
return 0;
- if (vport->egress.drop_counter)
- mlx5_fc_query(dev, vport->egress.drop_counter,
+ if (vport->egress.legacy.drop_counter)
+ mlx5_fc_query(dev, vport->egress.legacy.drop_counter,
&stats->rx_dropped, &bytes);
- if (vport->ingress.drop_counter)
- mlx5_fc_query(dev, vport->ingress.drop_counter,
+ if (vport->ingress.legacy.drop_counter)
+ mlx5_fc_query(dev, vport->ingress.legacy.drop_counter,
&stats->tx_dropped, &bytes);
if (!MLX5_CAP_GEN(dev, receive_discard_vport_down) &&
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index f21d528057fa..f12d446e2c87 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -74,10 +74,16 @@ struct vport_ingress {
struct mlx5_flow_handle *modify_metadata_rule;
#endif
struct mlx5_flow_handle *allow_rule;
+#ifdef __GENKSYMS__
struct mlx5_flow_handle *drop_rule;
struct mlx5_fc *drop_counter;
+#endif
#ifndef __GENKSYMS__
struct {
+ struct mlx5_flow_handle *drop_rule;
+ struct mlx5_fc *drop_counter;
+ } legacy;
+ struct {
struct mlx5_modify_hdr *modify_metadata;
struct mlx5_flow_handle *modify_metadata_rule;
} offloads;
@@ -89,8 +95,15 @@ struct vport_egress {
struct mlx5_flow_group *allowed_vlans_grp;
struct mlx5_flow_group *drop_grp;
struct mlx5_flow_handle *allowed_vlan;
+#ifdef __GENKSYMS__
struct mlx5_flow_handle *drop_rule;
struct mlx5_fc *drop_counter;
+#else
+ struct {
+ struct mlx5_flow_handle *drop_rule;
+ struct mlx5_fc *drop_counter;
+ } legacy;
+#endif
};
struct mlx5_vport_drop_stats {
--
2.13.6

@ -0,0 +1,119 @@
From c5c504f4dc8c98a1c62d8cee2cf175097fb68ff9 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:55 -0400
Subject: [PATCH 078/312] [netdrv] net/mlx5: Tide up state_lock and vport
enabled flag usage
Message-id: <20200510150452.10307-31-ahleihel@redhat.com>
Patchwork-id: 306654
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 30/87] net/mlx5: Tide up state_lock and vport enabled flag usage
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 77b094305b1ba23e716bb34d3e33c8fe30a5f487
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:13 2019 +0000
net/mlx5: Tide up state_lock and vport enabled flag usage
When eswitch is disabled, vport event handler is unregistered.
This unregistration already synchronizes with running EQ event handler
in below code flow.
mlx5_eswitch_disable()
mlx5_eswitch_event_handlers_unregister()
mlx5_eq_notifier_unregister()
atomic_notifier_chain_unregister()
synchronize_rcu()
notifier_callchain
eswitch_vport_event()
queue_work()
Additionally vport->enabled flag is set under state_lock during
esw_enable_vport() but is not read under state_lock in
(a) esw_disable_vport() and (b) under atomic context
eswitch_vport_event().
It is also necessary to synchronize with already scheduled vport event.
This is already achieved using below sequence.
mlx5_eswitch_event_handlers_unregister()
[..]
flush_workqueue()
Hence,
(a) Remove vport->enabled check in eswitch_vport_event() which
doesn't make any sense.
(b) Remove redundant flush_workqueue() on every vport disable.
(c) Keep esw_disable_vport() symmetric with esw_enable_vport() for
state_lock.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 54b5f290ab9d..8067667fd59e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1745,18 +1745,16 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
{
u16 vport_num = vport->vport;
+ mutex_lock(&esw->state_lock);
if (!vport->enabled)
- return;
+ goto done;
esw_debug(esw->dev, "Disabling vport(%d)\n", vport_num);
/* Mark this vport as disabled to discard new events */
vport->enabled = false;
- /* Wait for current already scheduled events to complete */
- flush_workqueue(esw->work_queue);
/* Disable events from this vport */
arm_vport_context_events_cmd(esw->dev, vport->vport, 0);
- mutex_lock(&esw->state_lock);
/* We don't assume VFs will cleanup after themselves.
* Calling vport change handler while vport is disabled will cleanup
* the vport resources.
@@ -1775,6 +1773,8 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
esw_legacy_vport_destroy_drop_counters(vport);
}
esw->enabled_vports--;
+
+done:
mutex_unlock(&esw->state_lock);
}
@@ -1788,12 +1788,8 @@ static int eswitch_vport_event(struct notifier_block *nb,
vport_num = be16_to_cpu(eqe->data.vport_change.vport_num);
vport = mlx5_eswitch_get_vport(esw, vport_num);
- if (IS_ERR(vport))
- return NOTIFY_OK;
-
- if (vport->enabled)
+ if (!IS_ERR(vport))
queue_work(esw->work_queue, &vport->vport_change_handler);
-
return NOTIFY_OK;
}
--
2.13.6
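
The change reads more clearly as the resulting function shape than as separate hunks. A
condensed sketch of esw_disable_vport() after this patch, reconstructed from the diff
(unchanged cleanup elided):

static void esw_disable_vport(struct mlx5_eswitch *esw,
                              struct mlx5_vport *vport)
{
        u16 vport_num = vport->vport;

        mutex_lock(&esw->state_lock);
        if (!vport->enabled)
                goto done;

        esw_debug(esw->dev, "Disabling vport(%d)\n", vport_num);
        /* Mark this vport as disabled to discard new events */
        vport->enabled = false;
        /* Disable events from this vport */
        arm_vport_context_events_cmd(esw->dev, vport->vport, 0);

        /* ... existing vport cleanup, now entirely under state_lock ... */
        esw->enabled_vports--;

done:
        mutex_unlock(&esw->state_lock);
}

The per-vport flush_workqueue() and the enabled check in eswitch_vport_event() are
dropped because, as the commit message argues, notifier unregistration plus the
flush_workqueue() performed when the eswitch itself is disabled already synchronize
against in-flight vport events.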

@ -0,0 +1,194 @@
From a549a6c259fc46e49b55d5b7ce5ad1478d9a80b8 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:56 -0400
Subject: [PATCH 079/312] [netdrv] net/mlx5: E-switch, Prepare code to handle
vport enable error
Message-id: <20200510150452.10307-32-ahleihel@redhat.com>
Patchwork-id: 306655
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 31/87] net/mlx5: E-switch, Prepare code to handle vport enable error
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 925a6acc77a70f8b5bfd0df75e36557aa400b0a0
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:15 2019 +0000
net/mlx5: E-switch, Prepare code to handle vport enable error
In a subsequent patch, esw_enable_vport() may fail and return an error.
Prepare the code to handle such an error.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 62 ++++++++++++++++------
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 2 +-
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 5 +-
3 files changed, 50 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 8067667fd59e..2ecb993545f9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -443,6 +443,13 @@ static int esw_create_legacy_table(struct mlx5_eswitch *esw)
return err;
}
+static void esw_destroy_legacy_table(struct mlx5_eswitch *esw)
+{
+ esw_cleanup_vepa_rules(esw);
+ esw_destroy_legacy_fdb_table(esw);
+ esw_destroy_legacy_vepa_table(esw);
+}
+
#define MLX5_LEGACY_SRIOV_VPORT_EVENTS (MLX5_VPORT_UC_ADDR_CHANGE | \
MLX5_VPORT_MC_ADDR_CHANGE | \
MLX5_VPORT_PROMISC_CHANGE)
@@ -459,15 +466,10 @@ static int esw_legacy_enable(struct mlx5_eswitch *esw)
mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs)
vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO;
- mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_LEGACY_SRIOV_VPORT_EVENTS);
- return 0;
-}
-
-static void esw_destroy_legacy_table(struct mlx5_eswitch *esw)
-{
- esw_cleanup_vepa_rules(esw);
- esw_destroy_legacy_fdb_table(esw);
- esw_destroy_legacy_vepa_table(esw);
+ ret = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_LEGACY_SRIOV_VPORT_EVENTS);
+ if (ret)
+ esw_destroy_legacy_table(esw);
+ return ret;
}
static void esw_legacy_disable(struct mlx5_eswitch *esw)
@@ -1699,8 +1701,8 @@ static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
}
-static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
- enum mlx5_eswitch_vport_event enabled_events)
+static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
+ enum mlx5_eswitch_vport_event enabled_events)
{
u16 vport_num = vport->vport;
@@ -1738,6 +1740,7 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
esw->enabled_vports++;
esw_debug(esw->dev, "Enabled VPORT(%d)\n", vport_num);
mutex_unlock(&esw->state_lock);
+ return 0;
}
static void esw_disable_vport(struct mlx5_eswitch *esw,
@@ -1862,26 +1865,51 @@ static void mlx5_eswitch_clear_vf_vports_info(struct mlx5_eswitch *esw)
/* mlx5_eswitch_enable_pf_vf_vports() enables vports of PF, ECPF and VFs
* whichever are present on the eswitch.
*/
-void
+int
mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
enum mlx5_eswitch_vport_event enabled_events)
{
struct mlx5_vport *vport;
+ int num_vfs;
+ int ret;
int i;
/* Enable PF vport */
vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
- esw_enable_vport(esw, vport, enabled_events);
+ ret = esw_enable_vport(esw, vport, enabled_events);
+ if (ret)
+ return ret;
- /* Enable ECPF vports */
+ /* Enable ECPF vport */
if (mlx5_ecpf_vport_exists(esw->dev)) {
vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
- esw_enable_vport(esw, vport, enabled_events);
+ ret = esw_enable_vport(esw, vport, enabled_events);
+ if (ret)
+ goto ecpf_err;
}
/* Enable VF vports */
- mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs)
- esw_enable_vport(esw, vport, enabled_events);
+ mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) {
+ ret = esw_enable_vport(esw, vport, enabled_events);
+ if (ret)
+ goto vf_err;
+ }
+ return 0;
+
+vf_err:
+ num_vfs = i - 1;
+ mlx5_esw_for_each_vf_vport_reverse(esw, i, vport, num_vfs)
+ esw_disable_vport(esw, vport);
+
+ if (mlx5_ecpf_vport_exists(esw->dev)) {
+ vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
+ esw_disable_vport(esw, vport);
+ }
+
+ecpf_err:
+ vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
+ esw_disable_vport(esw, vport);
+ return ret;
}
/* mlx5_eswitch_disable_pf_vf_vports() disables vports of PF, ECPF and VFs
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index f12d446e2c87..d29df0c302f2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -626,7 +626,7 @@ bool mlx5_eswitch_is_vf_vport(const struct mlx5_eswitch *esw, u16 vport_num);
void mlx5_eswitch_update_num_of_vfs(struct mlx5_eswitch *esw, const int num_vfs);
int mlx5_esw_funcs_changed_handler(struct notifier_block *nb, unsigned long type, void *data);
-void
+int
mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
enum mlx5_eswitch_vport_event enabled_events);
void mlx5_eswitch_disable_pf_vf_vports(struct mlx5_eswitch *esw);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 00d126fa6e02..b33543c5f68f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2158,7 +2158,9 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs)
vport->info.link_state = MLX5_VPORT_ADMIN_STATE_DOWN;
- mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE);
+ err = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE);
+ if (err)
+ goto err_vports;
err = esw_offloads_load_all_reps(esw);
if (err)
@@ -2171,6 +2173,7 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
err_reps:
mlx5_eswitch_disable_pf_vf_vports(esw);
+err_vports:
esw_set_passing_vport_metadata(esw, false);
err_vport_metadata:
esw_offloads_steering_cleanup(esw);
--
2.13.6
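
On failure the new code unwinds in the reverse order of enabling: the already-enabled VFs
first, then the ECPF (if present), then the PF. A compressed sketch of the forward path in
mlx5_eswitch_enable_pf_vf_vports() after this patch; pf_vport and ecpf_vport stand in for
the mlx5_eswitch_get_vport() lookups shown in the diff:

        ret = esw_enable_vport(esw, pf_vport, enabled_events);     /* PF first */
        if (ret)
                return ret;

        if (mlx5_ecpf_vport_exists(esw->dev)) {
                ret = esw_enable_vport(esw, ecpf_vport, enabled_events);
                if (ret)
                        goto ecpf_err;  /* unwind: disable the PF */
        }

        mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) {
                ret = esw_enable_vport(esw, vport, enabled_events);
                if (ret)
                        goto vf_err;    /* unwind: VFs enabled so far, ECPF, then PF */
        }
        return 0;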

@ -0,0 +1,162 @@
From 13d9574432dadefa6a706a4d523082b56bb4d200 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:57 -0400
Subject: [PATCH 080/312] [netdrv] net/mlx5: E-switch, Legacy introduce and use
per vport acl tables APIs
Message-id: <20200510150452.10307-33-ahleihel@redhat.com>
Patchwork-id: 306657
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 32/87] net/mlx5: E-switch, Legacy introduce and use per vport acl tables APIs
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit f5d0c01d65adba2b898836894d200e85c8a8def3
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:17 2019 +0000
net/mlx5: E-switch, Legacy introduce and use per vport acl tables APIs
Introduce and use per-vport ACL table creation and destroy APIs, so that
a subsequent patch can use them when enabling/disabling a vport in a
unified way for legacy vs offloads mode.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 73 +++++++++++++++++++----
1 file changed, 60 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 2ecb993545f9..f854750a15c5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1658,12 +1658,6 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
SET_VLAN_STRIP | SET_VLAN_INSERT : 0;
modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan, vport->info.qos,
flags);
-
- /* Only legacy mode needs ACLs */
- if (esw->mode == MLX5_ESWITCH_LEGACY) {
- esw_vport_ingress_config(esw, vport);
- esw_vport_egress_config(esw, vport);
- }
}
static void esw_legacy_vport_create_drop_counters(struct mlx5_vport *vport)
@@ -1701,10 +1695,59 @@ static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
}
+static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ int ret;
+
+ /* Only non manager vports need ACL in legacy mode */
+ if (mlx5_esw_is_manager_vport(esw, vport->vport))
+ return 0;
+
+ ret = esw_vport_ingress_config(esw, vport);
+ if (ret)
+ return ret;
+
+ ret = esw_vport_egress_config(esw, vport);
+ if (ret)
+ esw_vport_disable_ingress_acl(esw, vport);
+
+ return ret;
+}
+
+static int esw_vport_setup_acl(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ if (esw->mode == MLX5_ESWITCH_LEGACY)
+ return esw_vport_create_legacy_acl_tables(esw, vport);
+
+ return 0;
+}
+
+static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+
+{
+ if (mlx5_esw_is_manager_vport(esw, vport->vport))
+ return;
+
+ esw_vport_disable_egress_acl(esw, vport);
+ esw_vport_disable_ingress_acl(esw, vport);
+ esw_legacy_vport_destroy_drop_counters(vport);
+}
+
+static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ if (esw->mode == MLX5_ESWITCH_LEGACY)
+ esw_vport_destroy_legacy_acl_tables(esw, vport);
+}
+
static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
enum mlx5_eswitch_vport_event enabled_events)
{
u16 vport_num = vport->vport;
+ int ret;
mutex_lock(&esw->state_lock);
WARN_ON(vport->enabled);
@@ -1719,6 +1762,10 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
/* Restore old vport configuration */
esw_apply_vport_conf(esw, vport);
+ ret = esw_vport_setup_acl(esw, vport);
+ if (ret)
+ goto done;
+
/* Attach vport to the eswitch rate limiter */
if (esw_vport_enable_qos(esw, vport, vport->info.max_rate,
vport->qos.bw_share))
@@ -1739,8 +1786,9 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
esw->enabled_vports++;
esw_debug(esw->dev, "Enabled VPORT(%d)\n", vport_num);
+done:
mutex_unlock(&esw->state_lock);
- return 0;
+ return ret;
}
static void esw_disable_vport(struct mlx5_eswitch *esw,
@@ -1765,16 +1813,15 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
esw_vport_change_handle_locked(vport);
vport->enabled_events = 0;
esw_vport_disable_qos(esw, vport);
- if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
- esw->mode == MLX5_ESWITCH_LEGACY) {
+
+ if (!mlx5_esw_is_manager_vport(esw, vport->vport) &&
+ esw->mode == MLX5_ESWITCH_LEGACY)
mlx5_modify_vport_admin_state(esw->dev,
MLX5_VPORT_STATE_OP_MOD_ESW_VPORT,
vport_num, 1,
MLX5_VPORT_ADMIN_STATE_DOWN);
- esw_vport_disable_egress_acl(esw, vport);
- esw_vport_disable_ingress_acl(esw, vport);
- esw_legacy_vport_destroy_drop_counters(vport);
- }
+
+ esw_vport_cleanup_acl(esw, vport);
esw->enabled_vports--;
done:
--
2.13.6
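
The practical effect is that legacy ACL programming moves out of esw_apply_vport_conf()
and into the vport enable path behind a mode dispatch. A short sketch of the ordering
inside esw_enable_vport() after this patch, pieced together from the hunks above:

        mutex_lock(&esw->state_lock);
        WARN_ON(vport->enabled);

        /* Restore old vport configuration; no ACL programming happens here any more */
        esw_apply_vport_conf(esw, vport);

        ret = esw_vport_setup_acl(esw, vport);  /* legacy mode: ingress + egress ACL tables */
        if (ret)
                goto done;

        /* attach QoS, arm vport events, mark the vport enabled ... */

done:
        mutex_unlock(&esw->state_lock);
        return ret;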

@ -0,0 +1,161 @@
From 5e82bd06ba83b431da61f9c9b735dd9f427973ec Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:58 -0400
Subject: [PATCH 081/312] [netdrv] net/mlx5: Move ACL drop counters life cycle
close to ACL lifecycle
Message-id: <20200510150452.10307-34-ahleihel@redhat.com>
Patchwork-id: 306656
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 33/87] net/mlx5: Move ACL drop counters life cycle close to ACL lifecycle
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit b7752f8341c4fecc4720fbd58f868e114a57fdea
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:19 2019 +0000
net/mlx5: Move ACL drop counters life cycle close to ACL lifecycle
It is better to create/destroy the ACL-related drop counters where the actual
drop rule ACLs are created/destroyed, so that ACL configuration is
self-contained for ingress and egress.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 74 +++++++++++------------
1 file changed, 35 insertions(+), 39 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index f854750a15c5..2d094bb7b8a1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1660,58 +1660,55 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
flags);
}
-static void esw_legacy_vport_create_drop_counters(struct mlx5_vport *vport)
+static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
{
- struct mlx5_core_dev *dev = vport->dev;
+ int ret;
- if (MLX5_CAP_ESW_INGRESS_ACL(dev, flow_counter)) {
- vport->ingress.legacy.drop_counter = mlx5_fc_create(dev, false);
+ /* Only non manager vports need ACL in legacy mode */
+ if (mlx5_esw_is_manager_vport(esw, vport->vport))
+ return 0;
+
+ if (!mlx5_esw_is_manager_vport(esw, vport->vport) &&
+ MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
+ vport->ingress.legacy.drop_counter = mlx5_fc_create(esw->dev, false);
if (IS_ERR(vport->ingress.legacy.drop_counter)) {
- esw_warn(dev,
+ esw_warn(esw->dev,
"vport[%d] configure ingress drop rule counter failed\n",
vport->vport);
vport->ingress.legacy.drop_counter = NULL;
}
}
- if (MLX5_CAP_ESW_EGRESS_ACL(dev, flow_counter)) {
- vport->egress.legacy.drop_counter = mlx5_fc_create(dev, false);
+ ret = esw_vport_ingress_config(esw, vport);
+ if (ret)
+ goto ingress_err;
+
+ if (!mlx5_esw_is_manager_vport(esw, vport->vport) &&
+ MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
+ vport->egress.legacy.drop_counter = mlx5_fc_create(esw->dev, false);
if (IS_ERR(vport->egress.legacy.drop_counter)) {
- esw_warn(dev,
+ esw_warn(esw->dev,
"vport[%d] configure egress drop rule counter failed\n",
vport->vport);
vport->egress.legacy.drop_counter = NULL;
}
}
-}
-
-static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
-{
- struct mlx5_core_dev *dev = vport->dev;
-
- if (vport->ingress.legacy.drop_counter)
- mlx5_fc_destroy(dev, vport->ingress.legacy.drop_counter);
- if (vport->egress.legacy.drop_counter)
- mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
-}
-
-static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport)
-{
- int ret;
-
- /* Only non manager vports need ACL in legacy mode */
- if (mlx5_esw_is_manager_vport(esw, vport->vport))
- return 0;
-
- ret = esw_vport_ingress_config(esw, vport);
- if (ret)
- return ret;
ret = esw_vport_egress_config(esw, vport);
if (ret)
- esw_vport_disable_ingress_acl(esw, vport);
+ goto egress_err;
+
+ return 0;
+egress_err:
+ esw_vport_disable_ingress_acl(esw, vport);
+ mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
+ vport->egress.legacy.drop_counter = NULL;
+
+ingress_err:
+ mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
+ vport->ingress.legacy.drop_counter = NULL;
return ret;
}
@@ -1732,8 +1729,12 @@ static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
return;
esw_vport_disable_egress_acl(esw, vport);
+ mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
+ vport->egress.legacy.drop_counter = NULL;
+
esw_vport_disable_ingress_acl(esw, vport);
- esw_legacy_vport_destroy_drop_counters(vport);
+ mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
+ vport->ingress.legacy.drop_counter = NULL;
}
static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
@@ -1754,11 +1755,6 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
- /* Create steering drop counters for ingress and egress ACLs */
- if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
- esw->mode == MLX5_ESWITCH_LEGACY)
- esw_legacy_vport_create_drop_counters(vport);
-
/* Restore old vport configuration */
esw_apply_vport_conf(esw, vport);
--
2.13.6
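
Because the diff interleaves removal of the old counter helpers with the new code, the
resulting create path is easier to read as one chain. A condensed sketch of
esw_vport_create_legacy_acl_tables() after this patch, reconstructed from the added lines
(drop-counter creation is skipped when the relevant ACL flow_counter capability is absent):

        /* optional ingress drop counter is created first, then the ingress ACL */
        ret = esw_vport_ingress_config(esw, vport);
        if (ret)
                goto ingress_err;

        /* optional egress drop counter is created next, then the egress ACL */
        ret = esw_vport_egress_config(esw, vport);
        if (ret)
                goto egress_err;
        return 0;

egress_err:
        esw_vport_disable_ingress_acl(esw, vport);
        mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
        vport->egress.legacy.drop_counter = NULL;
ingress_err:
        mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
        vport->ingress.legacy.drop_counter = NULL;
        return ret;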

@ -0,0 +1,124 @@
From d8608d0e2b0bcaa440ea7bcc65ef20699846a27e Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:03:59 -0400
Subject: [PATCH 082/312] [netdrv] net/mlx5: E-switch, Offloads introduce and
use per vport acl tables APIs
Message-id: <20200510150452.10307-35-ahleihel@redhat.com>
Patchwork-id: 306658
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 34/87] net/mlx5: E-switch, Offloads introduce and use per vport acl tables APIs
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 89a0f1fb16adca959ea1485a856fbcfcd1d24208
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:20 2019 +0000
net/mlx5: E-switch, Offloads introduce and use per vport acl tables APIs
Introduce and use per-vport ACL table creation and destroy APIs, so that
a subsequent patch can use them when enabling/disabling a vport.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 49 ++++++++++++++--------
1 file changed, 32 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index b33543c5f68f..756031dcf056 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1964,6 +1964,32 @@ static bool esw_use_vport_metadata(const struct mlx5_eswitch *esw)
esw_check_vport_match_metadata_supported(esw);
}
+static int
+esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ int err;
+
+ err = esw_vport_ingress_config(esw, vport);
+ if (err)
+ return err;
+
+ if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
+ err = esw_vport_egress_config(esw, vport);
+ if (err)
+ esw_vport_disable_ingress_acl(esw, vport);
+ }
+ return err;
+}
+
+static void
+esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ esw_vport_disable_egress_acl(esw, vport);
+ esw_vport_disable_ingress_acl(esw, vport);
+}
+
static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
{
struct mlx5_vport *vport;
@@ -1974,15 +2000,9 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
mlx5_esw_for_all_vports(esw, i, vport) {
- err = esw_vport_ingress_config(esw, vport);
+ err = esw_vport_create_offloads_acl_tables(esw, vport);
if (err)
- goto err_ingress;
-
- if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
- err = esw_vport_egress_config(esw, vport);
- if (err)
- goto err_egress;
- }
+ goto err_acl_table;
}
if (mlx5_eswitch_vport_match_metadata_enabled(esw))
@@ -1990,13 +2010,10 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
return 0;
-err_egress:
- esw_vport_disable_ingress_acl(esw, vport);
-err_ingress:
+err_acl_table:
for (j = MLX5_VPORT_PF; j < i; j++) {
vport = &esw->vports[j];
- esw_vport_disable_egress_acl(esw, vport);
- esw_vport_disable_ingress_acl(esw, vport);
+ esw_vport_destroy_offloads_acl_tables(esw, vport);
}
return err;
@@ -2007,10 +2024,8 @@ static void esw_destroy_offloads_acl_tables(struct mlx5_eswitch *esw)
struct mlx5_vport *vport;
int i;
- mlx5_esw_for_all_vports(esw, i, vport) {
- esw_vport_disable_egress_acl(esw, vport);
- esw_vport_disable_ingress_acl(esw, vport);
- }
+ mlx5_esw_for_all_vports(esw, i, vport)
+ esw_vport_destroy_offloads_acl_tables(esw, vport);
esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
}
--
2.13.6

@ -0,0 +1,198 @@
From 31d317151ad03b5040aa5ee117208ff4b688095b Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:00 -0400
Subject: [PATCH 083/312] [netdrv] net/mlx5: E-switch, Offloads shift ACL
programming during enable/disable vport
Message-id: <20200510150452.10307-36-ahleihel@redhat.com>
Patchwork-id: 306659
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 35/87] net/mlx5: E-switch, Offloads shift ACL programming during enable/disable vport
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
Context diff due to already backported commit
1e62e222db2e ("net/mlx5: E-Switch, Use vport metadata matching only when mandatory")
---> In function esw_create_uplink_offloads_acl_tables, we now call esw_use_vport_metadata
instead of esw_check_vport_match_metadata_supported.
commit 748da30b376e034ae54b53e7e38e15cfa2bf4dda
Author: Vu Pham <vuhuong@mellanox.com>
Date: Mon Oct 28 23:35:22 2019 +0000
net/mlx5: E-switch, Offloads shift ACL programming during enable/disable vport
Currently, legacy mode enables the ACLs while enabling a vport, whereas
offloads mode enables them when moving to offloads mode.
Bring consistency to both modes by enabling/disabling ACL when
enabling/disabling a vport.
It also eliminates creating ingress ACL table on unused ECPF vport in
offloads mode.
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 6 ++--
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 7 ++++
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 42 +++++++---------------
3 files changed, 24 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 2d094bb7b8a1..91b5ec6c3e13 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1717,8 +1717,8 @@ static int esw_vport_setup_acl(struct mlx5_eswitch *esw,
{
if (esw->mode == MLX5_ESWITCH_LEGACY)
return esw_vport_create_legacy_acl_tables(esw, vport);
-
- return 0;
+ else
+ return esw_vport_create_offloads_acl_tables(esw, vport);
}
static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
@@ -1742,6 +1742,8 @@ static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
{
if (esw->mode == MLX5_ESWITCH_LEGACY)
esw_vport_destroy_legacy_acl_tables(esw, vport);
+ else
+ esw_vport_destroy_offloads_acl_tables(esw, vport);
}
static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index d29df0c302f2..0927019062d2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -631,6 +631,13 @@ mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
enum mlx5_eswitch_vport_event enabled_events);
void mlx5_eswitch_disable_pf_vf_vports(struct mlx5_eswitch *esw);
+int
+esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport);
+void
+esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport);
+
#else /* CONFIG_MLX5_ESWITCH */
/* eswitch API stubs */
static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 756031dcf056..2485c2a7ad9d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1964,7 +1964,7 @@ static bool esw_use_vport_metadata(const struct mlx5_eswitch *esw)
esw_check_vport_match_metadata_supported(esw);
}
-static int
+int
esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
@@ -1982,7 +1982,7 @@ esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
return err;
}
-static void
+void
esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
@@ -1990,43 +1990,27 @@ esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
esw_vport_disable_ingress_acl(esw, vport);
}
-static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
+static int esw_create_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
{
struct mlx5_vport *vport;
- int i, j;
int err;
if (esw_use_vport_metadata(esw))
esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
- mlx5_esw_for_all_vports(esw, i, vport) {
- err = esw_vport_create_offloads_acl_tables(esw, vport);
- if (err)
- goto err_acl_table;
- }
-
- if (mlx5_eswitch_vport_match_metadata_enabled(esw))
- esw_info(esw->dev, "Use metadata reg_c as source vport to match\n");
-
- return 0;
-
-err_acl_table:
- for (j = MLX5_VPORT_PF; j < i; j++) {
- vport = &esw->vports[j];
- esw_vport_destroy_offloads_acl_tables(esw, vport);
- }
-
+ vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_UPLINK);
+ err = esw_vport_create_offloads_acl_tables(esw, vport);
+ if (err)
+ esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
return err;
}
-static void esw_destroy_offloads_acl_tables(struct mlx5_eswitch *esw)
+static void esw_destroy_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
{
struct mlx5_vport *vport;
- int i;
-
- mlx5_esw_for_all_vports(esw, i, vport)
- esw_vport_destroy_offloads_acl_tables(esw, vport);
+ vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_UPLINK);
+ esw_vport_destroy_offloads_acl_tables(esw, vport);
esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
}
@@ -2044,7 +2028,7 @@ static int esw_offloads_steering_init(struct mlx5_eswitch *esw)
memset(&esw->fdb_table.offloads, 0, sizeof(struct offloads_fdb));
mutex_init(&esw->fdb_table.offloads.fdb_prio_lock);
- err = esw_create_offloads_acl_tables(esw);
+ err = esw_create_uplink_offloads_acl_tables(esw);
if (err)
return err;
@@ -2069,7 +2053,7 @@ static int esw_offloads_steering_init(struct mlx5_eswitch *esw)
esw_destroy_offloads_fdb_tables(esw);
create_fdb_err:
- esw_destroy_offloads_acl_tables(esw);
+ esw_destroy_uplink_offloads_acl_tables(esw);
return err;
}
@@ -2079,7 +2063,7 @@ static void esw_offloads_steering_cleanup(struct mlx5_eswitch *esw)
esw_destroy_vport_rx_group(esw);
esw_destroy_offloads_table(esw);
esw_destroy_offloads_fdb_tables(esw);
- esw_destroy_offloads_acl_tables(esw);
+ esw_destroy_uplink_offloads_acl_tables(esw);
}
static void
--
2.13.6

@ -0,0 +1,104 @@
From 807c9a6c1824b43987f92a40a7ef47bd582a38e6 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:01 -0400
Subject: [PATCH 084/312] [netdrv] net/mlx5: Restrict metadata disablement to
offloads mode
Message-id: <20200510150452.10307-37-ahleihel@redhat.com>
Patchwork-id: 306660
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 36/87] net/mlx5: Restrict metadata disablement to offloads mode
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit a962d7a61e2404cda6a89bfa5cc193c62223bb5e
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:24 2019 +0000
net/mlx5: Restrict metadata disablement to offloads mode
Now that there is a clear separation of ACL setup/cleanup between legacy
and offloads modes, limit metadata disablement to offloads mode.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 2 --
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 2 --
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 9 ++++++---
3 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 91b5ec6c3e13..97af7d793435 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1209,8 +1209,6 @@ void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
mlx5_del_flow_rules(vport->ingress.allow_rule);
vport->ingress.allow_rule = NULL;
}
-
- esw_vport_del_ingress_acl_modify_metadata(esw, vport);
}
void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 0927019062d2..777224ed18bc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -282,8 +282,6 @@ void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
-void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport);
int mlx5_esw_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num,
u32 rate_mbps);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 2485c2a7ad9d..767993b10110 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1849,8 +1849,8 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
return err;
}
-void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport)
+static void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
{
if (vport->ingress.offloads.modify_metadata_rule) {
mlx5_del_flow_rules(vport->ingress.offloads.modify_metadata_rule);
@@ -1976,8 +1976,10 @@ esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
err = esw_vport_egress_config(esw, vport);
- if (err)
+ if (err) {
+ esw_vport_del_ingress_acl_modify_metadata(esw, vport);
esw_vport_disable_ingress_acl(esw, vport);
+ }
}
return err;
}
@@ -1987,6 +1989,7 @@ esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
esw_vport_disable_egress_acl(esw, vport);
+ esw_vport_del_ingress_acl_modify_metadata(esw, vport);
esw_vport_disable_ingress_acl(esw, vport);
}
--
2.13.6

@ -0,0 +1,588 @@
From ccb016735ab552893c77a5deeeef4d795c18448e Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:02 -0400
Subject: [PATCH 085/312] [netdrv] net/mlx5: Refactor ingress acl configuration
Message-id: <20200510150452.10307-38-ahleihel@redhat.com>
Patchwork-id: 306661
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 37/87] net/mlx5: Refactor ingress acl configuration
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 10652f39943ec19d32a6fa44a8523b0d40abcbcf
Author: Parav Pandit <parav@mellanox.com>
Date: Mon Oct 28 23:35:26 2019 +0000
net/mlx5: Refactor ingress acl configuration
Drop, untagged, spoof check and untagged spoof check flow groups are
limited to legacy mode only.
Therefore, the following refactoring is done to
(a) improve code readability and
(b) better split the code between legacy and offloads modes
1. Move legacy flow groups under legacy structure
2. Add validity check for group deletion
3. Restrict scope of esw_vport_disable_ingress_acl to legacy mode
4. Rename esw_vport_enable_ingress_acl() to
esw_vport_create_ingress_acl_table() and limit its scope to
table creation
5. Introduce legacy flow groups creation helper
esw_legacy_create_ingress_acl_groups() and keep its scope to legacy mode
6. Reduce offloads ingress groups from 4 to just 1 metadata group
per vport
7. Remove redundant IS_ERR_OR_NULL checks, as entries are marked NULL on free.
8. Shorten error messages by removing the redundant 'E-switch' prefix
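To make the intent of the split concrete, here is a minimal userspace C
sketch (illustrative names and sizes only, not the mlx5 implementation):
table creation is shared, while the flow groups placed into that table come
from a mode-specific helper (four legacy groups vs. one offloads metadata
group).

#include <stdio.h>

enum eswitch_mode { ESWITCH_LEGACY, ESWITCH_OFFLOADS };

static int create_ingress_acl_table(int vport, int table_size)
{
        printf("vport %d: ingress ACL table with %d entries\n", vport, table_size);
        return 0;
}

static int create_legacy_groups(int vport)  { printf("vport %d: 4 legacy groups\n", vport); return 0; }
static int create_metadata_group(int vport) { printf("vport %d: 1 metadata group\n", vport); return 0; }

static int ingress_config(enum eswitch_mode mode, int vport)
{
        int err = create_ingress_acl_table(vport, mode == ESWITCH_LEGACY ? 4 : 1);

        if (err)
                return err;
        return mode == ESWITCH_LEGACY ? create_legacy_groups(vport)
                                      : create_metadata_group(vport);
}

int main(void)
{
        ingress_config(ESWITCH_LEGACY, 1);
        ingress_config(ESWITCH_OFFLOADS, 2);
        return 0;
}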
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 228 ++++++++++++---------
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 17 +-
.../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 67 +++++-
3 files changed, 201 insertions(+), 111 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 97af7d793435..1937198405e1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1065,57 +1065,21 @@ void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
vport->egress.acl = NULL;
}
-int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport)
+static int
+esw_vport_create_legacy_ingress_acl_groups(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
{
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
struct mlx5_core_dev *dev = esw->dev;
- struct mlx5_flow_namespace *root_ns;
- struct mlx5_flow_table *acl;
struct mlx5_flow_group *g;
void *match_criteria;
u32 *flow_group_in;
- /* The ingress acl table contains 4 groups
- * (2 active rules at the same time -
- * 1 allow rule from one of the first 3 groups.
- * 1 drop rule from the last group):
- * 1)Allow untagged traffic with smac=original mac.
- * 2)Allow untagged traffic.
- * 3)Allow traffic with smac=original mac.
- * 4)Drop all other traffic.
- */
- int table_size = 4;
- int err = 0;
-
- if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support))
- return -EOPNOTSUPP;
-
- if (!IS_ERR_OR_NULL(vport->ingress.acl))
- return 0;
-
- esw_debug(dev, "Create vport[%d] ingress ACL log_max_size(%d)\n",
- vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
-
- root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
- mlx5_eswitch_vport_num_to_index(esw, vport->vport));
- if (!root_ns) {
- esw_warn(dev, "Failed to get E-Switch ingress flow namespace for vport (%d)\n", vport->vport);
- return -EOPNOTSUPP;
- }
+ int err;
flow_group_in = kvzalloc(inlen, GFP_KERNEL);
if (!flow_group_in)
return -ENOMEM;
- acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
- if (IS_ERR(acl)) {
- err = PTR_ERR(acl);
- esw_warn(dev, "Failed to create E-Switch vport[%d] ingress flow Table, err(%d)\n",
- vport->vport, err);
- goto out;
- }
- vport->ingress.acl = acl;
-
match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
@@ -1125,14 +1089,14 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
- g = mlx5_create_flow_group(acl, flow_group_in);
+ g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
- esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged spoofchk flow group, err(%d)\n",
+ esw_warn(dev, "vport[%d] ingress create untagged spoofchk flow group, err(%d)\n",
vport->vport, err);
- goto out;
+ goto spoof_err;
}
- vport->ingress.allow_untagged_spoofchk_grp = g;
+ vport->ingress.legacy.allow_untagged_spoofchk_grp = g;
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
@@ -1140,14 +1104,14 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
- g = mlx5_create_flow_group(acl, flow_group_in);
+ g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
- esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged flow group, err(%d)\n",
+ esw_warn(dev, "vport[%d] ingress create untagged flow group, err(%d)\n",
vport->vport, err);
- goto out;
+ goto untagged_err;
}
- vport->ingress.allow_untagged_only_grp = g;
+ vport->ingress.legacy.allow_untagged_only_grp = g;
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
@@ -1156,80 +1120,134 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 2);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 2);
- g = mlx5_create_flow_group(acl, flow_group_in);
+ g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
- esw_warn(dev, "Failed to create E-Switch vport[%d] ingress spoofchk flow group, err(%d)\n",
+ esw_warn(dev, "vport[%d] ingress create spoofchk flow group, err(%d)\n",
vport->vport, err);
- goto out;
+ goto allow_spoof_err;
}
- vport->ingress.allow_spoofchk_only_grp = g;
+ vport->ingress.legacy.allow_spoofchk_only_grp = g;
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 3);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 3);
- g = mlx5_create_flow_group(acl, flow_group_in);
+ g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
- esw_warn(dev, "Failed to create E-Switch vport[%d] ingress drop flow group, err(%d)\n",
+ esw_warn(dev, "vport[%d] ingress create drop flow group, err(%d)\n",
vport->vport, err);
- goto out;
+ goto drop_err;
}
- vport->ingress.drop_grp = g;
+ vport->ingress.legacy.drop_grp = g;
+ kvfree(flow_group_in);
+ return 0;
-out:
- if (err) {
- if (!IS_ERR_OR_NULL(vport->ingress.allow_spoofchk_only_grp))
- mlx5_destroy_flow_group(
- vport->ingress.allow_spoofchk_only_grp);
- if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_only_grp))
- mlx5_destroy_flow_group(
- vport->ingress.allow_untagged_only_grp);
- if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_spoofchk_grp))
- mlx5_destroy_flow_group(
- vport->ingress.allow_untagged_spoofchk_grp);
- if (!IS_ERR_OR_NULL(vport->ingress.acl))
- mlx5_destroy_flow_table(vport->ingress.acl);
+drop_err:
+ if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_spoofchk_only_grp)) {
+ mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
+ vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
}
-
+allow_spoof_err:
+ if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_only_grp)) {
+ mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
+ vport->ingress.legacy.allow_untagged_only_grp = NULL;
+ }
+untagged_err:
+ if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_spoofchk_grp)) {
+ mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
+ vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
+ }
+spoof_err:
kvfree(flow_group_in);
return err;
}
+int esw_vport_create_ingress_acl_table(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport, int table_size)
+{
+ struct mlx5_core_dev *dev = esw->dev;
+ struct mlx5_flow_namespace *root_ns;
+ struct mlx5_flow_table *acl;
+ int vport_index;
+ int err;
+
+ if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support))
+ return -EOPNOTSUPP;
+
+ esw_debug(dev, "Create vport[%d] ingress ACL log_max_size(%d)\n",
+ vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
+
+ vport_index = mlx5_eswitch_vport_num_to_index(esw, vport->vport);
+ root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+ vport_index);
+ if (!root_ns) {
+ esw_warn(dev, "Failed to get E-Switch ingress flow namespace for vport (%d)\n",
+ vport->vport);
+ return -EOPNOTSUPP;
+ }
+
+ acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
+ if (IS_ERR(acl)) {
+ err = PTR_ERR(acl);
+ esw_warn(dev, "vport[%d] ingress create flow Table, err(%d)\n",
+ vport->vport, err);
+ return err;
+ }
+ vport->ingress.acl = acl;
+ return 0;
+}
+
+void esw_vport_destroy_ingress_acl_table(struct mlx5_vport *vport)
+{
+ if (!vport->ingress.acl)
+ return;
+
+ mlx5_destroy_flow_table(vport->ingress.acl);
+ vport->ingress.acl = NULL;
+}
+
void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
- if (!IS_ERR_OR_NULL(vport->ingress.legacy.drop_rule)) {
+ if (vport->ingress.legacy.drop_rule) {
mlx5_del_flow_rules(vport->ingress.legacy.drop_rule);
vport->ingress.legacy.drop_rule = NULL;
}
- if (!IS_ERR_OR_NULL(vport->ingress.allow_rule)) {
+ if (vport->ingress.allow_rule) {
mlx5_del_flow_rules(vport->ingress.allow_rule);
vport->ingress.allow_rule = NULL;
}
}
-void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport)
+static void esw_vport_disable_legacy_ingress_acl(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
{
- if (IS_ERR_OR_NULL(vport->ingress.acl))
+ if (!vport->ingress.acl)
return;
esw_debug(esw->dev, "Destroy vport[%d] E-Switch ingress ACL\n", vport->vport);
esw_vport_cleanup_ingress_rules(esw, vport);
- mlx5_destroy_flow_group(vport->ingress.allow_spoofchk_only_grp);
- mlx5_destroy_flow_group(vport->ingress.allow_untagged_only_grp);
- mlx5_destroy_flow_group(vport->ingress.allow_untagged_spoofchk_grp);
- mlx5_destroy_flow_group(vport->ingress.drop_grp);
- mlx5_destroy_flow_table(vport->ingress.acl);
- vport->ingress.acl = NULL;
- vport->ingress.drop_grp = NULL;
- vport->ingress.allow_spoofchk_only_grp = NULL;
- vport->ingress.allow_untagged_only_grp = NULL;
- vport->ingress.allow_untagged_spoofchk_grp = NULL;
+ if (vport->ingress.legacy.allow_spoofchk_only_grp) {
+ mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
+ vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
+ }
+ if (vport->ingress.legacy.allow_untagged_only_grp) {
+ mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
+ vport->ingress.legacy.allow_untagged_only_grp = NULL;
+ }
+ if (vport->ingress.legacy.allow_untagged_spoofchk_grp) {
+ mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
+ vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
+ }
+ if (vport->ingress.legacy.drop_grp) {
+ mlx5_destroy_flow_group(vport->ingress.legacy.drop_grp);
+ vport->ingress.legacy.drop_grp = NULL;
+ }
+ esw_vport_destroy_ingress_acl_table(vport);
}
static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
@@ -1244,19 +1262,36 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
int err = 0;
u8 *smac_v;
+ /* The ingress acl table contains 4 groups
+ * (2 active rules at the same time -
+ * 1 allow rule from one of the first 3 groups.
+ * 1 drop rule from the last group):
+ * 1)Allow untagged traffic with smac=original mac.
+ * 2)Allow untagged traffic.
+ * 3)Allow traffic with smac=original mac.
+ * 4)Drop all other traffic.
+ */
+ int table_size = 4;
+
esw_vport_cleanup_ingress_rules(esw, vport);
if (!vport->info.vlan && !vport->info.qos && !vport->info.spoofchk) {
- esw_vport_disable_ingress_acl(esw, vport);
+ esw_vport_disable_legacy_ingress_acl(esw, vport);
return 0;
}
- err = esw_vport_enable_ingress_acl(esw, vport);
- if (err) {
- mlx5_core_warn(esw->dev,
- "failed to enable ingress acl (%d) on vport[%d]\n",
- err, vport->vport);
- return err;
+ if (!vport->ingress.acl) {
+ err = esw_vport_create_ingress_acl_table(esw, vport, table_size);
+ if (err) {
+ esw_warn(esw->dev,
+ "vport[%d] enable ingress acl err (%d)\n",
+ err, vport->vport);
+ return err;
+ }
+
+ err = esw_vport_create_legacy_ingress_acl_groups(esw, vport);
+ if (err)
+ goto out;
}
esw_debug(esw->dev,
@@ -1317,10 +1352,11 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
vport->ingress.legacy.drop_rule = NULL;
goto out;
}
+ kvfree(spec);
+ return 0;
out:
- if (err)
- esw_vport_cleanup_ingress_rules(esw, vport);
+ esw_vport_disable_legacy_ingress_acl(esw, vport);
kvfree(spec);
return err;
}
@@ -1700,7 +1736,7 @@ static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
return 0;
egress_err:
- esw_vport_disable_ingress_acl(esw, vport);
+ esw_vport_disable_legacy_ingress_acl(esw, vport);
mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
vport->egress.legacy.drop_counter = NULL;
@@ -1730,7 +1766,7 @@ static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
vport->egress.legacy.drop_counter = NULL;
- esw_vport_disable_ingress_acl(esw, vport);
+ esw_vport_disable_legacy_ingress_acl(esw, vport);
mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
vport->ingress.legacy.drop_counter = NULL;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 777224ed18bc..963d0df0d66b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -65,25 +65,30 @@
struct vport_ingress {
struct mlx5_flow_table *acl;
+#ifdef __GENKSYMS__
struct mlx5_flow_group *allow_untagged_spoofchk_grp;
struct mlx5_flow_group *allow_spoofchk_only_grp;
struct mlx5_flow_group *allow_untagged_only_grp;
struct mlx5_flow_group *drop_grp;
-#ifdef __GENKSYMS__
struct mlx5_modify_hdr *modify_metadata;
struct mlx5_flow_handle *modify_metadata_rule;
#endif
- struct mlx5_flow_handle *allow_rule;
+ struct mlx5_flow_handle *allow_rule;
#ifdef __GENKSYMS__
struct mlx5_flow_handle *drop_rule;
struct mlx5_fc *drop_counter;
#endif
#ifndef __GENKSYMS__
struct {
+ struct mlx5_flow_group *allow_spoofchk_only_grp;
+ struct mlx5_flow_group *allow_untagged_spoofchk_grp;
+ struct mlx5_flow_group *allow_untagged_only_grp;
+ struct mlx5_flow_group *drop_grp;
struct mlx5_flow_handle *drop_rule;
struct mlx5_fc *drop_counter;
} legacy;
struct {
+ struct mlx5_flow_group *metadata_grp;
struct mlx5_modify_hdr *modify_metadata;
struct mlx5_flow_handle *modify_metadata_rule;
} offloads;
@@ -272,16 +277,16 @@ void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw);
int esw_offloads_init_reps(struct mlx5_eswitch *esw);
void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
-int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport);
+int esw_vport_create_ingress_acl_table(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport,
+ int table_size);
+void esw_vport_destroy_ingress_acl_table(struct mlx5_vport *vport);
void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
-void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
- struct mlx5_vport *vport);
int mlx5_esw_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num,
u32 rate_mbps);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 767993b10110..7fe085fa3d29 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1860,6 +1860,44 @@ static void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
}
}
+static int esw_vport_create_ingress_acl_group(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+ struct mlx5_flow_group *g;
+ u32 *flow_group_in;
+ int ret = 0;
+
+ flow_group_in = kvzalloc(inlen, GFP_KERNEL);
+ if (!flow_group_in)
+ return -ENOMEM;
+
+ memset(flow_group_in, 0, inlen);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
+
+ g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
+ if (IS_ERR(g)) {
+ ret = PTR_ERR(g);
+ esw_warn(esw->dev,
+ "Failed to create vport[%d] ingress metdata group, err(%d)\n",
+ vport->vport, ret);
+ goto grp_err;
+ }
+ vport->ingress.offloads.metadata_grp = g;
+grp_err:
+ kvfree(flow_group_in);
+ return ret;
+}
+
+static void esw_vport_destroy_ingress_acl_group(struct mlx5_vport *vport)
+{
+ if (vport->ingress.offloads.metadata_grp) {
+ mlx5_destroy_flow_group(vport->ingress.offloads.metadata_grp);
+ vport->ingress.offloads.metadata_grp = NULL;
+ }
+}
+
static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
@@ -1870,8 +1908,7 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
return 0;
esw_vport_cleanup_ingress_rules(esw, vport);
-
- err = esw_vport_enable_ingress_acl(esw, vport);
+ err = esw_vport_create_ingress_acl_table(esw, vport, 1);
if (err) {
esw_warn(esw->dev,
"failed to enable ingress acl (%d) on vport[%d]\n",
@@ -1879,25 +1916,34 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
return err;
}
+ err = esw_vport_create_ingress_acl_group(esw, vport);
+ if (err)
+ goto group_err;
+
esw_debug(esw->dev,
"vport[%d] configure ingress rules\n", vport->vport);
if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
err = esw_vport_add_ingress_acl_modify_metadata(esw, vport);
if (err)
- goto out;
+ goto metadata_err;
}
if (MLX5_CAP_GEN(esw->dev, prio_tag_required) &&
mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
err = esw_vport_ingress_prio_tag_config(esw, vport);
if (err)
- goto out;
+ goto prio_tag_err;
}
+ return 0;
-out:
- if (err)
- esw_vport_disable_ingress_acl(esw, vport);
+prio_tag_err:
+ esw_vport_del_ingress_acl_modify_metadata(esw, vport);
+metadata_err:
+ esw_vport_cleanup_ingress_rules(esw, vport);
+ esw_vport_destroy_ingress_acl_group(vport);
+group_err:
+ esw_vport_destroy_ingress_acl_table(vport);
return err;
}
@@ -1978,7 +2024,8 @@ esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
err = esw_vport_egress_config(esw, vport);
if (err) {
esw_vport_del_ingress_acl_modify_metadata(esw, vport);
- esw_vport_disable_ingress_acl(esw, vport);
+ esw_vport_cleanup_ingress_rules(esw, vport);
+ esw_vport_destroy_ingress_acl_table(vport);
}
}
return err;
@@ -1990,7 +2037,9 @@ esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
{
esw_vport_disable_egress_acl(esw, vport);
esw_vport_del_ingress_acl_modify_metadata(esw, vport);
- esw_vport_disable_ingress_acl(esw, vport);
+ esw_vport_cleanup_ingress_rules(esw, vport);
+ esw_vport_destroy_ingress_acl_group(vport);
+ esw_vport_destroy_ingress_acl_table(vport);
}
static int esw_create_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
--
2.13.6

@ -0,0 +1,178 @@
From a3c4a2bce469b8cc656cf14145d310cd3531ae2e Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:04 -0400
Subject: [PATCH 086/312] [netdrv] net/mlx5: FPGA, support network cards with
standalone FPGA
Message-id: <20200510150452.10307-40-ahleihel@redhat.com>
Patchwork-id: 306663
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 39/87] net/mlx5: FPGA, support network cards with standalone FPGA
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit cc4db579e69b4c92a51fdc9f44bc671b40427824
Author: Igor Leshenko <igorle@mellanox.com>
Date: Thu Sep 5 18:56:28 2019 +0300
net/mlx5: FPGA, support network cards with standalone FPGA
Not all mlx5 cards with an FPGA device use it for network processing.
The mlx5_core driver configures the network connection to the FPGA device
for all mlx5 cards with an installed FPGA, and if the FPGA is not part of
the network path, the driver crashes.
Check the FPGA name in mlx5_fpga_device_start() and continue integrating
the FPGA into the packet flow only for dedicated cards.
Currently these are the Newton and Edison cards.
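A minimal userspace C sketch of that classification (the enum values mirror
the patch, but the helper and start function below are simplified stand-ins,
not the real driver code):

#include <stdio.h>

enum fpga_id { FPGA_NEWTON, FPGA_EDISON, FPGA_MORSE, FPGA_MORSEQ };

/* Only Newton and Edison sit in the network path; everything else is
 * treated as a lookaside card. */
static int fpga_is_lookaside(enum fpga_id id)
{
        return id != FPGA_NEWTON && id != FPGA_EDISON;
}

static int fpga_device_start(enum fpga_id id)
{
        printf("FPGA card id %d\n", id);
        if (fpga_is_lookaside(id))
                return 0;               /* skip QPs and packet-path setup */
        printf("integrating FPGA into the packet flow\n");
        return 0;
}

int main(void)
{
        fpga_device_start(FPGA_EDISON);
        fpga_device_start(FPGA_MORSEQ);
        return 0;
}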
Signed-off-by: Igor Leshenko <igorle@mellanox.com>
Reviewed-by: Meir Lichtinger <meirl@mellanox.com>
Reviewed-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/fpga/cmd.h | 10 ++--
.../net/ethernet/mellanox/mlx5/core/fpga/core.c | 61 +++++++++++++++-------
2 files changed, 46 insertions(+), 25 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/cmd.h b/drivers/net/ethernet/mellanox/mlx5/core/fpga/cmd.h
index eb8b0fe0b4e1..11621d265d7e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/cmd.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/cmd.h
@@ -35,11 +35,11 @@
#include <linux/mlx5/driver.h>
-enum mlx5_fpga_device_id {
- MLX5_FPGA_DEVICE_UNKNOWN = 0,
- MLX5_FPGA_DEVICE_KU040 = 1,
- MLX5_FPGA_DEVICE_KU060 = 2,
- MLX5_FPGA_DEVICE_KU060_2 = 3,
+enum mlx5_fpga_id {
+ MLX5_FPGA_NEWTON = 0,
+ MLX5_FPGA_EDISON = 1,
+ MLX5_FPGA_MORSE = 2,
+ MLX5_FPGA_MORSEQ = 3,
};
enum mlx5_fpga_image {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c
index d046d1ec2a86..2ce4241459ce 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/core.c
@@ -81,19 +81,28 @@ static const char *mlx5_fpga_image_name(enum mlx5_fpga_image image)
}
}
-static const char *mlx5_fpga_device_name(u32 device)
+static const char *mlx5_fpga_name(u32 fpga_id)
{
- switch (device) {
- case MLX5_FPGA_DEVICE_KU040:
- return "ku040";
- case MLX5_FPGA_DEVICE_KU060:
- return "ku060";
- case MLX5_FPGA_DEVICE_KU060_2:
- return "ku060_2";
- case MLX5_FPGA_DEVICE_UNKNOWN:
- default:
- return "unknown";
+ static char ret[32];
+
+ switch (fpga_id) {
+ case MLX5_FPGA_NEWTON:
+ return "Newton";
+ case MLX5_FPGA_EDISON:
+ return "Edison";
+ case MLX5_FPGA_MORSE:
+ return "Morse";
+ case MLX5_FPGA_MORSEQ:
+ return "MorseQ";
}
+
+ snprintf(ret, sizeof(ret), "Unknown %d", fpga_id);
+ return ret;
+}
+
+static int mlx5_is_fpga_lookaside(u32 fpga_id)
+{
+ return fpga_id != MLX5_FPGA_NEWTON && fpga_id != MLX5_FPGA_EDISON;
}
static int mlx5_fpga_device_load_check(struct mlx5_fpga_device *fdev)
@@ -110,8 +119,12 @@ static int mlx5_fpga_device_load_check(struct mlx5_fpga_device *fdev)
fdev->last_admin_image = query.admin_image;
fdev->last_oper_image = query.oper_image;
- mlx5_fpga_dbg(fdev, "Status %u; Admin image %u; Oper image %u\n",
- query.status, query.admin_image, query.oper_image);
+ mlx5_fpga_info(fdev, "Status %u; Admin image %u; Oper image %u\n",
+ query.status, query.admin_image, query.oper_image);
+
+ /* for FPGA lookaside projects FPGA load status is not important */
+ if (mlx5_is_fpga_lookaside(MLX5_CAP_FPGA(fdev->mdev, fpga_id)))
+ return 0;
if (query.status != MLX5_FPGA_STATUS_SUCCESS) {
mlx5_fpga_err(fdev, "%s image failed to load; status %u\n",
@@ -167,25 +180,30 @@ int mlx5_fpga_device_start(struct mlx5_core_dev *mdev)
struct mlx5_fpga_device *fdev = mdev->fpga;
unsigned int max_num_qps;
unsigned long flags;
- u32 fpga_device_id;
+ u32 fpga_id;
int err;
if (!fdev)
return 0;
- err = mlx5_fpga_device_load_check(fdev);
+ err = mlx5_fpga_caps(fdev->mdev);
if (err)
goto out;
- err = mlx5_fpga_caps(fdev->mdev);
+ err = mlx5_fpga_device_load_check(fdev);
if (err)
goto out;
- fpga_device_id = MLX5_CAP_FPGA(fdev->mdev, fpga_device);
- mlx5_fpga_info(fdev, "%s:%u; %s image, version %u; SBU %06x:%04x version %d\n",
- mlx5_fpga_device_name(fpga_device_id),
- fpga_device_id,
+ fpga_id = MLX5_CAP_FPGA(fdev->mdev, fpga_id);
+ mlx5_fpga_info(fdev, "FPGA card %s:%u\n", mlx5_fpga_name(fpga_id), fpga_id);
+
+ /* No QPs if FPGA does not participate in net processing */
+ if (mlx5_is_fpga_lookaside(fpga_id))
+ goto out;
+
+ mlx5_fpga_info(fdev, "%s(%d): image, version %u; SBU %06x:%04x version %d\n",
mlx5_fpga_image_name(fdev->last_oper_image),
+ fdev->last_oper_image,
MLX5_CAP_FPGA(fdev->mdev, image_version),
MLX5_CAP_FPGA(fdev->mdev, ieee_vendor_id),
MLX5_CAP_FPGA(fdev->mdev, sandbox_product_id),
@@ -264,6 +282,9 @@ void mlx5_fpga_device_stop(struct mlx5_core_dev *mdev)
if (!fdev)
return;
+ if (mlx5_is_fpga_lookaside(MLX5_CAP_FPGA(fdev->mdev, fpga_id)))
+ return;
+
spin_lock_irqsave(&fdev->state_lock, flags);
if (fdev->state != MLX5_FPGA_STATUS_SUCCESS) {
spin_unlock_irqrestore(&fdev->state_lock, flags);
--
2.13.6

@ -0,0 +1,63 @@
From 97090ed92050b2a62a9c572b895dba75ce9e7fa2 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:05 -0400
Subject: [PATCH 087/312] [netdrv] net/mlx5: Remove unneeded variable in
mlx5_unload_one
Message-id: <20200510150452.10307-41-ahleihel@redhat.com>
Patchwork-id: 306665
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 40/87] net/mlx5: Remove unneeded variable in mlx5_unload_one
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 32680da7103439095ba8c2dbe30c3e4d0e05e4c2
Author: zhong jiang <zhongjiang@huawei.com>
Date: Fri Sep 13 00:59:02 2019 +0800
net/mlx5: Remove unneeded variable in mlx5_unload_one
mlx5_unload_one() does not need a local variable to store a different
value, hence just remove it.
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/main.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 490bd80c586a..57e376e4e938 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1252,8 +1252,6 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, bool boot)
static int mlx5_unload_one(struct mlx5_core_dev *dev, bool cleanup)
{
- int err = 0;
-
if (cleanup) {
mlx5_unregister_device(dev);
mlx5_drain_health_wq(dev);
@@ -1281,7 +1279,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, bool cleanup)
mlx5_function_teardown(dev, cleanup);
out:
mutex_unlock(&dev->intf_state_mutex);
- return err;
+ return 0;
}
static int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
--
2.13.6

@ -0,0 +1,65 @@
From 0a412d2add9b9647bd09dd2eb19f0eb5d470ebdf Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:06 -0400
Subject: [PATCH 088/312] [netdrv] net/mlx5e: Verify that rule has at least one
fwd/drop action
Message-id: <20200510150452.10307-42-ahleihel@redhat.com>
Patchwork-id: 306664
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 41/87] net/mlx5e: Verify that rule has at least one fwd/drop action
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit ae2741e2b6ce2bf1b656b1152c4ef147ff35b096
Author: Vlad Buslov <vladbu@mellanox.com>
Date: Wed Sep 11 21:14:54 2019 +0300
net/mlx5e: Verify that rule has at least one fwd/drop action
Currently, the mlx5 tc layer doesn't verify that a rule has at least one
forward or drop action, which leads to the following firmware syndrome when
a user tries to offload such a rule:
[ 1824.860501] mlx5_core 0000:81:00.0: mlx5_cmd_check:753:(pid 29458): SET_FLOW_TABLE_ENTRY(0x936) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x144b7a)
Add a check at the end of parse_tc_fdb_actions() that verifies that the
resulting attribute has the fwd or drop action flag set.
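A minimal userspace C sketch of the validation idea (the flag macros below
stand in for MLX5_FLOW_CONTEXT_ACTION_FWD_DEST/DROP and are illustrative,
not the driver's definitions):

#include <errno.h>
#include <stdio.h>

#define ACTION_FWD_DEST  (1u << 0)
#define ACTION_DROP      (1u << 1)
#define ACTION_HDR_MOD   (1u << 2)

/* Reject a parsed rule whose action mask has neither forward nor drop,
 * instead of handing it to firmware and getting a bad-parameter syndrome. */
static int validate_fdb_actions(unsigned int action)
{
        if (!(action & (ACTION_FWD_DEST | ACTION_DROP))) {
                fprintf(stderr, "Rule must have at least one forward/drop action\n");
                return -EOPNOTSUPP;
        }
        return 0;
}

int main(void)
{
        printf("%d\n", validate_fdb_actions(ACTION_HDR_MOD));               /* rejected */
        printf("%d\n", validate_fdb_actions(ACTION_HDR_MOD | ACTION_DROP)); /* accepted */
        return 0;
}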
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index ece33ff718a4..b13e7996ad83 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3446,6 +3446,12 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
}
+ if (!(attr->action &
+ (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
+ NL_SET_ERR_MSG(extack, "Rule must have at least one forward/drop action");
+ return -EOPNOTSUPP;
+ }
+
if (attr->split_count > 0 && !mlx5_esw_has_fwd_fdb(priv->mdev)) {
NL_SET_ERR_MSG_MOD(extack,
"current firmware doesn't support split rule for port mirroring");
--
2.13.6

@ -0,0 +1,90 @@
From 3ee7fedb0cc980a0923043e8dd7b87ec83998925 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:07 -0400
Subject: [PATCH 089/312] [netdrv] net/mlx5: Do not hold group lock while
allocating FTE in software
Message-id: <20200510150452.10307-43-ahleihel@redhat.com>
Patchwork-id: 306666
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 42/87] net/mlx5: Do not hold group lock while allocating FTE in software
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 84c7af637512be9c3254189bd5910dae0d2a8602
Author: Parav Pandit <parav@mellanox.com>
Date: Thu Sep 19 17:22:19 2019 -0500
net/mlx5: Do not hold group lock while allocating FTE in software
FTE memory allocation using alloc_fte() doesn't have any dependency
on the flow group.
Hence, do not hold the flow group lock while performing alloc_fte().
This helps reduce contention on the flow group lock.
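A minimal pthread-based C sketch of the lock-scope change (names are
illustrative; the real code uses the flow-steering node locks rather than a
plain mutex): the entry is allocated before the group lock is taken, so the
critical section covers only the insertion.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t group_lock = PTHREAD_MUTEX_INITIALIZER;

struct fte { int index; };

static struct fte *add_rule(int index)
{
        struct fte *fte = malloc(sizeof(*fte));   /* no lock needed here */

        if (!fte)
                return NULL;
        fte->index = index;

        pthread_mutex_lock(&group_lock);          /* short critical section */
        /* ... insert fte into the group's hash table ... */
        pthread_mutex_unlock(&group_lock);
        return fte;
}

int main(void)
{
        struct fte *fte = add_rule(7);

        printf("inserted fte %d\n", fte ? fte->index : -1);
        free(fte);
        return 0;
}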
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 495396f42153..e8064bd87aad 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -1817,6 +1817,13 @@ _mlx5_add_flow_rules(struct mlx5_flow_table *ft,
return rule;
}
+ fte = alloc_fte(ft, spec, flow_act);
+ if (IS_ERR(fte)) {
+ up_write_ref_node(&ft->node, false);
+ err = PTR_ERR(fte);
+ goto err_alloc_fte;
+ }
+
nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
up_write_ref_node(&ft->node, false);
@@ -1824,17 +1831,9 @@ _mlx5_add_flow_rules(struct mlx5_flow_table *ft,
if (err)
goto err_release_fg;
- fte = alloc_fte(ft, spec, flow_act);
- if (IS_ERR(fte)) {
- err = PTR_ERR(fte);
- goto err_release_fg;
- }
-
err = insert_fte(g, fte);
- if (err) {
- kmem_cache_free(steering->ftes_cache, fte);
+ if (err)
goto err_release_fg;
- }
nested_down_write_ref_node(&fte->node, FS_LOCK_CHILD);
up_write_ref_node(&g->node, false);
@@ -1846,6 +1845,8 @@ _mlx5_add_flow_rules(struct mlx5_flow_table *ft,
err_release_fg:
up_write_ref_node(&g->node, false);
+ kmem_cache_free(steering->ftes_cache, fte);
+err_alloc_fte:
tree_put_node(&g->node, false);
return ERR_PTR(err);
}
--
2.13.6

@ -0,0 +1,202 @@
From 82116044164f1f78e4eec9f31231adc6976b928d Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:08 -0400
Subject: [PATCH 090/312] [netdrv] net/mlx5: Support lockless FTE read lookups
Message-id: <20200510150452.10307-44-ahleihel@redhat.com>
Patchwork-id: 306667
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 43/87] net/mlx5: Support lockless FTE read lookups
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 7dee607ed0e04500459db53001d8e02f8831f084
Author: Parav Pandit <parav@mellanox.com>
Date: Wed Sep 18 18:50:32 2019 -0500
net/mlx5: Support lockless FTE read lookups
During connection tracking offloads with a high number of connections
(40K connections per second), flow table group lock contention is
observed.
To improve performance by reducing lock contention, a lockless
FTE read lookup is performed as described below.
Each flow table entry is refcounted.
A flow table entry is removed when its refcount drops to zero.
The rhashtable allows RCU-protected lookups.
Each hash table entry insertion and removal is write-lock protected.
Hence, it is possible to perform a lockless lookup in the rhashtable using
the following scheme.
(a) Guard the per-group FTE lookup with the RCU read lock.
(b) Before freeing an FTE, wait for all readers to finish
accessing it.
The example below, with one reader and one writer racing in parallel, shows
the protection provided by the RCU read lock.
  (reader)                       (writer)
lookup_fte_locked()
  rcu_read_lock();
  search_hash_table()
                                 existing_flow_group_write_lock();
                                 tree_put_node(fte)
                                   drop_ref_cnt(fte)
                                   del_sw_fte(fte)
                                     del_hash_table_entry();
                                     call_rcu();
                                 existing_flow_group_write_unlock();
  get_ref_cnt(fte) fails
  rcu_read_unlock();
                                 rcu grace period();
                                 [..]
                                 kmem_cache_free(fte);
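A minimal single-file C11 sketch of the reader side of this scheme
(illustrative only: it models the refcount half of the protection, and the
RCU grace period used by the patch is approximated here by freeing the entry
only when its last reference is dropped):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fte {
        atomic_int refcount;    /* 0 means the entry is being destroyed */
        bool active;
        int index;
};

/* Reader side, like tree_get_node(): only succeed while refcount > 0. */
static bool fte_tryget(struct fte *fte)
{
        int old = atomic_load(&fte->refcount);

        while (old > 0) {
                if (atomic_compare_exchange_weak(&fte->refcount, &old, old + 1))
                        return true;
        }
        return false;
}

/* Drop a reference; the last one frees the entry. */
static void fte_put(struct fte *fte)
{
        if (atomic_fetch_sub(&fte->refcount, 1) == 1)
                free(fte);
}

int main(void)
{
        struct fte *fte = calloc(1, sizeof(*fte));

        if (!fte)
                return 1;
        atomic_init(&fte->refcount, 1);   /* reference held by the table */
        fte->active = true;
        fte->index = 42;

        /* Lockless read path: take a reference, use the entry, drop it. */
        if (fte_tryget(fte)) {
                printf("reader sees fte index %d\n", fte->index);
                fte_put(fte);
        }

        /* Writer removes the table's reference; memory goes away only
         * after the last reader has dropped its reference. */
        fte_put(fte);
        return 0;
}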
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 70 ++++++++++++++++++-----
drivers/net/ethernet/mellanox/mlx5/core/fs_core.h | 1 +
2 files changed, 56 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index e8064bd87aad..6e1ef05becce 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -531,9 +531,16 @@ static void del_hw_fte(struct fs_node *node)
}
}
+static void del_sw_fte_rcu(struct rcu_head *head)
+{
+ struct fs_fte *fte = container_of(head, struct fs_fte, rcu);
+ struct mlx5_flow_steering *steering = get_steering(&fte->node);
+
+ kmem_cache_free(steering->ftes_cache, fte);
+}
+
static void del_sw_fte(struct fs_node *node)
{
- struct mlx5_flow_steering *steering = get_steering(node);
struct mlx5_flow_group *fg;
struct fs_fte *fte;
int err;
@@ -546,7 +553,8 @@ static void del_sw_fte(struct fs_node *node)
rhash_fte);
WARN_ON(err);
ida_simple_remove(&fg->fte_allocator, fte->index - fg->start_index);
- kmem_cache_free(steering->ftes_cache, fte);
+
+ call_rcu(&fte->rcu, del_sw_fte_rcu);
}
static void del_hw_flow_group(struct fs_node *node)
@@ -1626,22 +1634,47 @@ static u64 matched_fgs_get_version(struct list_head *match_head)
}
static struct fs_fte *
-lookup_fte_locked(struct mlx5_flow_group *g,
- const u32 *match_value,
- bool take_write)
+lookup_fte_for_write_locked(struct mlx5_flow_group *g, const u32 *match_value)
{
struct fs_fte *fte_tmp;
- if (take_write)
- nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
- else
- nested_down_read_ref_node(&g->node, FS_LOCK_PARENT);
- fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value,
- rhash_fte);
+ nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
+
+ fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value, rhash_fte);
if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
fte_tmp = NULL;
goto out;
}
+
+ if (!fte_tmp->node.active) {
+ tree_put_node(&fte_tmp->node, false);
+ fte_tmp = NULL;
+ goto out;
+ }
+ nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
+
+out:
+ up_write_ref_node(&g->node, false);
+ return fte_tmp;
+}
+
+static struct fs_fte *
+lookup_fte_for_read_locked(struct mlx5_flow_group *g, const u32 *match_value)
+{
+ struct fs_fte *fte_tmp;
+
+ if (!tree_get_node(&g->node))
+ return NULL;
+
+ rcu_read_lock();
+ fte_tmp = rhashtable_lookup(&g->ftes_hash, match_value, rhash_fte);
+ if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
+ rcu_read_unlock();
+ fte_tmp = NULL;
+ goto out;
+ }
+ rcu_read_unlock();
+
if (!fte_tmp->node.active) {
tree_put_node(&fte_tmp->node, false);
fte_tmp = NULL;
@@ -1649,14 +1682,21 @@ lookup_fte_locked(struct mlx5_flow_group *g,
}
nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
+
out:
- if (take_write)
- up_write_ref_node(&g->node, false);
- else
- up_read_ref_node(&g->node);
+ tree_put_node(&g->node, false);
return fte_tmp;
}
+static struct fs_fte *
+lookup_fte_locked(struct mlx5_flow_group *g, const u32 *match_value, bool write)
+{
+ if (write)
+ return lookup_fte_for_write_locked(g, match_value);
+ else
+ return lookup_fte_for_read_locked(g, match_value);
+}
+
static struct mlx5_flow_handle *
try_add_to_existing_fg(struct mlx5_flow_table *ft,
struct list_head *match_head,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index c6221ccbdddf..8e4ca13f4d74 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -205,6 +205,7 @@ struct fs_fte {
enum fs_fte_status status;
struct mlx5_fc *counter;
struct rhash_head hash;
+ struct rcu_head rcu;
int modify_mask;
};
--
2.13.6

@ -0,0 +1,115 @@
From cb1711c38d4d4209ecb17851818a4c7e2a3176c3 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:09 -0400
Subject: [PATCH 091/312] [netdrv] net/mlx5e: TX, Dump WQs wqe descriptors on
CQE with error events
Message-id: <20200510150452.10307-45-ahleihel@redhat.com>
Patchwork-id: 306668
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 44/87] net/mlx5e: TX, Dump WQs wqe descriptors on CQE with error events
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 130c7b46c93d313ca07d85a30d90021e424c7e9b
Author: Saeed Mahameed <saeedm@mellanox.com>
Date: Tue May 7 08:56:38 2019 -0700
net/mlx5e: TX, Dump WQs wqe descriptors on CQE with error events
Dump the Work Queue's TX WQE descriptor when a completion with
error is received.
Example:
[5.331832] mlx5_core 0000:00:04.0 enp0s4: Error cqe on cqn 0xa, ci 0x1, TXQ-SQ qpn 0xe, opcode 0xd, syndrome 0x2, vendor syndrome 0x0
[5.333127] 00000000: 55 65 02 75 31 fe c2 d2 6b 6c 62 1e f9 e1 d8 5c
[5.333837] 00000010: d3 b2 6c b8 89 e4 84 20 0b f4 3c e0 f3 75 41 ca
[5.334568] 00000020: 46 00 00 00 cd 70 a0 92 18 3a 01 de 00 00 00 00
[5.335313] 00000030: 7d bc 05 89 b2 e9 00 02 1e 00 00 0e 00 00 30 d2
[5.335972] WQE DUMP: WQ size 1024 WQ cur size 0, WQE index 0x0, len: 64
[5.336710] 00000000: 00 00 00 1e 00 00 0e 04 00 00 00 08 00 00 00 00
[5.337524] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 12 33 33
[5.338151] 00000020: 00 00 00 16 52 54 00 00 00 01 86 dd 60 00 00 00
[5.338740] 00000030: 00 00 00 48 00 00 00 00 00 00 00 00 66 ba 58 14
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 6 ++++++
drivers/net/ethernet/mellanox/mlx5/core/wq.c | 18 ++++++++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/wq.h | 1 +
3 files changed, 25 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 001752ace7f0..3ce27194ee7e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -462,8 +462,14 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
if (unlikely(get_cqe_opcode(cqe) == MLX5_CQE_REQ_ERR)) {
if (!test_and_set_bit(MLX5E_SQ_STATE_RECOVERING,
&sq->state)) {
+ struct mlx5e_tx_wqe_info *wi;
+ u16 ci;
+
+ ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc);
+ wi = &sq->db.wqe_info[ci];
mlx5e_dump_error_cqe(sq,
(struct mlx5_err_cqe *)cqe);
+ mlx5_wq_cyc_wqe_dump(&sq->wq, ci, wi->num_wqebbs);
queue_work(cq->channel->priv->wq,
&sq->recover_work);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
index dd2315ce4441..dab2625e1e59 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
@@ -96,6 +96,24 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
return err;
}
+void mlx5_wq_cyc_wqe_dump(struct mlx5_wq_cyc *wq, u16 ix, u8 nstrides)
+{
+ size_t len;
+ void *wqe;
+
+ if (!net_ratelimit())
+ return;
+
+ nstrides = max_t(u8, nstrides, 1);
+
+ len = nstrides << wq->fbc.log_stride;
+ wqe = mlx5_wq_cyc_get_wqe(wq, ix);
+
+ pr_info("WQE DUMP: WQ size %d WQ cur size %d, WQE index 0x%x, len: %ld\n",
+ mlx5_wq_cyc_get_size(wq), wq->cur_sz, ix, len);
+ print_hex_dump(KERN_WARNING, "", DUMP_PREFIX_OFFSET, 16, 1, wqe, len, false);
+}
+
int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *qpc, struct mlx5_wq_qp *wq,
struct mlx5_wq_ctrl *wq_ctrl)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
index 55791f71a778..27338c3c6136 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
@@ -79,6 +79,7 @@ struct mlx5_wq_ll {
int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *wqc, struct mlx5_wq_cyc *wq,
struct mlx5_wq_ctrl *wq_ctrl);
+void mlx5_wq_cyc_wqe_dump(struct mlx5_wq_cyc *wq, u16 ix, u8 nstrides);
u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
--
2.13.6

@ -0,0 +1,135 @@
From ff649813dd587b6fe99a52b44bc8aef6cba9e5d1 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:10 -0400
Subject: [PATCH 092/312] [netdrv] net/mlx5: WQ, Move short getters into header
file
Message-id: <20200510150452.10307-46-ahleihel@redhat.com>
Patchwork-id: 306669
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 45/87] net/mlx5: WQ, Move short getters into header file
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 769619ee39dfa8297a1fe2bc2865eb1e73a9f824
Author: Tariq Toukan <tariqt@mellanox.com>
Date: Wed Oct 16 13:29:16 2019 +0300
net/mlx5: WQ, Move short getters into header file
Move short Work Queue API getter functions into the WQ
header file.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/wq.c | 20 --------------------
drivers/net/ethernet/mellanox/mlx5/core/wq.h | 24 ++++++++++++++++++++----
2 files changed, 20 insertions(+), 24 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
index dab2625e1e59..f2a0e72285ba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
@@ -34,26 +34,6 @@
#include "wq.h"
#include "mlx5_core.h"
-u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
-{
- return (u32)wq->fbc.sz_m1 + 1;
-}
-
-u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
-{
- return wq->fbc.sz_m1 + 1;
-}
-
-u8 mlx5_cqwq_get_log_stride_size(struct mlx5_cqwq *wq)
-{
- return wq->fbc.log_stride;
-}
-
-u32 mlx5_wq_ll_get_size(struct mlx5_wq_ll *wq)
-{
- return (u32)wq->fbc.sz_m1 + 1;
-}
-
static u32 wq_get_byte_sz(u8 log_sz, u8 log_stride)
{
return ((u32)1 << log_sz) << log_stride;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
index 27338c3c6136..d9a94bc223c0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
@@ -80,7 +80,6 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *wqc, struct mlx5_wq_cyc *wq,
struct mlx5_wq_ctrl *wq_ctrl);
void mlx5_wq_cyc_wqe_dump(struct mlx5_wq_cyc *wq, u16 ix, u8 nstrides);
-u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *qpc, struct mlx5_wq_qp *wq,
@@ -89,16 +88,18 @@ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *cqc, struct mlx5_cqwq *wq,
struct mlx5_wq_ctrl *wq_ctrl);
-u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq);
-u8 mlx5_cqwq_get_log_stride_size(struct mlx5_cqwq *wq);
int mlx5_wq_ll_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *wqc, struct mlx5_wq_ll *wq,
struct mlx5_wq_ctrl *wq_ctrl);
-u32 mlx5_wq_ll_get_size(struct mlx5_wq_ll *wq);
void mlx5_wq_destroy(struct mlx5_wq_ctrl *wq_ctrl);
+static inline u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
+{
+ return (u32)wq->fbc.sz_m1 + 1;
+}
+
static inline int mlx5_wq_cyc_is_full(struct mlx5_wq_cyc *wq)
{
return wq->cur_sz == wq->sz;
@@ -169,6 +170,16 @@ static inline int mlx5_wq_cyc_cc_bigger(u16 cc1, u16 cc2)
return !equal && !smaller;
}
+static inline u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
+{
+ return wq->fbc.sz_m1 + 1;
+}
+
+static inline u8 mlx5_cqwq_get_log_stride_size(struct mlx5_cqwq *wq)
+{
+ return wq->fbc.log_stride;
+}
+
static inline u32 mlx5_cqwq_ctr2ix(struct mlx5_cqwq *wq, u32 ctr)
{
return ctr & wq->fbc.sz_m1;
@@ -225,6 +236,11 @@ static inline struct mlx5_cqe64 *mlx5_cqwq_get_cqe(struct mlx5_cqwq *wq)
return cqe;
}
+static inline u32 mlx5_wq_ll_get_size(struct mlx5_wq_ll *wq)
+{
+ return (u32)wq->fbc.sz_m1 + 1;
+}
+
static inline int mlx5_wq_ll_is_full(struct mlx5_wq_ll *wq)
{
return wq->cur_sz == wq->fbc.sz_m1;
--
2.13.6

@ -0,0 +1,267 @@
From 1d4ac7b4c1c443681ec5bb74e185884e00755ed6 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:11 -0400
Subject: [PATCH 093/312] [netdrv] net/mlx5e: Bit sized fields rewrite support
Message-id: <20200510150452.10307-47-ahleihel@redhat.com>
Patchwork-id: 306670
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 46/87] net/mlx5e: Bit sized fields rewrite support
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 88f30bbcbaaa1b124fcc622ff49e3d427da9c96c
Author: Dmytro Linkin <dmitrolin@mellanox.com>
Date: Wed Oct 2 07:37:08 2019 +0000
net/mlx5e: Bit sized fields rewrite support
This patch doesn't change any functionality, but is a pre-step for
adding support for rewriting bit-sized fields, like DSCP and ECN
in the IPv4 header, similar fields in IPv6, etc.
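A minimal C sketch of the descriptor shape this prepares for: each field
carries a width in bits plus a matching mask, so sub-byte fields such as
DSCP can later be described the same way as whole bytes or words (names,
widths and masks below are illustrative only):

#include <stdint.h>
#include <stdio.h>

struct field_desc {
        const char *name;
        uint8_t  bsize;         /* field width in bits, not bytes */
        uint32_t mask;          /* full mask for that width */
};

static const struct field_desc fields[] = {
        { "ethertype", 16, UINT16_MAX },
        { "ip_ttl",     8, UINT8_MAX  },
        { "ipv4_dscp",  6, 0x3f       },   /* only possible with bit sizes */
};

int main(void)
{
        for (size_t i = 0; i < sizeof(fields) / sizeof(fields[0]); i++)
                printf("%-10s %2u bits, mask 0x%x\n", fields[i].name,
                       (unsigned)fields[i].bsize, (unsigned)fields[i].mask);
        return 0;
}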
Signed-off-by: Dmytro Linkin <dmitrolin@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 122 ++++++++++++------------
1 file changed, 62 insertions(+), 60 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index b13e7996ad83..ab6d99d6ba14 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2244,13 +2244,14 @@ static int set_pedit_val(u8 hdr_type, u32 mask, u32 val, u32 offset,
struct mlx5_fields {
u8 field;
- u8 size;
+ u8 field_bsize;
+ u32 field_mask;
u32 offset;
u32 match_offset;
};
-#define OFFLOAD(fw_field, size, field, off, match_field) \
- {MLX5_ACTION_IN_FIELD_OUT_ ## fw_field, size, \
+#define OFFLOAD(fw_field, field_bsize, field_mask, field, off, match_field) \
+ {MLX5_ACTION_IN_FIELD_OUT_ ## fw_field, field_bsize, field_mask, \
offsetof(struct pedit_headers, field) + (off), \
MLX5_BYTE_OFF(fte_match_set_lyr_2_4, match_field)}
@@ -2268,18 +2269,18 @@ struct mlx5_fields {
})
static bool cmp_val_mask(void *valp, void *maskp, void *matchvalp,
- void *matchmaskp, int size)
+ void *matchmaskp, u8 bsize)
{
bool same = false;
- switch (size) {
- case sizeof(u8):
+ switch (bsize) {
+ case 8:
same = SAME_VAL_MASK(u8, valp, maskp, matchvalp, matchmaskp);
break;
- case sizeof(u16):
+ case 16:
same = SAME_VAL_MASK(u16, valp, maskp, matchvalp, matchmaskp);
break;
- case sizeof(u32):
+ case 32:
same = SAME_VAL_MASK(u32, valp, maskp, matchvalp, matchmaskp);
break;
}
@@ -2288,41 +2289,42 @@ static bool cmp_val_mask(void *valp, void *maskp, void *matchvalp,
}
static struct mlx5_fields fields[] = {
- OFFLOAD(DMAC_47_16, 4, eth.h_dest[0], 0, dmac_47_16),
- OFFLOAD(DMAC_15_0, 2, eth.h_dest[4], 0, dmac_15_0),
- OFFLOAD(SMAC_47_16, 4, eth.h_source[0], 0, smac_47_16),
- OFFLOAD(SMAC_15_0, 2, eth.h_source[4], 0, smac_15_0),
- OFFLOAD(ETHERTYPE, 2, eth.h_proto, 0, ethertype),
- OFFLOAD(FIRST_VID, 2, vlan.h_vlan_TCI, 0, first_vid),
-
- OFFLOAD(IP_TTL, 1, ip4.ttl, 0, ttl_hoplimit),
- OFFLOAD(SIPV4, 4, ip4.saddr, 0, src_ipv4_src_ipv6.ipv4_layout.ipv4),
- OFFLOAD(DIPV4, 4, ip4.daddr, 0, dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
-
- OFFLOAD(SIPV6_127_96, 4, ip6.saddr.s6_addr32[0], 0,
+ OFFLOAD(DMAC_47_16, 32, U32_MAX, eth.h_dest[0], 0, dmac_47_16),
+ OFFLOAD(DMAC_15_0, 16, U16_MAX, eth.h_dest[4], 0, dmac_15_0),
+ OFFLOAD(SMAC_47_16, 32, U32_MAX, eth.h_source[0], 0, smac_47_16),
+ OFFLOAD(SMAC_15_0, 16, U16_MAX, eth.h_source[4], 0, smac_15_0),
+ OFFLOAD(ETHERTYPE, 16, U16_MAX, eth.h_proto, 0, ethertype),
+ OFFLOAD(FIRST_VID, 16, U16_MAX, vlan.h_vlan_TCI, 0, first_vid),
+
+ OFFLOAD(IP_TTL, 8, U8_MAX, ip4.ttl, 0, ttl_hoplimit),
+ OFFLOAD(SIPV4, 32, U32_MAX, ip4.saddr, 0, src_ipv4_src_ipv6.ipv4_layout.ipv4),
+ OFFLOAD(DIPV4, 32, U32_MAX, ip4.daddr, 0, dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
+
+ OFFLOAD(SIPV6_127_96, 32, U32_MAX, ip6.saddr.s6_addr32[0], 0,
src_ipv4_src_ipv6.ipv6_layout.ipv6[0]),
- OFFLOAD(SIPV6_95_64, 4, ip6.saddr.s6_addr32[1], 0,
+ OFFLOAD(SIPV6_95_64, 32, U32_MAX, ip6.saddr.s6_addr32[1], 0,
src_ipv4_src_ipv6.ipv6_layout.ipv6[4]),
- OFFLOAD(SIPV6_63_32, 4, ip6.saddr.s6_addr32[2], 0,
+ OFFLOAD(SIPV6_63_32, 32, U32_MAX, ip6.saddr.s6_addr32[2], 0,
src_ipv4_src_ipv6.ipv6_layout.ipv6[8]),
- OFFLOAD(SIPV6_31_0, 4, ip6.saddr.s6_addr32[3], 0,
+ OFFLOAD(SIPV6_31_0, 32, U32_MAX, ip6.saddr.s6_addr32[3], 0,
src_ipv4_src_ipv6.ipv6_layout.ipv6[12]),
- OFFLOAD(DIPV6_127_96, 4, ip6.daddr.s6_addr32[0], 0,
+ OFFLOAD(DIPV6_127_96, 32, U32_MAX, ip6.daddr.s6_addr32[0], 0,
dst_ipv4_dst_ipv6.ipv6_layout.ipv6[0]),
- OFFLOAD(DIPV6_95_64, 4, ip6.daddr.s6_addr32[1], 0,
+ OFFLOAD(DIPV6_95_64, 32, U32_MAX, ip6.daddr.s6_addr32[1], 0,
dst_ipv4_dst_ipv6.ipv6_layout.ipv6[4]),
- OFFLOAD(DIPV6_63_32, 4, ip6.daddr.s6_addr32[2], 0,
+ OFFLOAD(DIPV6_63_32, 32, U32_MAX, ip6.daddr.s6_addr32[2], 0,
dst_ipv4_dst_ipv6.ipv6_layout.ipv6[8]),
- OFFLOAD(DIPV6_31_0, 4, ip6.daddr.s6_addr32[3], 0,
+ OFFLOAD(DIPV6_31_0, 32, U32_MAX, ip6.daddr.s6_addr32[3], 0,
dst_ipv4_dst_ipv6.ipv6_layout.ipv6[12]),
- OFFLOAD(IPV6_HOPLIMIT, 1, ip6.hop_limit, 0, ttl_hoplimit),
+ OFFLOAD(IPV6_HOPLIMIT, 8, U8_MAX, ip6.hop_limit, 0, ttl_hoplimit),
- OFFLOAD(TCP_SPORT, 2, tcp.source, 0, tcp_sport),
- OFFLOAD(TCP_DPORT, 2, tcp.dest, 0, tcp_dport),
- OFFLOAD(TCP_FLAGS, 1, tcp.ack_seq, 5, tcp_flags),
+ OFFLOAD(TCP_SPORT, 16, U16_MAX, tcp.source, 0, tcp_sport),
+ OFFLOAD(TCP_DPORT, 16, U16_MAX, tcp.dest, 0, tcp_dport),
+ /* in linux iphdr tcp_flags is 8 bits long */
+ OFFLOAD(TCP_FLAGS, 8, U8_MAX, tcp.ack_seq, 5, tcp_flags),
- OFFLOAD(UDP_SPORT, 2, udp.source, 0, udp_sport),
- OFFLOAD(UDP_DPORT, 2, udp.dest, 0, udp_dport),
+ OFFLOAD(UDP_SPORT, 16, U16_MAX, udp.source, 0, udp_sport),
+ OFFLOAD(UDP_DPORT, 16, U16_MAX, udp.dest, 0, udp_dport),
};
/* On input attr->max_mod_hdr_actions tells how many HW actions can be parsed at
@@ -2335,19 +2337,17 @@ static int offload_pedit_fields(struct pedit_headers_action *hdrs,
struct netlink_ext_ack *extack)
{
struct pedit_headers *set_masks, *add_masks, *set_vals, *add_vals;
- void *headers_c = get_match_headers_criteria(*action_flags,
- &parse_attr->spec);
- void *headers_v = get_match_headers_value(*action_flags,
- &parse_attr->spec);
int i, action_size, nactions, max_actions, first, last, next_z;
- void *s_masks_p, *a_masks_p, *vals_p;
+ void *headers_c, *headers_v, *action, *vals_p;
+ u32 *s_masks_p, *a_masks_p, s_mask, a_mask;
struct mlx5_fields *f;
- u8 cmd, field_bsize;
- u32 s_mask, a_mask;
unsigned long mask;
__be32 mask_be32;
__be16 mask_be16;
- void *action;
+ u8 cmd;
+
+ headers_c = get_match_headers_criteria(*action_flags, &parse_attr->spec);
+ headers_v = get_match_headers_value(*action_flags, &parse_attr->spec);
set_masks = &hdrs[0].masks;
add_masks = &hdrs[1].masks;
@@ -2372,8 +2372,8 @@ static int offload_pedit_fields(struct pedit_headers_action *hdrs,
s_masks_p = (void *)set_masks + f->offset;
a_masks_p = (void *)add_masks + f->offset;
- memcpy(&s_mask, s_masks_p, f->size);
- memcpy(&a_mask, a_masks_p, f->size);
+ s_mask = *s_masks_p & f->field_mask;
+ a_mask = *a_masks_p & f->field_mask;
if (!s_mask && !a_mask) /* nothing to offload here */
continue;
@@ -2402,38 +2402,34 @@ static int offload_pedit_fields(struct pedit_headers_action *hdrs,
vals_p = (void *)set_vals + f->offset;
/* don't rewrite if we have a match on the same value */
if (cmp_val_mask(vals_p, s_masks_p, match_val,
- match_mask, f->size))
+ match_mask, f->field_bsize))
skip = true;
/* clear to denote we consumed this field */
- memset(s_masks_p, 0, f->size);
+ *s_masks_p &= ~f->field_mask;
} else {
- u32 zero = 0;
-
cmd = MLX5_ACTION_TYPE_ADD;
mask = a_mask;
vals_p = (void *)add_vals + f->offset;
/* add 0 is no change */
- if (!memcmp(vals_p, &zero, f->size))
+ if ((*(u32 *)vals_p & f->field_mask) == 0)
skip = true;
/* clear to denote we consumed this field */
- memset(a_masks_p, 0, f->size);
+ *a_masks_p &= ~f->field_mask;
}
if (skip)
continue;
- field_bsize = f->size * BITS_PER_BYTE;
-
- if (field_bsize == 32) {
+ if (f->field_bsize == 32) {
mask_be32 = *(__be32 *)&mask;
mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
- } else if (field_bsize == 16) {
+ } else if (f->field_bsize == 16) {
mask_be16 = *(__be16 *)&mask;
mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
}
- first = find_first_bit(&mask, field_bsize);
- next_z = find_next_zero_bit(&mask, field_bsize, first);
- last = find_last_bit(&mask, field_bsize);
+ first = find_first_bit(&mask, f->field_bsize);
+ next_z = find_next_zero_bit(&mask, f->field_bsize, first);
+ last = find_last_bit(&mask, f->field_bsize);
if (first < next_z && next_z < last) {
NL_SET_ERR_MSG_MOD(extack,
"rewrite of few sub-fields isn't supported");
@@ -2446,16 +2442,22 @@ static int offload_pedit_fields(struct pedit_headers_action *hdrs,
MLX5_SET(set_action_in, action, field, f->field);
if (cmd == MLX5_ACTION_TYPE_SET) {
- MLX5_SET(set_action_in, action, offset, first);
+ int start;
+
+ /* if field is bit sized it can start not from first bit */
+ start = find_first_bit((unsigned long *)&f->field_mask,
+ f->field_bsize);
+
+ MLX5_SET(set_action_in, action, offset, first - start);
/* length is num of bits to be written, zero means length of 32 */
MLX5_SET(set_action_in, action, length, (last - first + 1));
}
- if (field_bsize == 32)
+ if (f->field_bsize == 32)
MLX5_SET(set_action_in, action, data, ntohl(*(__be32 *)vals_p) >> first);
- else if (field_bsize == 16)
+ else if (f->field_bsize == 16)
MLX5_SET(set_action_in, action, data, ntohs(*(__be16 *)vals_p) >> first);
- else if (field_bsize == 8)
+ else if (f->field_bsize == 8)
MLX5_SET(set_action_in, action, data, *(u8 *)vals_p >> first);
action += action_size;
--
2.13.6
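
As a purely illustrative aside (not part of the patch), the core of the new bit-sized handling is the offset/length/data math in offload_pedit_fields(): the field mask says where the sub-field lives inside its container, and the pedit mask says which bits the user actually rewrites. Below is a minimal userspace C sketch of that math, with made-up helper names, an example ToS value, and GCC/Clang builtins standing in for the kernel bit helpers:

#include <stdint.h>
#include <stdio.h>

/* stand-ins for the kernel's find_first_bit()/find_last_bit() */
static int first_set_bit(uint32_t m) { return __builtin_ctz(m); }
static int last_set_bit(uint32_t m)  { return 31 - __builtin_clz(m); }

int main(void)
{
        uint32_t field_mask = 0xfc;      /* DSCP occupies bits 2..7 of the ToS byte */
        uint32_t pedit_mask = 0xfc;      /* the user rewrites the whole DSCP field  */
        uint32_t pedit_val  = 68 & 0xfc; /* example: set ToS 68, i.e. DSCP 17       */

        int first = first_set_bit(pedit_mask);
        int last  = last_set_bit(pedit_mask);
        int start = first_set_bit(field_mask); /* bit-sized fields need not start at bit 0 */

        printf("offset=%d length=%d data=0x%x\n",
               first - start,       /* offset programmed into set_action_in */
               last - first + 1,    /* number of bits to rewrite            */
               pedit_val >> first); /* value shifted down to bit 0          */
        return 0;
}

For this input it prints offset=0 length=6 data=0x11, i.e. the six DSCP bits shifted down to bit 0, matching what the rewritten offload_pedit_fields() would program.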

@ -0,0 +1,62 @@
From 9555891ed1fbd0e9a491b35499dabb75fd5d6782 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:12 -0400
Subject: [PATCH 094/312] [netdrv] net/mlx5e: Add ToS (DSCP) header rewrite
support
Message-id: <20200510150452.10307-48-ahleihel@redhat.com>
Patchwork-id: 306671
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 47/87] net/mlx5e: Add ToS (DSCP) header rewrite support
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit ab9341b54969a2d02dbb7819e2f17c2f0d9cf5b5
Author: Dmytro Linkin <dmitrolin@mellanox.com>
Date: Mon Oct 7 10:48:00 2019 +0000
net/mlx5e: Add ToS (DSCP) header rewrite support
    Add support for rewriting the DSCP part of the ToS field.
    The following commands, for example, can be used to offload the rewrite action:
OVS:
$ ovs-ofctl add-flow ovs-sriov "ip, in_port=REP, \
actions=mod_nw_tos:68, output:NIC"
    iproute2 (a retain mask is used, since the tc command rewrites the whole ToS field):
$ tc filter add dev REP ingress protocol ip prio 1 flower skip_sw \
ip_proto icmp action pedit munge ip tos set 68 retain 0xfc pipe \
action mirred egress redirect dev NIC
Signed-off-by: Dmytro Linkin <dmitrolin@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index ab6d99d6ba14..1a4b8d995826 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2296,6 +2296,7 @@ static struct mlx5_fields fields[] = {
OFFLOAD(ETHERTYPE, 16, U16_MAX, eth.h_proto, 0, ethertype),
OFFLOAD(FIRST_VID, 16, U16_MAX, vlan.h_vlan_TCI, 0, first_vid),
+ OFFLOAD(IP_DSCP, 8, 0xfc, ip4.tos, 0, ip_dscp),
OFFLOAD(IP_TTL, 8, U8_MAX, ip4.ttl, 0, ttl_hoplimit),
OFFLOAD(SIPV4, 32, U32_MAX, ip4.saddr, 0, src_ipv4_src_ipv6.ipv4_layout.ipv4),
OFFLOAD(DIPV4, 32, U32_MAX, ip4.daddr, 0, dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
--
2.13.6
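
For context only: the 0xfc field mask in the new OFFLOAD(IP_DSCP, ...) entry reflects the ToS layout, where DSCP sits in the upper six bits and ECN in the lower two, so a DSCP rewrite must leave bits 0-1 untouched. A tiny userspace C sketch of that masking (rewrite_dscp() is a made-up name, not driver code):

#include <stdint.h>
#include <stdio.h>

#define IP_DSCP_MASK 0xfc        /* same value as the retain mask in the tc example */

static uint8_t rewrite_dscp(uint8_t tos, uint8_t new_tos)
{
        /* keep the packet's ECN bits, take the DSCP bits from the rewrite value */
        return (tos & ~IP_DSCP_MASK) | (new_tos & IP_DSCP_MASK);
}

int main(void)
{
        uint8_t tos = 0x03;                          /* DSCP 0, both ECN bits set */
        printf("0x%02x\n", rewrite_dscp(tos, 68));   /* prints 0x47: DSCP 17, ECN preserved */
        return 0;
}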

@ -0,0 +1,56 @@
From 33326c01f2afd8a6879e9bcc963dc2c90c13f9a8 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:13 -0400
Subject: [PATCH 095/312] [netdrv] net/mlx5: rate limit alloc_ent error
messages
Message-id: <20200510150452.10307-49-ahleihel@redhat.com>
Patchwork-id: 306672
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 48/87] net/mlx5: rate limit alloc_ent error messages
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 5a212e0cac548e5e4fb3f2ba1b5b2f6c8949687d
Author: Li RongQing <lirongqing@baidu.com>
Date: Thu Oct 24 16:23:33 2019 +0800
net/mlx5: rate limit alloc_ent error messages
    While debugging a bug that triggers a TX hang, the kernel log is
    spammed with the following info message:
[ 1172.044764] mlx5_core 0000:21:00.0: cmd_work_handler:930:(pid 8):
failed to allocate command entry
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 8242f96ab931..71a52b890f38 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -866,7 +866,7 @@ static void cmd_work_handler(struct work_struct *work)
if (!ent->page_queue) {
alloc_ret = alloc_ent(cmd);
if (alloc_ret < 0) {
- mlx5_core_err(dev, "failed to allocate command entry\n");
+ mlx5_core_err_rl(dev, "failed to allocate command entry\n");
if (ent->callback) {
ent->callback(-EAGAIN, ent->context);
mlx5_free_cmd_msg(dev, ent->out);
--
2.13.6
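
For readers unfamiliar with the *_rl suffix: the change swaps the unconditional error print for a rate-limited one, so a storm of failed allocations no longer floods the log. A rough userspace sketch of the rate-limiting idea, with arbitrary interval/burst values chosen only for illustration:

#include <stdio.h>
#include <time.h>

#define RL_INTERVAL_SEC 5        /* arbitrary window length      */
#define RL_BURST        3        /* arbitrary messages per window */

static int err_ratelimited(void)
{
        static time_t window_start;
        static int printed;
        time_t now = time(NULL);

        if (now - window_start >= RL_INTERVAL_SEC) {
                window_start = now;     /* open a new window */
                printed = 0;
        }
        return printed++ < RL_BURST;    /* allow at most RL_BURST per window */
}

int main(void)
{
        for (int i = 0; i < 10; i++)
                if (err_ratelimited())
                        fprintf(stderr, "failed to allocate command entry\n");
        return 0;
}

Only the first few iterations print; the rest are silently dropped, which is the behaviour the patch wants for the driver message.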

@ -0,0 +1,328 @@
From 8e8051d3aa6145a96ad1457fc55cb31426fc2bdf Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:14 -0400
Subject: [PATCH 096/312] [netdrv] net/mlx5: LAG, Use port enumerators
Message-id: <20200510150452.10307-50-ahleihel@redhat.com>
Patchwork-id: 306674
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 49/87] net/mlx5: LAG, Use port enumerators
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
Conflicts:
- drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
Various context diff due to missing commit:
5481d73f8154 ("ipv4: Use accessors for fib_info nexthop data")
And already backported commit:
1cdc14e9d134 ("net/mlx5: LAG, Use affinity type enumerators")
commit 84d2dbb0aaaf1098aa2c2ca07003bf3f973732ac
Author: Erez Alfasi <ereza@mellanox.com>
Date: Mon Sep 16 13:59:58 2019 +0300
net/mlx5: LAG, Use port enumerators
    Instead of using explicit array indexes, simply use
    port enumerators to make the code more readable.
Fixes: 7907f23adc18 ("net/mlx5: Implement RoCE LAG feature")
Signed-off-by: Erez Alfasi <ereza@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/lag.c | 65 +++++++++++++-----------
drivers/net/ethernet/mellanox/mlx5/core/lag.h | 5 ++
drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c | 56 ++++++++++----------
3 files changed, 69 insertions(+), 57 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
index c5ef2ff26465..fc0d9583475d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
@@ -145,34 +145,35 @@ static void mlx5_infer_tx_affinity_mapping(struct lag_tracker *tracker,
{
*port1 = 1;
*port2 = 2;
- if (!tracker->netdev_state[0].tx_enabled ||
- !tracker->netdev_state[0].link_up) {
+ if (!tracker->netdev_state[MLX5_LAG_P1].tx_enabled ||
+ !tracker->netdev_state[MLX5_LAG_P1].link_up) {
*port1 = 2;
return;
}
- if (!tracker->netdev_state[1].tx_enabled ||
- !tracker->netdev_state[1].link_up)
+ if (!tracker->netdev_state[MLX5_LAG_P2].tx_enabled ||
+ !tracker->netdev_state[MLX5_LAG_P2].link_up)
*port2 = 1;
}
void mlx5_modify_lag(struct mlx5_lag *ldev,
struct lag_tracker *tracker)
{
- struct mlx5_core_dev *dev0 = ldev->pf[0].dev;
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
u8 v2p_port1, v2p_port2;
int err;
mlx5_infer_tx_affinity_mapping(tracker, &v2p_port1,
&v2p_port2);
- if (v2p_port1 != ldev->v2p_map[0] ||
- v2p_port2 != ldev->v2p_map[1]) {
- ldev->v2p_map[0] = v2p_port1;
- ldev->v2p_map[1] = v2p_port2;
+ if (v2p_port1 != ldev->v2p_map[MLX5_LAG_P1] ||
+ v2p_port2 != ldev->v2p_map[MLX5_LAG_P2]) {
+ ldev->v2p_map[MLX5_LAG_P1] = v2p_port1;
+ ldev->v2p_map[MLX5_LAG_P2] = v2p_port2;
mlx5_core_info(dev0, "modify lag map port 1:%d port 2:%d",
- ldev->v2p_map[0], ldev->v2p_map[1]);
+ ldev->v2p_map[MLX5_LAG_P1],
+ ldev->v2p_map[MLX5_LAG_P2]);
err = mlx5_cmd_modify_lag(dev0, v2p_port1, v2p_port2);
if (err)
@@ -185,16 +186,17 @@ void mlx5_modify_lag(struct mlx5_lag *ldev,
static int mlx5_create_lag(struct mlx5_lag *ldev,
struct lag_tracker *tracker)
{
- struct mlx5_core_dev *dev0 = ldev->pf[0].dev;
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
int err;
- mlx5_infer_tx_affinity_mapping(tracker, &ldev->v2p_map[0],
- &ldev->v2p_map[1]);
+ mlx5_infer_tx_affinity_mapping(tracker, &ldev->v2p_map[MLX5_LAG_P1],
+ &ldev->v2p_map[MLX5_LAG_P2]);
mlx5_core_info(dev0, "lag map port 1:%d port 2:%d",
- ldev->v2p_map[0], ldev->v2p_map[1]);
+ ldev->v2p_map[MLX5_LAG_P1], ldev->v2p_map[MLX5_LAG_P2]);
- err = mlx5_cmd_create_lag(dev0, ldev->v2p_map[0], ldev->v2p_map[1]);
+ err = mlx5_cmd_create_lag(dev0, ldev->v2p_map[MLX5_LAG_P1],
+ ldev->v2p_map[MLX5_LAG_P2]);
if (err)
mlx5_core_err(dev0,
"Failed to create LAG (%d)\n",
@@ -207,7 +209,7 @@ int mlx5_activate_lag(struct mlx5_lag *ldev,
u8 flags)
{
bool roce_lag = !!(flags & MLX5_LAG_FLAG_ROCE);
- struct mlx5_core_dev *dev0 = ldev->pf[0].dev;
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
int err;
err = mlx5_create_lag(ldev, tracker);
@@ -229,7 +231,7 @@ int mlx5_activate_lag(struct mlx5_lag *ldev,
static int mlx5_deactivate_lag(struct mlx5_lag *ldev)
{
- struct mlx5_core_dev *dev0 = ldev->pf[0].dev;
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
bool roce_lag = __mlx5_lag_is_roce(ldev);
int err;
@@ -252,14 +254,15 @@ static int mlx5_deactivate_lag(struct mlx5_lag *ldev)
static bool mlx5_lag_check_prereq(struct mlx5_lag *ldev)
{
- if (!ldev->pf[0].dev || !ldev->pf[1].dev)
+ if (!ldev->pf[MLX5_LAG_P1].dev || !ldev->pf[MLX5_LAG_P2].dev)
return false;
#ifdef CONFIG_MLX5_ESWITCH
- return mlx5_esw_lag_prereq(ldev->pf[0].dev, ldev->pf[1].dev);
+ return mlx5_esw_lag_prereq(ldev->pf[MLX5_LAG_P1].dev,
+ ldev->pf[MLX5_LAG_P2].dev);
#else
- return (!mlx5_sriov_is_enabled(ldev->pf[0].dev) &&
- !mlx5_sriov_is_enabled(ldev->pf[1].dev));
+ return (!mlx5_sriov_is_enabled(ldev->pf[MLX5_LAG_P1].dev) &&
+ !mlx5_sriov_is_enabled(ldev->pf[MLX5_LAG_P2].dev));
#endif
}
@@ -285,8 +288,8 @@ static void mlx5_lag_remove_ib_devices(struct mlx5_lag *ldev)
static void mlx5_do_bond(struct mlx5_lag *ldev)
{
- struct mlx5_core_dev *dev0 = ldev->pf[0].dev;
- struct mlx5_core_dev *dev1 = ldev->pf[1].dev;
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
+ struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
struct lag_tracker tracker;
bool do_bond, roce_lag;
int err;
@@ -692,10 +695,11 @@ struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev)
goto unlock;
if (ldev->tracker.tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP) {
- ndev = ldev->tracker.netdev_state[0].tx_enabled ?
- ldev->pf[0].netdev : ldev->pf[1].netdev;
+ ndev = ldev->tracker.netdev_state[MLX5_LAG_P1].tx_enabled ?
+ ldev->pf[MLX5_LAG_P1].netdev :
+ ldev->pf[MLX5_LAG_P2].netdev;
} else {
- ndev = ldev->pf[0].netdev;
+ ndev = ldev->pf[MLX5_LAG_P1].netdev;
}
if (ndev)
dev_hold(ndev);
@@ -717,7 +721,8 @@ bool mlx5_lag_intf_add(struct mlx5_interface *intf, struct mlx5_priv *priv)
return true;
ldev = mlx5_lag_dev_get(dev);
- if (!ldev || !__mlx5_lag_is_roce(ldev) || ldev->pf[0].dev == dev)
+ if (!ldev || !__mlx5_lag_is_roce(ldev) ||
+ ldev->pf[MLX5_LAG_P1].dev == dev)
return true;
/* If bonded, we do not add an IB device for PF1. */
@@ -746,11 +751,11 @@ int mlx5_lag_query_cong_counters(struct mlx5_core_dev *dev,
ldev = mlx5_lag_dev_get(dev);
if (ldev && __mlx5_lag_is_roce(ldev)) {
num_ports = MLX5_MAX_PORTS;
- mdev[0] = ldev->pf[0].dev;
- mdev[1] = ldev->pf[1].dev;
+ mdev[MLX5_LAG_P1] = ldev->pf[MLX5_LAG_P1].dev;
+ mdev[MLX5_LAG_P2] = ldev->pf[MLX5_LAG_P2].dev;
} else {
num_ports = 1;
- mdev[0] = dev;
+ mdev[MLX5_LAG_P1] = dev;
}
for (i = 0; i < num_ports; ++i) {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag.h
index 1dea0b1c9826..f1068aac6406 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.h
@@ -8,6 +8,11 @@
#include "lag_mp.h"
enum {
+ MLX5_LAG_P1,
+ MLX5_LAG_P2,
+};
+
+enum {
MLX5_LAG_FLAG_ROCE = 1 << 0,
MLX5_LAG_FLAG_SRIOV = 1 << 1,
MLX5_LAG_FLAG_MULTIPATH = 1 << 2,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
index a5addeadc732..151ba67e4d25 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
@@ -10,10 +10,11 @@
static bool mlx5_lag_multipath_check_prereq(struct mlx5_lag *ldev)
{
- if (!ldev->pf[0].dev || !ldev->pf[1].dev)
+ if (!ldev->pf[MLX5_LAG_P1].dev || !ldev->pf[MLX5_LAG_P2].dev)
return false;
- return mlx5_esw_multipath_prereq(ldev->pf[0].dev, ldev->pf[1].dev);
+ return mlx5_esw_multipath_prereq(ldev->pf[MLX5_LAG_P1].dev,
+ ldev->pf[MLX5_LAG_P2].dev);
}
static bool __mlx5_lag_is_multipath(struct mlx5_lag *ldev)
@@ -52,36 +53,36 @@ static void mlx5_lag_set_port_affinity(struct mlx5_lag *ldev,
switch (port) {
case MLX5_LAG_NORMAL_AFFINITY:
- tracker.netdev_state[0].tx_enabled = true;
- tracker.netdev_state[1].tx_enabled = true;
- tracker.netdev_state[0].link_up = true;
- tracker.netdev_state[1].link_up = true;
+ tracker.netdev_state[MLX5_LAG_P1].tx_enabled = true;
+ tracker.netdev_state[MLX5_LAG_P2].tx_enabled = true;
+ tracker.netdev_state[MLX5_LAG_P1].link_up = true;
+ tracker.netdev_state[MLX5_LAG_P2].link_up = true;
break;
case MLX5_LAG_P1_AFFINITY:
- tracker.netdev_state[0].tx_enabled = true;
- tracker.netdev_state[0].link_up = true;
- tracker.netdev_state[1].tx_enabled = false;
- tracker.netdev_state[1].link_up = false;
+ tracker.netdev_state[MLX5_LAG_P1].tx_enabled = true;
+ tracker.netdev_state[MLX5_LAG_P1].link_up = true;
+ tracker.netdev_state[MLX5_LAG_P2].tx_enabled = false;
+ tracker.netdev_state[MLX5_LAG_P2].link_up = false;
break;
case MLX5_LAG_P2_AFFINITY:
- tracker.netdev_state[0].tx_enabled = false;
- tracker.netdev_state[0].link_up = false;
- tracker.netdev_state[1].tx_enabled = true;
- tracker.netdev_state[1].link_up = true;
+ tracker.netdev_state[MLX5_LAG_P1].tx_enabled = false;
+ tracker.netdev_state[MLX5_LAG_P1].link_up = false;
+ tracker.netdev_state[MLX5_LAG_P2].tx_enabled = true;
+ tracker.netdev_state[MLX5_LAG_P2].link_up = true;
break;
default:
- mlx5_core_warn(ldev->pf[0].dev, "Invalid affinity port %d",
- port);
+ mlx5_core_warn(ldev->pf[MLX5_LAG_P1].dev,
+ "Invalid affinity port %d", port);
return;
}
- if (tracker.netdev_state[0].tx_enabled)
- mlx5_notifier_call_chain(ldev->pf[0].dev->priv.events,
+ if (tracker.netdev_state[MLX5_LAG_P1].tx_enabled)
+ mlx5_notifier_call_chain(ldev->pf[MLX5_LAG_P1].dev->priv.events,
MLX5_DEV_EVENT_PORT_AFFINITY,
(void *)0);
- if (tracker.netdev_state[1].tx_enabled)
- mlx5_notifier_call_chain(ldev->pf[1].dev->priv.events,
+ if (tracker.netdev_state[MLX5_LAG_P2].tx_enabled)
+ mlx5_notifier_call_chain(ldev->pf[MLX5_LAG_P2].dev->priv.events,
MLX5_DEV_EVENT_PORT_AFFINITY,
(void *)0);
@@ -135,11 +136,12 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
return;
/* Verify next hops are ports of the same hca */
- if (!(fi->fib_nh[0].nh_dev == ldev->pf[0].netdev &&
- fi->fib_nh[1].nh_dev == ldev->pf[1].netdev) &&
- !(fi->fib_nh[0].nh_dev == ldev->pf[1].netdev &&
- fi->fib_nh[1].nh_dev == ldev->pf[0].netdev)) {
- mlx5_core_warn(ldev->pf[0].dev, "Multipath offload require two ports of the same HCA\n");
+ if (!(fi->fib_nh[0].nh_dev == ldev->pf[MLX5_LAG_P1].netdev &&
+ fi->fib_nh[1].nh_dev == ldev->pf[MLX5_LAG_P2].netdev) &&
+ !(fi->fib_nh[0].nh_dev == ldev->pf[MLX5_LAG_P2].netdev &&
+ fi->fib_nh[1].nh_dev == ldev->pf[MLX5_LAG_P1].netdev)) {
+ mlx5_core_warn(ldev->pf[MLX5_LAG_P1].dev,
+ "Multipath offload require two ports of the same HCA\n");
return;
}
@@ -255,8 +257,8 @@ static int mlx5_lag_fib_event(struct notifier_block *nb,
fen_info = container_of(info, struct fib_entry_notifier_info,
info);
fi = fen_info->fi;
- if (fi->fib_dev != ldev->pf[0].netdev &&
- fi->fib_dev != ldev->pf[1].netdev) {
+ if (fi->fib_dev != ldev->pf[MLX5_LAG_P1].netdev &&
+ fi->fib_dev != ldev->pf[MLX5_LAG_P2].netdev) {
return NOTIFY_DONE;
}
fib_work = mlx5_lag_init_fib_work(ldev, event);
--
2.13.6
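
The change is purely cosmetic: literal 0/1 array indexes become the named enumerators added to lag.h. A self-contained C sketch of the same idiom (the struct and values below are illustrative, not the driver's definitions):

#include <stdbool.h>
#include <stdio.h>

enum { LAG_P1, LAG_P2, LAG_PORTS };     /* named indexes instead of magic 0/1 */

struct port_state {
        bool tx_enabled;
        bool link_up;
};

int main(void)
{
        struct port_state state[LAG_PORTS] = {
                [LAG_P1] = { .tx_enabled = true,  .link_up = true  },
                [LAG_P2] = { .tx_enabled = false, .link_up = false },
        };

        /* the same kind of check as mlx5_infer_tx_affinity_mapping(), by name */
        if (!state[LAG_P1].tx_enabled || !state[LAG_P1].link_up)
                printf("steer traffic to port 2\n");
        else
                printf("port 1 is usable\n");
        return 0;
}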

@ -0,0 +1,57 @@
From 54b8e94b33419c07a2e04193b185412a08d4786f Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:16 -0400
Subject: [PATCH 097/312] [netdrv] net/mlx5: fix kvfree of uninitialized
pointer spec
Message-id: <20200510150452.10307-52-ahleihel@redhat.com>
Patchwork-id: 306675
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 51/87] net/mlx5: fix kvfree of uninitialized pointer spec
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 8b3f2eb038d3098b37715afced1e62bbc72da90f
Author: Colin Ian King <colin.king@canonical.com>
Date: Tue Nov 5 18:27:40 2019 +0000
net/mlx5: fix kvfree of uninitialized pointer spec
    Currently, when a call to esw_vport_create_legacy_ingress_acl_group
    fails, the error exit path to label 'out' will cause a kvfree on the
    uninitialized pointer spec. Fix this by ensuring the pointer spec is
    initialized to NULL to avoid this issue.
Addresses-Coverity: ("Uninitialized pointer read")
Fixes: 10652f39943e ("net/mlx5: Refactor ingress acl configuration")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 1937198405e1..93cf6eb77163 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1257,7 +1257,7 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
struct mlx5_flow_destination drop_ctr_dst = {0};
struct mlx5_flow_destination *dst = NULL;
struct mlx5_flow_act flow_act = {0};
- struct mlx5_flow_spec *spec;
+ struct mlx5_flow_spec *spec = NULL;
int dest_num = 0;
int err = 0;
u8 *smac_v;
--
2.13.6
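
The one-line fix matters because the function uses a single cleanup label that frees spec: if an earlier step fails before spec is allocated, the cleanup would otherwise free uninitialized stack garbage. A minimal userspace sketch of the pattern, with free() standing in for kvfree() and made-up function names:

#include <stdlib.h>

static int make_group(void) { return -1; }      /* pretend the first step fails */

static int configure(void)
{
        char *spec = NULL;      /* the fix: start from NULL */
        int err;

        err = make_group();
        if (err)
                goto out;       /* spec was never allocated */

        spec = calloc(1, 64);
        if (!spec) {
                err = -1;
                goto out;
        }
        /* ... use spec ... */
out:
        free(spec);             /* safe on both paths: free(NULL), like kvfree(NULL), is a no-op */
        return err;
}

int main(void) { return configure() ? 1 : 0; }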

@ -0,0 +1,53 @@
From 7747b8a366a2a6eeb89b862ccd8f7411bc000126 Mon Sep 17 00:00:00 2001
From: Alaa Hleihel <ahleihel@redhat.com>
Date: Sun, 10 May 2020 15:04:17 -0400
Subject: [PATCH 098/312] [netdrv] net/mlx5: fix spelling mistake "metdata" ->
"metadata"
Message-id: <20200510150452.10307-53-ahleihel@redhat.com>
Patchwork-id: 306677
Patchwork-instance: patchwork
O-Subject: [RHEL8.3 BZ 1789380 v2 52/87] net/mlx5: fix spelling mistake "metdata" -> "metadata"
Bugzilla: 1789380
RH-Acked-by: Kamal Heib <kheib@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Tony Camuso <tcamuso@redhat.com>
RH-Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Bugzilla: http://bugzilla.redhat.com/1789380
Upstream: v5.5-rc1
commit 9ea7f01f470a25bb795224cc0ecc57c91a1519c6
Author: Colin Ian King <colin.king@canonical.com>
Date: Tue Nov 5 14:54:16 2019 +0000
net/mlx5: fix spelling mistake "metdata" -> "metadata"
    There is a spelling mistake in an esw_warn warning message. Fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Alaa Hleihel <ahleihel@redhat.com>
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 7fe085fa3d29..fe1946b89a11 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1880,7 +1880,7 @@ static int esw_vport_create_ingress_acl_group(struct mlx5_eswitch *esw,
if (IS_ERR(g)) {
ret = PTR_ERR(g);
esw_warn(esw->dev,
- "Failed to create vport[%d] ingress metdata group, err(%d)\n",
+ "Failed to create vport[%d] ingress metadata group, err(%d)\n",
vport->vport, ret);
goto grp_err;
}
--
2.13.6
