doc: Correct OpenFlow table numbers.
Corrected the OpenFlow table numbers in code comments and
documentation to match the values actually in use.

Signed-off-by: Alexandra Rukomoinikova <arukomoinikova@k2.cloud>
Signed-off-by: 0-day Robot <robot@bytheb.org>
Alexandra Rukomoinikova authored and ovsrobot committed Jan 10, 2025
1 parent a033bf5 commit c06e99a
Showing 2 changed files with 55 additions and 55 deletions.
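For orientation, here is the renumbering this commit applies, reconstructed
from the corrected comments in the diff below. The authoritative values are
the OFTABLE_* macros in the ovn-controller sources, which this commit does
not touch, so the following is a sketch rather than the real definitions:

    /* Output-stage table numbers implied by the corrected comments;
     * "was" marks the stale number each comment previously carried. */
    #define OFTABLE_OUTPUT_LARGE_PKT_DETECT  40  /* was 37 */
    #define OFTABLE_OUTPUT_LARGE_PKT_PROCESS 41  /* was 38 */
    #define OFTABLE_REMOTE_OUTPUT            42  /* was 39 */
    #define OFTABLE_LOCAL_OUTPUT             43  /* was 40 */
    #define OFTABLE_CHECK_LOOPBACK           44  /* was 41 */

The documentation half of the diff also moves the logical ingress pipeline
from OpenFlow tables 8-31 to 8-39 (Logical_Flow tables 0-29) and the start
of the logical egress pipeline from table 42 to table 45.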
26 changes: 13 additions & 13 deletions controller/physical.c
@@ -926,12 +926,12 @@ put_local_common_flows(uint32_t dp_key,

uint32_t port_key = pb->tunnel_key;

- /* Table 40, priority 100.
+ /* Table 43, priority 100.
* =======================
*
* Implements output to local hypervisor. Each flow matches a
* logical output port on the local hypervisor, and resubmits to
- * table 41.
+ * table 44.
*/

ofpbuf_clear(ofpacts_p);
@@ -941,13 +941,13 @@ put_local_common_flows(uint32_t dp_key,

put_zones_ofpacts(zone_ids, ofpacts_p);

- /* Resubmit to table 41. */
+ /* Resubmit to table 44. */
put_resubmit(OFTABLE_CHECK_LOOPBACK, ofpacts_p);
ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100,
pb->header_.uuid.parts[0], &match, ofpacts_p,
&pb->header_.uuid);

- /* Table 41, Priority 100.
+ /* Table 44, Priority 100.
* =======================
*
* Drop packets whose logical inport and outport are the same
@@ -1575,7 +1575,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
|| ha_chassis_group_is_active(binding->ha_chassis_group,
active_tunnels, chassis))) {

- /* Table 40, priority 100.
+ /* Table 43, priority 100.
* =======================
*
* Implements output to local hypervisor. Each flow matches a
@@ -1918,7 +1918,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
}
}

- /* Table 39, priority 150.
+ /* Table 42, priority 150.
* =======================
*
* Handles packets received from ports of type "localport". These
@@ -1938,7 +1938,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
}
} else if (access_type == PORT_LOCALNET && !always_tunnel) {
/* Remote port connected by localnet port */
- /* Table 40, priority 100.
+ /* Table 43, priority 100.
* =======================
*
* Implements switching to localnet port. Each flow matches a
@@ -1958,7 +1958,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,

put_load(localnet_port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, ofpacts_p);

- /* Resubmit to table 40. */
+ /* Resubmit to table 43. */
put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts_p);
ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100,
binding->header_.uuid.parts[0],
@@ -1977,7 +1977,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
const char *redirect_type = smap_get(&binding->options,
"redirect-type");

- /* Table 40, priority 100.
+ /* Table 43, priority 100.
* =======================
*
* Handles traffic that needs to be sent to a remote hypervisor. Each
@@ -2666,7 +2666,7 @@ physical_run(struct physical_ctx *p_ctx,
*/
add_default_drop_flow(p_ctx, OFTABLE_PHY_TO_LOG, flow_table);

- /* Table 37-38, priority 0.
+ /* Table 40-41, priority 0.
* ========================
*
* Default resubmit actions for OFTABLE_OUTPUT_LARGE_PKT_* tables.
@@ -2692,7 +2692,7 @@ physical_run(struct physical_ctx *p_ctx,
ofctrl_add_flow(flow_table, OFTABLE_OUTPUT_LARGE_PKT_PROCESS, 0, 0, &match,
&ofpacts, hc_uuid);

- /* Table 39, priority 150.
+ /* Table 42, priority 150.
* =======================
*
* Handles packets received from a VXLAN tunnel which get resubmitted to
@@ -2711,7 +2711,7 @@ physical_run(struct physical_ctx *p_ctx,
ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 150, 0,
&match, &ofpacts, hc_uuid);

- /* Table 39, priority 150.
+ /* Table 42, priority 150.
* =======================
*
* Packets that should not be sent to other hypervisors.
@@ -2725,7 +2725,7 @@ physical_run(struct physical_ctx *p_ctx,
ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 150, 0,
&match, &ofpacts, hc_uuid);

- /* Table 39, Priority 0.
+ /* Table 42, Priority 0.
* =======================
*
* Resubmit packets that are not directed at tunnels or part of a
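The corrected comments above can be sanity-checked on a running chassis by
dumping the renumbered tables on the integration bridge. A minimal sketch,
assuming the usual OVN register conventions (metadata = logical datapath,
reg14 = logical input port, reg15 = logical output port); the bridge name,
cookie, and match values here are illustrative placeholders, not output
from a real system:

    $ ovs-ofctl dump-flows br-int table=43   # OFTABLE_LOCAL_OUTPUT (was 40)
     cookie=0x..., table=43, priority=100,reg15=0x3,metadata=0x1
        actions=...,resubmit(,44)            # output to local port, then loopback check
    $ ovs-ofctl dump-flows br-int table=44   # OFTABLE_CHECK_LOOPBACK (was 41)
     cookie=0x..., table=44, priority=100,reg14=0x3,reg15=0x3,metadata=0x1
        actions=drop                         # logical inport == outport: drop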
84 changes: 42 additions & 42 deletions ovn-architecture.7.xml
@@ -1233,8 +1233,8 @@
output port field, and since they do not carry a logical output port
field in the tunnel key, when a packet is received from ramp switch
VXLAN tunnel by an OVN hypervisor, the packet is resubmitted to table 8
- to determine the output port(s); when the packet reaches table 39,
- these packets are resubmitted to table 40 for local delivery by
+ to determine the output port(s); when the packet reaches table 42,
+ these packets are resubmitted to table 43 for local delivery by
checking a MLF_RCV_FROM_RAMP flag, which is set when the packet
arrives from a ramp tunnel.
</p>
@@ -1332,20 +1332,20 @@
output port is known. These pieces of information are obtained
from the tunnel encapsulation metadata (see <code>Tunnel
Encapsulations</code> for encoding details). Then the actions resubmit
- to table 38 to enter the logical egress pipeline.
+ to table 45 to enter the logical egress pipeline.
</p>
</li>

<li>
<p>
- OpenFlow tables 8 through 31 execute the logical ingress pipeline from
+ OpenFlow tables 8 through 39 execute the logical ingress pipeline from
the <code>Logical_Flow</code> table in the OVN Southbound database.
These tables are expressed entirely in terms of logical concepts like
logical ports and logical datapaths. A big part of
<code>ovn-controller</code>'s job is to translate them into equivalent
OpenFlow (in particular it translates the table numbers:
- <code>Logical_Flow</code> tables 0 through 23 become OpenFlow tables 8
- through 31).
+ <code>Logical_Flow</code> tables 0 through 29 become OpenFlow tables 8
+ through 39).
</p>

<p>
@@ -1387,9 +1387,9 @@
<dl>
<dt><code>output:</code></dt>
<dd>
- Implemented by resubmitting the packet to table 37. If the pipeline
+ Implemented by resubmitting the packet to table 40. If the pipeline
executes more than one <code>output</code> action, then each one is
- separately resubmitted to table 37. This can be used to send
+ separately resubmitted to table 40. This can be used to send
multiple copies of the packet to multiple ports. (If the packet was
not modified between the <code>output</code> actions, and some of the
copies are destined to the same hypervisor, then using a logical
@@ -1453,58 +1453,58 @@

<li>
<p>
- OpenFlow tables 37 through 41 implement the <code>output</code> action
- in the logical ingress pipeline. Specifically, table 37 serves as an
- entry point to egress pipeline. Table 37 detects IP packets that are
- too big for a corresponding interface. Table 38 produces ICMPv4
+ OpenFlow tables 40 through 44 implement the <code>output</code> action
+ in the logical ingress pipeline. Specifically, table 40 serves as an
+ entry point to egress pipeline. Table 40 detects IP packets that are
+ too big for a corresponding interface. Table 41 produces ICMPv4
Fragmentation Needed (or ICMPv6 Too Big) errors and delivers them back
- to the offending port. Table 39 handles packets to remote hypervisors,
- table 40 handles packets to the local hypervisor, and table 41 checks
+ to the offending port. Table 42 handles packets to remote hypervisors,
+ table 43 handles packets to the local hypervisor, and table 44 checks
whether packets whose logical ingress and egress port are the same
should be discarded.
</p>

<p>
Logical patch ports are a special case. Logical patch ports do not
have a physical location and effectively reside on every hypervisor.
- Thus, flow table 40, for output to ports on the local hypervisor,
+ Thus, flow table 43, for output to ports on the local hypervisor,
naturally implements output to unicast logical patch ports too.
However, applying the same logic to a logical patch port that is part
of a logical multicast group yields packet duplication, because each
hypervisor that contains a logical port in the multicast group will
also output the packet to the logical patch port. Thus, multicast
- groups implement output to logical patch ports in table 39.
+ groups implement output to logical patch ports in table 42.
</p>

<p>
- Each flow in table 39 matches on a logical output port for unicast or
+ Each flow in table 42 matches on a logical output port for unicast or
multicast logical ports that include a logical port on a remote
hypervisor. Each flow's actions implement sending a packet to the port
it matches. For unicast logical output ports on remote hypervisors,
the actions set the tunnel key to the correct value, then send the
packet on the tunnel port to the correct hypervisor. (When the remote
hypervisor receives the packet, table 0 there will recognize it as a
- tunneled packet and pass it along to table 40.) For multicast logical
+ tunneled packet and pass it along to table 43.) For multicast logical
output ports, the actions send one copy of the packet to each remote
hypervisor, in the same way as for unicast destinations. If a
multicast group includes a logical port or ports on the local
- hypervisor, then its actions also resubmit to table 40. Table 39 also
+ hypervisor, then its actions also resubmit to table 43. Table 42 also
includes:
</p>

<ul>
<li>
A higher-priority rule to match packets received from ramp switch
tunnels, based on flag MLF_RCV_FROM_RAMP, and resubmit these packets
- to table 40 for local delivery. Packets received from ramp switch
+ to table 43 for local delivery. Packets received from ramp switch
tunnels reach here because of a lack of logical output port field in
the tunnel key and thus these packets needed to be submitted to table
8 to determine the output port.
</li>
<li>
A higher-priority rule to match packets received from ports of type
<code>localport</code>, based on the logical input port, and resubmit
- these packets to table 40 for local delivery. Ports of type
+ these packets to table 43 for local delivery. Ports of type
<code>localport</code> exist on every hypervisor and by definition
their traffic should never go out through a tunnel.
</li>
@@ -1519,41 +1519,41 @@
packets, the packets only need to be delivered to local ports.
</li>
<li>
- A fallback flow that resubmits to table 40 if there is no other
+ A fallback flow that resubmits to table 43 if there is no other
match.
</li>
</ul>

<p>
- Flows in table 40 resemble those in table 39 but for logical ports that
+ Flows in table 43 resemble those in table 42 but for logical ports that
reside locally rather than remotely. For unicast logical output ports
- on the local hypervisor, the actions just resubmit to table 41. For
+ on the local hypervisor, the actions just resubmit to table 44. For
multicast output ports that include one or more logical ports on the
local hypervisor, for each such logical port <var>P</var>, the actions
change the logical output port to <var>P</var>, then resubmit to table
- 41.
+ 44.
</p>

<p>
A special case is that when a localnet port exists on the datapath,
remote port is connected by switching to the localnet port. In this
- case, instead of adding a flow in table 39 to reach the remote port, a
- flow is added in table 40 to switch the logical outport to the localnet
- port, and resubmit to table 40 as if it were unicasted to a logical
+ case, instead of adding a flow in table 42 to reach the remote port, a
+ flow is added in table 43 to switch the logical outport to the localnet
+ port, and resubmit to table 43 as if it were unicasted to a logical
port on the local hypervisor.
</p>

<p>
- Table 41 matches and drops packets for which the logical input and
+ Table 44 matches and drops packets for which the logical input and
output ports are the same and the MLF_ALLOW_LOOPBACK flag is not
set. It also drops MLF_LOCAL_ONLY packets directed to a localnet port.
- It resubmits other packets to table 42.
+ It resubmits other packets to table 45.
</p>
</li>

<li>
<p>
- OpenFlow tables 42 through 62 execute the logical egress pipeline from
+ OpenFlow tables 45 through 62 execute the logical egress pipeline from
the <code>Logical_Flow</code> table in the OVN Southbound database.
The egress pipeline can perform a final stage of validation before
packet delivery. Eventually, it may execute an <code>output</code>
@@ -1572,7 +1572,7 @@
<li>
<p>
Table 64 bypasses OpenFlow loopback when MLF_ALLOW_LOOPBACK is set.
- Logical loopback was handled in table 41, but OpenFlow by default also
+ Logical loopback was handled in table 44, but OpenFlow by default also
prevents loopback to the OpenFlow ingress port. Thus, when
MLF_ALLOW_LOOPBACK is set, OpenFlow table 64 saves the OpenFlow ingress
port, sets it to zero, resubmits to table 65 for logical-to-physical
@@ -1610,8 +1610,8 @@
traverse tables 0 to 65 as described in the previous section
<code>Architectural Physical Life Cycle of a Packet</code>, using the
logical datapath representing the logical switch that the sender is
- attached to. At table 39, the packet will use the fallback flow that
- resubmits locally to table 40 on the same hypervisor. In this case,
+ attached to. At table 42, the packet will use the fallback flow that
+ resubmits locally to table 43 on the same hypervisor. In this case,
all of the processing from table 0 to table 65 occurs on the hypervisor
where the sender resides.
</p>
@@ -1669,9 +1669,9 @@
When a hypervisor processes a packet on a logical datapath
representing a logical switch, and the logical egress port is a
<code>l3gateway</code> port representing connectivity to a gateway
- router, the packet will match a flow in table 39 that sends the
+ router, the packet will match a flow in table 42 that sends the
packet on a tunnel port to the chassis where the gateway router
- resides. This processing in table 39 is done in the same manner as
+ resides. This processing in table 42 is done in the same manner as
for VIFs.
</p>

@@ -1764,21 +1764,21 @@
chassis, one additional mechanism is required. When a packet
leaves the ingress pipeline and the logical egress port is the
distributed gateway port, one of two different sets of actions is
- required at table 39:
+ required at table 42:
</p>

<ul>
<li>
If the packet can be handled locally on the sender's hypervisor
(e.g. one-to-one NAT traffic), then the packet should just be
- resubmitted locally to table 40, in the normal manner for
+ resubmitted locally to table 43, in the normal manner for
distributed logical patch ports.
</li>

<li>
However, if the packet needs to be handled on the chassis
associated with the distributed gateway port (e.g. one-to-many
- SNAT traffic or non-NAT traffic), then table 39 must send the
+ SNAT traffic or non-NAT traffic), then table 42 must send the
packet on a tunnel port to that chassis.
</li>
</ul>
@@ -1790,11 +1790,11 @@
egress port to the type <code>chassisredirect</code> logical port is
simply a way to indicate that although the packet is destined for
the distributed gateway port, it needs to be redirected to a
- different chassis. At table 39, packets with this logical egress
- port are sent to a specific chassis, in the same way that table 39
+ different chassis. At table 42, packets with this logical egress
+ port are sent to a specific chassis, in the same way that table 42
directs packets whose logical egress port is a VIF or a type
<code>l3gateway</code> port to different chassis. Once the packet
- arrives at that chassis, table 40 resets the logical egress port to
+ arrives at that chassis, table 43 resets the logical egress port to
the value representing the distributed gateway port. For each
distributed gateway port, there is one type
<code>chassisredirect</code> port, in addition to the distributed
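Read together, the corrected numbers give the following end-to-end walk for
a unicast packet between VIFs on two different hypervisors, summarizing the
documentation above (tables 40-41 have default resubmits and only act on
oversized IP packets):

    sender:    table 0       physical-to-logical translation
               tables 8-39   logical ingress pipeline (Logical_Flow 0-29)
               table 40      too-big detection (entry to the output stage)
               table 41      ICMPv4 Fragmentation Needed / ICMPv6 Too Big errors
               table 42      remote output: tunnel to the destination chassis

    receiver:  table 0       tunnel input (datapath and outport from the tunnel key)
               table 43      local output
               table 44      loopback check
               tables 45-62  logical egress pipeline
               table 64      loopback workaround (MLF_ALLOW_LOOPBACK)
               table 65      logical-to-physical translation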
