inclusion: bench enact_candidate weight (#5270)
On top of #5082.

## Background

Previously, before #3479, we would
[include](https://github.com/paritytech/polkadot-sdk/blame/75074952a859f90213ea25257b71ec2189dbcfc1/polkadot/runtime/parachains/src/builder.rs#L508C12-L508C44)
the cost of enacting the candidate in the cost of processing a single
bitfield.
[Now](https://github.com/paritytech/polkadot-sdk/blame/dd48544a573dd02da2082cec1dda7ce735e2e719/polkadot/runtime/parachains/src/builder.rs#L529)
it is different, although the benchmarks seem to be out of date.
Including the cost of enacting a candidate in the cost of processing a
single bitfield was incorrect, since we multiply that cost by the number
of bitfields we have. Instead, we should separately calculate the cost
of processing a single bitfield without enactment, and multiply the cost
of enactment by the actual number of processed candidates (which is
limited by the number of cores, not validators).
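
To make the accounting change concrete, here is a minimal sketch of the two schemes. The helper names (`single_bitfield_weight`, `enact_candidate_weight`) and all numbers are placeholders for illustration, not the actual generated weights:

```rust
use frame_support::weights::Weight;

// Hypothetical stand-ins for the benchmarked weights; the numbers are
// placeholders, not measured values.
fn single_bitfield_weight() -> Weight {
    Weight::from_parts(100_000, 0)
}

fn enact_candidate_weight(ump_msgs: u64, hrmp_msgs: u64, code_upgrade: bool) -> Weight {
    Weight::from_parts(1_000_000, 1024)
        .saturating_add(Weight::from_parts(50_000, 128).saturating_mul(ump_msgs))
        .saturating_add(Weight::from_parts(80_000, 256).saturating_mul(hrmp_msgs))
        .saturating_add(if code_upgrade { Weight::from_parts(500_000, 4096) } else { Weight::zero() })
}

// Old accounting (incorrect): enactment was folded into the per-bitfield cost,
// so it scaled with the number of bitfields, i.e. with the number of validators.
fn old_total(n_bitfields: u64) -> Weight {
    single_bitfield_weight()
        .saturating_add(enact_candidate_weight(16, 10, true))
        .saturating_mul(n_bitfields)
}

// New accounting: bitfields and enactment are charged separately; the
// enactment term scales with the candidates actually enacted, which is
// bounded by the number of cores.
fn new_total(n_bitfields: u64, enacted: &[(u64, u64, bool)]) -> Weight {
    let bitfields = single_bitfield_weight().saturating_mul(n_bitfields);
    enacted
        .iter()
        .fold(bitfields, |acc, &(u, h, c)| acc.saturating_add(enact_candidate_weight(u, h, c)))
}
```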

## Bench

Previously, the weight of `enact_candidate` was calculated manually
(without a benchmark) and then neglected:
https://github.com/paritytech/polkadot-sdk/blob/dd48544a573dd02da2082cec1dda7ce735e2e719/polkadot/runtime/parachains/src/inclusion/mod.rs#L584

In this PR, we add a benchmark for it, based on the number of UMP and
sent HRMP messages as well as on whether the candidate has a runtime
upgrade (`new_validation_code`).
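
As a rough illustration of how a candidate's commitments map onto the three benchmark parameters, here is a sketch with a hypothetical helper (made up for illustration, not part of this PR):

```rust
use polkadot_primitives::CandidateCommitments;

// Hypothetical helper extracting the (u, h, c) benchmark parameters for
// `enact_candidate` from a candidate's commitments.
fn enact_candidate_params(commitments: &CandidateCommitments) -> (u32, u32, u32) {
    let u = commitments.upward_messages.len() as u32; // UMP messages sent
    let h = commitments.horizontal_messages.len() as u32; // outbound HRMP messages sent
    let c = commitments.new_validation_code.is_some() as u32; // runtime upgrade present?
    (u, h, c)
}
```
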
The differences from the previous attempt
paritytech/polkadot#6929 are that
* we don't include the cost of enactment in the cost of processing a
backed candidate. The reason for this is that enactment does not happen
in the same block as backing (typically it happens in the next one),
since we process bitfields before backing votes.
* we don't take into account the size of the runtime upgrade, since the
benchmark weight doesn't seem to depend much on it, but rather on
whether there was one or not.

Similarly to the previous attempt, we don't account for DMP messages
(fixed cost). We also don't properly account for received HRMP messages
(`hrmp_watermark`), because their cost depends on the runtime state and
can't be statically deduced in the benchmark (unless we pass the
information about channels as benchmark u32 arguments).

The total weight of processing a parachain inherent now includes the
cost of enacting each candidate, but we don't do filtering based on it
(because we enact after processing bitfields and making other changes to
storage).

## Numbers

```
Reads = 7 + (0 * u) + (3 * h) + (8 * c)
Writes = 10 + (1 * u) + (3 * h) + (7 * c)
```
In addition, there is a fixed cost of a few ms (!) per candidate.
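
A minimal sketch of how the read/write counts above would translate into a DB weight, assuming a hypothetical helper (the actual generated weight function also carries the measured ref-time component, i.e. the fixed per-candidate cost mentioned above):

```rust
use frame_support::{traits::Get, weights::Weight};

// Hypothetical DB-weight part of `enact_candidate`, following the formulas:
// reads = 7 + 0*u + 3*h + 8*c, writes = 10 + 1*u + 3*h + 7*c.
fn enact_candidate_db_weight<T: frame_system::Config>(u: u64, h: u64, c: u64) -> Weight {
    let reads = 7 + 3 * h + 8 * c;
    let writes = 10 + u + 3 * h + 7 * c;
    T::DbWeight::get().reads_writes(reads, writes)
}
```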

This might result in a full block slightly overflowing its weight limit
with 200 enacted candidates, which in turn could prevent non-mandatory
transactions from being included in a block.

Given our modest limits on the max number of UMP and HRMP messages per candidate:
```
  maxUpwardMessageNumPerCandidate: 16
  hrmpMaxMessageNumPerCandidate: 10
```
and the fact that runtime upgrades can't happen very frequently
(`validation_upgrade_cooldown`), we might only go over the limits in
case of many disputes.
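
For example, a candidate that hits those limits and also carries a runtime upgrade (u = 16, h = 10, c = 1) would cost, per the formulas above:
```
Reads  = 7  + (0 * 16) + (3 * 10) + (8 * 1) = 45
Writes = 10 + (1 * 16) + (3 * 10) + (7 * 1) = 63
```
so 200 such candidates would amount to roughly 9,000 reads and 12,600 writes, on top of the fixed per-candidate cost.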

TODOs:
- [x] Fix the overweight test
- [x] Generate the weights for Westend and Rococo
- [x] PRDoc

---------

Co-authored-by: command-bot <>
Co-authored-by: Alin Dima <alin@parity.io>
ordian and alindima authored Aug 29, 2024
1 parent ba48e4b commit ddd58c1
Showing 15 changed files with 515 additions and 180 deletions.
4 changes: 1 addition & 3 deletions polkadot/node/core/pvf/common/Cargo.toml
@@ -17,9 +17,7 @@ libc = { workspace = true }
nix = { features = ["resource", "sched"], workspace = true }
thiserror = { workspace = true }

codec = { features = [
"derive",
], workspace = true }
codec = { features = ["derive"], workspace = true }

polkadot-parachain-primitives = { workspace = true, default-features = true }
polkadot-primitives = { workspace = true, default-features = true }
31 changes: 16 additions & 15 deletions polkadot/runtime/parachains/src/builder.rs
@@ -68,6 +68,21 @@ fn account<AccountId: Decode>(name: &'static str, index: u32, seed: u32) -> Acco
.expect("infinite input; no invalid input; qed")
}

pub fn generate_validator_pairs<T: frame_system::Config>(
validator_count: u32,
) -> Vec<(T::AccountId, ValidatorId)> {
(0..validator_count)
.map(|i| {
let public = ValidatorId::generate_pair(None);

// The account Id is not actually used anywhere, just necessary to fulfill the
// expected type of the `validators` param of `test_trigger_on_new_session`.
let account: T::AccountId = account("validator", i, i);
(account, public)
})
.collect()
}

/// Create a 32 byte slice based on the given number.
fn byte32_slice_from(n: u32) -> [u8; 32] {
let mut slice = [0u8; 32];
@@ -423,20 +438,6 @@ impl<T: paras_inherent::Config> BenchBuilder<T> {
}
}

/// Generate validator key pairs and account ids.
fn generate_validator_pairs(validator_count: u32) -> Vec<(T::AccountId, ValidatorId)> {
(0..validator_count)
.map(|i| {
let public = ValidatorId::generate_pair(None);

// The account Id is not actually used anywhere, just necessary to fulfill the
// expected type of the `validators` param of `test_trigger_on_new_session`.
let account: T::AccountId = account("validator", i, i);
(account, public)
})
.collect()
}

fn signing_context(&self) -> SigningContext<T::Hash> {
SigningContext {
parent_hash: Self::header(self.block_number).hash(),
@@ -800,7 +801,7 @@ impl<T: paras_inherent::Config> BenchBuilder<T> {
c.scheduler_params.num_cores = used_cores as u32;
});

let validator_ids = Self::generate_validator_pairs(self.max_validators());
let validator_ids = generate_validator_pairs::<T>(self.max_validators());
let target_session = SessionIndex::from(self.target_session);
let builder = self.setup_session(target_session, validator_ids, used_cores, extra_cores);

3 changes: 1 addition & 2 deletions polkadot/runtime/parachains/src/dmp.rs
@@ -287,7 +287,7 @@ impl<T: Config> Pallet<T> {
}

/// Prunes the specified number of messages from the downward message queue of the given para.
pub(crate) fn prune_dmq(para: ParaId, processed_downward_messages: u32) -> Weight {
pub(crate) fn prune_dmq(para: ParaId, processed_downward_messages: u32) {
let q_len = DownwardMessageQueues::<T>::mutate(para, |q| {
let processed_downward_messages = processed_downward_messages as usize;
if processed_downward_messages > q.len() {
@@ -306,7 +306,6 @@ impl<T: Config> Pallet<T> {
if q_len <= (threshold as usize) {
Self::decrease_fee_factor(para);
}
T::DbWeight::get().reads_writes(1, 1)
}

/// Returns the Head of Message Queue Chain for the given para or `None` if there is none
19 changes: 2 additions & 17 deletions polkadot/runtime/parachains/src/hrmp.rs
@@ -1305,9 +1305,7 @@ impl<T: Config> Pallet<T> {
remaining
}

pub(crate) fn prune_hrmp(recipient: ParaId, new_hrmp_watermark: BlockNumberFor<T>) -> Weight {
let mut weight = Weight::zero();

pub(crate) fn prune_hrmp(recipient: ParaId, new_hrmp_watermark: BlockNumberFor<T>) {
// sift through the incoming messages digest to collect the paras that sent at least one
// message to this parachain between the old and new watermarks.
let senders = HrmpChannelDigests::<T>::mutate(&recipient, |digest| {
@@ -1323,7 +1321,6 @@ impl<T: Config> Pallet<T> {
*digest = leftover;
senders
});
weight += T::DbWeight::get().reads_writes(1, 1);

// having all senders we can trivially find out the channels which we need to prune.
let channels_to_prune =
@@ -1356,21 +1353,13 @@ impl<T: Config> Pallet<T> {
channel.total_size -= pruned_size as u32;
}
});

weight += T::DbWeight::get().reads_writes(2, 2);
}

HrmpWatermarks::<T>::insert(&recipient, new_hrmp_watermark);
weight += T::DbWeight::get().reads_writes(0, 1);

weight
}

/// Process the outbound HRMP messages by putting them into the appropriate recipient queues.
///
/// Returns the amount of weight consumed.
pub(crate) fn queue_outbound_hrmp(sender: ParaId, out_hrmp_msgs: HorizontalMessages) -> Weight {
let mut weight = Weight::zero();
pub(crate) fn queue_outbound_hrmp(sender: ParaId, out_hrmp_msgs: HorizontalMessages) {
let now = frame_system::Pallet::<T>::block_number();

for out_msg in out_hrmp_msgs {
@@ -1426,11 +1415,7 @@ impl<T: Config> Pallet<T> {
recipient_digest.push((now, vec![sender]));
}
HrmpChannelDigests::<T>::insert(&channel_id.recipient, recipient_digest);

weight += T::DbWeight::get().reads_writes(2, 2);
}

weight
}

/// Initiate opening a channel from a parachain to a given recipient with given channel
123 changes: 117 additions & 6 deletions polkadot/runtime/parachains/src/inclusion/benchmarking.rs
@@ -15,23 +15,134 @@
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.

use super::*;
use crate::{
builder::generate_validator_pairs,
configuration,
hrmp::{HrmpChannel, HrmpChannels},
initializer, HeadData, ValidationCode,
};
use bitvec::{bitvec, prelude::Lsb0};
use frame_benchmarking::benchmarks;
use pallet_message_queue as mq;
use polkadot_primitives::{
CandidateCommitments, CollatorId, CollatorSignature, CommittedCandidateReceipt, HrmpChannelId,
OutboundHrmpMessage, SessionIndex,
};
use sp_core::sr25519;

fn create_candidate_commitments<T: crate::hrmp::pallet::Config>(
para_id: ParaId,
head_data: HeadData,
max_msg_len: usize,
ump_msg_count: u32,
hrmp_msg_count: u32,
code_upgrade: bool,
) -> CandidateCommitments {
let upward_messages = {
let unbounded = create_messages(max_msg_len, ump_msg_count as _);
BoundedVec::truncate_from(unbounded)
};

let horizontal_messages = {
let unbounded = create_messages(max_msg_len, hrmp_msg_count as _);

for n in 0..unbounded.len() {
let channel_id = HrmpChannelId { sender: para_id, recipient: para_id + n as u32 + 1 };
HrmpChannels::<T>::insert(
&channel_id,
HrmpChannel {
sender_deposit: 42,
recipient_deposit: 42,
max_capacity: 10_000_000,
max_total_size: 1_000_000_000,
max_message_size: 10_000_000,
msg_count: 0,
total_size: 0,
mqc_head: None,
},
);
}

let unbounded = unbounded
.into_iter()
.enumerate()
.map(|(n, data)| OutboundHrmpMessage { recipient: para_id + n as u32 + 1, data })
.collect();
BoundedVec::truncate_from(unbounded)
};

let new_validation_code = code_upgrade.then_some(ValidationCode(vec![42u8; 1024]));

CandidateCommitments::<u32> {
upward_messages,
horizontal_messages,
new_validation_code,
head_data,
processed_downward_messages: 0,
hrmp_watermark: 10,
}
}

fn create_messages(msg_len: usize, n_msgs: usize) -> Vec<Vec<u8>> {
let best_number = 73_u8; // Chuck Norris of numbers
vec![vec![best_number; msg_len]; n_msgs]
}

benchmarks! {
where_clause {
where
T: mq::Config,
T: mq::Config + configuration::Config + initializer::Config,
}

receive_upward_messages {
let i in 1 .. 1000;
enact_candidate {
let u in 1 .. 32;
let h in 1 .. 32;
let c in 0 .. 1;

let para = 42_u32.into(); // not especially important.

let max_len = mq::MaxMessageLenOf::<T>::get() as usize;
let para = 42u32.into(); // not especially important.
let upward_messages = vec![vec![0; max_len]; i as usize];

let config = configuration::ActiveConfig::<T>::get();
let n_validators = config.max_validators.unwrap_or(500);
let validators = generate_validator_pairs::<T>(n_validators);

let session = SessionIndex::from(0u32);
initializer::Pallet::<T>::test_trigger_on_new_session(
false,
session,
validators.iter().map(|(a, v)| (a, v.clone())),
None,
);
let backing_group_size = config.scheduler_params.max_validators_per_core.unwrap_or(5);
let head_data = HeadData(vec![0xFF; 1024]);

let relay_parent_number = BlockNumberFor::<T>::from(10u32);
let commitments = create_candidate_commitments::<T>(para, head_data, max_len, u, h, c != 0);
let backers = bitvec![u8, Lsb0; 1; backing_group_size as usize];
let availability_votes = bitvec![u8, Lsb0; 1; n_validators as usize];
let core_index = CoreIndex::from(0);
let backing_group = GroupIndex::from(0);

let descriptor = CandidateDescriptor::<T::Hash> {
para_id: para,
relay_parent: Default::default(),
collator: CollatorId::from(sr25519::Public::from_raw([42u8; 32])),
persisted_validation_data_hash: Default::default(),
pov_hash: Default::default(),
erasure_root: Default::default(),
signature: CollatorSignature::from(sr25519::Signature::from_raw([42u8; 64])),
para_head: Default::default(),
validation_code_hash: ValidationCode(vec![1, 2, 3]).hash(),
};

let receipt = CommittedCandidateReceipt::<T::Hash> {
descriptor,
commitments,
};

Pallet::<T>::receive_upward_messages(para, vec![vec![0; max_len]; 1].as_slice());
}: { Pallet::<T>::receive_upward_messages(para, upward_messages.as_slice()) }
} : { Pallet::<T>::enact_candidate(relay_parent_number, receipt, backers, availability_votes, core_index, backing_group) }

impl_benchmark_test_suite!(
Pallet,