Moin,

the tiled_conv_auto function from gemmini.h expects the inputs to be in NHWC format and the weights in KhKwIO format, but the standard layout in PyTorch is NCHW for input/output tensors and OIKhKw for the weights.

Right now, for testing, I manually permute my input data to match the format Gemmini expects (roughly the transformation sketched below). This works, but it causes significant overhead, so I was wondering how difficult it would be to make the layout a configurable parameter.
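For reference, the permutation amounts to something like the following minimal C sketch (illustrative names only; elem_t stands in for Gemmini's element type, which is int8_t in the default config):

```c
#include <stddef.h>
#include <stdint.h>

typedef int8_t elem_t;  // stands in for Gemmini's element type

// NCHW -> NHWC copy of an input tensor. The extra full pass over the
// data is where the permutation overhead comes from.
void nchw_to_nhwc(const elem_t *src, elem_t *dst,
                  size_t N, size_t C, size_t H, size_t W) {
    for (size_t n = 0; n < N; n++)
        for (size_t h = 0; h < H; h++)
            for (size_t w = 0; w < W; w++)
                for (size_t c = 0; c < C; c++)
                    dst[((n * H + h) * W + w) * C + c] =
                        src[((n * C + c) * H + h) * W + w];
}

// Likewise for the weights: OIKhKw -> KhKwIO.
void oihw_to_hwio(const elem_t *src, elem_t *dst,
                  size_t O, size_t I, size_t Kh, size_t Kw) {
    for (size_t kh = 0; kh < Kh; kh++)
        for (size_t kw = 0; kw < Kw; kw++)
            for (size_t i = 0; i < I; i++)
                for (size_t o = 0; o < O; o++)
                    dst[((kh * Kw + kw) * I + i) * O + o] =
                        src[((o * I + i) * Kh + kh) * Kw + kw];
}
```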
I tried modifying the following lines in gemmini.h, but it does not work. I would appreciate any input on this.

Best regards
The main problem is that switching from NHWC to NCHW changes the innermost dimension. Changing outer dimensions would just involve setting new strides in LoopConv.scala and tiled_conv (as you began doing), but changing the innermost dimension requires more intricate changes to how Gemmini expects the data to be laid out in its scratchpad.
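To make the stride argument concrete, here is a small, self-contained sketch (plain C, illustrative sizes; not Gemmini code) of the address arithmetic under the two layouts. Note which dimension has unit stride in each:

```c
#include <stdio.h>
#include <stddef.h>

// Flat address of element (n, c, h, w) under each layout. Only the
// strides differ between the two functions, which is why reordering
// *outer* dimensions can be expressed as new strides passed to the
// conv loops.
static size_t addr_nhwc(size_t n, size_t c, size_t h, size_t w,
                        size_t C, size_t H, size_t W) {
    return n * (H * W * C) + h * (W * C) + w * C + c;  // C has stride 1
}

static size_t addr_nchw(size_t n, size_t c, size_t h, size_t w,
                        size_t C, size_t H, size_t W) {
    return n * (C * H * W) + c * (H * W) + h * W + w;  // W has stride 1
}

int main(void) {
    size_t C = 64, H = 32, W = 32;  // illustrative sizes
    // Distance in memory between channel c and channel c+1 of one pixel:
    printf("NHWC channel step: %zu\n",
           addr_nhwc(0, 1, 0, 0, C, H, W) - addr_nhwc(0, 0, 0, 0, C, H, W));
    printf("NCHW channel step: %zu\n",
           addr_nchw(0, 1, 0, 0, C, H, W) - addr_nchw(0, 0, 0, 0, C, H, W));
    // Prints 1 and 1024: in NHWC the channels of a pixel are contiguous,
    // so a contiguous burst into a scratchpad row yields a channel
    // vector; in NCHW a contiguous burst walks along W instead, and no
    // choice of strides alone can reproduce the channels-contiguous
    // layout in the scratchpad.
    return 0;
}
```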