Fix support const as input to linear layers in pytorch #1080
Conversation
@@ -56,7 +56,7 @@ def __init__(self, unit_test, func, const, input_reverse_order=False):
     def get_tpc(self):
         tp = generate_test_tp_model({'weights_n_bits': 32,
                                      'activation_n_bits': 32,
-                                     'enable_weights_quantization': False,
+                                     'enable_weights_quantization': True,
maybe test with AND without weights quantization?
@lapid92
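The review above asks to cover both settings. A minimal sketch of how the test config could be parameterized over the flag; `make_tpc` is a hypothetical stand-in for the `generate_test_tp_model` helper shown in the diff:

```python
def make_tpc(enable_weights_quantization: bool) -> dict:
    # Hypothetical stand-in for generate_test_tp_model from the diff:
    # builds the test target-platform config with the given flag.
    return {'weights_n_bits': 32,
            'activation_n_bits': 32,
            'enable_weights_quantization': enable_weights_quantization}

def tpc_variants() -> list:
    # Run the const-input test under both settings, per the review comment.
    return [make_tpc(flag) for flag in (False, True)]
```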
@@ -239,7 +239,9 @@ def insert_positional_weights_to_input_list(self, input_tensors: List) -> List:
         for pos, weight in sorted((pos, weight) for pos, weight in self.weights.items()
                                   if isinstance(pos, int)):
             assert pos <= len(input_tensors), 'Positional weight index mismatch'
             input_tensors.insert(pos, weight)
+            # Insert only positional weights that are not subject to quantization.
maybe explain why?
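A sketch of the behavior the comment in the diff describes: quantized positional weights are delivered through the quantized-weights path, so only the non-quantized ones should be inserted into the input list. The `is_quantized` lookup is a hypothetical simplification; the real code queries the node's quantization config:

```python
from typing import Any, Dict, List

def insert_positional_weights(weights: Dict[Any, Any],
                              is_quantized: Dict[int, bool],
                              input_tensors: List) -> List:
    # Iterate positional (integer-keyed) weights in position order.
    for pos, weight in sorted((p, w) for p, w in weights.items()
                              if isinstance(p, int)):
        if is_quantized.get(pos, False):
            # Skip: this weight is handled by the weight-quantization path.
            continue
        assert pos <= len(input_tensors), 'Positional weight index mismatch'
        input_tensors.insert(pos, weight)
    return input_tensors
```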
@@ -239,7 +239,9 @@ def insert_positional_weights_to_input_list(self, input_tensors: List) -> List:
         for pos, weight in sorted((pos, weight) for pos, weight in self.weights.items()
                                   if isinstance(pos, int)):
             assert pos <= len(input_tensors), 'Positional weight index mismatch'
change this to if with a "critical" logger message
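A sketch of the reviewer's suggestion: replace the bare `assert` with an explicit check that logs at critical level before raising. The stdlib `logging` module is used here as a stand-in for the project's own logger:

```python
import logging

logger = logging.getLogger(__name__)

def check_positional_index(pos: int, input_tensors: list) -> None:
    # Explicit check instead of an assert, so the failure is logged
    # even when Python runs with assertions disabled (-O).
    if pos > len(input_tensors):
        logger.critical('Positional weight index mismatch: pos=%d, inputs=%d',
                        pos, len(input_tensors))
        raise ValueError('Positional weight index mismatch')
```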
Pull Request Description:
The issue is that positional weights are inserted as inputs to the node only when the node's positional weights are being quantized, or when the node's weights are not being quantized at all.
The PR addresses the scenario where the node's weights are being quantized but the positional weights are not.
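The scenario can be reproduced with a minimal PyTorch module whose linear weight is a constant tensor rather than a registered `Parameter`; the shapes below are purely illustrative:

```python
import torch

class ConstLinear(torch.nn.Module):
    # Minimal reproduction of the case the PR fixes: a functional linear
    # layer whose weight is a constant tensor passed positionally, not a
    # trainable Parameter tracked by the module.
    def __init__(self):
        super().__init__()
        self.const = torch.ones(4, 8)  # illustrative constant weight

    def forward(self, x):
        return torch.nn.functional.linear(x, self.const)
```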
Checklist before requesting a review: