Improve consistency of new sections
Markus-MS committed Aug 30, 2024
1 parent 86709fb commit 3108d02
Showing 6 changed files with 134 additions and 131 deletions.
config.toml (2 changes: 1 addition & 1 deletion)
@@ -37,6 +37,6 @@ enableEmoji = true
[markup.goldmark.extensions.passthrough]
enable = true
[markup.goldmark.extensions.passthrough.delimiters]
block = [['\[', '\]'], ['$$', '$$']]
block = [['$$', '$$']]
inline = [['\(', '\)']]
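
For context: with `\[ \]` removed from the block delimiters, block-level math in the content files is expected to use the remaining `$$` passthrough delimiters, for example:

```markdown
$$
r = y^d \bmod N
$$
```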

content/docs/crypto/constant_time_tool/Dudect.md (10 changes: 5 additions & 5 deletions)
@@ -155,7 +155,7 @@ If Dudect consistently reports that the measurements are `maybe constant time`,
## Improving Measurement Accuracy

To ensure Dudect detects potential timing leakages, it is crucial to reduce noise that might overshadow timing differences between the input classes.
Dudect statically removes outlier measurements, but a large number of measurements may still be necessary.
Dudect statistically removes outlier measurements, but a large number of measurements may still be necessary.
Precise measurements can help reduce the time needed to detect leakages.
To improve the accuracy of the collected measurements, it is recommended to:

@@ -249,15 +249,15 @@ The template code above will run in an endless loop until the measurements are s
### CI Setup
Constant monitoring is essential to detect any timing leakages introduced by new changes.
Continuous testing ensures the assumption of constant-time is not violated when new code is introduced.
Continuous testing ensures the assumption of constant time is not violated when new code is introduced.
The following bash script can be used as a template to get started.
It assumes the following file structure:
```none
├── tests
    ├── ct_test.c // Contains the dudect loop
    └── dudect.h
```
```none
┣ 📂 tests
    ┣ 📜 ct_test.c // Contains the dudect loop
    ┗ 📜 dudect.h
```

Running the bash script automatically compiles the `ct_test.c` file using `clang`, executes the binary pinned to core 2 using `taskset -c 2`, and stops execution after 5 minutes.
content/docs/crypto/constant_time_tool/_index.md (20 changes: 10 additions & 10 deletions)
@@ -14,7 +14,7 @@ math: true

{{< math >}}
Timing attacks are side channels that exploit variations in execution time to extract secret information. Unlike cryptanalysis, which seeks to find weaknesses and break the theoretical security guarantees of a cryptographic protocol, timing attacks leverage implementation flaws in specific protocols.
While specific cryptographic constructions, such as asymmetric cryptography, may be more vulnerable to timing attacks, this attack vector can potentially affect any cryptographic implementation. To mitigate timing attacks, it is best practice to ensure implementations are constant time, meaning the execution time of cryptographic functions should remain constant regardless of the input. In practice, one should ensure that **the code path and any memory accesses are independent of secret data**. Not all timing differences are exploitable, but removing any differences ensures the security of the implementation. To ensure that an implementation is constant time, cryptography practitioners have developed various tools to detect non-constant time code.
While specific cryptographic constructions, such as asymmetric cryptography, may be more vulnerable to timing attacks, this attack vector can potentially affect any cryptographic implementation. To mitigate timing attacks, it is best practice to ensure implementations are constant-time, meaning the execution time of cryptographic functions should remain constant regardless of the input. In practice, one should ensure that **the code path and any memory accesses are independent of secret data**. Not all timing differences are exploitable, but removing any differences ensures the security of the implementation. To ensure that an implementation is constant time, cryptography practitioners have developed various tools to detect non-constant time code.
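
As a minimal illustration of this principle, a branch on a secret bit can often be replaced by mask-based selection so the same instructions run for every input (a sketch, not code from any particular library); note that compilers may re-introduce branches, so source-level patterns alone are not a guarantee.

```C
#include <stdint.h>

/* Leaky: the executed code path depends on the secret bit. */
uint32_t select_leaky(uint32_t secret_bit, uint32_t a, uint32_t b) {
    if (secret_bit) {      /* conditional jump on secret data */
        return a;
    }
    return b;
}

/* Constant-time sketch: derive an all-ones or all-zeros mask from the
 * secret bit and blend the two candidates without branching. */
uint32_t select_ct(uint32_t secret_bit, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)(-(int32_t)(secret_bit & 1)); /* 0xFFFFFFFF or 0x0 */
    return (a & mask) | (b & ~mask);
}
```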

This entry is divided into two sections.
The first provides [background](#background) information on timing attacks and a concrete [example](#example-modular-exponentiation-timing-attacks).
@@ -98,7 +98,7 @@ $$

The resulting code branches depending on the exponent bit \(d_i\), violating the *conditional jump* principle described in the previous section.
If the exponent bit \(d_i = 1\), an additional multiplication \(r = r * y\) is performed, resulting in a longer execution time and, therefore, leaking the number of 1 and 0 bits present in \(d\).
Furthermore, a commonly used technique for modular multiplication called *Montgomery multiplication* is not constant time and performs an additional computation depending on the modulus and the multiplication result.
If an intermediate value of the multiplication exceeds the modulus \(N\), a reduction step needs to be performed.
The additional reduction step will lead to an observable difference in timings.
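
A simplified sketch of the operand-dependent step described above (illustrative only; this is not the full Montgomery multiplication algorithm):

```C
#include <stdint.h>

/* After the core reduction, the intermediate value t may still be >= the
 * modulus n. Classic implementations fix this with an extra subtraction
 * that only happens for some operands, so those operands take longer. */
static uint64_t conditional_extra_reduction(uint64_t t, uint64_t n) {
    if (t >= n) {   /* taken only for some inputs...          */
        t -= n;     /* ...which makes them measurably slower  */
    }
    return t;
}
```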

@@ -144,7 +144,7 @@ By analyzing these timing differences, an attacker can infer whether a specific
## Constant Time Tooling

To mitigate the risk of timing attacks, it is best practice to implement cryptographic algorithms in a *constant time* manner, meaning the execution time remains uniform regardless of the input.
To ensure the absence of such timing differences, the cryptographic community has created various tools that aim to detect them.
These tools differ in the programming language they target (most often C) and in their fundamental approach to timing analysis.
We can group the different approaches into four distinct categories:

Expand All @@ -170,13 +170,13 @@ Popular formal tools include:

| Pros | Cons |
| :---- | :---- |
| **Guarantee**: Formal verification proves the absence of timing leaks under the analyzed model. | Complexity: These tools tend to require more expertise in both cryptography and formal methods, making them less accessible to general developers. |
| **Flexibility**: Many tools utilize LLVM bytecode, allowing for use with various languages. | Modeling and restrictions: Assumptions made during formalization may not perfectly reflect reality, potentially leading to incomplete verification. For example, any changes introduced during the compilation stage may lead to a binary that is different from the analyzed model. |
| **Guarantee**: Formal verification proves the absence of timing leaks under the analyzed model. | **Complexity**: These tools tend to require more expertise in both cryptography and formal methods, making them less accessible to general developers. |
| **Flexibility**: Many tools utilize LLVM bytecode, allowing for use with various languages. | **Modeling and restrictions**: Assumptions made during formalization may not perfectly reflect reality, potentially leading to incomplete verification. For example, any changes introduced during the compilation stage may lead to a binary that is different from the analyzed model. |

### Symbolic Tools

Symbolic tools use symbolic execution to find timing leakages by analyzing how different execution paths and memory accesses depend on symbolic variables, particularly secret data.
Symbolic execution can also provide concrete values for which a certain property is violated, which can be useful for understanding the underlying issue.
Most symbolic execution tools focus on cache timing attacks, making them useful for threat models in which the attacker shares a cache with the victim.

Popular symbolic tools include:
Expand All @@ -186,7 +186,7 @@ Popular symbolic tools include:

| Pros | Cons |
| :---- | :---- |
| **Counterexamples**: Symbolic execution can provide concrete counterexamples or test cases that demonstrate the existence of a vulnerability, making it easier to understand and reproduce the issue. | **Time intensive**: Symbolic execution will explore all possible paths, which results in long execution times. |
| **Counter examples**: Symbolic execution can provide concrete counterexamples or test cases that demonstrate the existence of a vulnerability, making it easier to understand and reproduce the issue. | **Time intensive**: Symbolic execution will explore all possible paths, which results in long execution times. |

### Dynamic Tools

Expand All @@ -199,8 +199,8 @@ Popular dynamic tools include:

| Pros | Cons |
| :---- | :---- |
| Granular Analysis: Allows specification of sensitive memory regions, making them effective for targeted analysis. | Limited Coverage: Can only track execution paths encountered during testing, potentially missing some vulnerabilities. |
| Flexibility: Can be adapted to various contexts by annotating different codebase parts. | |
| **Granular Analysis**: Allows specification of sensitive memory regions, making them effective for targeted analysis. | **Limited Coverage**: Can only track execution paths encountered during testing, potentially missing some vulnerabilities. |
| **Flexibility**: Can be adapted to various contexts by annotating different codebase parts. | |

### Statistical Tools

Expand All @@ -215,7 +215,7 @@ Popular statistical tools include:

| Pros | Cons |
| -------- | ------- |
| **Setup**: Setting up statistical tests can be quite straightforward and, in some cases, can be done without needing access to source code. | **Debugging**: When a timing difference is detected, statistical tools typically do not provide information about the cause or location of the issue within the code. |
| **Practical**: Statistical tools measure real-time leakage, providing real-world results that include all potential variables (for example, compiler optimizations, architecture differences). | **Noise**: The recorded measurements are only as precise as the testing harness allows. If other parts of the code take significantly more time, a timing vulnerability might go undetected. |

## Further Reading
content/docs/crypto/constant_time_tool/timecop.md (105 changes: 52 additions & 53 deletions)
@@ -59,7 +59,7 @@ Verify the installation with:
valgrind --version
```

After the installation of Valgrind, all that is needed is to include the header file `poison.h` of TimeCop, which can be found [here](https://post-apocalyptic-crypto.org/timecop/#source-code)
After the installation of Valgrind, all that is needed is to include the header file `poison.h` of Timecop, which can be found [here](https://post-apocalyptic-crypto.org/timecop/#source-code).

```C
#include "poison.h"
@@ -101,14 +101,14 @@ Valgrind will issue a report.

Consider the following example of the propagation of uninitialized values:

```C
1int main(){
2 │ int x;
3 │ int z[10] = {0};
4 │ int y = x + 1;
5 │ int a = z[y];
6 │ return 0;
7 │ }
```

```C {linenos=inline}
int main(){
    int x;                 // never initialized
    int z[10] = {0};
    int y = x + 1;         // y now carries the uninitialized value
    int a = z[y];          // memory access depends on the uninitialized index
    return 0;
}
```

Running Valgrind on a binary with debug symbols enabled will generate a report pinpointing where uninitialized values are used.
@@ -125,67 +125,66 @@

## Timecop Macros

Timecop uses Valgrind's capabilities to track uninitialized values as a proxy for detecting constant time violations. It uses Valgrind's internal functionality to manually declare memory regions as undefined and wraps these internal functions in C macros.

It provides three C macros:

- `poison(addr, len)`: Marks the memory region from `[addr] <-> [addr+len]` as undefined. Valgrind will report any conditional jumps or memory accesses during runtime.
- `unpoison(addr, len)`: Undoes the poison operation by marking the memory region as defined.
- `is_poisoned(addr, len)`: Checks if any part of the memory region is poisoned.

Since many constant time violations occur due to memory access or control flow changes, which depend on a secret value, using Valgrind's ability to track these operations can help developers find timing vulnerabilities.
Importantly, Valgrind does not report any other operations performed on the secret value, such as math operations.

## Example

Below is a simple example of a modular exponentiation operation used in RSA, which we described in the intro section.

```C
1 │ #include <stdio.h>
2
3 │ #include "valgrind/memcheck.h"
4
5 │ #define poison(addr, len) VALGRIND_MAKE_MEM_UNDEFINED(addr, len)
6 │ #define unpoison(addr, len) VALGRIND_MAKE_MEM_DEFINED(addr, len)
7 │ #define is_poisoned(addr, len) VALGRIND_CHECK_MEM_IS_DEFINED(addr, len)
8 │
9 │ typedef unsigned long long ull;
10 │
11 │ ull mod_exp(ull y, ull d, ull n) {
12 │ ull r = 1;
13 │ y = y % n;
14 │ while (d > 0) {
15 │ if (d & 1) {
16 │ r = (r * y) % n;
17 │ }
18 │ y = (y * y) % n;
19 │ d >>= 1;
20 │ }
21 │ return r;
22 │ }
23 │
24 │ ull rsa_decrypt(ull ct, ull d, ull n) {
25 │ return mod_exp(ct, d, n);
26 │ }
27 │
28 │ int main() {
29 │ ull n = 3233;
30 │ ull d = 413;
31 │ ull ciphertext = 2790;
32 │ // Poison the memory location of the secret exponent d
33 │ poison(&d, sizeof(ull));
34 │ ull plaintext = rsa_decrypt(ciphertext, d, n);
35 │ unpoison(&d, sizeof(ull));
36 │
37 │ printf("pt: %llu\n", plaintext);
38 │ return 0;
39 │ }
```

```C {linenos=inline}
#include <stdio.h>
#include "valgrind/memcheck.h"

#define poison(addr, len) VALGRIND_MAKE_MEM_UNDEFINED(addr, len)
#define unpoison(addr, len) VALGRIND_MAKE_MEM_DEFINED(addr, len)
#define is_poisoned(addr, len) VALGRIND_CHECK_MEM_IS_DEFINED(addr, len)

typedef unsigned long long ull;

ull mod_exp(ull y, ull d, ull n) {
    ull r = 1;
    y = y % n;
    while (d > 0) {          // loop length depends on the poisoned exponent
        if (d & 1) {         // conditional jump on a secret exponent bit
            r = (r * y) % n;
        }
        y = (y * y) % n;
        d >>= 1;
    }
    return r;
}

ull rsa_decrypt(ull ct, ull d, ull n) {
    return mod_exp(ct, d, n);
}

int main() {
    ull n = 3233;
    ull d = 413;
    ull ciphertext = 2790;
    // Poison the memory location of the secret exponent d
    poison(&d, sizeof(ull));
    ull plaintext = rsa_decrypt(ciphertext, d, n);
    unpoison(&d, sizeof(ull));

    printf("pt: %llu\n", plaintext);
    return 0;
}
```
Poisoning the memory region of the private exponent `d` will mark this value as uninitialized, and Valgrind will report any branching or memory access based on the secret exponent `d`.
Valgrind correctly identifies the problematic lines of code where the timing assumptions are not met.
```none
```bash
> valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes ./toy_example
==72317== Conditional jump or move depends on uninitialised value(s)
@@ -227,7 +226,7 @@ GDB will now connect to Valgrind and stop at any reported errors.

## Limitations

1. **Microarchitecture Leakage**: TimeCop and Valgrind cannot detect if individual instructions take more time depending on the input they are provided.
1. **Microarchitecture Leakage**: Timecop and Valgrind cannot detect if individual instructions take more time depending on the input they are provided.
2. **Coverage**: This approach won't find potential vulnerabilities if the vulnerable code is not executed during the analyzed run.

An alternative approach using [MemorySanitizer](https://clang.llvm.org/docs/MemorySanitizer.html) in Clang offers similar benefits without requiring a library but instruments the program at compile time. More information and a tutorial are available [here](https://crocs-muni.github.io/ct-tools/tutorials/memsan).
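
As a hedged sketch of that MemorySanitizer-based alternative (the macro mapping is an assumption based on the interface declared in Clang's `sanitizer/msan_interface.h`; consult the linked tutorial for the authoritative setup):

```C
/* Hypothetical sketch: compile the example with
 *   clang -fsanitize=memory -g toy_example.c -o toy_example
 * MemorySanitizer then reports branches and memory accesses that depend on
 * memory explicitly marked as uninitialized, mirroring the Valgrind flow. */
#include <sanitizer/msan_interface.h>

#define poison(addr, len)   __msan_allocated_memory((addr), (len))
#define unpoison(addr, len) __msan_unpoison((addr), (len))
```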