Installation Instructions KIND and HAProxy
==========================================
Create a VM (Ubuntu 22.04) and permanently fix its IP address to 192.168.101.135, using the following method
===============================================================================
root@ubuntuserver2204:/home/ubuntu# cd /etc/netplan
root@ubuntuserver2204:/etc/netplan# ls
00-installer-config.yaml 00-installer-config.yaml.bak
root@ubuntuserver2204:/etc/netplan#
Change the 00-installer-config.yaml file to:

# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
    ens33:
      addresses: [192.168.101.135/24]
      nameservers:
        addresses: [8.8.8.8]
      routes:
        - to: default
          via: 192.168.101.2
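After saving the file, apply the new configuration (standard netplan commands; netplan try rolls the change back automatically if you lose connectivity):
# netplan try
# netplan apply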
Create the K8s Cluster with 1 Master and 3 Worker nodes
======================================================
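The kind-example-config.yaml referenced below is not reproduced here; a minimal sketch that would yield the 1 control-plane / 3 worker topology used in this walkthrough is:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker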
kind create cluster --config kind-example-config.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.30.0) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
Check the Kind Cluster
======================
root@ubuntuserver2204:/home/ubuntu# kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:32915
CoreDNS is running at https://127.0.0.1:32915/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@ubuntuserver2204:/home/ubuntu#
Check my Juice Shop YAML Files
==============================
root@ubuntuserver2204:/home/ubuntu# more juice-shop-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: juice-shop2
spec:
  template:
    metadata:
      labels:
        app: juice-shop2
    spec:
      containers:
      - name: juice-shop2
        image: bkimminich/juice-shop
  selector:
    matchLabels:
      app: juice-shop2
...
root@ubuntuserver2204:/home/ubuntu# more juice-shop-service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: juice-shop2
spec:
  type: NodePort
  selector:
    app: juice-shop2
  ports:
  - name: http
    port: 8000
    targetPort: 3000
...
root@ubuntuserver2204:/home/ubuntu#
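A note on the ports (not spelled out in the original files): port 8000 is the Service's cluster-internal port, targetPort 3000 is the port the Juice Shop container listens on, and the external NodePort is auto-assigned from the 30000-32767 range. If you want a stable NodePort to hard-code into HAProxy later, you could pin it explicitly, for example:

  ports:
  - name: http
    port: 8000
    targetPort: 3000
    nodePort: 30934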
Launch the Juice Shop Deployment and Service
============================================
root@ubuntuserver2204:/home/ubuntu# kubectl create -f juice-shop-deployment.yaml
deployment.apps/juice-shop2 created
root@ubuntuserver2204:/home/ubuntu# kubectl create -f juice-shop-service.yaml
service/juice-shop2 created
root@ubuntuserver2204:/home/ubuntu#
Check the juice-shop2 service is properly created
=================================================
root@ubuntuserver2204:/home/ubuntu# kubectl get svc juice-shop2
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
juice-shop2 NodePort 10.96.161.235 <none> 8000:30934/TCP 79s
root@ubuntuserver2204:/home/ubuntu#
Check that the endpoint is created
==================================
root@ubuntuserver2204:/home/ubuntu# kubectl get ep juice-shop2
NAME ENDPOINTS AGE
juice-shop2 10.244.1.2:3000 2m14s
root@ubuntuserver2204:/home/ubuntu#
Check that the juice-shop2 Pod and the nodes are created (3 worker nodes are shown)
===================================================================================
root@ubuntuserver2204:/home/ubuntu# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
juice-shop2-7649b9bbb7-bknqx 1/1 Running 0 3m8s 10.244.1.2 kind-worker3 <none> <none>
root@ubuntuserver2204:/home/ubuntu# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane 7m4s v1.30.0 172.18.0.2 <none> Debian GNU/Linux 12 (bookworm) 5.15.0-122-generic containerd://1.7.15
kind-worker Ready <none> 6m37s v1.30.0 172.18.0.5 <none> Debian GNU/Linux 12 (bookworm) 5.15.0-122-generic containerd://1.7.15
kind-worker2 Ready <none> 6m37s v1.30.0 172.18.0.4 <none> Debian GNU/Linux 12 (bookworm) 5.15.0-122-generic containerd://1.7.15
kind-worker3 Ready <none> 6m37s v1.30.0 172.18.0.3 <none> Debian GNU/Linux 12 (bookworm) 5.15.0-122-generic containerd://1.7.15
root@ubuntuserver2204:/home/ubuntu#
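Because kind nodes are Docker containers on this VM's docker network, the NodePort can be sanity-checked directly from the host before involving HAProxy (a suggested check, using the node IP and NodePort from the outputs above):
# curl -s http://172.18.0.3:30934 | head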
Now we have to create an HAProxy front-end load balancer to distribute browser client requests for the juice-shop website across the 3 worker nodes
===================================================================================================================================================
Configuring HAProxy
====================
To start, let's install the HAProxy package. The original setup uses two servers (ha-proxy-001 and ha-proxy-002), so the commands below would be run on both machines; this walkthrough uses a single server, ubuntuserver2204, and the second server is only needed for the Keepalived failover described later.
# apt install -y haproxy
After the installation is complete, we will configure HAProxy. For that we need to edit the file /etc/haproxy/haproxy.cfg and add the lines below at the end of the file.
# nano /etc/haproxy/haproxy.cfg
Append the following frontend/backend definitions to the bottom of the existing haproxy.cfg file:
frontend kubernetes
    mode tcp
    bind 192.168.101.135:6443
    option tcplog
    default_backend k8s-control-plane

backend k8s-control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-slave-001.lan.int 172.18.0.3:30934 check fall 3 rise 2
    server k8s-slave-002.lan.int 172.18.0.4:30934 check fall 3 rise 2
    server k8s-slave-003.lan.int 172.18.0.5:30934 check fall 3 rise 2
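Note that 30934 is the NodePort Kubernetes assigned to the juice-shop2 Service earlier; it changes whenever the Service is recreated, so substitute your own value. One way to read it back (a suggested helper, not part of the original transcript):
# kubectl get svc juice-shop2 -o jsonpath='{.spec.ports[0].nodePort}'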
Edit the hosts file (add the following entries to /etc/hosts)
==============================================================
172.18.0.3 k8s-slave-001.lan.int
172.18.0.4 k8s-slave-002.lan.int
172.18.0.5 k8s-slave-003.lan.int
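Before restarting, the new configuration can be validated with HAProxy's built-in syntax check (a suggested extra step):
# haproxy -c -f /etc/haproxy/haproxy.cfg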
Restart the service so that it loads the new settings.
# sudo systemctl restart haproxy
# sudo systemctl status haproxy
You should see the following status for haproxy (note in particular the "Active: active (running)" line and the "Server ... is UP" messages in the log):
root@ubuntuserver2204:/home/ubuntu# sudo systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2024-09-30 09:27:12 UTC; 20h ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 31646 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
Main PID: 31648 (haproxy)
Tasks: 3 (limit: 4520)
Memory: 69.7M
CPU: 50.097s
CGroup: /system.slice/haproxy.service
├─31648 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
└─31650 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
Sep 30 09:45:10 ubuntuserver2204 haproxy[31650]: [NOTICE] (31650) : haproxy version is 2.4.24-0ubuntu0.22.04.1
Sep 30 09:45:10 ubuntuserver2204 haproxy[31650]: [NOTICE] (31650) : path to executable is /usr/sbin/haproxy
Sep 30 09:45:10 ubuntuserver2204 haproxy[31650]: [ALERT] (31650) : backend 'k8s-control-plane' has no server available!
Sep 30 09:45:51 ubuntuserver2204 haproxy[31650]: [WARNING] (31650) : Server k8s-control-plane/k8s-slave-001.lan.int is UP, reason: Layer4 check passed, check duration: 0ms. 1 active a>
Sep 30 09:45:51 ubuntuserver2204 haproxy[31650]: [WARNING] (31650) : Server k8s-control-plane/k8s-slave-003.lan.int is UP, reason: Layer4 check passed, check duration: 0ms. 2 active a>
Sep 30 09:45:51 ubuntuserver2204 haproxy[31650]: [WARNING] (31650) : Server k8s-control-plane/k8s-slave-002.lan.int is UP, reason: Layer4 check passed, check duration: 0ms. 3 active a>
Oct 01 03:16:57 ubuntuserver2204 haproxy[31650]: [WARNING] (31650) : Server k8s-control-plane/k8s-slave-001.lan.int is DOWN, reason: Layer4 timeout, info: " at initial connection step>
Oct 01 03:16:57 ubuntuserver2204 haproxy[31650]: [WARNING] (31650) : Server k8s-control-plane/k8s-slave-003.lan.int is DOWN, reason: Layer4 timeout, info: " at initial connection step>
Oct 01 03:16:59 ubuntuserver2204 haproxy[31650]: [WARNING] (31650) : Server k8s-control-plane/k8s-slave-002.lan.int is DOWN, reason: Layer4 timeout, info: " at initial connection step>
Oct 01 03:16:59 ubuntuserver2204 haproxy[31650]: [ALERT] (31650) : backend 'k8s-control-plane' has no server available!
Keepalived is not required unless you have two HAProxy servers set up and need to configure VRRP so that a virtual IP address is shared between the two HAProxy servers.
Configuring Keepalived
======================
Now let’s install Keepalived
# apt install -y keepalived
In order for Keepalived to work correctly, the ability to bind non-local IP addresses must be configured in the operating system. For that, let's run the command below; tee is used because a plain "sudo echo ... >> file" would apply the redirect in the unprivileged shell. (!!! In the last successful install, I think I didn't run this step.)
# echo "net.ipv4.ip_nonlocal_bind=1" | sudo tee -a /etc/sysctl.conf
Run the following command to check that the update was applied:
# sudo sysctl -p
You should see the following response
net.ipv4.ip_nonlocal_bind = 1 # Is correct!
To implement an extra layer of protection in our Keepalived configuration, let’s add a new group and user that will run the scripts.
====================================================================================================================================
# sudo groupadd -r keepalived_script
# sudo useradd -r -s /sbin/nologin -g keepalived_script -M keepalived_script
Now we can configure Keepalived. To do that, edit the file /etc/keepalived/keepalived.conf:
========================================================================================
global_defs {
    # Don't run scripts configured to be run as root if any part of the path
    # is writable by a non-root user.
    enable_script_security
}

vrrp_script chk_haproxy {
    script "/usr/bin/pgrep haproxy"
    interval 2    # check every 2 seconds
    weight 2      # add 2 points of priority if OK
}

vrrp_instance VI_1 {
    interface ens33    # change here to match your network interface name.
    state MASTER       # change here to BACKUP on the Backup server.
    virtual_router_id 51
    priority 101       # 101 master, 100 backup; change here according to server
    virtual_ipaddress {
        192.168.101.135
    }
    track_script {
        chk_haproxy
    }
}
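After (re)starting Keepalived, the virtual IP should appear on the MASTER's interface (a suggested verification, not in the original transcript):
# sudo systemctl restart keepalived
# ip addr show ens33 | grep 192.168.101.135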
Check to see if the HAProxy front-end load balancer is working properly
=====================================================================
root@ubuntuserver2204:/home/ubuntu# netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 192.168.101.135:6443 0.0.0.0:* LISTEN
The LISTEN entry on 192.168.101.135:6443 shows that HAProxy is running correctly, i.e. any client browser that requests http://192.168.101.135:6443 will be forwarded by HAProxy to one of the worker nodes running the Juice Shop website.
Now go to another VM that has a browser and enter http://192.168.101.135:6443; you should see the Juice Shop website.
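The same check can be made from any machine on the 192.168.101.0/24 subnet without a browser (a quick alternative):
# curl -I http://192.168.101.135:6443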
To show which worker node the Juice Shop website is running on
==============================================================
root@ubuntuserver2204:/home/ubuntu# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default juice-shop2-7649b9bbb7-bknqx 1/1 Running 0 57m 10.244.1.2 kind-worker3 <none> <none>
kube-system coredns-7db6d8ff4d-b2jhl 1/1 Running 0 61m 10.244.0.3 kind-control-plane <none> <none>
kube-system coredns-7db6d8ff4d-kv6b7 1/1 Running 0 61m 10.244.0.4 kind-control-plane <none> <none>
kube-system etcd-kind-control-plane 1/1 Running 0 61m 172.18.0.2 kind-control-plane <none> <none>
kube-system kindnet-8fxx6 1/1 Running 0 60m 172.18.0.4 kind-worker2 <none> <none>
kube-system kindnet-kmff5 1/1 Running 0 60m 172.18.0.3 kind-worker3 <none> <none>
kube-system kindnet-lcmj5 1/1 Running 0 61m 172.18.0.2 kind-control-plane <none> <none>
kube-system kindnet-lzzj6 1/1 Running 0 60m 172.18.0.5 kind-worker <none> <none>
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 61m 172.18.0.2 kind-control-plane <none> <none>
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 61m 172.18.0.2 kind-control-plane <none> <none>
kube-system kube-proxy-sr7mr 1/1 Running 0 60m 172.18.0.4 kind-worker2 <none> <none>
kube-system kube-proxy-v2hwc 1/1 Running 0 60m 172.18.0.3 kind-worker3 <none> <none>
kube-system kube-proxy-wcrsg 1/1 Running 0 60m 172.18.0.5 kind-worker <none> <none>
kube-system kube-proxy-wq2ln 1/1 Running 0 61m 172.18.0.2 kind-control-plane <none> <none>
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 61m 172.18.0.2 kind-control-plane <none> <none>
local-path-storage local-path-provisioner-988d74bc-6zrcm 1/1 Running 0 61m 10.244.0.2 kind-control-plane <none> <none>
The above shows it is running on kind-worker3. Now we can drain that node (and kind-worker2) and watch the workload shift to another worker node.
# kubectl drain --ignore-daemonsets kind-worker2
# kubectl drain --ignore-daemonsets kind-worker3
node/kind-worker3 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-pf8tp, kube-system/kube-proxy-h9c7j
evicting pod default/juice-shop2-7649b9bbb7-9tn9q
Now check again: worker2 and worker3 no longer run the juice-shop2 workload.
# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default juice-shop2-7649b9bbb7-2ndzg 1/1 Running 0 104s 10.244.3.2 kind-worker <none> <none>
kube-system coredns-7db6d8ff4d-b2jhl 1/1 Running 0 64m 10.244.0.3 kind-control-plane <none> <none>
kube-system coredns-7db6d8ff4d-kv6b7 1/1 Running 0 64m 10.244.0.4 kind-control-plane <none> <none>
kube-system etcd-kind-control-plane 1/1 Running 0 64m 172.18.0.2 kind-control-plane <none> <none>
kube-system kindnet-8fxx6 1/1 Running 0 64m 172.18.0.4 kind-worker2 <none> <none>
kube-system kindnet-kmff5 1/1 Running 0 64m 172.18.0.3 kind-worker3 <none> <none>
kube-system kindnet-lcmj5 1/1 Running 0 64m 172.18.0.2 kind-control-plane <none> <none>
kube-system kindnet-lzzj6 1/1 Running 0 64m 172.18.0.5 kind-worker <none> <none>
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 64m 172.18.0.2 kind-control-plane <none> <none>
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 64m 172.18.0.2 kind-control-plane <none> <none>
kube-system kube-proxy-sr7mr 1/1 Running 0 64m 172.18.0.4 kind-worker2 <none> <none>
kube-system kube-proxy-v2hwc 1/1 Running 0 64m 172.18.0.3 kind-worker3 <none> <none>
kube-system kube-proxy-wcrsg 1/1 Running 0 64m 172.18.0.5 kind-worker <none> <none>
kube-system kube-proxy-wq2ln 1/1 Running 0 64m 172.18.0.2 kind-control-plane <none> <none>
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 64m 172.18.0.2 kind-control-plane <none> <none>
local-path-storage local-path-provisioner-988d74bc-6zrcm 1/1 Running 0 64m 10.244.0.2 kind-control-plane <none> <none>
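To bring the drained nodes back into scheduling afterwards (a follow-up step, not shown in the original transcript):
# kubectl uncordon kind-worker2
# kubectl uncordon kind-worker3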
Clean-up: Delete the K8s Cluster with 1 Master and 3 Worker nodes
======================================================
root@ubuntuserver2204:/home/ubuntu# kind delete cluster
Deleting cluster "kind" ...
Deleted nodes: ["kind-control-plane" "kind-worker2" "kind-worker" "kind-worker3"]
root@ubuntuserver2204:/home/ubuntu#
Notes: Explain the usage of HAProxy and Keepalived
==================================================
HAProxy
=======
Function: HAProxy is primarily a load balancer and proxy server.
Layer: It operates at Layer 4 (Transport Layer) and Layer 7 (Application Layer) of the OSI model.
Purpose: It distributes incoming traffic across multiple backend servers to optimize resource use, maximize throughput, minimize response time, and avoid overload on any single server.
Features: Supports various load balancing algorithms, SSL/TLS termination, TCP/HTTP proxying, and content switching. It can handle thousands of concurrent connections efficiently.
Use Case: Ideal for web applications needing high performance and advanced routing capabilities.
Keepalived (this is needed when there are two HAProxy servers and you need a virtual IP address that fails over between the two HAProxies)
==========
Function: Keepalived is primarily used for high availability.
Layer: It operates at Layer 3 (Network Layer).
Purpose: It ensures service availability by using the Virtual Router Redundancy Protocol (VRRP) to create a virtual IP address that can be automatically transferred between multiple servers in case of failure.
Features: Focuses on failover and redundancy, ensuring that if one server goes down, another can take over seamlessly.
Use Case: Ideal for maintaining high availability of services by managing virtual IP addresses and failover mechanisms.
Key Differences
===============
Primary Focus: HAProxy focuses on load balancing and optimizing web traffic, while Keepalived focuses on high availability and failover.
Complexity: HAProxy has a more complex configuration due to its extensive features for load balancing and proxying. Keepalived has a simpler configuration as it mainly deals with high availability.
Performance: HAProxy is known for its high performance and scalability, handling large volumes of traffic efficiently. Keepalived is more lightweight, focusing on ensuring service continuity rather than performance.
In summary, if you need to distribute traffic and optimize performance, HAProxy is the go-to tool. If your priority is to ensure that your services remain available even if a server fails, Keepalived is the better choice.
Troubleshooting Section
=======================
Check your hosts file
=====================
root@ubuntuserver2204:/home/ubuntu# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntuserver2204
192.168.101.135 juice-shop.local
172.18.0.2 k8s-slave-001.lan.int
172.18.0.5 k8s-slave-002.lan.int
172.18.0.4 k8s-slave-003.lan.int
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@ubuntuserver2204:/home/ubuntu#
Check the haproxy.cfg configuration
===================================
root@ubuntuserver2204:/home/ubuntu# cd /etc/haproxy/haproxy.cfg
bash: cd: /etc/haproxy/haproxy.cfg: Not a directory
root@ubuntuserver2204:/home/ubuntu# cd /etc/haproxy
root@ubuntuserver2204:/etc/haproxy# ls
errors haproxy.cfg
root@ubuntuserver2204:/etc/haproxy# cat haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kubernetes
    mode tcp
    bind 192.168.101.135:6443
    option tcplog
    default_backend k8s-control-plane

backend k8s-control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-slave-001.lan.int 172.18.0.2:32037 check fall 3 rise 2
    server k8s-slave-002.lan.int 172.18.0.5:32037 check fall 3 rise 2
    server k8s-slave-003.lan.int 172.18.0.4:32037 check fall 3 rise 2
root@ubuntuserver2204:/etc/haproxy#
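Note that the backend in this capture points at NodePort 32037 rather than the 30934 used earlier: the NodePort is reassigned whenever the Service is recreated, so haproxy.cfg must be kept in sync with the current output of "kubectl get svc juice-shop2".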
Check the keepalived.conf file
=============================
root@ubuntuserver2204:/etc/haproxy# cd /etc/keepalived/
root@ubuntuserver2204:/etc/keepalived# ls
keepalive.conf keepalived.conf
root@ubuntuserver2204:/etc/keepalived#
root@ubuntuserver2204:/etc/keepalived# cat keepalived.conf
global_defs {
    # Don't run scripts configured to be run as root if any part of the path
    # is writable by a non-root user.
    enable_script_security
}

vrrp_script chk_haproxy {
    script "/usr/bin/pgrep haproxy"
    interval 2    # check every 2 seconds
    weight 2      # add 2 points of priority if OK
}

vrrp_instance VI_1 {
    interface ens33    # change here to match your network interface name.
    state MASTER       # change here to BACKUP on the Backup server.
    virtual_router_id 51
    priority 101       # 101 master, 100 backup; change here according to server
    virtual_ipaddress {
        192.168.101.135
    }
    track_script {
        chk_haproxy
    }
}
root@ubuntuserver2204:/etc/keepalived#
Check status of haproxy service
===============================
root@ubuntuserver2204:/etc/keepalived# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-10-01 07:04:45 UTC; 14min ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 9841 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
Main PID: 9843 (haproxy)
Tasks: 3 (limit: 4521)
Memory: 70.2M
CPU: 4.089s
CGroup: /system.slice/haproxy.service
├─9843 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
└─9845 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
Oct 01 07:04:45 ubuntuserver2204 systemd[1]: Starting HAProxy Load Balancer...
Oct 01 07:04:45 ubuntuserver2204 haproxy[9843]: [NOTICE] (9843) : New worker #1 (9845) forked
Oct 01 07:04:45 ubuntuserver2204 systemd[1]: Started HAProxy Load Balancer.
root@ubuntuserver2204:/etc/keepalived#
Check status of keepalived service
==================================
root@ubuntuserver2204:/etc/keepalived# systemctl status keepalived
● keepalived.service - Keepalive Daemon (LVS and VRRP)
Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-10-01 07:13:15 UTC; 7min ago
Main PID: 10233 (keepalived)
Tasks: 2 (limit: 4521)
Memory: 5.3M
CPU: 22.720s
CGroup: /system.slice/keepalived.service
├─10233 /usr/sbin/keepalived --dont-fork
└─10234 /usr/sbin/keepalived --dont-fork
Oct 01 07:13:15 ubuntuserver2204 Keepalived[10233]: Configuration file /etc/keepalived/keepalived.conf
Oct 01 07:13:15 ubuntuserver2204 Keepalived[10233]: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 01 07:13:15 ubuntuserver2204 Keepalived[10233]: Starting VRRP child process, pid=10234
Oct 01 07:13:15 ubuntuserver2204 systemd[1]: keepalived.service: Got notification message from PID 10234, but reception only permitted for main PID 10233
Oct 01 07:13:15 ubuntuserver2204 Keepalived[10233]: Startup complete
Oct 01 07:13:15 ubuntuserver2204 systemd[1]: Started Keepalive Daemon (LVS and VRRP).
Oct 01 07:13:15 ubuntuserver2204 Keepalived_vrrp[10234]: (VI_1) entering FAULT state (no IPv4 address for interface)
Oct 01 07:13:16 ubuntuserver2204 Keepalived_vrrp[10234]: (VI_1) entering FAULT state
Oct 01 07:13:16 ubuntuserver2204 Keepalived_vrrp[10234]: VRRP_Script(chk_haproxy) succeeded
Oct 01 07:13:16 ubuntuserver2204 Keepalived_vrrp[10234]: (VI_1) Changing effective priority from 101 to 103
root@ubuntuserver2204:/etc/keepalived#
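Note the "(VI_1) entering FAULT state (no IPv4 address for interface)" messages above: Keepalived enters the FAULT state when the interface named in the vrrp_instance block has no usable IPv4 address (for example because the interface name is wrong for this machine). If you see this, list the interfaces and correct the "interface" line in keepalived.conf:
# ip -4 addr show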