labs/lab12/readme.md
+19 −12 (19 additions, 12 deletions)
@@ -45,7 +45,7 @@ By the end of the lab you will be able to:
The NLK controller uses an API key to send updates to your NGINX instance. Create a new one following these steps:
- 1. Using the N4A Web Console, in your N4A deployment, Settings, click on `NGINX API keys`, then `+ New API Key`.
+ 1. Using the N4A Web Console, in your N4A deployment, Settings, click on `NGINX Loadbalancer for Kubernetes`, then `+ New API Key`.
1. On the right, in the `Add API Key` sidebar, give it a name. Optionally, change the Expiration Date. In this example, you will use:
@@ -74,7 +74,7 @@ Go to the Azure Marketplace, and search for `NGINX`. Or click on this link to take
1. Click on `Get It Now`, then `Continue`.
- Select the Subscription and Resource Group for the deployment; Select `No` for a new AKS cluster. You will use your existing clusters from Lab3 for this lab exercise.
+ Select your Subscription and Resource Group for the deployment; Select `No` for a new AKS cluster. You will use your existing clusters from Lab3 for this lab exercise.
1. Click `Next`, and choose `n4a-aks1` under Cluster Details.
@@ -84,11 +84,10 @@ Go to the Azure Marketplace, and search for `NGINX`. Or click on this link to take
- Check the `Allow minor version updates` box
- Paste your Dataplane API Key value
- Paste your Dataplane API Endpoint URL **and ADD `nplus` to the end**
- - Optional: Add new KeyValue pair: `nlk.config.logLevel` = `info`
1. Click `Next`, and review your settings.
- If you scroll to the bottom, you will see your entered data; take a screenshot if you did not SAVE it somewhere. If you are satisfied with your Settings, click `Create`. *You can safely ignore the billing warning, NLK is free of charge at the time of this writing.*
+ If you scroll to the bottom, you will see your entered data; take a screenshot if you did not SAVE it somewhere. If you are satisfied with your Settings, click `Create`.

@@ -311,7 +310,7 @@ NGINX Ingress will then route the requests to the correct Services and Pods. (N
1. Using curl, see if the NGINX Cafe application works and what Header Values are returned. Ready for coffee?
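A quick terminal check might look like the sketch below, assuming `cafe.example.com` resolves to your N4A Public IP (for example via the /etc/hosts entry from the earlier labs):

```bash
# Request the coffee page and show the status line plus the custom upstream header
curl -is http://cafe.example.com/coffee | grep -i -E "^HTTP|x-aks1-upstream"
```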
@@ -383,7 +382,12 @@ NGINX Ingress will then route the requests to the correct Services and Pods. (N
```
- 1. You can see this using Azure Metrics as well, as shown here. Notice that you can add the Filter and Splitting, to see the IP Addresses of the NGINX Upstreams. The Upstream `aks1-nlk-upstreams` and its IP Addresses should match your Worker Node IPs, and the Port number should match your NodePort `nginx-ingress` Service.
+ 1. Using Chrome, open `http://cafe.example.com/coffee`, and Right-Click `Inspect`. Choose `Network`, and then a webpage object. You should find the `X-Aks1-Upstream` Header with the same Worker `NodeIP:NodePort` value. As you Refresh Chrome, this Header Value will change as Nginx load balances across all your AKS workers.
+ 
+ 1. You can see this using Azure Metrics as well, as shown here. Select `plus.http.upstreams.peers.state.up`, then add the Filter for Upstream, and select the `peer.address` Value for Splitting, to see the IP:Port Addresses of the NGINX Upstreams. The Upstream `aks1-nlk-upstreams` and its IP Addresses should match your Worker Node IPs, and the Port number should match your NodePort `nginx-ingress` Service.
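If you prefer the CLI to the portal's Metrics Explorer, a rough equivalent is sketched below. The resource ID placeholder and the `upstream` / `peer.address` dimension names are assumptions taken from the portal labels above, so adjust them to what your deployment actually exposes.

```bash
# Query the upstream peer "up" state for your N4A deployment,
# filtered to the aks1-nlk-upstreams upstream and split by peer address.
# <n4a-resource-id> is the full Azure resource ID of your NGINXaaS deployment.
az monitor metrics list \
  --resource <n4a-resource-id> \
  --metric "plus.http.upstreams.peers.state.up" \
  --filter "upstream eq 'aks1-nlk-upstreams' and peer.address eq '*'" \
  --interval PT1M \
  --output table
```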
@@ -460,19 +464,17 @@ If you want to see what the NLK Controller is doing, you have to change the Logg
Now for the actual Scaling Test!! Does the NLK Controller detect when you `scale your AKS Cluster nodes up/down` (Node Scaling)? You will test that now.
- 1. Using the Azure Portal web console, manually scale your `n4a-aks1 nodepool` from 3 to 5 workers.
+ 1. Using the Azure Portal web console, manually scale your `n4a-aks1` Node pool from 3 to 5 workers. Find your `n4a-aks1` cluster, Select Settings, then Node pools. Click the `Scale node pool` button, then change it to 5 (or whatever you'd like to test), and click `Apply`. It will take several minutes for the Workers to join the Cluster.

- Watching the NLK Logs, you should see some NLK `Updated messages` scroll by.
- 1. Open a new Terminal, check with Curl, do you find 5 different IP addresses in the `X-Aks1-Upstream` Header values?
+ 1. Open a new Terminal and check again with Curl. Do you find 5 different IP addresses in the `X-Aks1-Upstream` Header values?
- Confirm - what are the 5 n4a-aks1 Node IPs? Ask Kubernetes ...
+ 1. Confirmation - what are the 5 n4a-aks1 NodeIPs? Ask Kubernetes ...
```bash
kubectl config use-context n4a-aks1
@@ -488,8 +490,9 @@ Now for the actual Scaling Test!! Does the NLK Controller detect when you `scal
InternalIP: 172.16.10.7
```
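As an alternative sketch for listing the Node InternalIPs with standard kubectl output options:

```bash
# Wide output includes the INTERNAL-IP column for every node
kubectl get nodes -o wide

# Or print only the InternalIP addresses
kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```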
+ Type `Ctrl-C` when you are finished with the curl command.
- 1. Go back to your N4A Metrics Panel, and check the `plus.http.upstream.peer.address` of your Metrics Filter... you should also find 5 IP:NodePort Addresses, one for each upstream/worker.
+ 1. Go back to your N4A Metrics Panel, and check the `peer.address` of your Metrics Split... you should also find 5 IP:NodePort Addresses, one for each upstream/worker.
@@ -521,6 +524,10 @@ As it is the `end of day` business time for your company, the workload demands o
If you are curious, you can change the NLK Controller LogLevel to `debug`, and see many more details about what NLK is doing.
+ 1. Using the Azure Portal, Select your `n4a-aks1` cluster, then Settings, Extensions. Select your `aks1nlk` extension. Scroll Down and Expand `Configuration settings`, change the `nlk.config.logLevel` to `debug`, and click Save, as shown.
+ 
1. While watching the debug log, Scale your n4a-aks1 cluster back to 3 Nodes. You will find something like this: