Exercise 4.1: Working With CPU and Memory Constraints: Stress
Overview
We will continue working with the cluster we built in the previous lab. We will work with resource limits, do more work
with namespaces, and then create a complex deployment which you can explore to further understand the architecture and
relationships.
Use SSH or PuTTY to connect to the nodes you installed in the previous exercise. We will deploy an application called
stress inside a container, and then use resource limits to constrain the resources the application can consume.
1. Use a container called stress, which we will name hog, to generate load. Verify you have a container running.
student@lfs458-node-1a0a:~$ kubectl create deployment hog --image vish/stress
deployment.apps/hog created
2. Use the describe argument to view details, then view the output in YAML format. Note there are no settings limiting
resource usage. Instead, there are empty curly brackets.
student@lfs458-node-1a0a:~$ kubectl describe deployment hog
Name: hog
Namespace: default
CreationTimestamp: Tue, 08 Jan 2019 17:01:54 +0000
Labels: app=hog
Annotations: deployment.kubernetes.io/revision: 1
<output_omitted>
student@lfs458-node-1a0a:~$ kubectl get deployment hog -o yaml
<output_omitted>
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hog
    spec:
      containers:
      - image: vish/stress
        imagePullPolicy: Always
        name: stress
        resources: {}
        terminationMessagePath: /dev/termination-log
<output_omitted>
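If you want to confirm the empty resources setting without scrolling through the full YAML, one possible check (using kubectl's JSONPath output; the path below assumes the single-container template created above) is:

student@lfs458-node-1a0a:~$ kubectl get deployment hog -o jsonpath='{.spec.template.spec.containers[0].resources}'

The command should come back essentially empty, since no requests or limits have been set yet.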
3. We will use the YAML output to create our own configuration file. The --export option can be useful because it leaves
out unique, cluster-generated parameters. Again, the option prints a deprecation message and may be removed in a future release.
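A possible command for this step, assuming your kubectl release still accepts the deprecated --export flag (it was removed in later versions, in which case you can redirect the plain -o yaml output instead and prune it by hand), would be:

student@lfs458-node-1a0a:~$ kubectl get deployment hog --export -o yaml > hog.yaml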
4. If you did not use the --export option, you will need to remove the status output, creationTimestamp, and other
unique, generated settings, as we don't want to set those parameters ourselves. We will also add in the memory limits found below.
student@lfs458-node-1a0a:~$ vim hog.yaml
hog.yaml
 1 ....
 2         imagePullPolicy: Always
 3         name: hog
 4         resources:               # Edit to remove {}
 5           limits:                # Add these 4 lines
 6             memory: "4Gi"
 7           requests:
 8             memory: "2500Mi"
 9         terminationMessagePath: /dev/termination-log
10         terminationMessagePolicy: File
11 ....
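Before verifying in the next step, the edited file needs to be pushed back into the cluster. One way to do that, a sketch assuming the deployment still exists and hog.yaml is the file edited above, is:

student@lfs458-node-1a0a:~$ kubectl replace -f hog.yaml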
6. Verify the change has been made. The deployment should now show resource limits.
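One possible way to check is to describe the deployment again and look for the Limits and Requests entries in the container section (the exact output will vary with your cluster):

student@lfs458-node-1a0a:~$ kubectl describe deployment hog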
7. View the standard output (logs) of the hog container. Note how much memory has been allocated.
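A possible way to do this: list the pods to find the generated name, then view that pod's logs (the pod name below is a placeholder; substitute the one your cluster generated):

student@lfs458-node-1a0a:~$ kubectl get po
student@lfs458-node-1a0a:~$ kubectl logs hog-<pod-id>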
8. Open a second and third terminal to access both the master and second nodes. Run top to view resource usage. You
should not see unusual resource usage at this point. The dockerd and top processes should be using about the same
amount of resources. The stress command should not yet be using enough resources to show up.
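If you prefer a one-shot check over the interactive top display, something like the following (standard procps ps options, assuming the container's process appears on the node under the name stress) would confirm the process is mostly idle for now:

student@lfs458-node-1a0a:~$ ps -o pid,%cpu,%mem,cmd -C stress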
9. Edit the hog configuration file and add arguments for stress to consume CPU and memory. The args: entry should be
indented the same number of spaces as resources:.
student@lfs458-node-1a0a:~$ vim hog.yaml
hog.yaml
 1 ....
 2         resources:
 3           limits:
 4             cpu: "1"
 5             memory: "4Gi"
 6           requests:
 7             cpu: "0.5"
 8             memory: "500Mi"
 9         args:
10         - -cpus
11         - "2"
12         - -mem-total
13         - "950Mi"
14         - -mem-alloc-size
15         - "100Mi"
16         - -mem-alloc-sleep
17         - "1s"
18 ....
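Before recreating anything, it can help to validate the edited file. A quick client-side check (the --dry-run=client form requires a reasonably recent kubectl; older releases use plain --dry-run) might be:

student@lfs458-node-1a0a:~$ kubectl create -f hog.yaml --dry-run=client -o yaml

Note this only confirms the YAML parses and the fields are structurally valid; it will not catch a bad value inside args, since those are only read by the stress binary at runtime.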
10. Delete and recreate the deployment. In the running top command you should see increased CPU usage almost immediately, and memory
allocation happening in 100M chunks as it is allocated to the stress program. Check both nodes, as the
container could be deployed to either.
student@lfs458-node-1a0a:~$ kubectl delete deployment hog
deployment.apps "hog" deleted

student@lfs458-node-1a0a:~$ kubectl create -f hog.yaml
deployment.apps/hog created

Should you not see the expected CPU and memory usage, check the pod and its logs. Here is an example of an improper parameter. The container is running, but not allocating memory. It should show the usage requested from the YAML file.

student@lfs458-node-1a0a:~$ kubectl get po
NAME                   READY   STATUS    RESTARTS   AGE
hog-1603763060-x3vnn   1/1     Running   0          8s

Viewing the logs of that pod shows the stress program panicking as it fails to parse the bad value:

goroutine 1 [running]:
panic(0x5ff9a0, 0xc820014cb0)
        /usr/local/go/src/runtime/panic.go:481 +0x3e6
k8s.io/kubernetes/pkg/api/resource.MustParse(0x7ffe460c0e69, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/google/home/vishnuk/go/src/k8s.io/kubernetes/pkg/api/resource/quantity.go:134 +0x287
main.main()
        /usr/local/google/home/vishnuk/go/src/github.com/vishh/stress/main.go:24 +0x43
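If you hit a panic like the one above, one recovery path (a sketch; the exact field to fix depends on which quantity was mistyped, for example a lowercase suffix such as "mi" instead of "Mi") is to correct hog.yaml and recreate the deployment:

student@lfs458-node-1a0a:~$ kubectl delete deployment hog
student@lfs458-node-1a0a:~$ vim hog.yaml
student@lfs458-node-1a0a:~$ kubectl create -f hog.yaml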