
Red Hat

EX280 Exam
Red Hat Certified OpenShift Administrator

Questions & Answers


(Demo Version - Limited Content)

Thank you for downloading the EX280 exam PDF demo

Get Full File:


https://www.certsland.com/ex280-dumps/


Question: 1

You are tasked with deploying a highly available application in OpenShift. Create a Deployment using
YAML to deploy the nginx container with three replicas, ensuring that it runs successfully. Verify that the
Deployment is active, all replicas are running, and the application can serve requests properly. Provide a
complete walkthrough of the process, including necessary commands to check deployment status.

Answer: See the Solution below.

Solution:

1. Create a Deployment YAML file named nginx-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

2. Deploy the file using the command:

kubectl apply -f nginx-deployment.yaml

3. Check the status of the deployment:

kubectl get deployments


kubectl get pods

4. Test the application by exposing the Deployment:

kubectl expose deployment nginx-deployment --type=NodePort --port=80


kubectl get svc


5. Use the NodePort and cluster IP to confirm that the application is serving requests.
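
For example, substituting a node IP and the NodePort value reported by kubectl get svc (both placeholders):

curl http://<node-ip>:<node-port>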

Explanation:

Deployments provide a scalable and declarative way to manage applications. YAML manifests ensure
the configuration is consistent, while NodePort services expose the application for testing. Verifying
replicas ensures that the application is running as expected and resilient.

Question: 2

Your team requires an application to load specific configuration data dynamically during runtime. Create
a ConfigMap to hold key-value pairs for application settings, and update an existing Deployment to use
this ConfigMap. Provide a complete YAML definition for both the ConfigMap and the updated
Deployment, and demonstrate how to validate that the configuration is applied correctly.

Answer: See the Solution below.

Solution:

1. Create a ConfigMap YAML file named app-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: production
  APP_DEBUG: "false"

2. Apply the ConfigMap using:

kubectl apply -f app-config.yaml

3. Update the Deployment YAML to reference the ConfigMap:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app-container
        image: nginx:latest
        env:
        - name: APP_ENV
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_ENV
        - name: APP_DEBUG
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_DEBUG

4. Apply the updated Deployment:

kubectl apply -f app-deployment.yaml

5. Verify the pod environment variables:

kubectl exec -it <pod-name> -- env | grep APP

Explanation:

ConfigMaps decouple configuration data from the application code, enabling environment-specific
settings without altering the deployment logic. Using environment variables from ConfigMaps ensures
flexibility and reduces maintenance complexity.

Question: 3

Perform a rolling update of an application to upgrade the nginx image from 1.19 to 1.21. Ensure zero
downtime during the update and verify that all replicas are running the new version.

Answer: See the Solution below.

Solution:

1. Update the Deployment:

kubectl set image deployment/nginx-deployment nginx=nginx:1.21

2. Monitor the rollout status:

kubectl rollout status deployment/nginx-deployment

3. Verify the updated pods:


kubectl get pods -o wide


kubectl describe pods | grep "nginx:1.21"

Explanation:

Rolling updates replace pods incrementally, ensuring that applications remain available during the
update process. Monitoring confirms the successful rollout.
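
If the new version misbehaves, the same Deployment can be rolled back; a quick sketch using the standard rollout commands:

kubectl rollout history deployment/nginx-deployment


kubectl rollout undo deployment/nginx-deployment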

Question: 4

Deploy an application across multiple namespaces using a common Deployment YAML file. Include
steps to create the namespaces, apply the deployment, and verify that the pods are running in each
namespace.

Answer: See the Solution below.

Solution:

1. Create namespaces:

kubectl create namespace ns1


kubectl create namespace ns2

2. Create a Deployment YAML file app-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-namespace-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: app-container
        image: nginx:latest

3. Apply the deployment in each namespace:

kubectl apply -f app-deployment.yaml -n ns1


kubectl apply -f app-deployment.yaml -n ns2

4. Verify the pods:


kubectl get pods -n ns1


kubectl get pods -n ns2

Explanation:

Deploying across namespaces ensures workload isolation while reusing common configurations.
Verification confirms that resources are created and operational.

Question: 5

Configure an Ingress resource to expose an application using a custom domain name. Include steps to
create the Ingress YAML and validate that the domain resolves to the application.

Answer: See the Solution below.

Solution:

1. Create an Ingress YAML file ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: custom-domain.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80

2. Apply the Ingress resource:

kubectl apply -f ingress.yaml

3. Update DNS or /etc/hosts to point custom-domain.example.com to the ingress controller's external IP.
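
For a quick local test, an /etc/hosts entry might look like the following (203.0.113.10 is a placeholder for the ingress controller's address):

203.0.113.10 custom-domain.example.com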

4. Verify accessibility:

curl http://custom-domain.example.com

Explanation:


Ingress provides an HTTP(S) layer to expose services using custom domains, offering centralized traffic
management.

Question: 6

Configure a StatefulSet to deploy a MySQL database with persistent storage. Include steps to define the
StatefulSet, create a PersistentVolume (PV) and PersistentVolumeClaim (PVC), and verify the database
is running correctly.

Answer: See the Solution below.

Solution:

1. Create a PV YAML file mysql-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/mysql

2. Create a PVC YAML file mysql-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

3. Create a StatefulSet YAML file mysql-statefulset.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootpassword
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

4. Apply all YAML files:

kubectl apply -f mysql-pv.yaml


kubectl apply -f mysql-pvc.yaml
kubectl apply -f mysql-statefulset.yaml

5. Verify the StatefulSet and pod:

kubectl get statefulsets


kubectl get pods

Explanation:

StatefulSets ensure stable identities for applications requiring persistent data, like databases. Coupling them with PVs and PVCs ensures data persistence across restarts. Note that because this StatefulSet defines volumeClaimTemplates, it provisions its own claim (mysql-data-mysql-0); the standalone mysql-pvc is shown for illustration and is not referenced by the StatefulSet itself.

Question: 7

Diagnose and fix an issue where a Deployment fails due to exceeding the configured ResourceQuota.

Answer: See the Solution below.


Solution:

1. Check the ResourceQuota usage:

kubectl get resourcequota -n <namespace>

2. Review the Deployment resource requests:

kubectl describe deployment <deployment-name>

3. Adjust Deployment resource requests to fit within the quota:

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"

4. Reapply the Deployment:

kubectl apply -f deployment.yaml

Explanation:

ResourceQuotas ensure fair resource distribution. Adjusting Deployment configurations avoids conflicts
and ensures compliance.

Question: 8

Set up and validate OpenShift pod affinity to ensure that pods are scheduled on the same node.

Answer: See the Solution below.

Solution:

1. Update the pod spec with affinity rules:

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - my-app
      topologyKey: "kubernetes.io/hostname"

2. Apply the updated configuration:


kubectl apply -f pod.yaml

3. Verify pod placement:

kubectl get pods -o wide

Explanation:

Pod affinity ensures co-location of related workloads, optimizing resource usage and inter-pod
communication.
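
The opposite pattern, spreading replicas across nodes, uses podAntiAffinity with the same structure; a sketch reusing the label and topology key from above:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - my-app
      topologyKey: "kubernetes.io/hostname"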

Question: 9

Scale an application deployment horizontally to handle increased load.

Answer: See the Solution below.

Solution:

1. Scale the deployment:

kubectl scale deployment nginx-deployment --replicas=5

2. Verify the scaling:

kubectl get pods

Explanation:

Horizontal scaling adds replicas, ensuring applications handle increased traffic effectively.
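
For load-driven scaling instead of a fixed replica count, a HorizontalPodAutoscaler can be added; a sketch (the CPU threshold and replica bounds are example values):

kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=80


kubectl get hpa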

Question: 10

Update an existing Deployment to add a readiness probe. Validate that the readiness probe works
correctly.

Answer: See the Solution below.

Solution:

1. Update the Deployment YAML to include a readiness probe:

readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
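
The probe sits under the container definition in the Deployment template; a minimal sketch, assuming the nginx container used in earlier questions:

containers:
- name: nginx
  image: nginx:latest
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10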


2. Apply the updated Deployment:

kubectl apply -f deployment.yaml

3. Verify pod readiness:

kubectl get pods


kubectl describe pod <pod-name>

Explanation:

Readiness probes ensure that only fully initialized and functional pods receive traffic, improving
application reliability.

Question: 11

Deploy an application using Kustomize with environment-specific overlays for dev and prod. Validate the
deployment.

Answer: See the Solution below.

Solution:

1. Create a base Kustomize directory:

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app-container
        image: nginx

2. Create overlays for dev and prod:

# overlays/dev/kustomization.yaml
resources:
- ../../base
replicas:
- name: my-app
  count: 1

# overlays/prod/kustomization.yaml
resources:
- ../../base
replicas:
- name: my-app
  count: 3

3. Deploy the prod overlay:

kubectl apply -k overlays/prod

Explanation:

Kustomize overlays provide a clean way to manage environment-specific configurations without duplicating base manifests.
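
The overlays reference ../../base, which assumes the base directory also contains a kustomization.yaml listing the Deployment; a minimal sketch, followed by commands to render each overlay locally before applying:

# base/kustomization.yaml
resources:
- deployment.yaml


kubectl kustomize overlays/dev


kubectl kustomize overlays/prod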

Question: 12

Deploy a Job that runs a database migration script. Validate its execution and logs.

Answer: See the Solution below.

Solution:

1. Create a Job YAML file migration-job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      containers:
      - name: migration
        image: postgres
        command: ["sh", "-c", "psql -U user -d dbname -f /scripts/migration.sql"]
        volumeMounts:
        - name: script-volume
          mountPath: /scripts
      volumes:
      - name: script-volume
        configMap:
          name: migration-script
      restartPolicy: Never

2. Apply the Job:



kubectl apply -f migration-job.yaml

3. Validate execution:

kubectl get jobs


kubectl logs job/db-migration

Explanation:

Jobs are ideal for executing one-time tasks like database migrations. Logs help verify task success or
troubleshoot issues.
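
The Job mounts a ConfigMap named migration-script, which is assumed to already exist; a sketch of creating it from a local SQL file (the file name is a placeholder):

kubectl create configmap migration-script --from-file=migration.sql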

Question: 13

Create a ConfigMap with multiple configuration keys. Inject them as individual environment variables into
a pod. Validate their presence.

Answer: See the Solution below.

Solution:

1. Create a ConfigMap:

kubectl create configmap multi-config --from-literal=key1=value1 --from-literal=key2=value2

2. Update the pod YAML to inject the ConfigMap as environment variables:

envFrom:
- configMapRef:
    name: multi-config

3. Deploy the pod and validate:

kubectl apply -f pod.yaml


kubectl exec <pod-name> -- printenv | grep key

Explanation:

ConfigMaps with multiple keys allow flexible configuration injection into pods, supporting modular and
scalable application designs.

Question: 14

Create a StatefulSet for a Cassandra database cluster with non-shared storage for each node. Validate
the cluster functionality after restarting pods.

Answer: See the Solution below.


Solution:

1. Create a StatefulSet YAML for Cassandra:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: "cassandra-service"
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11
        ports:
        - containerPort: 9042
        volumeMounts:
        - name: cassandra-data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

2. Apply the StatefulSet:

kubectl apply -f cassandra-statefulset.yaml

3. Insert data into the cluster:

cqlsh <cassandra-pod-ip> -e "INSERT INTO keyspace.table (id, value) VALUES (1, 'data');"

4. Restart a pod and validate data persistence:

kubectl delete pod cassandra-0


cqlsh <cassandra-pod-ip> -e "SELECT * FROM keyspace.table;"


Explanation:

StatefulSets ensure each Cassandra node has its own persistent storage, allowing data retention even if
pods are restarted or moved.
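
The cqlsh commands above assume the keyspace and table already exist; a hedged sketch of creating them first, using ks and tbl as stand-ins for the keyspace.table placeholders:

cqlsh <cassandra-pod-ip> -e "CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"


cqlsh <cassandra-pod-ip> -e "CREATE TABLE ks.tbl (id int PRIMARY KEY, value text);"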

Question: 15

Deploy an application that dynamically provisions storage using a CSI driver. Validate the storage
binding.

Answer: See the Solution below.

Solution:

1. Install the CSI driver for your environment (e.g., AWS EBS, Ceph):

kubectl apply -f csi-driver.yaml

2. Create a StorageClass that uses the CSI driver:

provisioner: ebs.csi.aws.com

3. Deploy a PVC and pod using the StorageClass:

kubectl apply -f pvc-and-pod.yaml


kubectl exec <pod-name> -- ls /mnt

Explanation:

CSI drivers provide a standardized way to integrate external storage solutions with Kubernetes, enabling
dynamic provisioning.
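
Step 2 shows only the provisioner field; a minimal StorageClass sketch, assuming the AWS EBS CSI driver named in the example (the class name is a placeholder):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer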

Question: 16

Manually configure resource quotas for a namespace to limit total application resource usage. Validate
enforcement.

Answer: See the Solution below.

Solution:

1. Create a ResourceQuota YAML:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "2"
    requests.memory: "2Gi"
    limits.cpu: "4"
    limits.memory: "4Gi"

2. Apply the ResourceQuota:

kubectl apply -f resourcequota.yaml -n my-namespace

3. Deploy a pod exceeding limits and validate:

kubectl describe quota compute-resources -n my-namespace

Explanation:

Resource quotas restrict namespace resource usage, preventing excessive consumption and
maintaining cluster fairness.

Question: 17

Use a combination of DeploymentConfig and ImageStream to implement a blue-green deployment. Validate the process.

Answer: See the Solution below.

Solution:

1. Create two DeploymentConfigs (blue and green):

oc apply -f blue-deploymentconfig.yaml
oc apply -f green-deploymentconfig.yaml

2. Tag the ImageStream to switch traffic:

oc tag nginx:1.22 nginx-stream:stable

3. Validate the deployment:

curl http://<app-url>

Explanation:

Blue-green deployments ensure seamless transitions between versions by maintaining separate environments for live and testing versions.
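
In OpenShift the cutover is often done at the Route rather than the ImageStream; a hedged sketch that shifts all traffic to the green service (the route and service names are placeholders):

oc set route-backends my-app-route green-service=100 blue-service=0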

Question: 18


Modify the password policy for HTPasswd users to enforce complexity requirements.

Answer: See the Solution below.

Solution:

1. Edit the HTPasswd file and add complex passwords:

htpasswd /etc/origin/htpasswd admin

2. Update the HTPasswd secret:

oc create secret generic htpasswd-secret --from-file=htpasswd=/etc/origin/htpasswd -n openshift-config

3. Validate by logging in:

oc login -u admin -p <complex-password>

Explanation:

Enforcing complex passwords improves account security, reducing the risk of unauthorized access.

Question: 19

Set up an OAuth identity provider to integrate with an external authentication service. Validate user login
through the external provider.

Answer: See the Solution below.

Solution:

1. Edit the OAuth configuration to include the external identity provider:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: external-idp
    type: OpenID
    mappingMethod: claim
    openID:
      clientID: my-client-id
      clientSecret:
        name: my-client-secret
      claims:
        preferredUsername:
        - email
        name:
        - name
        email:
        - email
      urls:
        authorize: https://idp.example.com/authorize
        token: https://idp.example.com/token
        userInfo: https://idp.example.com/userinfo

2. Apply the configuration:

oc apply -f oauth-config.yaml

3. Validate by logging in through the external provider:

oc login --token=<external-idp-token>

Explanation:

Integrating external identity providers centralizes authentication management and supports single sign-
on (SSO) capabilities.
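
The clientSecret stanza references a Secret named my-client-secret in the openshift-config namespace, which is assumed to exist; a sketch of creating it (the literal value is a placeholder):

oc create secret generic my-client-secret --from-literal=clientSecret=<client-secret-value> -n openshift-config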

Question: 20

Restrict project creation to specific users. Validate the restricted behavior for other users.

Answer: See the Solution below.

Solution:

1. Remove the self-provisioner role from all authenticated users:

oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

2. Assign self-provisioner to a specific user:

oc adm policy add-cluster-role-to-user self-provisioner user1

3. Validate restricted access for others:

oc login -u user2 -p <password>


oc new-project restricted-project

Explanation:

Restricting project creation ensures tighter control over resource usage and aligns with organizational
policies.


Question: 21

Configure and test application network policies to restrict communication between pods in OpenShift.

Answer: See the Solution below.

Solution:

1. Create a network policy to allow only specific pod-to-pod communication:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-communication
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-other-app

2. Apply the network policy:

oc apply -f networkpolicy.yaml

3. Test the policy by attempting to communicate between pods:

oc exec <pod-name> -- ping <target-pod-name>

Explanation:

Network policies enable fine-grained control over traffic between pods, which can be used to restrict
access based on labels or namespaces.
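
A common companion is a default deny policy, so that only explicitly allowed traffic reaches pods in the namespace; a minimal sketch:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress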

Question: 22

Troubleshoot a failed TLS handshake between the ingress controller and a backend service.

Answer: See the Solution below.

Solution:

1. Check the ingress route:

oc get route my-secure-app -o yaml



2. Verify the TLS secret:

oc get secret my-tls-secret -n dev-namespace

3. Examine ingress controller logs:

oc logs <ingress-pod> -n openshift-ingress

4. Update the backend service certificates if needed:

oc create secret tls backend-tls --cert=backend-cert.pem --key=backend-key.pem -n dev-namespace

Explanation:

TLS handshake failures are often due to mismatched certificates or misconfigured routes. Logs and
secret validation help diagnose and resolve the issue.
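
The handshake can also be inspected directly from a client machine; for example (the route hostname is a placeholder):

openssl s_client -connect <route-hostname>:443 -servername <route-hostname>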

Question: 23

Configure an internal-only service accessible by other services in the cluster but not externally. Validate
restricted access.

Answer: See the Solution below.

Solution:

1. Create a service without exposing it externally:

apiVersion: v1
kind: Service
metadata:
  name: internal-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

2. Validate internal access:

oc exec <pod-name> -- curl http://internal-service

3. Test external restriction:

curl http://<cluster-external-ip>:80

Explanation:

Internal-only services improve security by restricting external access, suitable for backend services that
don’t require public exposure.

Question: 24

Create a custom health check for an application and configure a network policy to block traffic to
unhealthy pods.

Answer: See the Solution below.

Solution:

1. Define a liveness probe in the pod spec:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 5

2. Create a network policy to allow traffic only to healthy pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-unhealthy-pods
  namespace: dev-namespace
spec:
  podSelector:
    matchExpressions:
    - key: health
      operator: In
      values:
      - healthy

3. Validate the policy by simulating a pod failure:

oc exec <pod-name> -- curl http://<unhealthy-pod>

Explanation:

Custom health checks and network policies ensure that only healthy pods serve traffic, improving
application reliability.

Question: 25

Configure a default project template to automatically apply quotas and limits when new projects are created. Validate by creating a new project.

Answer: See the Solution below.

Solution:

1. Define a default project template YAML:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: default-project-template
objects:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: default-quota
  spec:
    hard:
      pods: "10"
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: default-limit
  spec:
    limits:
    - type: Container
      default:
        cpu: "1"
        memory: "512Mi"

2. Apply the template in the openshift-config namespace:

oc create -f default-project-template.yaml -n openshift-config

3. Set it as the default project request template, then create a new project:

oc patch project.config.openshift.io/cluster --type=merge -p '{"spec":{"projectRequestTemplate":{"name":"default-project-template"}}}'


oc new-project test-project

4. Validate applied quotas and limits:

oc describe quota default-quota -n test-project

Explanation:

Default project templates automate resource governance for new projects, ensuring consistent
application of quotas and limits.

Question: 26

Configure a project quota to limit the total storage capacity of PersistentVolumeClaims (PVCs) in a
namespace. Validate by exceeding the storage limit.

Answer: See the Solution below.

Solution:

1. Create a quota YAML file:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pvc-storage-quota
  namespace: storage-project
spec:
  hard:
    requests.storage: "50Gi"

2. Apply the quota:

oc apply -f pvc-storage-quota.yaml

3. Test enforcement by creating three PVCs of 20Gi each (the manifest file names below are placeholders); the third request should be rejected once the 50Gi quota is exhausted:

oc apply -f pvc-1.yaml -n storage-project


oc apply -f pvc-2.yaml -n storage-project


oc apply -f pvc-3.yaml -n storage-project

4. Verify quota enforcement:

oc describe quota pvc-storage-quota -n storage-project

Explanation:

PVC storage quotas ensure storage usage within a namespace does not exceed the allocated capacity,
preserving cluster resources.

Question: 27

Create a project template to include a default Deployment, Service, and Route for a new application.
Validate by creating a project and checking all resources.

Answer: See the Solution below.

Solution:

1. Define the project template YAML:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: project-template-app
objects:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: app-deployment
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
        - name: my-app
          image: nginx
- apiVersion: v1
  kind: Service
  metadata:
    name: app-service
  spec:
    selector:
      app: my-app
    ports:
    - protocol: TCP
      port: 80
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: app-route
  spec:
    host: myapp.example.com
    to:
      kind: Service
      name: app-service

2. Create a project:

oc new-project test-project

3. Instantiate the template in the project:

oc process -f project-template-app.yaml | oc apply -f - -n test-project

4. Validate resources:

oc get deployment app-deployment -n test-project



oc get service app-service -n test-project


oc get route app-route -n test-project

Explanation:

Combining Deployment, Service, and Route in a project template automates the setup of a fully
functional application stack for new projects.

Question: 28

Install the OpenShift GitOps Operator and validate its deployment.

Answer: See the Solution below.

Solution:

1. Install the operator using the CLI:

oc apply -f https://raw.githubusercontent.com/redhat-developer/gitops-operator/main/deploy/crds/gitops-operator.yaml

2. Check the operator's deployment status:

oc get pods -n openshift-gitops

3. Validate the operator's CSV:

oc get csv -n openshift-gitops

Explanation:

The OpenShift GitOps Operator simplifies GitOps workflows, enabling application and infrastructure
automation.

Question: 29

Install the Service Mesh Operator for all namespaces in the cluster. Validate its deployment.

Answer: See the Solution below.

Solution:

1. Install the operator via OperatorHub:

oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
spec:
  channel: stable
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

2. Validate deployment:

oc get pods -n openshift-operators


oc get csv -n openshift-operators

Explanation:

The Service Mesh Operator enables service-to-service communication with features like traffic control,
observability, and security.

Question: 30

Create a secret for a Docker registry credential and use it to pull a private image. Validate the
deployment.

Answer: See the Solution below.

Solution:

1. Create a Docker registry secret:

oc create secret docker-registry my-docker-secret \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-server=<registry-url> \
  -n app-security

2. Use the secret in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: private-registry-pod
  namespace: app-security
spec:
  imagePullSecrets:
  - name: my-docker-secret
  containers:
  - name: private-app
    image: <registry-url>/<image>:<tag>

3. Validate the deployment:

oc get pods -n app-security

Explanation:

Docker registry secrets enable secure authentication to private image repositories, ensuring only
authorized access.

Thank You for Being Our Valued Customer


We Hope You Enjoy Your Purchase
Red Hat EX280 Exam Questions & Answers
Red Hat Certified OpenShift Administrator Exam

Thank you for trying the EX280 PDF demo

https://www.certsland.com/ex280-dumps/

Start Your EX280 Preparation

[Limited Time Offer] Use coupon "SAVE20" for an extra 20% discount on the purchase of the PDF file. Test your EX280 preparation with actual exam questions.

