1. Select two reasons for using containers to deploy applications. (Choose 2 responses.)
- It creates consistency across development, testing, and production environments.
- It provides tight coupling between applications and operating systems.
- Allocating resources in which to run containers is not necessary.
- Migrating workloads is simpler.
2. How do containers access an operating system?
- Each container has its own instance of an operating system.
- Containers use a shared base operating system stored in a shared kernel layer.
- Containers use a shared base operating system stored in a Cloud Storage bucket.
- Containers use a shared base operating system stored in a shared runtime layer.
Explanation: The container runtime is the component responsible for managing and executing containers, and it is through this component that containers access an operating system. The runtime interacts with the host operating system's kernel to give containers the isolation and resources they require.
3. What is a Kubernetes pod?
- A group of VMs
- A group of containers
- A group of clusters
- A group of nodes
Explanation: The pod is the smallest and most basic unit in the Kubernetes object model. A pod represents a single instance of a running process in a cluster and may contain one or more containers. Containers in the same pod share a network namespace, so they can communicate with one another over localhost.
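The shared-localhost behavior described above can be sketched as a minimal pod manifest; all names and images below are illustrative placeholders:

```yaml
# Sketch of a pod with two containers sharing one network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.27
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      # The sidecar can reach the app container at localhost:80 because
      # containers in the same pod share the pod's network namespace.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 10; done"]
```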
4. What is a Kubernetes cluster?
- A group of containers that provide high availability for applications.
- A group of machines where Kubernetes can schedule workloads.
- A group of pods that manage the administration of a Kubernetes application.
Explanation: A Kubernetes cluster is a set of nodes, either physical or virtual machines, grouped together to run containerized applications. Across the cluster, Kubernetes orchestrates and manages the deployment, scaling, and operation of those applications, offering a single set of application programming interfaces (APIs) and tools to automate the deployment and maintenance of containerized workloads.
5. Where do the resources used to build Google Kubernetes Engine clusters come from?
- Compute Engine
- Cloud Storage
- Bare metal servers
- App Engine
Explanation: GKE clusters are built from Google Cloud services and resources, most notably Compute Engine virtual machines, which serve as the cluster's nodes. GKE manages and orchestrates these resources to provide a fully managed Kubernetes environment, simplifying the deployment, scaling, and maintenance of containerized applications.
6. How do you keep your Kubernetes version updated in Google Kubernetes Engine?
- The Google Kubernetes Engine team periodically performs automatic upgrades of your cluster to newer stable versions.
- You need to stop your cluster and manually update the Kubernetes version in your cluster.
- You are required to set up a cron job to periodically check the Kubernetes version in your cluster.
- You cannot update a running cluster. You need to create a copy of the cluster with the updated Kubernetes version.
7. Anthos provides a rich set of tools for monitoring and maintaining the consistency of your applications across which of the following locations?
- Applications hosted on-premises only.
- Applications hosted with one cloud provider only.
- Applications hosted with multiple cloud providers only.
- Applications hosted on-premises, in the cloud, or in multiple clouds.
Explanation: Anthos provides a comprehensive set of tools for monitoring and maintaining the consistency of your applications across hybrid and multi-cloud environments. Built by Google Cloud, Anthos lets you build, deploy, and manage applications in a consistent manner across on-premises data centers, Google Cloud, and other cloud providers.
With Anthos you can administer and monitor your applications seamlessly across multiple locations through a unified, uniform approach. The goal is to let enterprises deploy and operate their applications in a hybrid or multi-cloud architecture while preserving operational consistency and visibility.
8. App Engine is best suited to the development and hosting of which type of application?
- Applications that require at least one instance running at all times.
- A web application
- Applications that require full control of the hardware they are running on
- A long-running batch processing application
9. Which statements are true about App Engine?
- The daily billing for an App Engine application can drop to zero.
- App Engine charges you based on the resources you pre-allocate instead of the resources you use.
- App Engine manages the hardware and networking infrastructure required to run your code.
- Developers who write for App Engine do not need to code their applications in any particular way to use the service.
- App Engine requires you to supply or code your own application load balancing and logging services.
10. What are the advantages of using App Engine’s flexible environment instead of its standard environment?
- You can use SSH to connect to the virtual machines on which your application runs.
- Google provides automatic in-place security patches.
- Your application can execute code in background threads.
- Your application can write to the local disk.
- You can install third-party binaries.
Explanation: The standard environment is more restricted but well suited to certain use cases. The flexible environment, by contrast, offers more flexibility and customization, making it appropriate for a wider variety of applications and development scenarios.
11. Which Google Cloud service should you choose to perform business analytics and billing on a customer-facing API?
- Cloud Endpoints
- Compute Engine API
- Cloud Run API
- Apigee Edge
12. Select the managed compute platform that lets you run stateless containers through web requests or Pub/Sub events.
- Cloud Run
- Cloud Source Repositories
- Cloud Endpoints
- Apigee Edge
Explanation: Cloud Run is the managed compute platform that lets you run stateless containers triggered by HTTP requests (web requests) or Pub/Sub events.
Cloud Run is a fully managed compute environment that automatically scales your containerized applications. It lets you build and run stateless containers in a serverless environment, invoked by either HTTP requests or Pub/Sub events.
This serverless platform abstracts away the underlying infrastructure and handles scaling, load balancing, and container orchestration automatically, making it easy to build and run containerized applications without managing servers.
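As a sketch of what "stateless" means here, the following minimal HTTP service (Python standard library only; the greeting, handler names, and port default are illustrative) keeps no state between requests and reads its listening port from the PORT environment variable, which is how Cloud Run tells a container where to listen:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_greeting(name: str) -> str:
    """Pure, stateless request logic: nothing persists between requests."""
    return f"Hello, {name}!"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = make_greeting("world").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve() -> None:
    # Cloud Run injects the port the container must listen on via $PORT.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```

Because each request is handled without relying on local state, Cloud Run can freely start and stop container instances as traffic scales.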
13. Cloud Run can only pull images from:
- Self-hosted registries
- Docker Hub
- Artifact Registry
- GitHub
14. Why would a developer choose to store source code in Cloud Source Repositories?
- To reduce work
- To have total control over the hosting infrastructure
- To keep code private to a Google Cloud project
- It is the only way to access your source code in a repository.
15. Why might a Google Cloud customer choose to use Cloud Functions?
- Cloud Functions is the primary way to run Node.js applications in Google Cloud.
- Their application has a legacy monolithic structure that they want to separate into microservices.
- Cloud Functions is a free service for hosting compute operations.
- Their application contains event-driven code that they don’t want to provision compute resources for.
Explanation: In short, Cloud Functions is a powerful serverless computing service offering flexibility, scalability, and cost efficiency, which makes it an attractive choice for building event-driven applications and microservices in the cloud.
16. Select the advantage of putting the event-driven components of your application into Cloud Functions.
- Cloud Functions handles scaling these components seamlessly.
- In Cloud Functions, processing is always free of charge.
- In Cloud Functions, code can be written in C# or C++.
- Cloud Functions eliminates the need to use a separate service to trigger application events.
Explanation: Cloud Functions scales automatically with the volume of incoming events or requests. As the number of events grows, Cloud Functions scales out to handle the load, keeping your application responsive even under increased traffic. This automatic scaling removes the need for manual management of server resources, providing a scalable and cost-efficient solution for event-driven workloads.
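A hedged sketch of such an event-driven component, using the 1st-gen Python Cloud Functions signature for Pub/Sub triggers (the `event` dict carries the base64-encoded message in its `data` field); the event type and email field are invented for illustration:

```python
import base64
import json

def handle_message(event: dict, context=None) -> str:
    """Sketch of a Pub/Sub-triggered Cloud Function (1st-gen signature).

    The 'user_signed_up' event type and the return strings are
    hypothetical; only the base64 `data` envelope follows the
    real Pub/Sub trigger format.
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    if payload.get("type") == "user_signed_up":
        return f"welcome email queued for {payload['email']}"
    return "event ignored"
```

The platform provisions compute only while such a function runs, which is exactly the "no pre-provisioned resources" property the correct answer describes.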
17. Why might a Google Cloud customer choose to use Terraform?
- Terraform can be used as an infrastructure management system for Google Cloud resources.
- Terraform can be used as a version-control system for your Google Cloud infrastructure layout.
- Terraform can be used to enforce maximum resource utilization and spending limits on your Google Cloud resources.
- Terraform can be used as an infrastructure management system for Kubernetes pods.
Explanation: In short, businesses often manage their infrastructure with Terraform because of its infrastructure-as-code (IaC) principles, multi-cloud support, declarative configuration, plan-and-preview workflow, modularity, and integration with continuous integration and continuous delivery pipelines. These characteristics make Terraform a flexible and widely adopted solution.
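As a sketch of Terraform's declarative, infrastructure-as-code style, the fragment below declares a single Compute Engine VM with the google provider; the project ID, resource names, and zone are placeholders:

```hcl
# Declarative description of a Compute Engine VM; running `terraform plan`
# previews the changes and `terraform apply` makes them. Values are examples.
provider "google" {
  project = "my-project-id"
  region  = "us-central1"
}

resource "google_compute_instance" "web" {
  name         = "web-server"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }
}
```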
18. There are “Four Golden Signals” that measure a system’s performance and reliability. What are they?
- Availability, durability, scalability, resiliency
- Latency, traffic, saturation, errors
- Get, post, put, delete
- KPIs, SLIs, SLOs, SLAs
Explanation: These four signals (latency, traffic, saturation, and errors) are key indicators of a system's overall health, performance, and reliability. Monitoring and analyzing them lets organizations detect and fix problems proactively, improve performance, and ensure a good experience for users.
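As a small illustration, three of the four signals can be derived directly from a request log; the log records below are invented sample data, and saturation would come from separate resource metrics rather than the request log:

```python
import math

# Hypothetical request log: (latency_ms, http_status). Sample data only.
requests = [
    (120, 200), (95, 200), (310, 500), (88, 200), (450, 200),
    (102, 200), (97, 503), (130, 200), (115, 200), (90, 200),
]

traffic = len(requests)                        # traffic: demand on the system
errors = sum(1 for _, s in requests if s >= 500)
error_rate = errors / traffic                  # errors: fraction of failing requests

lat_sorted = sorted(ms for ms, _ in requests)
# latency: 95th-percentile latency via the nearest-rank method
p95_latency = lat_sorted[math.ceil(0.95 * len(lat_sorted)) - 1]

# saturation would be measured from resource metrics (CPU, memory,
# queue depth), not from the request log itself.
```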
19. Which definition best describes a service level indicator (SLI)?
- A key performance indicator; for example, clicks per session or customer signups
- A percentage goal of a measure you intend your service to achieve
- A contract with your customers regarding service performance
- A time-bound measurable attribute of a service
Explanation: Service level indicators (SLIs) are metrics that quantify different characteristics of a service, such as response time, error rate, availability, or other relevant attributes. These metrics provide a clear, quantitative way to evaluate how well a service is meeting its goals and its users' expectations. SLIs are an essential component in establishing and monitoring service level objectives (SLOs) and service level agreements (SLAs).
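A minimal, hypothetical example of the SLI-to-SLO relationship (all numbers invented): an availability SLI is the fraction of good events over a window, and the SLO is the target that fraction must meet:

```python
def availability_sli(good_events: int, total_events: int) -> float:
    """A common SLI shape: the fraction of events that were 'good'."""
    return good_events / total_events

# Invented numbers: 9,990 successful requests out of 10,000 in the window.
sli = availability_sli(9_990, 10_000)   # measured SLI: 99.9% availability
slo_target = 0.995                      # hypothetical SLO target: 99.5%
slo_met = sli >= slo_target             # here the SLO is being met
```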
20. Which option describes a commitment made to your customers that your systems and applications will have only a certain amount of “downtime”?
- Service level agreement
- Service level indicator
- Key performance indicator
- Service level objective
Explanation: A service level agreement (SLA) is a contract between a service provider and its customers that describes the expected quality of service, including performance indicators, availability targets, and commitments about downtime. An SLA often specifies penalties or remedies if the agreed service standards are not met; it is the document that sets the terms under which the provider promises to deliver and maintain the service.
21. You want to create alerts on your Google Cloud resources, such as when health checks fail. Which is the best Google Cloud product to use?
- Cloud Trace
- Cloud Monitoring
- Cloud Functions
- Cloud Debugger
22. Select the two correct statements about Cloud Logging.
- Cloud Logging lets you define uptime checks.
- Cloud Logging lets you view logs from your applications and filter and search on them.
- Cloud Logging requires the use of a third-party monitoring agent.
- Cloud Logging requires you to store your logs in BigQuery or Cloud Storage.
- Cloud Logging lets you define metrics based on your logs.