Free GCP ACE Practice Questions
(Google Cloud Associate Cloud Engineer)

The GCP ACE exam tests your knowledge of Google Cloud architecture and best practices. Work through real-world scenarios to build a solid grasp of Google Cloud fundamentals before exam day.

GCP ACE Practice Questions

10 Free Questions • Updated for 2026 • No dumps

Designed by experts and updated regularly based on exam changes.

1

A media company needs to process large video files nightly, requiring significant compute power for varying durations. They want to minimize costs without compromising job completion. Which Google Cloud compute option provides the highest cost efficiency for these batch workloads?

A Google Kubernetes Engine (GKE) clusters.
B Cloud Run for containerized applications.
C Compute Engine Preemptible VMs.
D Compute Engine On-Demand VMs.

✅ Correct Answer: C

Explanation: Compute Engine Preemptible VMs (now offered as Spot VMs) provide deep discounts, typically 60–91% off on-demand pricing, for fault-tolerant workloads that can tolerate interruptions, making them ideal for large batch jobs that can restart if preempted. On-Demand VMs are more expensive. Cloud Run targets request-driven containerized services, not long-running heavy batch processing. GKE is a container orchestrator; it can itself run on Preemptible/Spot nodes, but the direct cost-saving mechanism is the Preemptible VM itself.
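The idea above can be sketched with the gcloud CLI. This is illustrative only: the instance name, zone, and machine type are placeholders, and current projects typically use the Spot provisioning model, which supersedes the older standalone preemptible flag.

```shell
# Create a Spot VM (the successor to Preemptible VMs) for fault-tolerant batch work.
# All names and sizes below are placeholders.
gcloud compute instances create video-batch-worker \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --provisioning-model=SPOT \
    --instance-termination-action=DELETE
```

Note that the batch job itself must checkpoint its progress or restart cleanly, since Compute Engine can reclaim the VM at any time.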
2

A company needs to store petabytes of archival data that is rarely accessed but must be retained for regulatory compliance for 10 years. Retrieval times can be several hours. They prioritize the lowest possible storage cost. Which Google Cloud Storage class is the most appropriate?

A Archive storage class.
B Standard storage class.
C Coldline storage class.
D Nearline storage class.

✅ Correct Answer: A

Explanation: The Archive storage class is designed for long-term data archiving with very low storage costs and retrieval times that can be hours, perfectly matching the requirements for rarely accessed, long-term compliance data. Nearline and Coldline offer faster retrieval but at a higher cost. Standard storage is for frequently accessed data and is the most expensive per GB.
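As a sketch of how this is configured, a bucket can be created with Archive as its default storage class using the gcloud CLI (bucket name and location are placeholders):

```shell
# Create a bucket whose default storage class is Archive.
gcloud storage buckets create gs://example-compliance-archive \
    --location=us \
    --default-storage-class=ARCHIVE
```

For the 10-year regulatory hold, a bucket retention policy (optionally locked) can additionally prevent objects from being deleted before the retention period expires.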
3

A development team wants to deploy a containerized application to Google Kubernetes Engine (GKE). They need to ensure that the application can automatically scale horizontally based on CPU utilization and gracefully handle node failures. Which GKE features are essential to configure?

A Manual Pod scaling and node pool resizing.
B Horizontal Pod Autoscaler and Node Auto-provisioning.
C GKE private clusters and network policies.
D Cloud Logging and Cloud Monitoring.

✅ Correct Answer: B

Explanation: Horizontal Pod Autoscaler (HPA) automatically scales the number of pods based on metrics like CPU utilization. Node Auto-provisioning (or Cluster Autoscaler) dynamically adds or removes nodes to handle pod demand and gracefully recovers from node failures. Manual scaling is not automated. Private clusters and network policies are for security. Logging and monitoring are for observability, not scaling or self-healing.
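A minimal sketch of both layers of scaling, with placeholder names and limits (the cluster name, zone, deployment name, and thresholds are illustrative):

```shell
# Enable node autoscaling at cluster creation so GKE adds/removes nodes as pods demand.
gcloud container clusters create batch-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --enable-autoscaling --min-nodes=1 --max-nodes=10

# Add an HPA targeting 60% average CPU across 2-10 replicas of an existing deployment.
kubectl autoscale deployment web-app --cpu-percent=60 --min=2 --max=10
```

The HPA handles pod-level scaling while the cluster autoscaler handles node-level capacity; together they also reschedule pods off failed nodes automatically.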
4

An application requires a globally consistent, fully managed NoSQL document database that can scale to petabytes of data with single-digit millisecond latency. The application needs to handle millions of reads and writes per second. Which Google Cloud database service meets these requirements?

A Cloud Firestore in Datastore mode.
B Cloud SQL for PostgreSQL.
C Memorystore for Redis.
D Cloud Spanner for relational data.

✅ Correct Answer: A

Explanation: Cloud Firestore in Datastore mode (formerly Cloud Datastore) is a highly scalable, fully managed NoSQL document database designed for high-performance and global availability, capable of handling large datasets and high throughput with low latency. Cloud SQL is relational. Cloud Spanner is a globally distributed relational database, but the requirement is NoSQL. Memorystore for Redis is an in-memory cache, not a primary document database.
5

An application running on Compute Engine is experiencing intermittent performance degradation. The operations team needs to identify the root cause quickly, analyzing CPU usage, memory consumption, and network latency across all instances. Which Google Cloud tool provides a centralized view for this performance analysis?

A Cloud Trace for distributed tracing.
B Google Cloud Console activity logs.
C Cloud Logging for application logs.
D Cloud Monitoring dashboards and metrics explorer.

✅ Correct Answer: D

Explanation: Cloud Monitoring (formerly Stackdriver Monitoring) provides comprehensive metrics collection for all Google Cloud services, allowing the creation of dashboards and using Metrics Explorer to visualize CPU, memory, and network metrics across instances for performance analysis. Cloud Logging collects application logs, which are different from system metrics. Cloud Trace is for distributed application tracing. Activity logs show administrative actions, not instance performance.
6

A junior developer needs permission to deploy applications to a specific GKE cluster but should not be able to modify other project resources like Cloud SQL databases. Which IAM strategy should be implemented to grant access with the least privilege?

A Create an IAM service account for the developer with broad permissions.
B Grant the developer the 'Editor' primitive role for the entire project.
C Provide the developer with the project owner credentials.
D Assign a custom IAM role with GKE deployment permissions to the developer.

✅ Correct Answer: D

Explanation: Creating a custom IAM role with only the necessary GKE deployment permissions and assigning it to the developer adheres strictly to the principle of least privilege. The 'Editor' role grants broad permissions across the project, violating least privilege. Project owner credentials are highly privileged and should never be shared. Service accounts are for services, not individual users, and assigning broad permissions to one is a security risk.
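A sketch of this setup with the gcloud CLI, assuming a placeholder project, user, and an illustrative (not exhaustive) permission set for GKE workload deployment:

```shell
# Define a custom role with a narrow, illustrative permission set for GKE deployments.
gcloud iam roles create gkeDeployer --project=my-project \
    --title="GKE Deployer" \
    --permissions=container.deployments.create,container.deployments.get,container.deployments.update,container.pods.list

# Bind the custom role to the developer only.
gcloud projects add-iam-policy-binding my-project \
    --member="user:dev@example.com" \
    --role="projects/my-project/roles/gkeDeployer"
```

The exact permission list should be derived from what the developer actually needs; starting narrow and adding permissions on demand keeps the role aligned with least privilege.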
7

A multi-tier application has web servers in a public subnet and database servers in a private subnet within a Google Cloud VPC. The web servers must be accessible from the internet, but the database servers must remain entirely private. How is this network segmentation primarily enforced?

A Configuring separate VPC networks for each tier.
B VPC Firewall Rules configured with network tags or service accounts.
C Using a single network with IP-based access restrictions on VMs.
D Deploying a Network Load Balancer in front of the database servers.

✅ Correct Answer: B

Explanation: VPC Firewall Rules allow granular control over ingress and egress traffic based on IP ranges, ports, protocols, and crucially, network tags or service accounts. This enables allowing internet traffic to web servers while strictly denying it to database servers within the same VPC. Separate VPCs add unnecessary complexity. IP-based restrictions on VMs are less scalable and manageable. A Network Load Balancer would expose, not protect, database servers.
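The tag-based segmentation described above can be sketched as two firewall rules (network name, tags, and ports are placeholders; the database port assumes PostgreSQL):

```shell
# Allow internet traffic to reach only instances tagged as web servers.
gcloud compute firewall-rules create allow-web-ingress \
    --network=prod-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=web-server

# Allow only web servers to reach the database tier; no internet-facing rule exists for it.
gcloud compute firewall-rules create allow-db-from-web \
    --network=prod-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-tags=web-server \
    --target-tags=db-server
```

Because VPC firewalls deny ingress by default, the database servers stay private simply by never being matched by an internet-facing allow rule.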
8

A security team needs to ensure that all Google Cloud Storage buckets containing sensitive data are encrypted with customer-managed encryption keys (CMEK) and that this policy is enforced across the organization. How can this be effectively managed?

A Manually configure CMEK for each new Cloud Storage bucket.
B Implement Cloud Functions to check and enforce CMEK after bucket creation.
C Rely on Google-managed encryption keys for all buckets.
D Use an Organization Policy Constraint to require CMEK for Cloud Storage.

✅ Correct Answer: D

Explanation: Organization Policy Constraints are a powerful way to centrally enforce specific configurations across an entire Google Cloud organization, including requiring CMEK for Cloud Storage buckets. Manually configuring each bucket is error-prone and not scalable. Google-managed keys don't meet the CMEK requirement. Cloud Functions would be a reactive, detective control, not a preventative enforcement mechanism like Organization Policy.
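As an illustrative sketch, the built-in constraint that restricts non-CMEK usage can be applied organization-wide from the legacy gcloud surface (the organization ID is a placeholder):

```shell
# Require CMEK for Cloud Storage across the organization by denying
# non-CMEK usage for the listed service.
gcloud resource-manager org-policies deny \
    gcp.restrictNonCmekServices storage.googleapis.com \
    --organization=123456789012
```

Once set, attempts to create Cloud Storage resources without a customer-managed key are blocked at creation time, which is the preventative control the scenario calls for.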
9

A company needs to connect their on-premises data center to several VPCs in Google Cloud, located in different regions. They require high-throughput, low-latency, and secure private connectivity, avoiding the public internet. Which Google Cloud networking solution best meets these requirements?

A Multiple Site-to-Site Cloud VPN tunnels over the public internet.
B VPC Network Peering between all VPCs and on-premises via VPN.
C Cloud Interconnect (Dedicated or Partner) with Cloud VPN.
D Direct peering between on-premises and each individual VPC.

✅ Correct Answer: C

Explanation: Cloud Interconnect (Dedicated or Partner) provides dedicated, high-bandwidth, low-latency private connections between on-premises networks and Google Cloud. Combining it with Cloud VPN (over the Interconnect) secures traffic and allows connection to multiple VPCs in different regions via a VPN Gateway. Site-to-Site VPN over the public internet doesn't guarantee low latency or avoid the public internet. VPC peering connects VPCs, not on-premises to multiple VPCs. Direct peering is not scalable for multiple VPCs.
10

A business process requires a lightweight function to be executed automatically whenever a new file is uploaded to a specific Cloud Storage bucket. The function should be billed only when active and scale instantly based on upload volume. Which Google Cloud service is the ideal choice?

A Cloud Run for containerized applications.
B Compute Engine with a web server and cron job.
C Google Kubernetes Engine (GKE) for microservices.
D Cloud Functions for event-driven serverless execution.

✅ Correct Answer: D

Explanation: Cloud Functions are specifically designed for event-driven, serverless execution of lightweight code snippets, automatically scaling and billing only for compute time used. They are ideal for responding to events like Cloud Storage uploads. Cloud Run is for containerized applications. Compute Engine requires server management and continuous billing. GKE is for container orchestration, adding unnecessary complexity for a simple event-driven function.
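A minimal deployment sketch, assuming placeholder function, bucket, entry-point, and runtime names:

```shell
# Deploy a function that fires whenever an object is finalized in the bucket.
gcloud functions deploy process-upload \
    --runtime=python311 \
    --region=us-central1 \
    --trigger-bucket=example-upload-bucket \
    --entry-point=on_file_uploaded
```

The function scales out automatically with upload volume and incurs cost only while executing, matching the billed-only-when-active requirement.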

Practice More Questions

Take full-length timed quizzes and track your performance.

Start Free Practice

Frequently Asked Questions

Are these questions real exam dumps?

No, ExamOS does not use or promote exam dumps. All questions are concept-focused, scenario-based, and designed to help you understand architectural decisions and real-world trade-offs.

How does ExamOS help me prepare better?

ExamOS provides short, timed quizzes aligned with official exam domains. Each question includes detailed explanations so you can learn the reasoning behind the correct answer, not just memorize it.

Is ExamOS free to use?

Yes. You get free credits when you register, which you can use to take practice quizzes. You can earn additional credits through referrals or upgrade later for unlimited practice.
