You work for an online cloud education provider that provides hands-on labs for training students. Recently, you noticed a spike in CPU activity for one of your EC2 instances and you suspect it is being used to mine bitcoin rather than for educational purposes. Somehow, your production environment has been compromised and you need to quickly identify the root cause of this compromise. Which AWS service would be best suited to identify the root cause?
1-Using Amazon Detective, you can analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.
Detective automatically collects log data from your AWS resources. It then uses machine learning, statistical analysis, and graph theory to generate visualizations that help you conduct security investigations faster and more efficiently.
You have configured a VPC with both a public and a private subnet. You need to deploy a web server and a database. You want the web server to be accessed from the Internet by customers. Which is the proper configuration for this architecture?
3-Place the web server in the public subnet and the database in the private subnet; an internet gateway attached to the VPC facilitates internet access to the web server. Although the purpose of a VPC (Virtual Private Cloud) is to create a private, secure environment, public subnets are used within the VPC for internet-facing resources.
You work in healthcare for an IVF clinic. You host an application on AWS, which allows patients to track their medication during IVF cycles. The application also allows them to view test results, which contain sensitive medical data. You have a regulatory requirement that the application is secure and you must use a firewall managed by AWS that enables control and visibility over VPC-to-VPC traffic and prevents the VPCs hosting your sensitive application resources from accessing domains using unauthorized protocols. What AWS service would support this?
1-The AWS Network Firewall infrastructure is managed by AWS, so you don’t have to worry about building and maintaining your own network security infrastructure. AWS Network Firewall’s stateful firewall can incorporate context from traffic flows, like tracking connections and protocol identification, to enforce policies such as preventing your VPCs from accessing domains using an unauthorized protocol. AWS Network Firewall gives you control and visibility of VPC-to-VPC traffic to logically separate networks hosting sensitive applications or line-of-business resources.
A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?
1-Your VPC includes a default security group. You can't delete this group; however, you can change the group's rules. The procedure is the same as modifying any other security group. For more information, see Adding, removing, and updating rules. Control traffic to your AWS resources using security groups - Amazon Virtual Private Cloud
A small company has nearly 200 users who already have AWS accounts in the company AWS environment. A new S3 bucket has been created which will need to allow roughly a third of all users access to sensitive information in the bucket. What is the most time efficient way to get these users access to the bucket?
1-Create an IAM group, add the users who need access, and attach a policy to the group granting access to the bucket. Managing permissions through a group is far more time efficient than attaching a policy to dozens of individual users, and users added to or removed from the group automatically gain or lose the access.
An international company has many clients around the world. These clients need to transfer gigabytes to terabytes of data quickly and on a regular basis to an S3 bucket. Which S3 feature will enable these long distance data transfers in a secure and fast manner?
3-Multipart upload allows you to upload a single object as a set of parts. After all parts of your object are uploaded, Amazon S3 then presents the data as a single object. With this feature you can create parallel uploads, pause and resume an object upload, and begin uploads before you know the total object size.
You might want to use Transfer Acceleration on a bucket for various reasons, including the following: You have customers that upload to a centralized bucket from all over the world. You transfer gigabytes to terabytes of data on a regular basis across continents. You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon S3. Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration - Amazon Simple Storage Service
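As a rough sketch of how Transfer Acceleration is turned on and used, the snippet below builds the request payload and derives the accelerate endpoint. The bucket name is a hypothetical placeholder; the `params` dict would be passed to boto3's `s3.put_bucket_accelerate_configuration(**params)`.

```python
# Hypothetical bucket used for worldwide uploads.
bucket = "global-uploads-example"

# Payload for s3.put_bucket_accelerate_configuration(**params).
params = {
    "Bucket": bucket,
    "AccelerateConfiguration": {"Status": "Enabled"},
}

# Once enabled, clients upload through the accelerate endpoint (which routes
# traffic over CloudFront edge locations) instead of the regional endpoint:
accelerate_endpoint = f"{bucket}.s3-accelerate.amazonaws.com"
print(accelerate_endpoint)
```

Existing clients need no code change beyond pointing at the accelerate endpoint.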
Your company is storing highly sensitive data in S3 Buckets. The data includes personal and financial information. An audit has determined that this data must be stored in a secured manner and any data stored in the buckets already or data coming into the buckets must be analyzed and alerts sent out flagging improperly stored data. Which AWS service can be used to meet this requirement?
4-Amazon Macie is a fully-managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as personally identifiable information (PII). Macie’s alerts, or findings, can be searched and filtered in the AWS Management Console and sent to Amazon CloudWatch Events for easy integration with existing workflow or event management systems, or to be used in combination with AWS services, such as AWS Step Functions to take automated remediation actions. Reference - Sensitive Data Discovery and Protection - Amazon Macie - AWS
You are managing S3 buckets in your organization. One of the buckets in your organization has gotten some bizarre uploads and you would like to be aware of these types of uploads as soon as possible. Because of that, you configure event notifications for this bucket. Which of the following is NOT a supported destination for event notifications?
4-SES is NOT a supported destination for S3 event notifications. The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. Amazon S3 can send event notification messages to the following destinations. You specify the ARN value of these destinations in the notification configuration.
- Publish event messages to an Amazon Simple Notification Service (Amazon SNS) topic
- Publish event messages to an Amazon Simple Queue Service (Amazon SQS) queue Note that if the destination queue or topic is SSE enabled, Amazon S3 will need access to the associated AWS Key Management Service (AWS KMS) customer master key (CMK) to enable message encryption.
- Publish event messages to AWS Lambda by invoking a Lambda function and providing the event message as an argument Amazon S3 Event Notifications - Amazon Simple Storage Service
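A minimal sketch of a notification configuration covering the three supported destination types listed above. All ARNs are hypothetical placeholders; the dict would be passed to boto3's `s3.put_bucket_notification_configuration(Bucket=..., NotificationConfiguration=config)`.

```python
# One configuration block per supported destination type. Note there is no
# key for SES -- it is not a supported destination.
config = {
    "TopicConfigurations": [  # Amazon SNS topic
        {"TopicArn": "arn:aws:sns:us-east-1:111122223333:upload-alerts",
         "Events": ["s3:ObjectCreated:*"]},
    ],
    "QueueConfigurations": [  # Amazon SQS queue
        {"QueueArn": "arn:aws:sqs:us-east-1:111122223333:upload-queue",
         "Events": ["s3:ObjectRemoved:*"]},
    ],
    "LambdaFunctionConfigurations": [  # AWS Lambda function
        {"LambdaFunctionArn":
            "arn:aws:lambda:us-east-1:111122223333:function:inspect-upload",
         "Events": ["s3:ObjectCreated:Put"]},
    ],
}
print(sorted(config))
```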
The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS network team. One of your first assignments is to review the subnets in the main VPCs. You have recommended that the company add some private subnets and segregate databases from public traffic. What differentiates a public subnet from a private subnet?
2-A public subnet is a subnet that's associated with a route table that has a route to an internet gateway. Reference: VPC with public and private subnets (NAT) - Overview.
You work for an organization that has multiple AWS accounts in multiple regions and multiple applications. You have been tasked with making sure that all your firewall rules across these multiple accounts and regions are consistent. You need to do this as quickly and efficiently as possible. Which AWS service would help you achieve this?
1-AWS Firewall Manager is a security management service that provides a single pane of glass, allowing you to centrally set up and manage firewall rules across multiple AWS accounts and applications in AWS Organizations.
You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. Which statement is true regarding subnets and NACLs?
2-Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL. Control traffic to subnets using network ACLs - Amazon Virtual Private Cloud
You work for an online education company that offers a 7-day unlimited access free trial for all new users. You discover that someone has been taking advantage of this and has created a script to register a new user every time the 7-day trial ends. They also use this script to download large amounts of video files, which they then put up on popular pirate websites. You need to find a way to automate the detection of fraud like this using machine learning and artificial intelligence. Which AWS service would best suit this?
3-Amazon Fraud Detector is an AWS AI service that is built to detect fraud in your data.
A small software team is creating an application which will give subscribers real-time weather updates. The application will run on EC2 and will make several requests to AWS services such as S3 and DynamoDB. What is the best way to grant permissions to these other AWS services?
3-Create an IAM role in the following situations: You're creating an application that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance and that application makes requests to AWS. Don't create an IAM user and pass the user's credentials to the application or embed the credentials in the application. Instead, create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance. When an application uses these credentials in AWS, it can perform all of the operations that are allowed by the policies attached to the role. For details, see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.
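As a sketch of the first step, the trust policy below is what allows the EC2 service to assume such a role. The structure is the standard EC2 trust relationship; `json.dumps(trust_policy)` would be passed as `AssumeRolePolicyDocument` to boto3's `iam.create_role`, and the role is then attached to the instance through an instance profile, so no long-term keys are ever embedded in the application.

```python
import json

# Trust policy letting the EC2 service assume the role on the
# application's behalf and vend temporary credentials to the instance.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
print(json.dumps(trust_policy, indent=2))
```

Permissions for S3 and DynamoDB would then be granted by attaching permission policies to the role, not to any user.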
You have been evaluating the NACLs in your company. Most of the NACLs are configured the same:
100 All Traffic Allow
200 All Traffic Deny
* All Traffic Deny
How can the last rule (* All Traffic Deny) be edited?
2-The default network ACL is configured to allow all traffic to flow in and out of the subnets with which it is associated. Each network ACL also includes a rule whose rule number is an asterisk. This rule ensures that if a packet doesn't match any of the other numbered rules, it's denied. You can't modify or remove this rule.
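A small pure-Python sketch of the evaluation order described above: numbered rules are checked in ascending order, the first match wins, and the immutable `*` rule denies anything no numbered rule matched. The rule/traffic representation here is simplified for illustration.

```python
def evaluate_nacl(rules, traffic_protocol):
    """rules: list of (number, protocol, action) tuples.

    Returns 'allow' or 'deny' for the given traffic protocol.
    """
    for number, protocol, action in sorted(rules):
        if protocol in ("all", traffic_protocol):
            return action  # first matching numbered rule wins
    return "deny"  # the catch-all "*" rule: unmatched traffic is denied

# The example NACL above: rule 100 matches everything first, so rule 200
# and the "*" rule are never reached.
rules = [(100, "all", "allow"), (200, "all", "deny")]
print(evaluate_nacl(rules, "tcp"))
print(evaluate_nacl([], "tcp"))  # no numbered rules: "*" denies
```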
A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. How many security groups can be attached to an EC2 instance?
4-A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. If you launch an instance using the Amazon EC2 API or a command-line tool and you don't specify a security group, the instance is automatically assigned to the default security group for the VPC. If you launch an instance using the Amazon EC2 console, you have an option to create a new security group for the instance. For each security group, you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic. This section describes the basic things that you need to know about security groups for your VPC and their rules. Control traffic to your AWS resources using security groups - Amazon Virtual Private Cloud
A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create the EC2 instance which will host their web application. They finish the configuration by making the application accessible from the Internet. The second subnet has an instance hosting a smaller, secondary application. But this application is not currently accessible from the Internet. What could be potential problems?
1-2-To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:
- Attach an internet gateway to your VPC.
- Add a route to your subnet's route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it's known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it's known as a private subnet.
- Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
- Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Connect to the internet using an internet gateway - Amazon Virtual Private Cloud
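The first two steps above can be sketched as API payloads. All IDs are hypothetical placeholders; each dict would be passed to the boto3 EC2 call named in its comment.

```python
# Step 1: ec2.attach_internet_gateway(**attach_params)
attach_params = {
    "InternetGatewayId": "igw-0abc1234567890def",
    "VpcId": "vpc-0123456789abcdef0",
}

# Step 2: ec2.create_route(**route_params) -- adding this route to the
# subnet's route table is what makes the subnet "public".
route_params = {
    "RouteTableId": "rtb-0fedcba9876543210",
    "DestinationCidrBlock": "0.0.0.0/0",  # all internet-bound traffic
    "GatewayId": "igw-0abc1234567890def",
}
print(route_params["DestinationCidrBlock"])
```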
Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to SSH into these instances. One instance in particular has been a problem and you cannot SSH into this instance. What should you check first to troubleshoot this issue?
1-A rule that allows access to TCP port 22 (SSH) from your home IP address enables you to SSH into the instances associated with the security group. AWS Documentation: Security group rules.
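A sketch of the rule to look for. The group ID and home IP are hypothetical placeholders; the dict would be passed to boto3's `ec2.authorize_security_group_ingress(**params)` if the rule is missing.

```python
# Ingress rule: TCP port 22 (SSH) from a single home IP address only.
params = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 22,  # SSH
        "ToPort": 22,
        "IpRanges": [{
            "CidrIp": "203.0.113.25/32",  # /32 = this one address only
            "Description": "SSH from home",
        }],
    }],
}
print(params["IpPermissions"][0]["FromPort"])
```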
The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?
2-Each subnet must be associated with a route table, which specifies the allowed routes for outbound traffic leaving the subnet. Every subnet that you create is automatically associated with the main route table for the VPC. You can change the association, and you can change the contents of the main route table.
Reference: Subnet routing
4-When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones.
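The "subset of the VPC CIDR block" requirement can be checked with the standard-library ipaddress module; the CIDR blocks below are example values.

```python
from ipaddress import ip_network

vpc = ip_network("10.0.0.0/16")
subnet_a = ip_network("10.0.1.0/24")  # valid: falls inside the VPC block
subnet_b = ip_network("10.1.0.0/24")  # invalid: outside the VPC block

print(subnet_a.subnet_of(vpc))  # True
print(subnet_b.subnet_of(vpc))  # False
```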
Reference: VPC and subnet basics
You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. What is true about the default network ACL?
2-The default network ACL is configured to allow all traffic to flow in and out of the subnets with which it is associated. You are able to add and remove your own rules from the default network ACL. However, each network ACL also includes a rule whose rule number is an asterisk. This rule ensures that if a packet doesn't match any of the other numbered rules, it's denied. You can't modify or remove this rule. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#default-network-acl
You work for a company that needs to pursue a FedRAMP assessment and accreditation. They need to generate a FedRAMP Customer Package, which is a report designed to get accreditation. The report contains a number of sections, such as AWS East/West and GovCloud Executive Briefing, Control Implementation Summary (CIS), Customer Responsibility Matrix (CRM), and E-Authentication. You need this information as quickly as possible. Which AWS service should you use to find this information?
4-AWS Artifact is a single source you can visit to get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements.
A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create an EC2 instance to host a web application. There is a network ACL and a security group, which both have the proper ingress and egress to and from the internet. There is a route in the route table to the internet gateway. The EC2 instances added to the subnet need to have a globally unique IP address to ensure internet access. Which is not a globally unique IP address?
4-Public IPv4 address, elastic IP address, and IPv6 address are globally unique addresses. The IPv4 addresses known for not being unique are private IPs. These are found in the following ranges: from 10.0.0.0 to 10.255.255.255, from 172.16.0.0 to 172.31.255.255, and from 192.168.0.0 to 192.168.255.255. Reference: RFC1918.
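The three RFC1918 ranges listed above can be checked with the standard-library ipaddress module:

```python
from ipaddress import ip_address, ip_network

# The three RFC1918 private (non-globally-unique) ranges.
RFC1918 = [ip_network(c) for c in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr):
    """True if addr falls in any RFC1918 private range."""
    return any(ip_address(addr) in net for net in RFC1918)

print(is_rfc1918("172.31.255.255"))  # True: top of the 172.16/12 range
print(is_rfc1918("172.32.0.1"))      # False: just outside that range
print(is_rfc1918("54.239.28.85"))    # False: a public address
```

Note the 172.16.0.0/12 range ends at 172.31.255.255, which is why 172.32.x.x addresses are public.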
A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
2-4-Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
You are working for a large financial institution and have been tasked with creating a relational database solution to deal with a read-heavy workload. The database needs to be highly available within the Oregon region and quickly recover if an Availability Zone goes offline. Which of the following would you select to meet these requirements?
2-4-Multi-AZ creates a secondary database in another AZ within the region you are in. If something were to happen to the primary database, RDS would automatically fail over to the secondary copy. This allows your database to achieve high availability with minimal work on your part. Amazon RDS Multi AZ Deployments | Cloud Relational Database | Amazon Web Services
Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Using read replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Working with MySQL read replicas - Amazon Relational Database Service
A small development team with very limited AWS knowledge has begun the process of creating and deploying a new frontend application based on React within AWS. The application is simple and does not need any backend processing via traditional databases. The application does, however, require GraphQL interactions to complete the required processing of data. Which AWS service can the team use to complete this?
1-AWS AppSync offers a simplified GraphQL interface for development teams to use within AWS. Reference: What is AWS AppSync?
You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
2-Memory utilization is not available as an out-of-the-box metric in CloudWatch. You can, however, collect memory metrics by configuring a custom metric for CloudWatch. Types of custom metrics that you can set up include:
- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection
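A sketch of publishing memory utilization as a custom metric. Namespace, instance ID, and value are hypothetical; the dict would be passed to boto3's `cloudwatch.put_metric_data(**params)`. In practice the CloudWatch agent collects and publishes these metrics for you.

```python
# Payload for cloudwatch.put_metric_data(**params): one custom data point
# for memory utilization, dimensioned by instance ID.
params = {
    "Namespace": "Custom/System",  # custom metrics live outside AWS/* namespaces
    "MetricData": [{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId",
                        "Value": "i-0123456789abcdef0"}],
        "Value": 73.5,  # percent of memory currently in use
        "Unit": "Percent",
    }],
}
print(params["MetricData"][0]["MetricName"])
```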
Your application is housed on an Auto Scaling Group of EC2 instances. The application is backed by the Multi-AZ MySQL RDS database and an additional read replica. You need to simulate some failures for disaster recovery drills. Which event will not cause an RDS to perform a failover to the standby replica?
4-When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
You suspect that one of the AWS services your company is using has gone down. Which service can provide you proactive and transparent notifications about the status of your specific AWS environment?
3-AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources. The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues. AWS Health Dashboard
You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling groups. What step must you take to meet this requirement?
3-In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch configurations, launch templates provide more advanced Amazon EC2 configuration options. For example, you must use launch templates to use Amazon EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use. While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances.
You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that utilize launch configurations. Many of these launch configurations are similar yet have subtle differences. You’d like to use multiple versions of these launch configurations. An ideal approach would be to have a default launch configuration and then have additional versions that add additional features. Which option best meets these requirements?
1-A launch template is similar to a launch configuration, in that it specifies instance configuration information. Included are the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances. However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template. With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions. For example, you can create a default template that defines common configuration parameters and allow the other parameters to be specified as part of another version of the same template.
Launch templates - Amazon EC2 Auto Scaling
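The versioning workflow described above can be sketched as a request payload. The template ID and description are hypothetical; the dict would be passed to boto3's `ec2.create_launch_template_version(**new_version)`.

```python
# Create version 2 on top of the default version 1, overriding only what
# changes -- the rest of the configuration is inherited from SourceVersion.
new_version = {
    "LaunchTemplateId": "lt-0123456789abcdef0",
    "SourceVersion": "1",  # start from the default version's parameters
    "VersionDescription": "adds detailed monitoring",
    "LaunchTemplateData": {
        "Monitoring": {"Enabled": True},  # the only overridden parameter
    },
}
print(new_version["SourceVersion"])
```

An Auto Scaling group can then pin a specific version, or track `$Latest` or `$Default`.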
A gaming company is creating an application which simply provides a leaderboard for specific games. The leaderboard will use DynamoDB for data, and simply needs to be updated in near real-time. An EC2 instance will be configured to house the application which will be accessed by subscribers from the Internet. Which step is NOT necessary for internet traffic to flow to and from the Internet?
4-The application needs to be able to communicate with the DynamoDB table, but this has nothing to do with the necessary steps for internet traffic flow to and from the application instance.
You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?
3-You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach). Here are some best practices for configuring network interfaces:
- You can detach secondary network interfaces when the instance is running or stopped. However, you can't detach the primary network interface.
- You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.
- When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.
- Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance.
- A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
- Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance.
- If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address.
Elastic network interfaces - Amazon Elastic Compute Cloud
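A hot attach can be sketched as a request payload. The ENI and instance IDs are hypothetical; the dict would be passed to boto3's `ec2.attach_network_interface(**params)` while the target instance is running.

```python
# Hot attach: attaching a secondary ENI to a running instance.
# DeviceIndex 0 is the primary interface, so a secondary ENI uses 1+.
params = {
    "NetworkInterfaceId": "eni-0123456789abcdef0",
    "InstanceId": "i-0fedcba9876543210",
    "DeviceIndex": 1,  # secondary interface slot
}
print(params["DeviceIndex"])
```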
Jamal recently joined a small company as a Site Reliability Engineer on the cloud development team. The team leverages numerous AWS Lambda functions with several backend AWS resources, as well as other backend microservices. A recent update to some of the different functions' code has begun to cause massive delays within the application workloads. The development initially turned on more detailed logging within their code base; however, this did not provide the application insights required to troubleshoot the issue. What can Jamal do to more easily gain a better understanding of the response times of the affected AWS Lambda functions, as well as all the connected downstream resources within the entire application flow?
4-AWS X-Ray collects data about requests that your application serves and helps gain insights into that data to identify issues and opportunities for optimization. AWS Lambda integrates easily with AWS X-Ray by toggling the feature on within the function configuration. Reference: Scorekeep diagram
After several issues with your application and unplanned downtime, your recommendation to migrate your application to AWS is approved. You have set up high availability on the front end with a load balancer and an Auto Scaling Group. What step can you take with your database to configure high-availability and ensure minimal downtime (under five minutes)?
2-In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ. The time it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds. However, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the RDS console to reflect the new Availability Zone. Note that large transactions could push recovery beyond five minutes, but Multi-AZ is clearly the best of the available choices for meeting this requirement. Move through exam questions quickly, but always evaluate all the answers for the best possible solution.
Configuring and managing a Multi-AZ deployment - Amazon Relational Database Service
An accounting company has big data applications for analyzing actuary data. The company is migrating some of its services to the cloud, and for the foreseeable future, will be operating in a hybrid environment. They need a storage service that provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Which AWS service can meet these requirements?
4-Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS offers two storage classes: the Standard storage class and the Infrequent Access storage class (EFS IA). EFS IA provides price/performance that's cost-optimized for files not accessed every day. By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA. Amazon EFS
Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?
3-An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the attached volumes. By default, the root volume for an AMI backed by Amazon EBS is deleted when the instance terminates. You can change the default behavior to ensure that the volume persists after the instance terminates. To change the default behavior, set the DeleteOnTermination attribute to false using a block device mapping.
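A minimal sketch of such a block device mapping follows; the device name `/dev/xvda` is the common root device for Amazon Linux AMIs, but confirm it against your AMI. With the AWS CLI, this JSON would be passed via `--block-device-mappings`.

```python
import json

# Block device mapping that keeps the root EBS volume after the instance
# terminates, so its data persists independently of the instance lifetime.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",               # root device (varies by AMI)
        "Ebs": {"DeleteOnTermination": False},   # volume survives termination
    }
]

print(json.dumps(block_device_mappings, indent=2))
```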
A database outage has been very costly to your organization. You have been tasked with configuring a more highly-available architecture. The main requirement is that the chosen architecture needs to meet an aggressive RTO in case of disaster. You have decided to use an Amazon RDS for MySQL Multi-AZ deployment. How is the replication handled for Amazon RDS for MySQL with a Multi-AZ configuration?
4-In a Multi-AZ DB instance deployment, Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance. It can also help protect your databases against DB instance failure and Availability Zone disruption. AWS Documentation: Multi-AZ DB instance deployments.
A company has a great deal of data in S3 buckets for which they want to create a database. Creating the RDS database, normalizing the data, and migrating to the RDS database will take time and is the long-term plan. But there's an immediate need to query this data to retrieve information necessary for an audit. Which AWS service will enable querying data in S3 using standard SQL commands?
1-Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you only pay for the queries you run.
Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets. Interactive SQL - Serverless Query Service - Amazon Athena - AWS
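To illustrate how little setup Athena needs, the DDL and query below are hypothetical (the table, columns, and S3 bucket are invented for this example); Athena queries the files in place, so nothing is loaded or migrated.

```python
# Hypothetical Athena DDL: define a schema over JSON files already in S3,
# then query them with standard SQL. All names here are invented.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS audit_logs (
    event_time string,
    user_id    string,
    action     string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-audit-bucket/logs/';
"""

query = "SELECT user_id, COUNT(*) AS events FROM audit_logs GROUP BY user_id;"

assert "EXTERNAL TABLE" in ddl and query.startswith("SELECT")
```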
Your company uses IoT devices installed in businesses to provide those businesses with real-time data for analysis. You have decided to use Amazon Kinesis Data Firehose to stream the data to multiple backend storage services for analytics. Which listed service is not a viable destination for the real-time data stream?
3-Amazon Athena is correct because Amazon Kinesis Data Firehose cannot load streaming data to Athena. Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. Amazon Kinesis Data Firehose - Streaming Data Pipeline - Amazon Web Services
You work for an online school that teaches IT by recording their screen and narrating what they are doing. The school is becoming quite popular, and you need to convert the video files into many different formats to support various laptops, tablets, and mobile devices. Which AWS service should you consider using?
2-Amazon Elastic Transcoder allows businesses and developers to convert media files from their original source format into versions that are optimized for various devices, such as smartphones, tablets, and PCs.
A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data such as a Doppler radar system which samples the position of the baseball 2000 times per second. Which AWS data storage can meet these requirements?
2-Amazon DynamoDB is a NoSQL database that supports key-value and document data models, and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second. DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases. Amazon DynamoDB Features | NoSQL Key-Value Database | Amazon Web Services
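A sketch of a DynamoDB item for such radar samples follows; the table design and attribute names are hypothetical. Keying on the pitch and a microsecond timestamp spreads high-speed writes across partitions while keeping each pitch's samples sorted. With boto3, this dict would be passed to `table.put_item(Item=...)`.

```python
# Hypothetical key-value/document item for a 2000-samples-per-second radar feed.
item = {
    "pitch_id": "2024-06-01#P123",     # partition key: groups samples per pitch
    "sample_ts_us": 1717245000000123,  # sort key: microsecond timestamp
    "position": {"x": 1.2, "y": 0.8, "z": 17.4},  # nested document attribute
}

assert {"pitch_id", "sample_ts_us"} <= item.keys()
```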
You have just started work at a small startup in the Seattle area. Your first job is to help containerize your company's microservices and move them to AWS. The team has selected ECS as their orchestration service of choice. You've discovered the code currently uses access keys and secret access keys in order to communicate with S3. How can you best handle this authentication for the newly containerized application?
1-It's always a good idea to use roles over hard-coded credentials. One of the best parts of using ECS is the ease of attaching roles to your containers. This allows the container to have an individual role even if it's running with other containers on the same EC2 instance. Task definition parameters - Amazon Elastic Container Service
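A minimal task definition fragment showing the idea follows; the role ARN and image are hypothetical. Containers in the task receive temporary credentials for the role from the ECS agent, so no access keys appear anywhere in the code or configuration.

```python
# Minimal ECS task definition fragment using a task role instead of
# hard-coded access keys. The role ARN and image are hypothetical.
task_definition = {
    "family": "report-service",
    "taskRoleArn": "arn:aws:iam::123456789012:role/reportS3ReadRole",  # hypothetical role
    "containerDefinitions": [
        {"name": "report", "image": "example/report:latest", "memory": 256}
    ],
}

assert "taskRoleArn" in task_definition  # credentials come from the role, not keys
```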
A large, big-box hardware chain is setting up a new inventory management system. They have developed a system using IoT sensors which captures the removal of items from the store shelves in near real-time and want to use this information to update their inventory system. The company wants to analyze this data in the hopes of being ahead of demand and properly managing logistics and delivery of in-demand items.
Which AWS service can be used to capture this data as close to real-time as possible, while being able to both transform and load the streaming data into Amazon S3 or Elasticsearch?
1-Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near-real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully-managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. Amazon Kinesis Data Firehose - Streaming Data Pipeline - Amazon Web Services
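As an illustration, the parameters below sketch a Firehose delivery stream with an S3 destination; the stream name, role, and bucket ARNs are hypothetical. With boto3, these would be keyword arguments to `firehose_client.create_delivery_stream(...)`.

```python
# Illustrative Kinesis Data Firehose delivery stream parameters: buffer
# incoming sensor records, compress them, and land them in S3. Names/ARNs
# are hypothetical.
create_params = {
    "DeliveryStreamName": "shelf-sensor-stream",
    "ExtendedS3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::123456789012:role/firehoseS3Role",
        "BucketARN": "arn:aws:s3:::example-inventory-bucket",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},  # near-real-time batching
        "CompressionFormat": "GZIP",  # compress before landing in S3
    },
}

assert create_params["ExtendedS3DestinationConfiguration"]["CompressionFormat"] == "GZIP"
```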
You have been tasked with reviewing your company's disaster recovery plan due to some new requirements. The driving factor is that the Recovery Time Objective has become very aggressive. Because of this, it has been decided to configure Multi-AZ deployments for the RDS MySQL databases. Unrelated to DR, it has been determined that some read traffic needs to be offloaded from the master database. What step can be taken to meet this requirement?
3-Amazon RDS Read Replicas for MySQL and MariaDB now support Multi-AZ deployments. Combining Read Replicas with Multi-AZ enables you to build a resilient disaster recovery strategy and simplify your database engine upgrade process. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed. Amazon RDS Read Replicas Now Support Multi-AZ Deployments.
You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?
2-Only the Application Load Balancer can support path-based and host-based routing. Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
- Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
- Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
- Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
- Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
- Support for redirecting requests from one URL to another.
- Support for returning a custom HTTP response.
- Support for registering targets by IP address, including targets outside the VPC for the load balancer.
- Support for registering Lambda functions as targets.
- Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
- Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
- Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
- Access logs contain additional information and are stored in compressed format.
- Improved load balancer performance.
What is an Application Load Balancer? - Elastic Load Balancing
Network Traffic Distribution – Elastic Load Balancing FAQs – Amazon Web Services
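The two routing types from the list above can be sketched as listener rules; the target group ARNs and hostname below are hypothetical. With boto3, each dict maps to a call to `elbv2_client.create_rule(...)`.

```python
# Hypothetical ALB listener rules: one path-based, one host-based.
# ARNs and the domain are invented for illustration.
rules = [
    {   # path-based: requests for /api/* forward to the API target group
        "Conditions": [{"Field": "path-pattern", "Values": ["/api/*"]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:example:targetgroup/api"}],
        "Priority": 10,
    },
    {   # host-based: claims.example.com forwards to the claims target group
        "Conditions": [{"Field": "host-header", "Values": ["claims.example.com"]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:example:targetgroup/claims"}],
        "Priority": 20,
    },
]

assert {r["Conditions"][0]["Field"] for r in rules} == {"path-pattern", "host-header"}
```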
You have multiple EC2 instances housing applications in a VPC in a single Availability Zone. Your EC2 workloads need low-latency network performance, high network throughput, and tightly-coupled node-to-node communication. What is the best measure you can take to ensure this throughput?
1-A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region. Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network. Reference: Placement groups.
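As a sketch, launching into a cluster placement group is one parameter on EC2 RunInstances; the group must exist first (CreatePlacementGroup with strategy "cluster"), and the AMI ID, group name, and instance type below are hypothetical.

```python
# Illustrative EC2 RunInstances parameters placing four instances in a
# cluster placement group for low-latency, high-throughput traffic.
run_params = {
    "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI
    "InstanceType": "c5n.9xlarge",        # network-optimized type suits this workload
    "MinCount": 4,
    "MaxCount": 4,
    "Placement": {"GroupName": "hpc-cluster"},  # pre-created cluster placement group
}

assert run_params["Placement"]["GroupName"] == "hpc-cluster"
```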
You work for an oil and gas company as a lead in data analytics. The company is using IoT devices to better understand their assets in the field (for example, pumps, generators, valve assemblies, and so on). Your task is to monitor the IoT devices in real-time to provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. What tool can you use to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks?
4-Monitoring IoT devices in real-time can provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. You can track time series data on device connectivity and activity. This insight can help you react quickly to changing conditions and emerging situations. Amazon Web Services (AWS) offers a comprehensive set of powerful, flexible, and simple-to-use services that enable you to extract insights and actionable information in real time. Amazon Kinesis is a platform for streaming data on AWS, offering key capabilities to cost-effectively process streaming data at any scale. Kinesis capabilities include Amazon Kinesis Data Analytics, the easiest way to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks. Overview - Real-Time IoT Device Monitoring with Kinesis Data Analytics
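A hypothetical Kinesis Data Analytics SQL application for such pump telemetry might look like the following; the stream and column names are invented, and the SQL is a sketch of the in-application stream/pump pattern rather than a tested application.

```python
# Hypothetical Kinesis Data Analytics (SQL) application code: a one-minute
# tumbling-window average of pump pressure per device. Names are invented.
sql_app = """
CREATE OR REPLACE STREAM "PUMP_STATS" (device_id VARCHAR(32), avg_psi DOUBLE);

CREATE OR REPLACE PUMP "STATS_PUMP" AS
  INSERT INTO "PUMP_STATS"
  SELECT STREAM device_id, AVG(pressure_psi)
  FROM "SOURCE_SQL_STREAM_001"
  GROUP BY device_id,
           STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '1' MINUTE);
"""

assert "GROUP BY" in sql_app  # standard SQL, no new framework to learn
```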
You work for a security company that manufactures doorbells with cameras built in. They are designing an application so that when people ring the doorbell, the camera will activate and stream video from the doorbell to the user's mobile device. You need to implement an AWS service to handle the streaming of potentially millions of devices, which you will then run analytics and other processing on the streams. Which AWS service would best suit this?
4-Amazon Kinesis Video Streams is used to stream media content from a large number of devices to AWS and then run analytics, machine learning, playback, and other processing.
Your company has asked you to look into some latency issues with the company web app. The application is backed by an AWS RDS database. Your analysis has determined that the requests made of the application are very read heavy, and this is where improvements can be made. Which service can you use to store frequently accessed data in-memory?
4-Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. There are two types of ElastiCache available: Memcached and Redis. Here is a good overview and comparison between them: Redis vs. Memcached | AWS
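The usual pattern with ElastiCache is cache-aside, sketched below with a plain dict standing in for a Redis or Memcached client and a function standing in for the slow RDS read; real code would use a cache client and set a TTL on each entry.

```python
# Cache-aside sketch: check the in-memory cache first, fall back to the
# database on a miss, then populate the cache for subsequent reads.
cache = {}

def slow_db_query(key):
    return f"row-for-{key}"  # stand-in for a disk-based RDS read

def get_with_cache(key):
    if key in cache:             # cache hit: served from memory
        return cache[key]
    value = slow_db_query(key)   # cache miss: go to the database
    cache[key] = value           # populate so later reads skip the database
    return value

assert get_with_cache("user:42") == "row-for-user:42"
assert "user:42" in cache  # a second read now comes from memory
```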
A team member has been tasked to configure four EC2 instances for four separate applications. These are not high-traffic apps, so there is no need for an Auto Scaling group. The instances are all in the same public subnet, each instance has an Elastic IP (EIP) address, and all of the instances have the same security group. But none of the instances can send or receive internet traffic. You verify that all the instances have a public IP address. You also verify that an internet gateway has been configured. What is the most likely issue?
4-The question details all of the configuration needed for internet access, except for a route to the IGW in the route table. This is definitely a key step in any checklist for internet connectivity. It is quite possible to have a subnet with the 'Public' attribute set but no route to the internet in the assigned route table. (Test it yourself.) This may have been a setup error, or someone may have altered the shared route table for a special case instead of creating a new route table for the special case.
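The missing piece is a default route to the internet gateway in the subnet's route table; the parameters below sketch it (the resource IDs are hypothetical). With boto3, these map to `ec2_client.create_route(...)`.

```python
# Illustrative CreateRoute parameters: send all non-local traffic from the
# subnet's route table to the internet gateway. IDs are hypothetical.
route_params = {
    "RouteTableId": "rtb-0abc1234",        # route table associated with the subnet
    "DestinationCidrBlock": "0.0.0.0/0",   # all traffic not matched by the local route
    "GatewayId": "igw-0def5678",           # the VPC's internet gateway
}

assert route_params["DestinationCidrBlock"] == "0.0.0.0/0"
```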
Bill is a cloud solutions architect for a small technology startup company. The company started out completely on-premises, but Bill has finally convinced them to explore shifting their application to AWS. The application is fairly complex and leverages message brokers that communicate using AMQP 1.0 protocols to exchange data between nodes and complete workloads.
Which service should Bill use to design the new AWS cloud-based architecture?
4-Amazon MQ offers a managed broker service in AWS. It is meant for applications that need a specific message broker like RabbitMQ and ActiveMQ, as well as very specific messaging protocols (AMQP, STOMP, OpenWire, WebSocket, and MQTT) and frameworks.
Reference: Amazon MQ
A pharmaceutical company has begun to explore using AWS cloud services for their computation workloads for processing incoming orders. Currently, they process orders on-premises using self-managed virtual machines with batch software installed. The current infrastructure design does not scale well and is cumbersome to update. In addition, each processed batch job takes roughly 30-45 minutes to complete. The processing times cannot be reduced due to the complexity of the application code, and they want to make the new solution as hands-off as possible with automatic scaling based on the number of queued orders.
Which AWS service would you recommend they use for this application design that best meets their needs and is cost optimized?
4-AWS Batch is perfect for long-running (>15 minutes) batch computation workloads within AWS while leveraging managed compute infrastructure. It automatically provisions compute resources and then optimizes workload distribution based on the quantity and scale of your workloads.
Reference: AWS Batch
Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After you investigate the existing architecture, you find that (on average) the backend resources are processing about 50,000 requests per second and will need something that supports their extreme level of message processing. It's also important that each request is processed only once. What can you do to decouple these resources?
3-This would be a great choice, as SQS Standard can handle this level of extreme performance. If the application didn't require this level of performance, then SQS FIFO would be the better and easier choice. Quotas related to messages - Amazon Simple Queue Service
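Because SQS Standard delivers messages at least once, consumers typically make processing idempotent so a redelivered message is only acted on a single time. The sketch below uses a set as a stand-in for a deduplication store (in a real deployment this might be a DynamoDB table).

```python
# Idempotent consumer sketch for SQS Standard's at-least-once delivery:
# track processed message IDs so duplicates are ignored.
processed_ids = set()

def handle_message(message_id, body):
    if message_id in processed_ids:
        return False              # duplicate delivery: skip processing
    processed_ids.add(message_id)
    # ... process body here (e.g., fulfil the order) ...
    return True

assert handle_message("m-1", "order#1001") is True
assert handle_message("m-1", "order#1001") is False  # redelivery is ignored
```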
A company is running a teaching application which is consumed by users all over the world. The application is translated into 5 different languages. All of these language files need to be stored somewhere that is highly-durable and can be accessed frequently. As content is added to the site, the storage demands will grow by a factor of five, so the storage must be highly-scalable as well. Which storage option will be highly-durable, cost-effective, and highly-scalable?
3-Glacier can be very cheap, but as you read a question, try to compile a complete list of the requirements given. One of those requirements is frequent access, which eliminates Glacier. Amazon S3 Standard meets all three requirements: it is highly durable, highly scalable, and cost-effective for frequently accessed data.
You have been tasked with migrating an application and the servers it runs on to the company AWS cloud environment. You have created a checklist of steps necessary to perform this migration. A subsection in the checklist is security considerations. One of the things that you need to consider is the shared responsibility model. Which option does AWS handle under the shared responsibility model?
3-Security and compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment.
AWS responsibility “Security of the Cloud”: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
Shared Responsibility Model - Amazon Web Services (AWS)
A software company is looking for compute capacity in the cloud for a fault-tolerant and flexible application. The application is not mission-critical, so occasional downtime is acceptable. What type of EC2 servers can be used to meet these requirements at the lowest cost?
3-Spot Instances are the lowest-cost option for fault-tolerant, flexible workloads that can tolerate interruptions, offering discounts of up to 90% compared to On-Demand pricing. Because AWS can reclaim Spot capacity when it is needed elsewhere, Spot is suited to applications that are not mission-critical and can absorb occasional downtime.
You are put in charge of your company’s Disaster Recovery planning. As part of this plan, you intend to create all of the company infrastructure with CloudFormation templates. The templates can then be saved in another region and used to launch a new environment in case of disaster. What determines the costs associated with CloudFormation templates?
1-There is no additional charge for using AWS CloudFormation with resource providers in the following namespaces: AWS::, Alexa::, and Custom::*. In this case you pay for AWS resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS CloudFormation as if you created them manually. You only pay for what you use, as you use it; there are no minimum fees and no required upfront commitments. When you use resource providers with AWS CloudFormation outside the namespaces mentioned above, you incur charges per handler operation. Handler operations are create, update, delete, read, or list actions on a resource.
Provision Infrastructure As Code – AWS CloudFormation Pricing – Amazon Web Services
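To make the pricing point concrete, the minimal template below (expressed as a Python dict; the AMI ID is hypothetical) costs nothing itself; only the EC2 instance it provisions is billed, exactly as if it were launched manually.

```python
import json

# Minimal CloudFormation template as a dict: the template is free to use;
# you pay only for the resources it creates. The AMI ID is hypothetical.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",   # this instance is the only billed item
            "Properties": {"ImageId": "ami-0123456789abcdef0", "InstanceType": "t3.micro"},
        }
    },
}

print(json.dumps(template, indent=2))
```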
Your company is storing stack traces for application errors in an S3 Bucket. The engineers using these stack traces review them when addressing application issues. It has been decided that the files only need to be kept for four weeks then they can be purged. How can you meet this requirement in S3?
3-To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:
Transition actions define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.
Expiration actions define when objects expire. Amazon S3 deletes expired objects on your behalf.
The lifecycle expiration costs depend on when you choose to expire objects.
Managing your storage lifecycle - Amazon Simple Storage Service
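An expiration rule meeting the four-week requirement can be sketched as follows; the bucket prefix is hypothetical, and with boto3 the dict would be passed to `put_bucket_lifecycle_configuration(LifecycleConfiguration=...)`.

```python
# Lifecycle configuration that deletes stack-trace objects 28 days (four
# weeks) after creation. The prefix is hypothetical.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "purge-stack-traces",
            "Filter": {"Prefix": "stack-traces/"},  # scope the rule to these objects
            "Status": "Enabled",
            "Expiration": {"Days": 28},             # four weeks, then S3 deletes them
        }
    ]
}

assert lifecycle_configuration["Rules"][0]["Expiration"]["Days"] == 28
```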
Your application team has been approved to create a new machine learning application over the next two years. You intend to leverage numerous Amazon SageMaker instances and components to back your application. Your manager is worried about the cost potential of the services involved.
How could you maximize your savings opportunities for the Amazon SageMaker service?