AWS Certified Solutions Architect - Associate (SAA-C03) - Exam 3

You work at a mortgage brokerage firm in New York City. An intern has recently joined the company and you discover that they have been storing customer data in public S3 buckets. Because the company uses so many different S3 buckets, you need to identify a quick and efficient way to discover what personally identifiable information (PII) is being stored in S3. Which AWS service should you use?

  1. Amazon Inspector (a vulnerability management service that continuously scans your AWS workloads for software vulnerabilities and unintended network exposure)
  2. Amazon Athena
  3. AWS Trusted Advisor
  4. Amazon Macie (continually evaluates your Amazon S3 environment and provides a summary of your data security posture across all of your accounts. You can search, filter, and sort S3 buckets by metadata variables such as bucket name, tags, and security controls like encryption status or public accessibility.)

4-Amazon Macie is a quick and efficient way to discover what personally identifiable information (PII) is being stored in S3.

Amazon Athena is a service used to run SQL queries against data in S3 and would not automatically detect PII without first writing those SQL queries.

A small startup is beginning to configure IAM for their organization. The user logins have been created and now the focus will shift to the permissions to grant to those users. An admin starts creating identity-based policies. To which item can an identity-based policy not be attached?

  1. groups
  2. users
  3. roles
  4. resources

4-Identity-based policies cannot be attached to resources; resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys. For a list of services that support resource-based policies, see AWS services that work with IAM. Reference: Identity-based policies and resource-based policies.
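The distinction above can be illustrated with two minimal policy documents, built here as Python dicts. The bucket name, account ID, and user name are placeholders, not values from the question:

```python
import json

# An identity-based policy: attached to an IAM user, group, or role,
# never directly to a resource such as an S3 bucket.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# A resource-based policy lives on the resource itself, so it must add a
# Principal element saying WHO is allowed to use the resource.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/example-user"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(identity_policy, indent=2))
```

The identity-based policy has no `Principal` element because the principal is whatever user, group, or role it is attached to; the resource-based policy must name its principal explicitly.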

You are a solutions architect for an online gambling company. You notice a series of web-layer DDoS attacks coming from a large number of different IP addresses. In order to mitigate these web-layer DDoS attacks, you have been asked to implement a rule capable of blocking all IPs that have made more than 2,000 requests in the last 5-minute interval. What should you do?

  1. Update your VPC's network access control list (NACL) and block access to the IP addresses as and when they come in
  2. Create a standard rule on your AWS WAF and associate the web access control list (ACL) to the Application Load Balancer
  3. Use AWS Trusted Advisor to filter the traffic
  4. Create a rate-based rule on your AWS WAF and associate the web access control list (ACL) to the Application Load Balancer

4-A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span. You can use this type of rule to put a temporary block on requests from an IP address that's sending excessive requests. AWS Documentation: Rate-based rule statement.
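As a rough sketch, the rule described above would look like the following statement supplied to the WAFv2 `CreateWebACL` or `UpdateWebACL` API; the rule name and metric name here are placeholders:

```python
# A WAFv2 rate-based rule: blocks any originating IP whose request rate
# exceeds the limit over the trailing 5-minute window.
rate_rule = {
    "Name": "BlockHighVolumeIPs",      # placeholder rule name
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,             # max requests per 5-minute window per IP
            "AggregateKeyType": "IP",  # track the rate per originating IP address
        }
    },
    "Action": {"Block": {}},           # block IPs over the limit
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockHighVolumeIPs",
    },
}

print(rate_rule["Statement"]["RateBasedStatement"])
```

The web ACL containing this rule is then associated with the Application Load Balancer, so the block applies before requests reach the instances.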

You have a secure web application hosted on AWS using Application Load Balancers, Auto Scaling, and a fleet of EC2 instances connected to an RDS database. You need to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances (via an authentication token). How can you achieve this?

  1. Using Active Directory federation via Amazon Inspector
  2. Using IAM database authentication
  3. Using IAM roles
  4. Using Amazon Cognito

2-IAM database authentication allows an RDS database to be accessed with a short-lived authentication token generated from the profile credentials specific to your EC2 instances, instead of a password.
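For illustration, the EC2 instance profile's role would carry a policy permitting the `rds-db:connect` action; the Region, account ID, DB resource ID, and database user below are placeholders. The application then requests a short-lived token (for example with the SDK's `generate_db_auth_token` helper) instead of using a password:

```python
# Identity-based policy attached to the EC2 instance profile's role.
# It grants the IAM database authentication action for one DB user on
# one RDS instance (identified by its DbiResourceId, not its name).
rds_iam_auth_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": (
                "arn:aws:rds-db:us-east-1:111122223333:"  # placeholder Region/account
                "dbuser:db-ABCDEFGHIJKL/app_user"          # placeholder resource ID/user
            ),
        }
    ],
}

print(rds_iam_auth_policy["Statement"][0]["Action"])
```

With this in place, the database user is created with the `AWSAuthenticationPlugin` (MySQL) or `rds_iam` role (PostgreSQL), and no static password ever exists for it.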

Janelle works as a cloud solutions architect for a large enterprise that has begun the process of migrating to AWS for all of their application needs. The CTO and CISO have already decided that AWS Organizations is a required service for the multi-account environment that will be put into place. Janelle has been brought in to help solve the primary concern of member AWS accounts not following the required compliance rules set forth by the company. They want to both send alerts on configuration changes and prevent specific actions from occurring. Which solution would be the most efficient in solving this projected problem?

  1. Create individual AWS Config rules in each AWS account. Set up AWS Lambda functions in each AWS account to remediate any suspected drift.
  2. Create new AWS accounts using AWS Control Tower. Leverage the preventative and detective guardrails that come with it to prevent governance drift as well as send alerts on suspicious activities.
  3. Create a set of Global AWS Config rules that can cover all Regions in the management account that apply to the member accounts. Set up an AWS Lambda function in the management AWS account to alert an administrator when drift is detected.
  4. Install third-party SIEM software on Amazon EC2 instances in each account. Attach to them a Read-Only IAM instance profile within the respective account. Have them generate alerts for each flagged activity.

2-AWS Control Tower allows you to implement account governance and compliance enforcement for an AWS organization. It leverages SCPs for preventative guardrails and AWS Config for detective guardrails. Reference: What Is AWS Control Tower? | Guardrails in AWS Control Tower.

AWS Control Tower orchestrates and extends the capabilities of AWS Organizations. To help prevent your organization and accounts from drifting away from best practices, AWS Control Tower applies controls (sometimes called guardrails). For example, you can use controls to help ensure that security logs and the necessary cross-account access permissions are created and not altered.

You work for a Fintech company that is launching a new cryptocurrency trading platform hosted on AWS. Because of the nature of the cryptocurrency industry, you have been asked to implement a Cloud Security Posture Management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation. Which AWS service would meet this requirement?

  1. Amazon Inspector
  2. AWS Trusted Advisor
  3. AWS Security Hub
  4. Amazon GuardDuty (a threat detection service that continuously monitors for malicious activity and unauthorized behavior)

3-AWS Security Hub is a Cloud Security Posture Management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation.

You work for a startup that has recently been acquired by a large insurance company. As per the insurance company's internal security controls, you need to be able to monitor and record all API calls made in your AWS infrastructure. What AWS service should you use to achieve this?

  1. Amazon CloudWatch (a monitoring and management service that provides data and actionable insights for AWS, on-premises, hybrid, and other cloud applications and infrastructure resources)
  2. AWS Cloud Audit (a security auditing learning path designed for audit, risk, and compliance professionals who assess existing and prospective regulated workloads in the cloud)
  3. AWS CloudTrail (helps you perform operational and risk auditing, governance, and compliance checks on your AWS account. Actions taken by a user, role, or AWS service are recorded as events in CloudTrail.)
  4. AWS Trusted Advisor

3-AWS CloudTrail is used to monitor and record all API calls made in your AWS infrastructure.

A financial institution has begun using AWS services and plans to migrate as much of their IT infrastructure and applications to AWS as possible. The nature of the business dictates that strict compliance practices be in place. The AWS team has configured AWS CloudTrail to help meet compliance requirements and be ready for any upcoming audits. Which item is not a feature of AWS CloudTrail?

  1. Monitor Auto Scaling Groups and optimize resource utilization.
  2. Answer simple questions about user activity.
  3. Enables compliance.
  4. Track changes to resources.

1-Monitoring Auto Scaling Groups and optimizing resource utilization is a feature provided by CloudWatch, not CloudTrail.

CloudTrail provides visibility into user activity by recording actions taken on your account. CloudTrail records important information about each action, including who made the request, the services used, the actions performed, parameters for the actions, and the response elements returned by the AWS service. This information helps you to track changes made to your AWS resources and to troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards. Customers who need to track changes to resources, answer simple questions about user activity, demonstrate compliance, troubleshoot, or perform security analysis should use CloudTrail.

You work for an insurance company that uses an AWS web application to look up customers' credit scores. For security purposes, this web application cannot traverse the internet or leave the Amazon network. It needs to communicate to Amazon DynamoDB and Amazon S3 in a custom VPC. What networking technology should you implement to achieve this?

  1. Use AWS Direct Connect to connect directly to Amazon DynamoDB and Amazon S3.
  2. Use VPC endpoints to connect the AWS web application to Amazon DynamoDB and Amazon S3.
  3. Use AWS VPN CloudHub to connect the web application to Amazon DynamoDB and Amazon S3.
  4. Use AWS WAF to connect the web application to Amazon DynamoDB and Amazon S3.

2-You can access Amazon S3 or Amazon DynamoDB from your VPC using VPC endpoints. VPC endpoints enable resources within your VPC to access AWS services with no exposure to the public internet. Your AWS resources do not require public IP addresses, and you do not need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to control access to AWS services. Traffic between your VPC and the AWS service does not leave the Amazon network.
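A gateway endpoint can additionally carry an endpoint policy to restrict which actions and buckets are reachable through it. A minimal sketch, with a placeholder bucket name:

```python
# Endpoint policy for a gateway VPC endpoint to S3: only allows reads and
# writes to one application bucket, even though the endpoint itself could
# reach any S3 bucket in the Region.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppBucketOnly",
            "Effect": "Allow",
            "Principal": "*",  # evaluated together with IAM/bucket policies
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",  # placeholder bucket
        }
    ],
}

print(endpoint_policy["Statement"][0]["Sid"])
```

Gateway endpoints for S3 and DynamoDB work by adding an entry to the subnet route tables, so traffic to those services is routed inside the Amazon network rather than through an internet gateway.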

You have a serverless image sharing website that utilizes S3 to store high-quality images. Unfortunately, your competitors start linking to your website and borrowing your photos. How can you prevent this?

  1. Restrict public access to the bucket and turn on presigned URLs with expiry dates.
  2. Block the IP addresses of the websites using AWS WAF.
  3. Store the images in an RDS database and restrict access.
  4. Enable CloudFront on the website.

1-Restricting public access to the bucket stops anyone from hotlinking the images directly, and presigned URLs with expiry dates give your own users temporary, time-limited access to each object.

You work for an insurance company that stores a lot of confidential medical data. They are migrating to AWS and have an encryption requirement where you need to manage the hardware security modules (HSMs) that generate and store the encryption keys. You also create the symmetric keys and asymmetric key pairs that the HSM stores. Which AWS service should you use to meet these requirements?

  1. AWS CloudTrail
  2. AWS Key Management Service (KMS)
  3. AWS Trusted Key Advisor
  4. AWS CloudHSM

4-With AWS CloudHSM, you can generate both symmetric keys and asymmetric key pairs. You can also manage the HSM that generates and stores your encryption keys.

A recent audit of IT services deployed within many of the AWS Organization member accounts in your company has caused numerous remediation tasks for the SecOps team, as well as the member account owners. Post-remediation efforts, the CISO has asked you to identify a solution within AWS for preventing this from repeating. They would like you to instead find a way to allow end users in the accounts to deploy preapproved services within AWS to avoid them accidentally using the offending services. Which of the following is the optimal approach for this solution?

  1. Create approved CloudFormation templates containing the required services that are used throughout the organization. Send email templates out to the account owners, so they can reference them as needed.
  2. Create a CloudFormation Stack Set for each approved IT service. Have an organization administrator manually deploy these templates to the targeted accounts after approval.
  3. Create approved Terraform templates containing the required services that are used throughout the organization. Create a shared catalog within AWS Service Catalog, list the templates as products, and then share the catalog with your Organization.
  4. Create approved CloudFormation templates containing the required services that can be used throughout the organization. Load the templates to a shared catalog within AWS Service Catalog. List the templates as products, and then share the catalog with your Organization.

4-AWS Service Catalog enables organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.

AWS Service Catalog offers a way to control which services are being deployed to AWS accounts. You create CloudFormation templates that get uploaded to a catalog that you can share with an organization. End users can then use this catalog to deploy preapproved IT services into their AWS accounts. Reference: Using the end user console view Using the Provisioned products page

Your company has a small web application hosted on an EC2 instance. The application has just been deployed but no one is able to connect to the web application from a browser. You had recently ssh’d into this EC2 instance to perform a small update, but you also cannot browse to the application from Google Chrome. You have checked and there is an internet gateway attached to the VPC and a route in the route table to the internet gateway. Which situation most likely exists?

  1. The instance security group has ingress on port 443 but not port 22.
  2. The instance security group has no ingress on port 22 or port 80.
  3. The instance security group has ingress on port 22 but not port 80.
  4. The instance security group has ingress on port 80 but not port 22.

3-You could still SSH into the instance, so ingress on port 22 is allowed, but browsers could not reach the web application, so ingress on port 80 is missing. The security group has ingress on port 22 but not port 80.
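The fix can be sketched as the `IpPermissions` payload you would pass to EC2's `AuthorizeSecurityGroupIngress` call; the CIDR ranges below are placeholders:

```python
# Desired ingress rules for the instance's security group: SSH was already
# reachable (port 22), and the missing HTTP rule (port 80) is what kept
# browsers from connecting.
ip_permissions = [
    {   # SSH for administration, restricted to an admin network
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "admin SSH"}],
    },
    {   # HTTP for the web application, open to the internet
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public web traffic"}],
    },
]

for perm in ip_permissions:
    print(perm["FromPort"], perm["IpRanges"][0]["Description"])
```

Security groups are stateful, so once the inbound port 80 rule exists, the HTTP responses are allowed back out automatically.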

You work for an online bank that is migrating a customer portal to AWS. Because of the legislative requirements, you need a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation. Which service should you use?

  1. Amazon GuardDuty
  2. Amazon Inspector
  3. AWS Shield (a managed service that protects applications running on AWS against distributed denial-of-service (DDoS) attacks)
  4. AWS CloudTrail

1-Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following data sources: AWS CloudTrail management event logs, AWS CloudTrail data events for S3, DNS logs, EKS audit logs, and VPC flow logs. It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify unexpected and potentially unauthorized and malicious activity within your AWS environment.

You need to be able to perform vulnerability scans on your large fleet of EC2 instances. Which AWS service should you choose?

  1. AWS Trusted Advisor
  2. Amazon Macie
  3. Amazon Inspector
  4. Amazon Athena

3-Amazon Inspector is a vulnerability management service that continually scans your EC2 instances for software vulnerabilities and unintended network exposure.

You work for a pharmaceutical company that recently had a major outage due to a sophisticated DDoS attack. They need you to implement DDoS mitigation to prevent this from happening again. They require you to have near real-time visibility into attacks, as well as 24/7 access to a dedicated team who can help mitigate this in the future. Which AWS service should you recommend?

  1. AWS DDoS Prevention Standard
  2. AWS DDoS Prevention Advanced
  3. AWS Shield Advanced (managed DDoS protection)
  4. AWS Shield

3-AWS Shield comes in two tiers: AWS Shield Standard and AWS Shield Advanced. Only Shield Advanced provides near real-time visibility into attacks and 24/7 access to the AWS Shield Response Team, a dedicated team who can help you respond to attacks.

You work for a Fintech company that is migrating its application to AWS. You have a small team of six developers who need varying levels of access to the AWS platform. Using IAM, what is the most secure way to achieve this?

  1. Give each developer a root level AWS account and join each of these accounts to AWS Organizations.
  2. Create six IAM user accounts and add them to the administrator group, giving them full access to AWS.
  3. Create the appropriate groups with the appropriate permissions and then create an IAM user account per developer. Assign the accounts to the appropriate groups.
  4. Create one IAM user account with a user name and password and then share the login details with the six developers.

3-Creating groups with the appropriate permissions and assigning each developer an individual IAM user within the relevant group follows the principle of least privilege and keeps credentials individual and auditable.

You have a custom VPC hosted in the AWS cloud that contains your secure web application. During routine analysis, you notice some port scans coming in from unrecognizable IP addresses. You are suspicious, and decide to block these IP addresses for the next 48 hours. What is the best way to achieve this?

  1. Modify your security group for all public IP addresses and block traffic to the suspicious IP addresses.
  2. Modify your internet gateway for all private IP addresses and block traffic to the suspicious IP addresses.
  3. Modify your network access control list (NACL) for all public IP addresses and block traffic to the suspicious IP addresses.
  4. Modify your VPC control list and block access to the IP addresses.

3-Network access control lists (NACLs) support explicit deny rules, so you can block specific IP addresses at the subnet level. Security groups only support allow rules and cannot be used to block traffic from specific IPs.

A small biotech company has finalized their decision to begin deploying their application to the AWS cloud. They expect to have a handful of AWS accounts to begin with, but expect to grow to over 100 by the end of the year. The security engineer on the project has stressed that they want to have a centralized method of storing AWS CloudTrail logs for all accounts and alert on any notifications regarding compliance violations with AWS services in the member accounts. What solution would be the best fit for this scenario?

  1. AWS Config with AWS Lambda can deploy AWS Config rules throughout the organizations and use AWS Lambda to remediate or notify the security team.
  2. AWS Control Tower can deploy a Log Archive account for centralized security logs and an Audit account for any SNS notifications around compliance violations.
  3. AWS Organization Service Control Policies can be used to create new accounts. Then deploy the policies to each AWS account and use them to notify security on any violations.
  4. Deploy an SIEM application on Amazon EC2 in the management account. Grant the EC2 instances permissions to assume cross-account roles into each member account with Read-Only permissions. Use them to notify security of any violations.

2-This offers a managed solution to centralize all CloudTrail logs and alert on config changes as well. The Log Account and Audit account are both locked down by default, and the Organization admins must grant access. Reference: What Is AWS Control Tower? Terminology

A junior intern just started working at your company. During the course of the day, they accidentally delete a critical encryption key that you had stored securely in S3. You need to prevent this from happening in the future. Which two steps should you take to prevent this from happening again in the future?

  1. Enable multi-factor authentication (MFA) delete
  2. Enable Amazon CloudWatch
  3. Turn on versioning
  4. Enable AWS CloudTrail

1 and 3-Enabling versioning lets you recover previous versions of deleted or overwritten objects, and MFA Delete requires additional multi-factor authentication before an object version can be permanently deleted.
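As a sketch, both protections are configured through S3's `PutBucketVersioning` operation; the parameters below mirror its request shape. The bucket name is a placeholder, and enabling MFA Delete additionally requires the root user's MFA device serial number and a current code on the request:

```python
# Request parameters for S3 PutBucketVersioning: turns on versioning
# (recover deleted/overwritten objects) and MFA Delete (require MFA to
# permanently delete an object version or suspend versioning).
versioning_request = {
    "Bucket": "example-critical-keys",  # placeholder bucket name
    "VersioningConfiguration": {
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
    # When MFA Delete is involved, the call must also carry an "MFA"
    # parameter: "<device-serial-arn> <current-totp-code>".
}

print(versioning_request["VersioningConfiguration"])
```

Note that MFA Delete can only be enabled by the bucket owner (root user) via the CLI or API, not the S3 console.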

You have a web application that is hosted on a series of EC2 instances that have an Application Load Balancer in front of them. You have created a new CloudFront distribution. You then set up its origin to point to your ALB. You need to provide access to hundreds of private files served by your CloudFront distribution. What should you use?

  1. CloudFront signed URLs
  2. CloudFront HTTPS encryption
  3. CloudFront Origin Access Identity
  4. CloudFront signed cookies

4-Signed cookies are useful when you want to provide access to multiple restricted files, such as hundreds of private files behind one distribution.

Signed URLs are useful when you want to access individual files, not hundreds of files.
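A signed-cookie setup revolves around a custom policy whose `Resource` uses a wildcard, so one set of cookies covers all of the private files under a path. A minimal sketch, with a placeholder distribution domain:

```python
import json
import time

# Custom policy for CloudFront signed cookies: the wildcard Resource is
# what lets a single set of cookies grant access to many files, unlike a
# signed URL, which covers exactly one file.
expires = int(time.time()) + 3600  # valid for one hour from now
cookie_policy = {
    "Statement": [
        {
            "Resource": "https://d111111abcdef8.cloudfront.net/private/*",  # placeholder domain
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
        }
    ],
}

print(json.dumps(cookie_policy))
```

This policy JSON is signed with your CloudFront key pair and delivered to the browser as three cookies (`CloudFront-Policy`, `CloudFront-Signature`, `CloudFront-Key-Pair-Id`), which CloudFront validates on every request.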

 You manage 12 EC2 instances and you need to have a central file repository that these EC2 instances can access. What would be the best possible solutions for this?

  1. Attach a volume to multiple instances with Amazon EBS Multi-Attach.
  2. Create a Route53 EBS storage record and create a network mount on your EC2 instances pointing at the Route53 alias record.
  3. Create an EFS volume and attach this to the EC2 instances.
  4. Create a custom Lambda function behind API Gateway. Point your EC2 instances to the Lambda function when they need to access the centralized storage system.

1 and 3-You can attach a single volume to multiple instances with Amazon EBS Multi-Attach (Provisioned IOPS SSD volumes, instances in the same Availability Zone), or you can create an EFS file system and mount it on all of the EC2 instances.

You need to design a stateless web application tier. Which of the following would NOT help you achieve this?

  1. Save your session data on an EBS volume shared by EC2 instances running across different Availability Zones.
  2. Save your session data in Amazon RDS.
  3. Store the session data in cookies saved to the users' browsers.
  4. Store the session data in Elasticache.

1-Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the same Availability Zone. This means you cannot have a stateless application with EC2 instances running across different Availability Zones and sharing the same EBS volume. AWS Documentation: Attach a volume to multiple instances with Amazon EBS Multi-Attach.

You are a solutions architect working for a biotech company that has a large private cloud deployment using VMware. You have been tasked to setup their disaster recovery solution on AWS. What is the simplest way to achieve this?

  1. Deploy an EC2 instance into a private subnet and install vCenter on it
  2. Purchase VMware Cloud on AWS, leveraging VMware disaster recovery technologies and the speed of AWS cloud to protect your virtual machines
  3. Deploy an EC2 instance into a public subnet and install vCenter on it
  4. Use the VMware landing page on AWS to provision an EC2 instance with VMware vCenter installed on it

2-Customers can buy VMware Cloud on AWS directly through AWS and AWS Partner Network (APN) Partners in the AWS Solution Provider Program. This allows customers the flexibility to purchase VMware Cloud on AWS either through AWS or VMware, or the AWS Solution Provider or VMware Solution Provider of their choice. VMware Cloud on AWS offers a Disaster Recovery feature that uses familiar VMware vSphere and Site Recovery Manager technologies while leveraging cloud economics. You can replicate to VMware Cloud on AWS using VMware Site Recovery Manager to one or multiple Software-Defined Data Centers. VMware Site Recovery Manager can help you automate disaster recovery, meet your recovery point objectives (RPOs) and recovery time objectives (RTOs), and reduce operational errors. Disaster Recovery sites can be right-sized, scaled up when you need it, and scaled down when no longer required. AWS Documentation: VMware Cloud on AWS | FAQs.

You are a database administrator working for a small start up that has just secured Venture Capital (VC) funding. As part of the new investment the VC’s have asked you to ensure that your application has minimum downtime. Currently, your backend is hosted on a dedicated cluster running MongoDB. You spend a lot of time managing the cluster, configuring backups, and trying to ensure there is no downtime. You would like to migrate your MongoDB database to the AWS cloud. What service should you use for your backend database, assuming you don’t want to make any changes to your database and application?

  1. AWS RDS
  2. Aurora Serverless
  3. Amazon DocumentDB
  4. DynamoDB

3-Amazon DocumentDB (with MongoDB compatibility) is a managed document database that supports MongoDB workloads, so the database can be migrated without changing the application.

You have been tasked with designing a strategy for backing up EBS volumes attached to an instance-store-backed EC2 instance. You have been asked for an executive summary on your design, and the executive summary should include an answer to the question, “What can an EBS volume do when snapshotting the volume is in progress”?

  1. The volume can be used normally while the snapshot is in progress.
  2. The volume cannot be used while a snapshot is in progress.
  3. The volume can only accommodate writes while a snapshot is in progress.
  4. The volume can only accommodate reads while a snapshot is in progress.

1-You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. Create Amazon EBS snapshots - Amazon Elastic Compute Cloud

You are planning to migrate a complex big data application to AWS using EC2. The application requires complex software to be installed, which typically takes a couple of hours. You need this application to be behind an Auto Scaling group so that it can react in a scaling event. How do you recommend speeding up the installation process when there's a scale-out event?

  1. Create an EBS volume with PIOPS for faster installation performance.
  2. Create a golden AMI with the software pre-installed.
  3. Create a bootstrap script to automatically install the software.
  4. Pre-deploy the software on an Application Load Balancer so when there's a scaling event it will automatically be installed on the EC2 instance.

2-This golden AMI would have the software pre-installed and would be ready to use in a scaling event.

A golden AMI is a carefully built and tested Amazon Machine Image (AMI) that contains the latest version of your application or service, already optimized and securely configured. Using a golden AMI, you can quickly launch multiple identically configured instances in AWS, improving the reliability and scalability of your application or service.

A large fintech company is using a web application that stores its data on Amazon RDS. As a solutions architect, you have been asked to upgrade the web application so that users around the world can access it using an API. The application will need to be able to handle large bursts of traffic in seconds from time to time. What would an ideal solution look like?

  1. Create an API using API Gateway and use Route 53 to route traffic to CloudFront.
  2. Create an API using API Gateway and use Lambda to automatically handle the bursts in traffic.
  3. Create an API using API Gateway and use EC2 with Auto Scaling to quickly handle the sudden burst of traffic.
  4. Create an API using API Gateway and use RDS Auto Scaling to handle the bursts in traffic.

2-API Gateway with AWS Lambda is a serverless solution that scales automatically within seconds, so it can absorb large bursts of traffic without pre-provisioned capacity.

A pharmaceutical company has created a hybrid cloud that connects their on-premises data center and cloud infrastructure in AWS. They need to back up their storage to AWS. The backups must be stored and retrieved from AWS using the Server Message Block (SMB) protocol. The backups must be immediately accessible within minutes for three months. What is the best solution?

  1. Create a Direct Connect connection and store the backups in DynamoDB.
  2. Use AWS Tape Gateway.
  3. Use AWS File Gateway.
  4. Create a Direct Connect connection and store the backups using Route 53.

3-A File Gateway supports storage on S3 and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB).

Route 53 is a DNS service and does not support backups.

You have an online store and you are preparing for the week before Christmas, which is your busiest period of the year. You estimate that your traffic will increase by 50% during this period. Your website is using an SQS standard queue, and you're running a fleet of EC2 instances configured in an Auto Scaling group which then consumes the SQS messages. What should you do to prepare your SQS queue for the 50% increase in traffic?

  1. Increase the size of your SQS queue.
  2. Nothing. SQS scales automatically.
  3. Create additional EC2 instances to help query the SQS queue.
  4. Create multiple SQS queues and deploy these behind an SQS Load Balancer.

2-SQS scales automatically.

You are working as a Solutions Architect for an online travel company. Your application is going to use an Auto Scaling group of EC2 instances but you need to have some decoupling to store messages because of high volume. Which AWS service can be added to the solution to meet this requirement?

  1. AWS SQS
  2. RDS read replicas
  3. Elasticache
  4. AWS Simple Workflow Service

1-Amazon SQS decouples the application tiers and durably stores messages until the EC2 instances in the Auto Scaling group can consume them.

You are database administrator for a security company using a large graph database used to build graph queries for near real-time identity fraud pattern detection in financial and purchase transactions. You recently experienced an outage and you want to migrate this database to somewhere more secure and stable such as AWS. What AWS service would you recommend to the business to handle graph queries?

  1. Amazon DocumentDB
  2. Aurora Serverless
  3. Neptune
  4. Amazon Keyspaces

3-At its core, Neptune is a purpose-built, high-performance graph database engine. The engine is optimized for storing billions of relationships and querying the graph with millisecond latency.

You work for a large chip manufacturer in Taiwan who has a large dedicated cluster running MongoDB. Unfortunately, they have a large period of downtime and would now like to migrate their MongoDB instance to the AWS cloud. They do not want to make any changes to their application architecture. What AWS service would you recommend to use for MongoDB?

  1. Amazon QLDB
  2. Aurora Serverless
  3. Amazon Neptune
  4. Amazon DocumentDB

4-This supports MongoDB and would be suitable in this scenario.

You use AWS Route53 as your DNS service and you have updated your domain, hello.acloud.guru, to point to a new Elastic Load Balancer (ELB). However, when you check the update it looks like users are still redirected to the old ELB. What could be the problem?

  1. The A record needs to be changed to a CNAME.
  2. The TTL needs to expire. After that, the record will be updated.
  3. The CNAME needs to be changed to an A record.
  4. Your Application Load Balancer needs to be a Network Load Balancer to interface with Route53.

2-You need to wait for the TTL to expire. Your computer has cached the previous DNS request, but once the TTL has expired it will get the new address.
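One common mitigation for a planned cutover is lowering the record's TTL ahead of time so resolvers re-query quickly. This can be sketched as the change batch passed to Route 53's `ChangeResourceRecordSets` API; the ELB DNS name below is a placeholder:

```python
# Route 53 change batch: UPSERT the record with a short TTL so cached
# answers expire within a minute and resolvers pick up the new ELB.
change_batch = {
    "Comment": "Point hello.acloud.guru at the new ELB with a short TTL",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "hello.acloud.guru.",
                "Type": "CNAME",
                "TTL": 60,  # seconds; resolvers re-query after this expires
                "ResourceRecords": [
                    {"Value": "new-elb-123456789.us-east-1.elb.amazonaws.com"}  # placeholder
                ],
            },
        }
    ],
}

print(change_batch["Changes"][0]["ResourceRecordSet"]["TTL"])
```

Note that Route 53 alias records do not carry a user-set TTL; the short-TTL approach applies to ordinary A/CNAME records, and the TTL should be lowered well before the migration so old cached values have already expired.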

You want to migrate an on-premises Couchbase NoSQL database to AWS. You need this to be as resilient as possible and you want to minimize any management of servers. Preferably, you'd like to go serverless. Which database should you choose?

  1. DynamoDB
  2. Elasticache
  3. Aurora DB
  4. RDS

1-DynamoDB is a NoSQL database and has serverless deployment.

You have an image sharing website that sits on EC2 and uses EBS as the backend storage. Unfortunately, you keep running out of space and you are forced to mount additional EBS volumes. Your boss asks if there are any other services on AWS you can use to store images or videos. What service would you suggest?

  1. RDS
  2. S3
  3. Route53
  4. CloudWatch

2-Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

You have launched an EC2 instance that will host a PHP application. You install all the required software such as PHP and MySQL. You make a note of the EC2 public IPv4 address and then you stop and restart your EC2 instance. You notice that after the restart, you can't access the EC2 instance and that the instance's public IPv4 has been changed. What should you do to make sure your IPv4 address does not change?

  1. Raise a support request with AWS Support and ask them to issue you a permanent IPv4 address.
  2. Install the PHP application on an S3 bucket and configure the bucket to have a fixed IP address.
  3. Create an elastic IP address and assign it to your EC2 instance.
  4. Create an Application Load Balancer with a fixed IP address and place the EC2 behind this.

3-This will give you a fixed IP address.

A data analytics company is running their software in the AWS cloud. The current workflow process leverages Amazon EC2 instances, which process different datasets, normalizes them, and then outputs them to Amazon S3. The datasets average around 100 GB in size.

The company CTO has asked that the application team start looking into leveraging Amazon EMR to generate reports and enable further analysis of the datasets, and then store the newly generated data in a separate Amazon S3 bucket.

Which AWS service could be used to make this process efficient, more cost-effective, and automated?

  1. Amazon EventBridge (a serverless event bus that helps you receive, filter, transform, route, and deliver events)
  2. AWS Data Pipeline (a web service that helps you reliably process data and move it between different AWS compute and storage services, as well as on-premises data sources, at specified intervals)
  3. AWS Lambda
  4. Amazon EMR Spot Capacity

2-This managed service lets you implement data-driven workflows to automatically move data between the listed resources within AWS. It executes and provides methods of tracking data ETL processes.

Reference: What Is Data Pipeline?

Amazon EventBridge: This service is not meant for data ingestion or data migrations.

Your company has a local content management system (CMS) using Microsoft Sharepoint that is hosted on-premises. Due to a recent acquisition of another company, you expect traffic to the CMS to more than double in the coming week, so you have decided to migrate the SharePoint server to AWS. You need high performance using Windows shared file storage. You also need a high-performing cloud storage solution that is highly available and that can be integrated with Active Directory. What would be the best storage option?

  1. Make an Amazon FSx for Windows File Server file system and join it to an Active Directory Domain Controller hosted in AWS.
  2. Create a file system using Amazon EFS and connect this file system to an Active Directory Domain Controller hosted in AWS.
  3. Create an EC2 Instance and mount an S3 bucket as the shared file repository. Connect the bucket to an Active Directory Domain Controller hosted in AWS.
  4. Create a file system using Amazon NFS and connect this file system to an Active Directory Domain Controller hosted in AWS.

1-Amazon FSx for Windows File Server is a high-performing cloud storage solution that is highly available and can be integrated with Active Directory.

A web analytics company is receiving both structured and semi-structured data from a large number of different sources each day. The developers plan on using big data processing frameworks to analyze the data and access it using Business Intelligence (BI) tools and SQL queries. Which of the following provides the best high-performing solution?

  1. Use Amazon EC2 and store the data in RDS.
  2. Use AWS Glue and store the processed data in S3.
  3. Use Amazon Kinesis Data Analytics and store the processed data in Aurora.
  4. Create an Amazon EMR Cluster and store the data in Amazon Redshift.

4-Amazon EMR is a managed cluster platform that simplifies running big data frameworks on AWS to process and analyze vast amounts of data. Amazon Redshift is AWS's managed data warehouse, which BI tools can query using standard SQL.

You have a large number of files in S3 and you have been asked to build an index of these files. In order to do this, you need to read the first 250 bytes of each object in S3. This data contains some metadata about the content of the file itself. Unfortunately, there are over 10,000,000 files in your S3 bucket, and this is about 100 TB of data. The data will then need to be stored in an Aurora Database. How can you build this index in the fastest way possible?

  1. Use AWS Athena to query the S3 bucket for the first 250 bytes of data. Take the result of the query and build an Aurora Database.
  2. Use the index bucket function in AWS Macie to query the S3 bucket and then load this data into the Aurora Database.
  3. Create a program to use a byte range fetch for the first 250 Bytes of data and then store this in the Aurora Database.
  4. Create a program to use Macie to select the first 250 Bytes of data and then store this in Aurora Database.

3-A byte-range fetch using the HTTP Range header retrieves only the first 250 bytes of each object, so you avoid downloading the full 100 TB of data.
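The byte-range approach can be sketched as follows. The bucket, key, and client wiring are illustrative; a boto3 S3 client is passed in rather than created here, so nothing below touches AWS until you call it.

```python
def byte_range(first, last):
    """Build the HTTP Range header value for an inclusive byte span."""
    return f"bytes={first}-{last}"


def fetch_prefix(s3_client, bucket, key, nbytes=250):
    """Fetch only the first `nbytes` bytes of an S3 object.

    `s3_client` is assumed to be a boto3 S3 client, e.g. boto3.client("s3").
    S3 honors the Range parameter on GetObject, so only 250 bytes per
    object cross the wire instead of the whole file.
    """
    resp = s3_client.get_object(
        Bucket=bucket,
        Key=key,
        Range=byte_range(0, nbytes - 1),  # "bytes=0-249"
    )
    return resp["Body"].read()
```

In practice you would fan this out (for example across Lambda invocations or threads) over the 10,000,000 keys and batch-insert the results into Aurora.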

You run an online platform that specializes in five different dream vacations. The platform allows customers to submit queries about their five different experiences. You need to ensure that all queries are answered within 24 hours, either by a person or by a bot. You decide to create five separate SQS queues for each experience request. You need to automatically publish messages to their respective SQS queues as soon as customers submit their queries. Which architecture would be best suited to achieve this?

  1. Use AWS Lex and AWS Polly to respond automatically to the SQS queues.
  2. Create 10 SNS topics and configure the five SQS queues to subscribe to two topics each. Publish the messages to the dedicated queue depending on the experience request.
  3. Create five SNS topics and configure the five SQS queues to subscribe to those five topics. Publish the messages to the dedicated queue depending on the experience request.
  4. Create one SNS topic and configure the five SQS queues to subscribe to that topic. Configure the filter policies in the SNS subscription to publish the response to the designated SQS queue based on the experience request type.

4-This is the best architecture: a single SNS topic is easy to manage, and subscription filter policies route each message to the correct SQS queue based on the experience request type.
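The single-topic design relies on SNS subscription filter policies. A minimal sketch of the policy shape, assuming a hypothetical `experience` message attribute set by the publisher:

```python
import json


def filter_policy_for(experience):
    """Filter policy matching messages for one experience type.

    The attribute name "experience" is an assumption; it must match the
    message attribute the publisher sets on each SNS message.
    """
    return {"experience": [experience]}


def subscription_attributes(experience):
    """Attributes to pass when subscribing an SQS queue to the SNS topic.

    SNS expects the filter policy serialized as a JSON string, e.g.
    sns_client.subscribe(TopicArn=..., Protocol="sqs", Endpoint=queue_arn,
                         Attributes=subscription_attributes("safari")).
    """
    return {"FilterPolicy": json.dumps(filter_policy_for(experience))}
```

Each of the five queues gets its own filter policy, so one publish call per customer query reaches exactly the right queue.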

You work for a company that sequences genetics and they run a high performance computing (HPC) application that does things such as batch processing, ad serving, scientific modeling, and CPU-based machine learning inference. They are migrating to AWS and would like to create a fleet of EC2 instances to meet this requirement. What EC2 instance type should you recommend?

  1. Amazon EC2 T4g instances
  2. Amazon EC2 C7g
  3. Amazon EC2 R6g
  4. Amazon EC2 M6g

2-Amazon EC2 C7g instances, powered by AWS Graviton3 processors, are compute optimized and well suited to HPC workloads such as batch processing, ad serving, scientific modeling, and CPU-based machine learning inference.

You work for a large investment bank that is migrating its applications to the cloud. The bank is developing a custom fraud detection system using Python in Jupyter Notebook. They then build and train their models and put them into production. They want to migrate to the AWS Cloud and are looking for a service that would meet these requirements. Which AWS service would you recommend they use?

  1. Amazon Fraud Detector
  2. Amazon SageMaker
  3. Amazon Forecast
  4. Amazon Comprehend

2-Amazon SageMaker is a fully managed machine learning service. Amazon SageMaker allows you to build and train machine learning models, and then directly deploy them into a production-ready hosted environment.

What EBS Volume type gives you the highest performance in terms of IOPS?

  1. EBS Provisioned IOPS SSD (io2 Block Express)
  2. EBS Provisioned IOPS SSD (io2)
  3. EBS General Purpose SSD (gp3)
  4. EBS Provisioned IOPS SSD (io1)

1-EBS Provisioned IOPS SSD (io2 Block Express) is the highest-performance SSD volume designed for business-critical latency-sensitive transactional workloads.

Your website is an online store, and it has sporadic and unpredictable transactional workloads throughout the day and night that are very hard to predict. The website is currently being hosted at your corporate data center and needs to be migrated to AWS. A new relational database is required that autoscales capacity to meet these peaks as well as being able to scale back when not being used. Which database technology would be best suited for your website?

  1. DynamoDB with Auto Scaling enabled
  2. Aurora Serverless DB cluster
  3. Amazon Redshift with Auto Scaling enabled
  4. Amazon RDS with Auto Scaling enabled and read replicas turned on

2-Aurora Serverless autoscales capacity to meet these peaks as well as being able to scale back when not being used.

A car company is using Amazon RDS to store data from a web application. The application has very low usage of RDS. However, there can be sudden bursts of traffic every time a new marketing campaign is launched. You need to develop an API so that third parties can query your database. What is the best architecture?

  1. Create an API using Amazon API Gateway. Use CloudFront to handle the scaling of read traffic.
  2. Create an API using Amazon API Gateway. Configure a read replica to handle the additional traffic.
  3. Create an API using Amazon API Gateway. Use Auto Scaling with EC2 to increase the load on your database.
  4. Create an API using Amazon API Gateway. Configure S3 to handle the traffic.

2-By sending traffic to the read replica, you can reduce the load on your production database and scale performance.

A developer is working for a medium-sized biotech company. The developer has been tasked with building an application with stateless web servers and needs fast access to session data. Which AWS service would accomplish this?

  1. ElastiCache
  2. Route53
  3. EKS
  4. Glacier

1-Amazon ElastiCache is an in-memory data store and cache that delivers sub-millisecond latency, making it well suited to session data behind stateless web servers.

You run a financial services company that stores a large amount of data in S3. You need to query this data using SQL in the fastest and lowest-cost way possible, preferably serverless. What AWS service would you use to do this?

  1. Macie
  2. S3 Query Service
  3. Athena
  4. Redshift

3-Amazon Athena is a serverless, interactive query service that lets you analyze data directly in S3 using standard SQL, with no infrastructure to manage and per-query pricing.

You are working for a startup that is designing a mobile gaming platform. It is being launched by a very famous celebrity, and the frontend servers will experience a lot of heavy traffic during the initial launch. You need to store the users' login and gaming details in memory, and you need caching capability that is compatible with Redis API. Which service should you use?

  1. Amazon RDS
  2. Amazon DynamoDB
  3. Amazon ElastiCache
  4. Elasticsearch

3-Amazon ElastiCache for Redis is an in-memory data store and cache that is compatible with the Redis API.

You have landed a job with a major insurance firm that is moving its estate to the AWS cloud. The firm uses artificial intelligence and machine learning with custom models built via Jupyter notebooks. Your boss would like the Jupyter Notebook development to be done in the AWS Cloud from now on. Which AWS service would allow you to do this?

  1. Amazon SageMaker
  2. Amazon Forecast
  3. Amazon Comprehend
  4. Amazon Fraud Detector

1-Amazon SageMaker is a fully managed machine learning service. Amazon SageMaker allows you to build and train machine learning models, and then directly deploy them into a production-ready hosted environment.

You work for a government agency that is migrating its production environment to AWS from on-premises. The agency wants you to create a serverless solution that is high performing and scales effortlessly. It has a web frontend, a MongoDB NoSQL backend, and large amounts of static files such as pictures and images. What would be the ideal serverless solution from the choices below?

  1. ElasticBeanstalk > Application Load Balancer > EC2 > DynamoDB > S3
  2. API Gateway > Lambda > DynamoDB > EBS
  3. Application Load Balancer > EC2 > Aurora > S3
  4. API Gateway > Lambda > DynamoDB > S3

4-API Gateway, Lambda, DynamoDB, and S3 are all serverless, covering the web frontend, the NoSQL backend, and the static file storage respectively.

You have developed an AI-powered app that is used to predict the prices of cryptocurrency in real time. The app requires low latency and high throughput storage performance for processing training sets. You need to archive the completed processed training sets on storage that is as cost effective as possible, but can still maintain immediate access. What two storage solutions should you use?

  1. Amazon FSx for Lustre for processing training sets, a fully managed file system optimized for compute-intensive workloads such as high performance computing and machine learning
  2. Amazon Elastic File System for archiving completed processed training sets
  3. Amazon S3 Glacier Instant Retrieval for archiving completed processed training sets
  4. AWS Storage Gateway for processing training sets

1-3 Amazon S3 Glacier Instant Retrieval is the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. AWS Documentation: Amazon S3 Glacier Instant Retrieval.

Amazon Elastic File System: Although this is technically feasible, it is not a cost-effective method.
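Moving completed training sets into S3 Glacier Instant Retrieval can be automated with an S3 lifecycle rule. A sketch of the configuration shape accepted by `put_bucket_lifecycle_configuration`; the prefix and transition age are illustrative:

```python
def archive_rule(prefix="processed/", days=30):
    """Lifecycle rule that transitions objects under `prefix` to
    S3 Glacier Instant Retrieval (storage class GLACIER_IR) after
    `days` days. Prefix and age are placeholder values.
    """
    return {
        "Rules": [
            {
                "ID": "archive-training-sets",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": days, "StorageClass": "GLACIER_IR"},
                ],
            }
        ]
    }
```

The dict would be passed as the `LifecycleConfiguration` argument of a boto3 S3 client call; archived objects remain retrievable in milliseconds.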

A Fintech startup has a small application that receives intermittent and random traffic. At some points, it may not receive any traffic at all; at other times, it might receive tens of thousands of queries at once. You need to rearchitect the application for the AWS cloud using a relational database. What database technology would best suit your needs while keeping costs at a minimum?

  1. RDS for MySQL
  2. Aurora Serverless
  3. NeptuneDB
  4. DynamoDB

2-This is the best answer as it keeps cost low while being a relational database.

You work for a popular streaming service that runs its NoSQL backend in-house on large Cassandra clusters. You recently had a major outage and realize you need to migrate your Cassandra workload on to something more reliable, such as the AWS Cloud. You do a cost analysis and realize that, in the long run, this will probably save the company a lot of overhead fees. You need to select a Cassandra-compatible service on which to run your workloads. Which service should you select?

  1. Amazon DocumentDB
  2. Amazon Keystone
  3. Neptune
  4. Amazon Keyspaces

4-This is a Cassandra-compatible database and is the best choice for this scenario.

You are working for a small startup that wants to design a content management system (CMS). The company wants to architect the CMS so that the company only incurs a charge when someone tries to access their content. They want to try and keep costs as low as possible and remain in the AWS Free Tier if possible. Which of the following options is the most cost-effective architecture?

  1. Elastic Load Balancer > EC2 > DynamoDB
  2. Application Load Balancer > EC2 > RDS
  3. API Gateway > Lambda > DynamoDB > S3
  4. API Gateway > EC2 > DynamoDB

3-API Gateway, Lambda, DynamoDB, and S3 are all serverless and pay-per-use, so the company is charged only when content is accessed, and low usage can remain within the AWS Free Tier.

What is the most cost-effective architecture for a front-facing website, assuming a peak load of 500 users per hour will be accessing the site?

  1. A fleet of EC2 instances behind a Network Load Balancer connected to an RDS instance with multiple read nodes
  2. An Elastic Beanstalk configuration using Auto Scaling and EC2
  3. An Elastic Kubernetes Service cluster
  4. A serverless website using API Gateway, Lambda, and DynamoDB

4-Given this short scenario, this would be the most cost-efficient and scalable solution.

You work for an insurance company that has just been merged with two other insurance companies. All companies have production workloads on AWS using multiple AWS accounts. Which of the following is something you could recommend to your boss to immediately start saving money?

  1. Migrate all AWS accounts to a single AWS account and close the migrated accounts.
  2. Use AWS CloudTrail to start keeping track of what you are spending.
  3. Create a root AWS account using AWS Organizations and connect all subsequent AWS accounts to the Organization. You can then take advantage of consolidated billing.
  4. Run Amazon Macie to identify where you can save costs.

3-Using consolidated billing, you can pool your AWS resources to lower your total costs.

You host a web application on Amazon EC2 that contains a large number of files that are infrequently accessed. Currently, the files are hosted on provisioned IOPS; however, due to budget cuts, your manager asks you to move the files to a more cost-effective solution. What storage solution should you choose?

  1. Use an S3 Infrequent Access storage bucket. Create a role in IAM granting S3 access and attach this role to your EC2 instance.
  2. Use a Throughput Optimized HDD (st1).
  3. Use a Cold HDD (sc1).
  4. Use an Elastic Block Storage General Purpose SSD (gp3).

1-This would be the cheapest way to store your data in this scenario.
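The IAM role in the correct answer carries an identity-based policy granting S3 access. A minimal sketch of such a policy document; the bucket name is a placeholder and the actions are trimmed to read/write:

```python
def s3_access_policy(bucket="example-archive-bucket"):
    """Identity-based policy document the instance role would carry so the
    EC2 host can read and write objects in the Infrequent Access bucket.
    Bucket name and action list are illustrative.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
```

When uploading, the files can be written directly to the Infrequent Access tier by setting `StorageClass="STANDARD_IA"` on the put request.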

You host a healthcare-related web application in AWS behind an Application Load Balancer and Auto Scaling group. Recent budget cuts mean you have to see if you can find a way to cut costs while still maintaining performance. Your boss is concerned about over-provisioning resources when an Auto Scaling event occurs. Which dynamic scaling policy should be used to prevent this?

  1. Suspend and resume scaling
  2. Simple scaling
  3. Scheduled scaling
  4. Target tracking scaling

4-With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to changes in the metric due to a changing load pattern.
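The policy described above can be expressed as the parameter shape accepted by the EC2 Auto Scaling `PutScalingPolicy` API. The 50% CPU target is an example value chosen for illustration:

```python
def cpu_target_tracking(target=50.0):
    """Target tracking policy keeping average group CPU near `target` %.

    The returned dict mirrors the arguments a boto3 autoscaling client's
    put_scaling_policy call expects; Auto Scaling then creates and manages
    the CloudWatch alarms itself, adding or removing capacity to hold the
    metric near the target without over-provisioning.
    """
    return {
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": target,
        },
    }
```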

You work for a private library that is digitizing its collection of ancient books. The library wants to store scans of each book in the cloud at the cheapest rate possible. The files will be accessed only occasionally, but will need to be retrieved instantly. What is the most cost-effective way to achieve this?

  1. S3 Standard
  2. Elastic Block Storage (EBS)
  3. Elastic File System (EFS)
  4. S3 Infrequent Access

4-S3 Infrequent Access is suitable for files that will be accessed only occasionally but require instant retrieval.

You work for an automotive company that has a small estate on AWS, but the majority of their assets are hosted in-house at their own data center. They are now looking to save money by moving more and more real estate to AWS and have started creating multiple AWS accounts in the same Region. They currently have one Direct Connect connection installed between their on-premises data center and AWS. Now that they have multiple production accounts, they will need to connect these to the on-premises data center using a dedicated connection. What is the most cost-effective way of doing this?

  1. Create a new Direct Connect gateway and set this up with the existing Direct Connect connection. Set up a transit gateway between the AWS accounts and connect the transit gateway to the Direct Connect gateway.
  2. Provision an AWS VPN CloudHub and connect the AWS accounts directly back to the Direct Connect connection via a VPN connection.
  3. Use a VPN concentrator to connect the AWS accounts back to the on-premises data center.
  4. Provision a new Direct Connect connection for each AWS account and connect it back to your on-premises data center.

1-You can associate an AWS Direct Connect gateway with a transit gateway when you need to connect multiple VPCs in the same Region. AWS Documentation: Direct Connect gateways.

You have a steady application serving around 3,000 customers that needs to be migrated to AWS. Based on historical data, traffic and usage has not grown very much in the past 24 months and you expect the application to remain steady for the next 3 years. You need to run the application on EC2. What is the most cost-effective EC2 instance type to use?

  1. Spot Instances
  2. Reserved Instances
  3. On-Demand Instances
  4. Dedicated Instances

2-With steady, predictable usage over a multi-year horizon, Reserved Instances offer a significant discount compared to On-Demand pricing.

You work for a small startup that has a shoestring budget. You accidentally leave a large EC2 instance running over a few days and are hit with a huge bill. You need to prevent this from happening in the future. What should you do?

  1. Use AWS Trusted Advisor to notify you whenever an EC2 instance has been running for more than 24 hours.
  2. Enable AWS CloudTrail to terminate any EC2 instance that has been running for more than 24 hours.
  3. Create a billing alarm to monitor your AWS charges for when they go above a certain threshold.
  4. Enable CloudFormation to alert you when any EC2 instance has been running for more than 24 hours.

3-A CloudWatch billing alarm notifies you when your estimated AWS charges exceed a threshold you define.
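A billing alarm boils down to a CloudWatch alarm on the `EstimatedCharges` metric. A sketch of the parameters; the threshold and SNS topic ARN are placeholders, and note that billing metric data is published to us-east-1 and must first be enabled in the Billing console:

```python
def billing_alarm(threshold, topic_arn):
    """Parameters for a CloudWatch put_metric_alarm call that fires when
    estimated monthly charges exceed `threshold` USD. Alarm name, period,
    and the SNS topic are illustrative choices.
    """
    return {
        "AlarmName": "monthly-spend-threshold",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # evaluate over 6-hour windows
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # SNS topic that emails/pages you
    }
```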

You work for a large advertising company that is moving its videos and photos to AWS. The size of the migration is 70 terabytes, and it needs to be completed as quickly and cost-effectively as possible. What is the best way to achieve this?

  1. AWS Storage Gateway
  2. An AWS Snowball Edge Storage Optimized device
  3. AWS Direct Connect
  4. AWS File Gateway

2-A Snowball Edge Storage Optimized device provides up to 80 TB of usable capacity, making it the quickest and most cost-effective way to migrate 70 TB without consuming network bandwidth.
