AWS Certification

A selection of recent AWS certification exam questions, shared here for reference.

1.

A Solutions Architect must design a highly available, stateless, REST service. The service will require multiple persistent storage layers for service object meta information and the delivery of content. Each request must be authenticated and securely processed. There is a requirement to keep costs as low as possible.

How can these requirements be met?

A. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an Amazon ECS service that is fronted by an Application Load Balancer (ALB). Use a custom authenticator to control access to the API. Store request meta information in Amazon DynamoDB with Auto Scaling and static content in a secured S3 bucket. Make secure signed requests for Amazon S3 objects and proxy the data through the REST service interface.

B. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an ECS service that is fronted by a cross-zone ALB. Use an Amazon Cognito user pool to control access to the API. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.

C. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon Cognito user pool to control access to the API. Configure the methods to use AWS Lambda proxy integrations, and process each resource with a unique AWS Lambda function. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.

D. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon API Gateway custom authorizer to control access to the API. Configure the methods to use AWS Lambda custom integrations, and process each resource with a unique Lambda function. Store request meta information in an Amazon ElastiCache Multi-AZ cluster and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.

C is the correct answer.

Although both donathon and moon agree, the considerations behind this question are worth additional discussion.

    A. A custom authenticator is not the best option. Using Fargate is fine, but you need a better way to front the service than a bare ALB/ELB, which has no authentication integrated into it.

    B. Similar issue as with A, although it is an improvement since it uses Cognito.

    C. This answer nails all the requirements and is my choice for the best answer. HOWEVER, D is preferable in some ways: arguably, AWS Lambda custom integrations (D) are preferable to AWS Lambda proxy integrations. The other key part of this question is the "multiple persistent storage layers" requirement. Donathon and moon stated that ElastiCache is NOT persistent, which is not necessarily true. ElastiCache offers two engine options, Memcached and Redis; Memcached does not provide persistent storage, but Redis does. So the fact that D uses ElastiCache does not by itself make it incorrect (see https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html: "Does Amazon ElastiCache for Redis support Redis persistence? Yes, you can achieve persistence by snapshotting your Redis data using the Backup and Restore feature."). ElastiCache may even be a better fit here than DynamoDB, but DynamoDB (answer C) also works, and its persistence is beyond question, whereas with ElastiCache it depends on which engine is used (Memcached vs. Redis).

I initially felt strongly that "D" was the best answer, but after weighing all these factors I am now leaning towards "C", and that would be my selection at this point.

    D. This answer meets all the requirements and is, in some ways, better than answer "C". However, I have still selected "C", as explained in my comments for "C" above and in the note below. One key difference is that "C" uses AWS Lambda proxy integrations while "D" uses AWS Lambda custom integrations. Advantages and disadvantages are listed in the following link: https://medium.com/@lakshmanLD/lambda-proxy-vs-lambda-integration-in-aws-api-gateway-3a9397af0e6d.

In a nutshell, custom integrations are more powerful, easier to document, and less prone to human error; the downside is that they are more work to implement. Since implementation time is not a stated constraint, this arguably makes "D" a better answer than "C".
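Answers B, C, and D all return presigned URLs for content stored in Amazon S3. Here is a minimal boto3 sketch of that pattern (the bucket and key names are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and key; the URL grants temporary GET access and
    # expires after 300 seconds, so the bucket itself can stay private.
    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "my-content-bucket", "Key": "reports/summary.pdf"},
        ExpiresIn=300,
    )
    print(url)  # return this reference to the caller instead of proxying the data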

2.

A company with several AWS accounts is using AWS Organizations and service control policies (SCPs). An Administrator created the following SCP and attached it to an organizational unit (OU) that contains AWS account 1111-1111-1111:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowsAllActions",
          "Effect": "Allow",
          "Action": "*",
          "Resource": "*"
        },
        {
          "Sid": "DenyCloudTrail",
          "Effect": "Deny",
          "Action": "cloudtrail:*",
          "Resource": "*"
        }
      ]
    }

Developers working in account 1111-1111-1111 complain that they cannot create Amazon S3 buckets. How should the Administrator address this problem?

A. Add s3:CreateBucket with “Allow” effect to the SCP.

B. Remove the account from the OU, and attach the SCP directly to account 1111-1111-1111

C. Instruct the Developers to add Amazon S3 permissions to their IAM entities.

D. Remove the SCP from account 1111-1111-1111

Answer: C

An SCP is a guardrail, not a grant: it is combined with IAM policies using a logical AND to determine the effective permissions. The SCP above allows all actions (and only denies CloudTrail), so the Developers still need an identity-based IAM policy that grants the S3 permissions.

SCPs are necessary but not sufficient for granting access in the accounts in your organization. Attaching an SCP to the organization root or an organizational unit (OU) defines a guardrail for what actions accounts within the organization root or OU can do. You still need to attach IAM policies to users and roles in your organization's accounts to actually grant permissions to them. With an SCP attached to those accounts, identity-based and resource-based policies grant permissions to entities only if those policies and the SCP allow the action. If both a permissions boundary (an advanced IAM feature) and an SCP are present, then the boundary, the SCP, and the identity-based policy must all allow the action. For more information, see Policy Evaluation Logic in the IAM User Guide.
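To illustrate that evaluation logic, a minimal boto3 sketch of attaching the missing identity-based policy (the user and policy names are hypothetical):

    import json
    import boto3

    iam = boto3.client("iam")

    # The SCP only caps what the account may do; an identity-based policy
    # must still grant the action to the IAM entity itself.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:CreateBucket", "Resource": "*"}
        ],
    }

    iam.put_user_policy(
        UserName="developer",              # hypothetical IAM user
        PolicyName="AllowS3CreateBucket",
        PolicyDocument=json.dumps(policy),
    )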

3.

A Development team is deploying new APIs as serverless applications within a company. The team is currently using the AWS Management Console to provision Amazon API Gateway, AWS Lambda, and Amazon DynamoDB resources. A Solutions Architect has been tasked with automating the future deployments of these serverless APIs. How can this be accomplished?

    A. Use AWS CloudFormation with a Lambda-backed custom resource to provision API Gateway. Use the AWS::DynamoDB::Table and AWS::Lambda::Function resources to create the Amazon DynamoDB table and Lambda functions. Write a script to automate the deployment of the CloudFormation template.

    B. Use the AWS Serverless Application Model to define the resources. Upload a YAML template and application files to the code repository. Use AWS CodePipeline to connect to the code repository and to create an action to build using AWS CodeBuild. Use the AWS CloudFormation deployment provider in CodePipeline to deploy the solution.

    C. Use AWS CloudFormation to define the serverless application. Implement versioning on the Lambda functions and create aliases to point to the versions. When deploying, configure weights to implement shifting traffic to the newest version, and gradually update the weights as traffic moves over.

    D. Commit the application code to the AWS CodeCommit code repository. Use AWS CodePipeline and connect to the CodeCommit code repository. Use AWS CodeBuild to build and deploy the Lambda functions using AWS CodeDeploy. Specify the deployment preference type in CodeDeploy to gradually shift traffic over to the new version.

Answer: B

https://aws-quickstart.s3.amazonaws.com/quickstart-trek10-serverless-enterprise-cicd/doc/serverless-cicd-for-the-enterprise-on-the-aws-cloud.pdf

https://aws.amazon.com/quickstart/architecture/serverless-cicd-for-enterprise/
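As a rough sketch of what the CloudFormation deployment provider does at the end of the pipeline, boto3 can create and execute a change set from a packaged SAM template (the stack name and template URL are hypothetical; in practice you would wait for the change set to reach CREATE_COMPLETE before executing it):

    import boto3

    cfn = boto3.client("cloudformation")

    # Assumes the SAM template was already packaged and uploaded to S3
    # (e.g., by the CodeBuild stage of the pipeline).
    cfn.create_change_set(
        StackName="serverless-api",
        TemplateURL="https://s3.amazonaws.com/my-bucket/packaged.yaml",
        ChangeSetName="deploy-1",
        ChangeSetType="UPDATE",
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
    )
    cfn.execute_change_set(StackName="serverless-api", ChangeSetName="deploy-1")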

4.

A software engineer has chosen API Gateway with Lambda non-proxy integrations to implement an application. The application is a data analysis tool that returns statistical results when its HTTP endpoint is called. The Lambda function needs to communicate with back-end data services such as Keen.io; however, errors can happen, such as wrong data being requested or communication failures. The function is written in Java and may throw two exceptions, BadRequestException and InternalErrorException. What should the software engineer do to map these two exceptions to proper HTTP return codes in API Gateway? For example, BadRequestException and InternalErrorException should be mapped to HTTP return codes 400 and 500, respectively. (Select 2.)

    A. Add the corresponding error codes (400 and 500) on the Integration Response in API Gateway.

    B. Add the corresponding error codes (400 and 500) on the Method Response in API Gateway.

    C. Put the mapping logic into the Lambda function itself so that when an exception happens, error codes are returned in a JSON body at the same time.

    D. Add Integration Responses where regular expression patterns such as BadRequest or InternalError are set. Associate them with HTTP status codes.

    E. Add Method Responses where regular expression patterns such as BadRequest or InternalError are set. Associate them with HTTP status codes 400 and 500.

Method Request and Method Response mainly concern the API's interface with the frontend (a client), whereas Integration Request and Integration Response are the API's interface with the backend; in this case, the backend is a Lambda function. For mapping exceptions that come from Lambda, the Integration Response is the correct place to configure the mapping. However, the corresponding error code (400) must be defined on the Method Response first; otherwise, API Gateway throws an invalid configuration error response at runtime. Below is an example that maps BadRequestException to HTTP return code 400:
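A minimal sketch of that configuration using boto3 (the REST API ID, resource ID, and HTTP method are hypothetical):

    import boto3

    apigw = boto3.client("apigateway")

    rest_api_id = "a1b2c3d4e5"  # hypothetical REST API ID
    resource_id = "abc123"      # hypothetical resource ID

    # 1. Define the 400 status code on the Method Response first; otherwise
    #    API Gateway reports an invalid configuration at runtime.
    apigw.put_method_response(
        restApiId=rest_api_id,
        resourceId=resource_id,
        httpMethod="GET",
        statusCode="400",
    )

    # 2. On the Integration Response, map Lambda errors whose errorMessage
    #    matches the regular expression to that status code.
    apigw.put_integration_response(
        restApiId=rest_api_id,
        resourceId=resource_id,
        httpMethod="GET",
        statusCode="400",
        selectionPattern="BadRequest.*",
    )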

Answer: BD

    Option A is incorrect: the HTTP error codes must first be defined on the Method Response; declaring them only on the Integration Response is not enough.

    Option B is CORRECT: the HTTP error codes are defined first on the Method Response (the same reasoning as A).

    Option C is incorrect: the Integration Response in API Gateway should be used instead. Refer to https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html on how to handle Lambda errors in API Gateway.

    Option D is CORRECT: Because BadRequest or InternalError should be mapped to 400 and 500 in Integration Response settings.

    Option E is incorrect: Because Method Response is the interface with the frontend. It does not deal with how to map the response from Lambda/backend.

5.

A large company is migrating its entire IT portfolio to AWS. Each business unit in the company has a standalone AWS account that supports both development and test environments. New accounts to support production workloads will be needed soon. The Finance department requires a centralized method for payment but must maintain visibility into each group's spending to allocate costs. The Security team requires a centralized mechanism to control IAM usage in all the company's accounts. What combination of the following options meets the company's needs with the LEAST effort? (Choose two.)

    A. Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.

    B. Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.

    C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.

    D. Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.

    E. Consolidate all of the company's AWS accounts into a single AWS account. Use tags for billing purposes and IAM's Access Advisor feature to enforce the least privilege model.

Answer: BD

A: While CloudFormation is a good start, remember this does not prevent changes after the stack has been deployed.

B: This looks likely.

C: This does not allow Finance to view the bill in a centralized manner, which is a requirement.

D: This is the best way to meet the security requirements. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access control guidelines.

E: It's best to use separate accounts for dev/test and prod, not a single consolidated account.
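For reference, the moving parts of B and D map onto a few AWS Organizations API calls; a hedged boto3 sketch (the account ID, policy ID, and OU ID are hypothetical):

    import boto3

    org = boto3.client("organizations")

    # B: create the organization from the chosen payer account, then
    # invite an existing business-unit account to join it.
    org.create_organization(FeatureSet="ALL")  # "ALL" also enables SCPs (D)
    org.invite_account_to_organization(
        Target={"Id": "222222222222", "Type": "ACCOUNT"}  # hypothetical account
    )

    # D: attach a service control policy to an OU to filter IAM usage centrally.
    org.attach_policy(
        PolicyId="p-examplepolicy",  # hypothetical SCP ID
        TargetId="ou-ab12-example",  # hypothetical OU ID
    )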

6.

A company has a legacy application running on servers on premises. To increase the application's reliability, the company wants to gain actionable insights using application logs. A Solutions Architect has been given the following requirements for the solution:

✑ Aggregate logs using AWS.

✑ Automate log analysis for errors.

✑ Notify the Operations team when errors go beyond a specified threshold.

What solution meets the requirements?

A. Install the Amazon Kinesis Agent on the servers, send logs to Amazon Kinesis Data Streams, use Amazon Kinesis Data Analytics to identify errors, and create an Amazon CloudWatch alarm to notify the Operations team of errors.

B. Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, use Amazon CloudWatch Events to notify the Operations team of errors.

C. Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, use sendmail to notify the Operations team of errors.

D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a CloudWatch alarm to notify the Operations team of errors.

Answer: A

https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html

https://medium.com/@khandelwal12nidhi/build-log-analytic-solution-on-aws-cc62a70057b2
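Note that options A and D both end the same way: a CloudWatch alarm notifies the Operations team. A minimal boto3 sketch of such an alarm (the metric namespace, threshold, and SNS topic ARN are hypothetical):

    import boto3

    cw = boto3.client("cloudwatch")

    # Alarm when the error count crosses the agreed threshold and
    # notify the Operations team through an SNS topic.
    cw.put_metric_alarm(
        AlarmName="app-error-threshold",
        Namespace="LegacyApp",      # hypothetical custom metric namespace
        MetricName="AppErrorCount",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=10,               # hypothetical threshold
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111111111111:ops-team"],
    )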

7.

A Solutions Architect is working with a company that operates a standard three-tier web application in AWS. The web and application tiers run on Amazon EC2, and the database tier runs on Amazon RDS. The company is redesigning the web and application tiers to use Amazon API Gateway and AWS Lambda, and it intends to deploy the new application within 6 months. The IT Manager has asked the Solutions Architect to reduce costs in the interim.

Which solution will be MOST cost effective while maintaining reliability?

A. Use Spot Instances for the web tier, On-Demand Instances for the application tier, and Reserved Instances for the database tier.

B. Use On-Demand Instances for the web and application tiers, and Reserved Instances for the database tier.

C. Use Spot Instances for the web and application tiers, and Reserved Instances for the database tier.

D. Use Reserved Instances for the web, application, and database tiers.

Answer: B

B: Correct. The web and application tiers will be replaced within 6 months, so a one-year Reserved Instance commitment would outlast them; On-Demand Instances are the most cost-effective option that still preserves reliability for that interim period.

A/C: Spot Instances can be interrupted at any time, which conflicts with maintaining reliability for the web and application tiers.

D: Reserving all three tiers over-commits, since the web and application tiers will be retired before a one-year term ends. The database tier, however, will continue to serve the new serverless application, so a Reserved Instance there does reduce cost, which is why B keeps it.

8.

A company runs a legacy system on a single m4.2xlarge Amazon EC2 instance with Amazon EBS storage. The EC2 instance runs both the web server and a self-managed Oracle database. A snapshot is made of the EBS volume every 12 hours, and an AMI was created from the fully configured EC2 instance. A recent event that terminated the EC2 instance led to several hours of downtime. The application was successfully launched from the AMI, but the age of the EBS snapshot and the repair of the database resulted in the loss of 8 hours of data. The system was also down for 4 hours while the Systems Operators manually performed these processes.

What architectural changes will minimize downtime and reduce the chance of lost data?

    A. Create an Amazon CloudWatch alarm to automatically recover the instance. Create a script that will check and repair the database upon reboot. Subscribe the Operations team to the Amazon SNS message generated by the CloudWatch alarm.

    B. Run the application on m4.xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of two. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

    C. Run the application on m4.2xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of one. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

    D. Increase the web server instance count to two m4.xlarge instances and use Amazon Route 53 round-robin load balancing to spread the load. Enable Route 53 health checks on the web servers. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

Answer: B is correct.

A: Does not address a loss of data since the last backup.

B: Ensures that there are at least two EC instances, each of which is in a different AZ. It also ensures that the database spans multiple AZs. Hence this meets all the criteria.

C: Having Auto Scaling set to a minimum instance count of one means that if the single instance has a problem, it must be restarted, and there would be an outage during that restart. As such, B is a better answer.

D: Does not guarantee that the two EC2 instances are in different Availability Zones. If they are in the same AZ, that entire zone could theoretically have an outage. Given that, I would select B instead of D. Apart from that consideration, D does the trick.
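As a rough illustration of option B's compute layer, a boto3 sketch that enforces a minimum of two instances spread across two Availability Zones (all identifiers are hypothetical):

    import boto3

    asg = boto3.client("autoscaling")

    # Keep at least two instances, spread across subnets in different AZs,
    # registered behind the load balancer's target group.
    asg.create_auto_scaling_group(
        AutoScalingGroupName="legacy-web",
        MinSize=2,
        MaxSize=4,
        LaunchTemplate={"LaunchTemplateName": "legacy-web-lt", "Version": "$Latest"},
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # subnets in two AZs
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web/0123456789abcdef"
        ],
        HealthCheckType="ELB",
        HealthCheckGracePeriod=300,
    )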
