SaaS anywhere: Designing distributed multi-tenant architectures

Peter: Thank you for coming out to our session today, SAS308: SaaS Anywhere - Designing Distributed Multi-Tenant Architectures. My name is Peter. I'm gonna be joined by my teammate, Amica, and we're gonna be presenting the session today.

So Amica and I are both Partner Solution Architects with a team here called AWS SaaS Factory. What that means is that we get to work with partners that are looking to build a new SaaS application, looking to transform existing SaaS applications, or looking to optimize their SaaS application.

Amica and I, being Partner Solution Architects, will work with them on a technical solution. And a lot of times when we're working with them, talking about their SaaS solution, we typically talk to them about the traditional deployment model where the SaaS application is deployed purely in the SaaS provider's environment, meaning that they have complete access to and control over the SaaS application.

But as we work with our partners and customers today, we're starting to hear some common questions or areas of concern that are really making us think about a new deployment model - one where we start expanding the boundary of SaaS and having remote components when it comes to SaaS deployment.

In today's session, we're gonna be talking about what SaaS anywhere is, our definition of SaaS anywhere, some design considerations, and some tradeoffs. It is a 300-level session, so we're gonna be talking about patterns and implementations as well, but I know that the patterns and implementations we're gonna be covering today are not a comprehensive list - there are a lot more patterns out there that we haven't thought of or won't be covering today.

In fact, I'm sure that if we start talking to some of you after this session, you might have patterns or implementation approaches that we won't be covering or talking about today.

It is a 300 level session, like I said, so we'll be talking about patterns and implementations. We will not be doing any live demos in the session. We won't be opening IDEs to write code, but we will have snippets of code that we're going to show.

So if this is what you're expecting from today's session, then really awesome and really happy to have you here. But if this is not something that you're expecting or you wanted something different, we won't be offended if you want to exit and look for a different session. I just want to make sure that you guys are maximizing your time here at re:Invent.

So with that said, we'll get started on the topic.

Starting with the core value of SaaS. Before I get into SaaS anywhere, I really want to talk a little bit about why we created SaaS in the first place - and this is like the promise of SaaS. This is what we want to achieve when we design a SaaS solution.

The first one is agility. We want to make sure that we have a solution that allows us to react and respond to our customer needs and the market - and really a solution that allows us to innovate fast and be able to produce features and put those features back into our customers' hands.

Next is the economy of scale. We want to make sure we have a solution that's cost optimized in the SaaS environment. Typically we associate that, or partially associate that, with having a multi-tenant environment where that means multiple customers are sharing the same infrastructure so that we can optimize on the cost for operating this solution.

Not only do we want to make sure we have a cost optimized solution, we also want to make sure we have an operationally efficient solution - and that means we want to make sure we have a solution that minimizes the cost for managing and operating the solution and making sure that we're not in a position where we have to hire additional operational resources because we're onboarding new tenants.

And of course, growth - and growth is not only revenue, it's not only financial, but how do we help our organizations scale and acquire new customers? How do we create a solution that allows us to be flexible and agile in creating new packages, new features so that we can help the organization tap into markets or segments that they weren't able to tap into before?

And the last one is reduced cycle time. If you think about a SaaS provider, when we acquire a customer, we really want that customer loyalty and really try to keep that customer with us for as long as possible. And it's really important for us to have a good feedback loop so we can gather those feature requests, or give the customer the ability to report issues - and take that information, turn it into features, and put those features back into our customers' hands.

By having the customer feel like they're engaged with a good feedback loop, that will increase our customer loyalty and hopefully that means they're staying with us for a little bit longer, or a lot longer.

So these are the core values of SaaS. As we go through the session today, we're going to keep coming back to them because we don't want to lose sight of these core attributes as we think about a new deployment model.

The next thing I want to cover is the control plane and application plane. These are two core components for a SaaS solution. When we're working with our partners, talking about a SaaS solution, the first one is the control plane - and this is gonna be a set of shared services used by the SaaS provider to manage all the tenants, all the instances of the application plane.

And really it's a centralized location for all the metrics, all the observability data, so that we can aggregate the data in one place and do any sort of analysis we might need to do. So this really is the one-stop shop for the SaaS provider to have complete visibility into what is going on in the SaaS environment.

The second component is the application plane - and this is the multi-tenant part of the application, the part of the application that your customers, your tenants, are interacting with and recognizing the value proposition of your solution.

In the introduction we said that there's a traditional deployment model where both the application plane and the control plane - the SaaS environment - are running in the SaaS provider's environment or account - and this is significant because, like I said, it allows the SaaS provider to have full access to and control over any of the infrastructure resources that are needed to support the SaaS application.

But we also said that we're hearing common questions or areas of concern that are really making us think about a new deployment model, expanding that boundary.

Some of these questions were the motivators that are prompting us to think about a new deployment model.

The first one is compliance and security - customers are now asking: where is the data residing or being stored? Where are the processes that are working with that data running? And how can I have more control over the data store and over the servers and infrastructure resources used to run these applications and processes?

The next one is cost effectiveness. If you think about how fast data is growing in today's environment and ecosystem, and how much data organizations are working with now, it's not a surprise that we have SaaS solutions that are going to be working with this data that the organization has.

So do we have a new deployment model that allows us to work with these large volumes of data without having to transfer the data back and forth? Because when we start transferring large volumes of data back and forth, we incur data transfer costs. How do we avoid that?

Domain requirements - it is not uncommon for an organization to have more than one business critical application in their ecosystem. So it's also not uncommon for SaaS solutions to have to integrate and work with these external systems.

So how do we optimize the integration with these external systems? And some of these external systems may have legacy requirements - and what that means is that not all applications have easy integration points like APIs or hooks. Some of these older legacy applications may need a component that sits next to the legacy application, reading from the database or from files on the system.

So we need to think about these legacy requirements. And then also latency - we talked about deploying remote components so we can integrate better with the external systems or databases - so what kind of deployment model can we have that improves performance and minimizes latency so the application is still performing as well as possible?

And not all the questions or concerns are going to be tech related. If you think about a company or organization looking to adopt SaaS, a lot of them are more accustomed to the traditional software deployment model where everything is running in their data center on their servers - they have complete access and control.

Now they're looking to adopt SaaS - not only do they need to switch their mindset to a new subscription model, but now we're also asking them to give up control over where the data is stored, where processes are running.

So some customers are gonna say, hey, I really want to use your solution, but I really need to keep my data, I need to have these processes running in my data center or within my control.

So these are the questions that keep coming up over and over in a lot of conversations that are really making us think about a SaaS anywhere deployment model where we start to stretch the boundary of SaaS deployment.

Now my favorite slide - there's a lot of words on this slide and I promise this will be the only slide I'm going to read verbatim. But I thought it's important for us to have a slide with the definition of what is SaaS anywhere so we're on the same page.

SaaS anywhere represents an architecture model where part of your systems resources are hosted in a remote environment that may not be under the control of the SaaS provider. It outlines a series of patterns and strategies that are used to create multi-tenant solutions that support centralized provisioning, configuration, operation, and deployment of these remote application resources.

Things to highlight here - we talked about the traditional deployment model with everything sitting in the SaaS provider's account. With SaaS anywhere, now we're gonna be looking at hosting part of the application in a remote environment that's not under the control of the SaaS provider.

And even though we're deploying these remote application resources in the remote environment, we're also not compromising on some of the core attributes such as centralized provisioning, configuration, and operations.

Now that slide is over - I'm a more visual person, so we have a slide here that visualizes the definition, and hopefully that helps make it a little easier to understand.

So in the traditional SaaS deployment model, the SaaS solution is running in the SaaS provider's environment. The SaaS provider has full access to and control over the infrastructure resources.

In a SaaS anywhere deployment model, we now introduce the concept of remote environments - and the remote environment is gonna have these remote infrastructures. So the remote environment and remote infrastructure is going to be owned and managed by your customers.

But the application components, the remote application resources, will still need to be deployed and executed and running in these remote environments within these remote infrastructure resources.

So with SaaS anywhere we have to start thinking about not only deploying the remote components, but how do we stay connected and have the ability to manage these remote components?

Now we've talked about what SaaS anywhere is, what made us think about SaaS anywhere. I want to talk a little bit about the design considerations when designing a SaaS anywhere solution - what are the impacts you have to keep in mind?

The first one is availability and reliability. For a SaaS provider, it's extremely important for us to make sure the SaaS application is available to our users all the time and that it's resilient. In a traditional model that's pretty straightforward, because we have full access to and control over the infrastructure resources, so we control the availability, we understand how that's gonna fail over, and so on.

But in a SaaS anywhere deployment model, we now have a shared responsibility. The SaaS provider will still be responsible for any of the tasks associated with the application - tasks such as tenant provisioning, application updates, deployments, and the management of the application - but the tenant is now going to be responsible for anything that's related to the infrastructure resources. The resource templates are going to be a set of instructions that we provide, so the tenant knows what to deploy as far as the remote infrastructure needed to support the SaaS application.

The tenant, being responsible for the infrastructure, now needs to understand the availability requirements, the DR requirements, and whether the infrastructure needs to be deployed in multiple AZs or not. They are also going to be responsible for the security of that infrastructure, making sure it's protected from malicious users. But at the same time, they need to think about how to provide the right level of permissions and connectivity for the SaaS provider to continue to connect to these remote infrastructures and manage the applications.

And redundancy - do I need database backups? What are my backup strategies or requirements? If an infrastructure resource fails, how fast can I spin up a new one? So what used to be the responsibility of the SaaS provider now, in the SaaS anywhere deployment model, falls under a shared responsibility model, and that will impact our SLA for the SaaS solution when we are talking to our customers.

Frictionless Onboarding

One of the key elements a SaaS provider should always keep their eye on is time to value. That means: how long does it take for a tenant to sign up for the SaaS application, log into the system, interact with the system, and start seeing the value of the system? We want that time to value to be as short as possible, and that really requires us to have a good onboarding process - a frictionless onboarding process.

If you think about a traditional deployment model, we have a sign-up page; the user is going to get onto the sign-up page and sign up for the service. Once they provide their information, that's gonna kick off an orchestration process that's gonna execute multiple tasks, starting with tenant management. The first task we're going to execute as part of the onboarding process is creating a tenant ID, and any of the metadata associated with the tenant - the status, the tier level - we're going to store that somewhere.

The next task may be user registration. So if we're using Cognito as our IdP, we might be creating a new user pool for the new tenant. We will be creating the first admin user, and that's significant - the first admin user needs to be created as soon as possible so the tenant can start logging into the system, doing any configuration they might need to do, and creating more user accounts. And of course, we're going to create IAM policies for us to use when we're implementing our isolation policy for the tenant within the SaaS environment.
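As a rough illustration of that user registration step, here is a minimal boto3 sketch, assuming Cognito is the IdP. The pool name, the custom tenant attribute, and the function name are hypothetical and not part of the session's own code.

```python
import boto3

cognito = boto3.client("cognito-idp")

def register_tenant_users(tenant_id: str, admin_email: str) -> str:
    """Create a dedicated Cognito user pool for a new tenant and its first admin user."""
    pool = cognito.create_user_pool(
        PoolName=f"{tenant_id}-users",
        # Hypothetical custom attribute so every user carries their tenant context.
        Schema=[{"Name": "tenant_id", "AttributeDataType": "String", "Mutable": False}],
    )
    pool_id = pool["UserPool"]["Id"]

    # The first admin user is created right away so the tenant can log in,
    # configure the system, and create more user accounts.
    cognito.admin_create_user(
        UserPoolId=pool_id,
        Username=admin_email,
        UserAttributes=[
            {"Name": "email", "Value": admin_email},
            {"Name": "custom:tenant_id", "Value": tenant_id},
        ],
        DesiredDeliveryMediums=["EMAIL"],
    )
    return pool_id
```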

Tenant provisioning: if there are resources that we need to provision as part of the onboarding process - maybe the tenant is a premium tier tenant where they have their own infrastructure - we would do that as part of the onboarding process. And one of the last ones is gonna be billing integration. Billing is typically done external to the SaaS application - this might be a third-party billing provider, this might be your own finance department. Regardless of what it is, billing is typically done externally, outside the SaaS application.

So as you can see, in the traditional model everything is pretty much automated. It's frictionless: users sign up and magic happens. But now with SaaS anywhere, we're introducing these remote environments and remote infrastructure, and that's gonna have an impact on what happens in our onboarding process. We now have to think about a prerequisite deployment, because there are going to be external accounts and external remote infrastructure resources. The tenant, as part of the sign-up process, needs to provide any account information we need to be aware of, and they also download the CloudFormation template - the resource template that we talked about - so they can provision those remote infrastructures. After the remote infrastructures are provisioned, they need to provide that information back to the SaaS provider, so the provider now has the information they need about the remote environments and remote infrastructure resources. Then that will, of course, kick off the orchestration of these onboarding tasks.

So tenant management, user registration, tenant provisioning - and now we have this remote application resource provisioning. As part of the onboarding process, we now need to log into those remote environments, access those remote infrastructures, and push the remote components of our SaaS application into these remote environments - and then the billing integration. So as you can see, when you're thinking about a SaaS anywhere solution, it's gonna have an impact on your whole SaaS application, starting with the onboarding process. These are some of the changes you have to keep in mind when designing your solution and thinking about how that will impact your onboarding process.
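The session doesn't prescribe how the provider validates the information the tenant hands back after the prerequisite deployment, but one minimal approach is to attempt to assume the cross-account role created by the resource template before kicking off the rest of the orchestration. The role ARN, external ID, and function name below are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

def verify_remote_environment(role_arn: str, external_id: str) -> bool:
    """Confirm the SaaS provider can actually reach the remote infrastructure the tenant
    provisioned from the shared resource template, before continuing onboarding."""
    sts = boto3.client("sts")
    try:
        sts.assume_role(
            RoleArn=role_arn,
            RoleSessionName="onboarding-verification",
            ExternalId=external_id,  # guards against the confused-deputy problem
        )
        return True
    except ClientError:
        # Surface this to the tenant admin: the prerequisite deployment is incomplete,
        # or the role ARN / trust policy they handed back is wrong.
        return False
```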

Remote Management and Updates

We talked about how a SaaS application used to be running in the SaaS provider's environment, and now we have remote environments because of SaaS anywhere. But one thing to highlight here is that even though we have remote components - that's the application plane - the control plane is still sitting with the SaaS provider, and this is gonna be the set of shared services I mentioned earlier that's used by the SaaS provider.

So with SaaS anywhere, the SaaS provider is still responsible for managing the applications. With the control plane running in the SaaS provider's account, we now have to think about what kind of service or connection we're gonna use, and how we're gonna establish that connectivity from the SaaS provider's account to the remote environments and the remote infrastructures. And not only do we have to think about the connectivity as part of that prerequisite deployment, we also have to think about what kind of permissions we need from the tenants, from the customers, to make sure we can access them. It's important for the SaaS provider to make these decisions when designing the solution, so that they know exactly what technology stack they're gonna use and can communicate those requirements to the tenants - so the tenants understand what they are signing up for in order to run these remote components.

Metrics and Monitoring

This is extremely important in a SaaS environment, but even more important in a SaaS anywhere deployment because we now have remote components running in remote environments. It's extremely important for us to get a complete view and have visibility into what is happening across all these remote environments. How are the remote components performing? Are they running into issues? How are the users interacting with them? We need to gather this information and push it back into a centralized location - the control plane - so we're not in a position where we have to log into each individual account just to see what's happening. And by pushing that information to a centralized location, like I said earlier, we can aggregate it, analyze it, and have a complete view of what is going on. Again, there are going to be different ways to achieve this - different implementation approaches, different services you can utilize. But it's also important to know that this should be part of your prerequisite deployment, so the customer again knows what they're signing up for and what kind of connections or services they need to implement in order to push that data back to the SaaS provider.

We talked about design considerations, but I also want to talk about the tradeoffs. When you're designing a SaaS anywhere solution, there are gonna be tradeoffs you have to consider, starting with the promise of SaaS - the core attributes. We talked about agility, we talked about innovation, making sure the application is available, making sure that we have a cost-efficient solution, a solution that we can manage easily and efficiently. But we also talked about the motivators for SaaS anywhere, and we understand that there are going to be conversations, there are gonna be requirements, that absolutely need a SaaS anywhere architecture or design. But it is a scale that you have to balance carefully.

We talked about the complexity, how it changes the onboarding process, the connections and permissions that you need to deploy, and that shared responsibility. So when you're thinking about a SaaS anywhere deployment model, we really have to think about how much anywhere we really need. Don't just assume that you have to pick up your whole application plane and deploy the whole thing remotely, because when you do that, you're adding that additional complexity to all of your services. Really take a look at the requirement that you're working with, or the concern that your customers are bringing up, and decide what services or what storage you need to deploy remotely to fulfill those requirements or address those concerns.

So when it comes to SaaS anywhere, we know we need to do it - but when you're thinking about it, just make sure you're proceeding with caution. Before we get to the patterns and implementations of SaaS anywhere, I want to talk about the different flavors of anywhere. And this means the type of environment that your customer is gonna be running in, or, if there's a third-party system that you have to integrate with, where that is running.

The first flavor is going to be another AWS account - maybe your customer is running in a separate AWS account, so you just have to know the services that you can use to have that account-to-account connection or VPC connectivity. The second is hybrid cloud - your customers, your tenants, or the third-party systems you have to integrate with might be running with a different cloud provider. So with your SaaS solution running in AWS, what do you have to consider to make sure that your solution now works with a hybrid cloud environment? And then on premises - we talked about those legacy applications or external systems you might have to integrate with that run in the customer's environment, in their data center. That's going to be another flavor of anywhere.

And it's really important for us to keep in mind the different flavors of anywhere and then decide what flavor we're gonna be working with, because that's gonna have a big impact on our technology stack and our implementation approaches. For the purpose of today's discussion, we're gonna keep the solutions, the patterns, and the implementation approaches to the AWS account flavor and not go into hybrid or on premises.

So with that, Dora is gonna talk about the different patterns and implementation approaches.

There are three main deployment models here. The first is the distributed data store model, where you bring some of the databases from the application plane into the tenant environment. The second is distributed application services, where you take complete microservices out of the application plane and deploy them in the tenant environment. And the third model would be the remote application plane, where you compile the application plane as a standalone entity and then ask the tenants to deploy it within their tenant environment. So you end up having only the control plane, which will be connected to each and every tenant environment to work as a multi-tenant solution.

Let's dive deep into each deployment model and see what the driving factors are for building these, with some examples as well.

Now, in the distributed data store model, you as a SaaS provider are going to bring some of the databases from the application plane into the tenant environment. This is primarily because the tenants are demanding control and complete ownership of the business data and want to have it in their environment - primarily because of security and compliance regulations and data residency requirements.

Also think about when you are building a SaaS solution that needs integration with data sets that are in the tenant environment, such as data platforms and data lakes. Usually this is data in large volumes, so it might not make sense to bring it all into the SaaS environment to be integrated, because that's costly and needs additional instrumentation as well.

Also, some of these data stores could be legacy - not ready to be modernized or not ready to be integrated with a SaaS at all. In these scenarios as well, you may think of bringing some of the data stores from the application plane into the tenant environment and working with the tenants' product development teams to get your data set out so that you can work with your SaaS solution.

Now, Peter was presenting earlier about the shared responsibility model in SaaS anywhere between the SaaS provider and the tenants. In this model, for example, the tenants would be primarily responsible for the storage management and the administrative tasks of the remote databases running in their environment - such as taking backups, running the DR plan, and making sure these databases are secure and available - so that they can work collaboratively with the application plane that is running in your SaaS environment.

Let's take a quick example. Imagine that you are building an e-commerce SaaS solution for your customers. If you want to follow the traditional SaaS architecture, what you're going to do is compile the SaaS control plane and the application plane, host them as a single deployment unit in your environment, and let your tenants use it as a SaaS.

Now, in the SaaS anywhere use case, these tenants are demanding control of the data - in this particular example, the payment data set. So in these scenarios, you will ask your tenants to come up with their own environment, for example their own AWS account. And then you as a SaaS provider will take out this payment database, deploy it in the tenant environment, and instrument your architecture in a way that the payment service over here works remotely with the payment databases in each and every tenant to run the database operations and deliver the application plane functionality.

Now, this implementation really depends on the database technology that we are using. For instance, in our example, let's assume that we are going to use Amazon DynamoDB. So I'm gonna provision a DynamoDB table - the payment table - in each and every tenant, but I still need to instrument this payment service to be able to connect to that DynamoDB table properly.

So how can I do this? Well, during tenant one's onboarding time, I'm gonna create an IAM role in tenant one that provides access to this particular database table. And then the payment service will be able to assume that IAM role from the application plane using AWS Security Token Service, which will allow access to the DynamoDB table so we can conduct the DB operations remotely.

Whenever there are other tenants coming in, you follow the same mechanism. You as a SaaS provider will share a CloudFormation template during the tenant onboarding time, so the admins can run it in their own environment, which will create the payment table and the IAM role necessary. You as a SaaS provider can then take the ARN of the IAM role from them, so that the payment service can assume that role whenever necessary to connect to those remote databases.

It's also interesting to have a look at how these IAM roles and policies work to deliver this functionality. As an example, there are a couple of things to be done from the tenant's standpoint. You create a cross-account IAM role that defines the least-privileged access that we can provide to the SaaS provider on the resource - which is the payment table of tenant one in this example. And the second part of this configuration would be to attach a trust policy to the IAM role that defines who can assume this IAM role remotely.

So in this example, I am using the execution role of the payment Lambda service and providing it the permission to assume this role remotely from the application plane. Now, these are all done from the tenant environment.

There are a couple of things to be done from the payment service side as well. First, provide the permission for the payment service to call the AssumeRole API for the role that we just created - the tenant one role - using a policy like this and attaching it to the execution role of the payment service. When you have multiple tenants, you're gonna maintain multiple policies like that and provide the permissions accordingly.
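The policies themselves are shown on slides rather than in this transcript, so here is a hedged stand-in: one way the tenant-side role and its trust and permissions policies could be created with boto3 (instead of the CloudFormation template the talk mentions). The account IDs, role names, and table name are hypothetical.

```python
import json
import boto3

# Hypothetical identifiers for illustration only.
TENANT_ACCOUNT_ID = "111111111111"
PROVIDER_PAYMENT_ROLE_ARN = "arn:aws:iam::222222222222:role/payment-service-execution-role"
PAYMENT_TABLE_ARN = f"arn:aws:dynamodb:us-east-1:{TENANT_ACCOUNT_ID}:table/payment"

iam = boto3.client("iam")  # run with credentials for the tenant account

# Trust policy: only the payment service's Lambda execution role in the SaaS
# provider account may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": PROVIDER_PAYMENT_ROLE_ARN},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: least-privileged access scoped to this tenant's payment table.
table_access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": PAYMENT_TABLE_ARN,
    }],
}

role = iam.create_role(
    RoleName="tenant1-payment-access-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="tenant1-payment-access-role",
    PolicyName="payment-table-access",
    PolicyDocument=json.dumps(table_access_policy),
)

# Hand this ARN back to the SaaS provider during onboarding. On the provider side,
# the payment service's execution role needs a policy allowing sts:AssumeRole on
# this ARN (one statement or policy per tenant).
print("Tenant role ARN:", role["Role"]["Arn"])
```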

And with that, we have the configuration all set up. All that is left is to put some code into the payment service to get the implementation out. Now, I've got some Python code here just to demonstrate. Imagine that tenant one is requesting some remote operation from the payment service, so I've got the tenant ID as tenant one. Based on that, I can load the other configuration, like the role ARN of tenant one, the table name, and the region of tenant one's environment.

Then I'm going to call the AssumeRole API of AWS STS, passing the role ARN as a parameter, which will return a credentials object that contains short-lived security keys representing the permissions that we defined in the IAM role.

So what I'm gonna do is use this credentials object to create my DynamoDB client object, which will help me conduct the DB operations remotely in tenant one. Now, as you can see, if we inject those parameters at the top from outside the function - using, let's say, HTTP header parameters - we can easily make this function more generic and applicable for connecting to all the payment databases in each and every tenant at scale as well.
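The Python on the slide isn't reproduced in this transcript, so here is a minimal sketch of what is being described - assume the tenant's role with STS, build a DynamoDB client from the temporary credentials, and run the operation against the tenant's table. The configuration values and the payment-item key schema are placeholders.

```python
import boto3

# Hypothetical per-tenant configuration captured at onboarding time. In practice these
# values could be injected from HTTP header parameters or a tenant-configuration store.
TENANT_CONFIG = {
    "tenant1": {
        "role_arn": "arn:aws:iam::111111111111:role/tenant1-payment-access-role",
        "table_name": "payment",
        "region": "us-east-1",
    },
}

def get_tenant_dynamodb_client(role_arn: str, region: str):
    """Assume the tenant's cross-account role and return a DynamoDB client scoped
    to that tenant's remote environment."""
    sts = boto3.client("sts")
    credentials = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="payment-service",
    )["Credentials"]
    # The short-lived keys carry only the permissions defined in the tenant's IAM role.
    return boto3.client(
        "dynamodb",
        region_name=region,
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )

def get_payment(tenant_id: str, payment_id: str) -> dict:
    config = TENANT_CONFIG[tenant_id]
    dynamodb = get_tenant_dynamodb_client(config["role_arn"], config["region"])
    response = dynamodb.get_item(
        TableName=config["table_name"],
        Key={"payment_id": {"S": payment_id}},  # placeholder key schema
    )
    return response.get("Item", {})
```

In practice you would likely cache the temporary credentials per tenant, but the shape of the call flow matches what the slide walks through.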

So the idea is that the functionality and the implementation will be different based on the database technology that you are going to use in this particular model. However, as SaaS providers, you've got to have a mechanism like this to conduct the database operations remotely, so you can deliver your SaaS use case.

Now, the second model here would be distributed application services. In this model, you are going to take out not a database or two but complete microservices from the application plane and then deploy them in the tenant environment - primarily because the tenants are demanding to have the business logic in their own environment, to get performance-optimized workloads processing their data and to get business decisions out earlier and faster for their customers.

Imagine that you are building an IoT SaaS solution where you have remote agents. You're going to deploy the remote agents in the tenant environment, and they will produce telemetry data. When the tenants need to run some business logic on top of this telemetry data - in order to get some analytics out and, based on that, respond to their customers much faster - you're going to take all of that functionality out of the application plane and deploy it in the tenant environment, following this distributed application services model.

One of the challenges for the SaaS provider would be to select the remote workload in a minimal way, so that you don't include anything that is not really necessary - because you're gonna maintain and manage this remote workload for each and every tenant you have in your solution. From the tenant standpoint, they now have a bigger responsibility, because they maintain complete, full-fledged microservices.

So they've got to provide the right infrastructure to run these microservices and make sure all the non-functional requirements - such as security, resilience, and availability - are there, so that these microservices are able to work with the rest of the application plane that is running in your SaaS environment.

Imagine that you are building a SaaS with a remote machine learning workload. If you follow the traditional SaaS architecture, you may want to have this ML workload centralized in your SaaS environment and share it with all of your tenants. Now, if the tenants are demanding better tenant isolation, you would provide a tenant-specific ML workload - siloed components - but still within your SaaS environment.

Now, in the SaaS anywhere case, tenants are demanding to have the ML workload in their environment - in the tenant environment. So as a SaaS provider, you're gonna have only the SaaS control plane and a part of the application plane in your environment, while you deploy the rest of the application plane - which includes the ML workload and the necessary microservices - into the tenant's environment during the tenant onboarding time.

Now, there are a couple of advantages here as well, because the ML workload needs to be integrated with a data store to be processed. If that data store is coming from the tenant environment - which is the private workload that I put on the right side - it's easy to integrate seamlessly, because the workload is now local to tenant one. The second advantage is that this model provides better tenant isolation, because the ML model here, for example, is private to tenant one as well - it is not even exposed to the SaaS provider.

Also, these remote workloads still need to work with the application plane that is running in your SaaS environment - not only to provide the metrics and monitoring data that are being metered, but also sometimes to consume the SaaS application services remotely. Because of that, the connectivity between the tenants and the SaaS provider is significant in this particular use case as well.

So, for example, if you want more reliable connectivity, you may also think about a managed service such as AWS PrivateLink, which, as the name says, provides dedicated connectivity for each and every tenant to connect to the SaaS provider's application plane. In that scenario, during the tenant onboarding time, we're going to have a VPC endpoint in tenant one's VPC and attach that to the PrivateLink service, which allows us to connect to the application plane through a Network Load Balancer. Each and every tenant will have their own VPC endpoint and PrivateLink connection.

All these connections are reliable, and the traffic stays within the AWS backbone itself, so the overall performance and latency of the communication between the tenants and the SaaS provider are optimized.
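As a rough sketch of the onboarding step just described - creating an interface VPC endpoint in the tenant's VPC that attaches to the provider's PrivateLink endpoint service - assuming the provider has already exposed its application plane behind a Network Load Balancer as an endpoint service. The service name, IDs, and helper name are placeholders.

```python
import boto3

def connect_tenant_to_application_plane(tenant_session: boto3.Session,
                                         vpc_id: str,
                                         subnet_ids: list[str],
                                         endpoint_service_name: str) -> str:
    """Create an interface VPC endpoint in the tenant's VPC that attaches to the SaaS
    provider's PrivateLink endpoint service (fronted by a Network Load Balancer)."""
    ec2 = tenant_session.client("ec2")
    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=vpc_id,
        ServiceName=endpoint_service_name,  # e.g. "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0"
        SubnetIds=subnet_ids,
        PrivateDnsEnabled=False,
    )
    return response["VpcEndpoint"]["VpcEndpointId"]
```

On the provider side, the endpoint service typically has to allow-list the tenant's principal or explicitly accept the endpoint connection before traffic flows.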

Now, we can think about so many use cases for this particular model based on the remote workload and the connectivity needs, and you can discuss multiple implementations. But for the SaaS provider, the idea comes down to two main things: first, select the right, minimal workload as the remote workload; and second, choose the right technologies to build the architecture, including the connectivity between the tenants and the SaaS provider.

And this final deployment model would be the remote application plane, where you are going to bring the complete application plane and deploy it in the tenant environment.

Meanwhile, as a SaaS provider, you are managing only the control plane in your SaaS environment. The primary driving factor for this would be that the tenants are demanding complete ownership of the data, the business logic, and literally everything you have in the application plane, primarily because of security and compliance.

And also, when you have tightly coupled local integrations, proximity could be a driving factor as well, where the tenants need to have these workloads closer to their customers so they can provide a more performance-optimized or latency-optimized value proposition for their end customers.

Sometimes the organizational culture - non-tech constraints - drives this deployment model as well. Sometimes everyone needs to have better tenant isolation, and for good reasons.

But some legacy organizational mindsets will also drive this deployment strategy, so that you end up deploying a complete application plane to each and every tenant separately.

The tenants have the biggest responsibility here, because not a database or two or a few microservices but the complete application plane is now running in the tenant's environment. So they've got to have a DevOps team, and they've got to provide the right infrastructure to run this application plane as specified by the SaaS provider.

And also, all the non-functional requirements - security, resilience, scalability, high availability - all those things should be preserved so this application plane can properly work hand in hand with the control plane that is running in your environment.

Think about a cloud-based core banking SaaS solution that you are building.

As a SaaS provider, you will have only the control plane in your environment, and you will compile the application plane and deploy it in the tenants' environments - which are the banks in this example.

So the private workload box over there on the right-hand side represents the cloud-based banking capabilities that you need to integrate with to get the core banking capabilities right in your SaaS solution.

So you may need to work with the product development teams of the banks to get this integration right during the tenant onboarding time.

The tenant users now have everything that they need in the tenant environment itself. So they'll be connecting to the application plane that is running in the tenant environment to consume the SaaS solution. They don't have to connect to the SaaS provider's account or environment anymore.

This application plane still needs to work hand in hand with the control plane that is hosted in the SaaS provider's account - to provide the metrics that are being metered, application insights, traces, and all the metadata the SaaS provider needs, so that they can provide the proper control plane capabilities to manage and operate the application planes going forward. There are so many use cases that we can think of for this model as well.

Based on the use case and the application plane, the implementation will be different as well. Think about a healthcare SaaS solution where you need to connect to patient information, health information, health history - certainly a regulated use case. And you may also want to connect with the existing IT systems of the healthcare providers, which involves a lot of domain-specific integration.

So in those use cases, you may think of compiling your SaaS application plane and deploying it as a whole in the tenant's environment, following this particular model.

So these are the three main deployment strategies that we wanted to go through in the SaaS anywhere context. They provide the mechanisms to build the SaaS solution and connect with the remote components and remote environments to get the solution up and running.

The SaaS operations aspect will also be a challenge, and there are a lot of aspects we could discuss in operating a SaaS use case at scale. Peter was covering some of the aspects of this, but I just want to highlight a couple of important things here.

First of all, remote deployments. Whichever deployment model you are going to use to implement your SaaS use case, you need to manage and maintain the remote workload.

As an example here, I have the SaaS provider's AWS account on the left-hand side, where we have the CI/CD pipeline. Whenever your product development team wants to release a new feature to the remote workload, they push the latest code or configuration to CodeCommit, and that will fire an event that starts our CI/CD pipeline.

The pipeline will take the latest code and configuration, build the new artifacts, upload them to a centralized repository, and then hand control off to CloudFormation.

Now, CloudFormation needs a way to connect to the tenant one environment and deploy the latest workload. In order to get that done, I'm going to have another, separate IAM role created in that environment during tenant one's onboarding time, which provides the permission to run a CloudFormation stack remotely in the tenant one environment.

Then I can assume this IAM role from the SaaS provider's environment using STS AssumeRole, as we discussed a few slides ago, gain the access, and run the CloudFormation stack remotely in the tenant one environment, so that it takes the latest artifacts from the repository and updates the remote workload.

There are a couple of things you need to make sure of as SaaS providers. First, you may need additional instrumentation here to handshake with the SaaS provider's account to let it know the status of the deployment - and, if something goes wrong, how do you handle the rollbacks and all this kind of metadata?

And secondly, you need to update all the tenant environments with the same latest version. So your CI/CD pipeline should be able to loop through and deploy the latest version in each and every remote environment - that is very important. Otherwise it will compromise the agility and operational efficiency of your SaaS solution.
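A minimal sketch of that fan-out, assuming each tenant's onboarding created a deployment role the pipeline can assume and that the remote workload is managed as a CloudFormation stack. The tenant registry, role names, stack name, and parameter names are hypothetical.

```python
import boto3

# Hypothetical registry of tenant remote environments captured at onboarding time.
TENANTS = {
    "tenant1": {
        "deploy_role_arn": "arn:aws:iam::111111111111:role/tenant1-remote-deploy-role",
        "region": "us-east-1",
    },
    # ...one entry per onboarded tenant
}

def deploy_remote_workload(template_url: str, version: str) -> None:
    """Roll the same workload version out to every tenant's remote environment by
    assuming each tenant's deployment role and updating its CloudFormation stack."""
    sts = boto3.client("sts")
    for tenant_id, env in TENANTS.items():
        creds = sts.assume_role(
            RoleArn=env["deploy_role_arn"],
            RoleSessionName="remote-deployment",
        )["Credentials"]
        cfn = boto3.client(
            "cloudformation",
            region_name=env["region"],
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        cfn.update_stack(
            StackName=f"{tenant_id}-remote-workload",
            TemplateURL=template_url,  # points at artifacts produced by the pipeline
            Parameters=[{"ParameterKey": "WorkloadVersion", "ParameterValue": version}],
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
        # Record per-tenant deployment status (and trigger rollbacks on failure) so the
        # control plane knows every remote environment is on the same version.
```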

And the next aspect would be cross-account observability. You may be wondering: how do I know what my tenants are doing? How do I get access to the metrics they are metering, and to the monitoring aspect in general? There are open source tools and partner solutions that you can leverage to get this cross-account observability done; another way is to leverage Amazon CloudWatch.

The idea here is that the remote workload is developed in a way that all the metrics being metered, application insights, traces, and logs are pushed to the tenant-specific CloudWatch for each and every tenant. Then we enable the cross-account observability feature of CloudWatch, which helps sync all the data from each and every tenant into the CloudWatch that is running in the control plane of the SaaS provider's account.

So this will help your DevOps teams access these metrics and all the application insights coming from each and every tenant's remote environment. It's also easy to extend this data set and build cross-account monitoring dashboards and platforms for a better DevOps experience.
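A small sketch of the metering side, assuming the remote workload tags every metric with a tenant identifier and that CloudWatch cross-account observability has been configured to link each tenant account to the provider's monitoring account. The namespace and metric names are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")  # runs inside the tenant's remote environment

def emit_tenant_metric(tenant_id: str, metric_name: str, value: float) -> None:
    """Publish a metric from the remote workload into the tenant-local CloudWatch.
    With cross-account observability enabled (the tenant account linked as a source
    account to the provider's monitoring account), the control plane can query these
    metrics centrally instead of logging in to each tenant account."""
    cloudwatch.put_metric_data(
        Namespace="SaaSAnywhere/RemoteWorkload",  # hypothetical namespace
        MetricData=[{
            "MetricName": metric_name,
            "Dimensions": [{"Name": "TenantId", "Value": tenant_id}],
            "Value": value,
            "Unit": "Count",
        }],
    )
```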

Now, there are other concerns and challenges in the SaaS operations aspect. But I hope that with these few things we discussed, and the deployment models we went through, you should be able to build an operable SaaS use case in good shape.

So we talked a lot about SaaS anywhere today. We started with the definition, and then we went through the design considerations and the driving factors of why we are building SaaS anywhere. We went through the three deployment strategies, we saw how some of the AWS services can be leveraged in building these use cases, and then finally the SaaS operations perspective. In summary, SaaS anywhere provides mechanisms for SaaS providers that help push the boundaries of traditional SaaS architectures, where we can take components from the application plane - such as databases, microservices, and even the complete application plane - and deploy them in the tenant environment to provide a better value proposition to our end customers, who are the tenants.

Because of that, every SaaS anywhere use case will have a remote component deployed in a remote environment. And the challenge is that the SaaS provider will have only least-privileged access to these remote environments, because they belong to the tenants.

Because of that, there is additional complexity that you need to deal with in your architecture - not only to deploy into these remote environments, but also to work with the remote workloads and update and operate them for all the tenants as well. So there will be instrumentation needed to get this part done in your architecture.

There are many reasons why we have to build SaaS anywhere use cases. But I also want you to think about this additional instrumentation that you need in your architecture to get the SaaS anywhere use case out into production.

We talked about three main deployment strategies. Choosing the right strategy based on your use case is key. And then selecting the right managed services or partner solutions will help you build the relevant features in your architecture that provide the necessary access to the environment to be able to manage and operate it as you wish.

So it's important to have the right technologies and tools selected. And also, we build SaaS anywhere because we are providing an additional value proposition to our tenants. At the same time, we want to make sure that the original SaaS core value proposition is preserved in our SaaS anywhere use cases as well - we are still building SaaS anywhere for operational efficiency, cost efficiency, economies of scale, agility, and innovation.

So we want to make sure those values are within your SaaS use case as well. It is still an emerging topic. We now see our customers and partners getting into this mode of development - development of SaaS anywhere use cases. During last re:Invent we did a chalk talk on this topic for the first time, and it was awesome to see how many SaaS anywhere use cases are being built by our customers and partners.

So this is all the content that we had to present today. I hope the session was helpful, and I hope you now have a better perspective on building SaaS anywhere use cases. Before wrapping up, real quick, there are a couple of notes I want to leave with you.

These are some of the SaaS sessions that our team is doing, shown in green - maybe make a note of these session IDs so you can refer to these sessions, or the recordings when they are available.

There are a bunch of chalk talks as well that you can join, be interactive, and add value. There are five workshops that we are doing which are hands-on - if you really want to see how these SaaS solutions are being built, I would highly recommend getting into those workshops as well. There's a builder session and a business session as well.

Now, I'm going to keep this slide up a little longer so that you can take a picture if you want. These QR codes will lead you to the SaaS thought leadership content that we have been working on and published on AWS - that will have links to our SaaS reference architectures, blog posts, white papers, and all the content that we have built. It will also allow you to access the AWS competency partners - domain experts who can help accelerate your SaaS journey - and a lot of prescriptive guidance about building and growing your SaaS to be successful.

Alright. So I hope the session helped you get a good understanding of what SaaS anywhere is. Whenever you are building or modernizing a SaaS solution in your organization, I hope you can now think about bringing it into the SaaS anywhere context going forward as well.

Alright, everybody. Thank you so much for being here. Thanks, Peter, for co-presenting. Have a wonderful evening and an awesome remaining part of the event as well. Have a great day.
