In this phase, focus on how you can optimize the AWS platform and your application to increase reliability, reduce costs, and automate alerts and analysis. Since you pay only for the resources you consume, it pays to optimize your system whenever possible; a small optimization might save thousands of dollars on your next monthly bill. Key activities in the optimization phase include:

  1. Using AWS native features to reduce costs
  2. Improving platform efficiency and eliminating unneeded resource usage
  3. Setting up metrics and visibility into the resources consumed (and by whom)
  4. Implementing cloud monitoring tools, including third-party products, and training staff on them
  5. Revisiting elasticity and reliability to ensure the system is optimized and disaster-proof

 

Understanding your Usage Patterns

With AWS, we can be more proactive if we understand traffic, load, and resource-consumption patterns. For example, suppose we have a customer-facing website deployed across the AWS global infrastructure, but we do not expect traffic from a certain part of the world during the early morning hours. We can scale down the infrastructure in that AWS region for that block of time, which saves quite a bit of money. Usage and load patterns can be understood with Amazon CloudWatch and with marketplace tools that integrate with the AWS APIs, or with tooling you write yourself.
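As a minimal sketch of this idea, the function below picks a desired instance count for a region based on the hour of day. The regions, quiet hours, and capacities are hypothetical examples, not AWS defaults; in practice you would feed the result to an Auto Scaling API call.

```python
# Hypothetical schedule: regions where we expect little traffic, and when (UTC).
QUIET_HOURS = {
    "ap-southeast-1": range(0, 6),  # little traffic expected 00:00-05:59 UTC (assumed)
}

PEAK_CAPACITY = 8    # instances during normal hours (assumed)
QUIET_CAPACITY = 2   # instances during the quiet block (assumed)

def desired_capacity(region: str, hour_utc: int) -> int:
    """Return the number of instances the region should run at this hour."""
    quiet = QUIET_HOURS.get(region)
    if quiet is not None and hour_utc in quiet:
        return QUIET_CAPACITY
    return PEAK_CAPACITY
```

A scheduled job could evaluate this each hour and adjust the region's Auto Scaling group accordingly.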

 

Terminate Under-Utilized Instances

Inspect system logs and access logs periodically to understand the usage and lifecycle patterns of each Amazon EC2 instance. Terminate your idle instances, and look for under-utilized instances you can eliminate to increase the utilization of the overall system. For example, examine an application running on an m1.large instance (1 x $0.40/hour) and see whether you can scale out and distribute the load across two m1.small instances (2 x $0.10/hour) instead.
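The arithmetic behind the example above can be sketched as follows, using the hourly rates from the text and an average month of 730 hours:

```python
# Compare 1 x m1.large ($0.40/hour) against 2 x m1.small ($0.10/hour each),
# using the prices from the example in the text.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_rate: float, count: int = 1) -> float:
    """Monthly cost of running `count` instances continuously at `hourly_rate`."""
    return hourly_rate * count * HOURS_PER_MONTH

large = monthly_cost(0.40)       # one m1.large
smalls = monthly_cost(0.10, 2)   # two m1.smalls carrying the same load
savings = large - smalls         # monthly saving from scaling out
```

Here the two smaller instances halve the monthly bill while also giving you two failure domains instead of one.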

 

Leverage Amazon EC2 Reserved Instances

Reserved Instances give you the option to make a low, one-time payment for each instance you want to reserve and, in turn, receive a significant discount on the hourly usage charge for that instance. When looking at usage patterns, try to identify instances that run in a steady state, such as a database server or domain controller. For servers running at roughly 24% utilization or higher, consider investing in Amazon EC2 Reserved Instances with a 3-year term, which can save up to 49% off the hourly rate.
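A rough break-even sketch is shown below: it finds the utilization above which a Reserved Instance beats on-demand pricing. All prices are illustrative, not current AWS rates, and it assumes the discounted hourly rate is paid only for hours the instance actually runs.

```python
def breakeven_utilization(od_rate: float, ri_upfront: float,
                          ri_rate: float, term_hours: int) -> float:
    """Utilization (0..1) above which the RI is cheaper than on-demand.

    On-demand cost over the term:  od_rate * u * term_hours
    RI cost over the term:         ri_upfront + ri_rate * u * term_hours
    Setting them equal and solving for u gives the break-even point.
    """
    return ri_upfront / ((od_rate - ri_rate) * term_hours)

# Illustrative numbers only: $0.40/hr on-demand, $1200 upfront for a 3-year
# term with a $0.12/hr discounted rate.
u = breakeven_utilization(0.40, 1200.0, 0.12, term_hours=3 * 365 * 24)
```

With these assumed numbers the RI pays for itself above roughly 16% utilization, which is consistent with the ~24% rule of thumb in the text leaving comfortable headroom.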

 

Improve Efficiency

The AWS cloud provides utility-style pricing: you are billed only for the infrastructure you actually use, not for all the infrastructure that may be in place. This adds a new dimension to cost savings. You can make measurable optimizations to your system and see the savings reflected in your next monthly bill. For example, if a caching layer reduces your data requests by 80%, you see the reward in the very next bill.

Improving the performance of an application running in the cloud can also yield overall cost savings. For example, if your application transfers a lot of data between Amazon EC2 and your private data center, it might make sense to compress the data before transmitting it over the wire. This can significantly reduce both data transfer and storage costs. The same concept applies to storing raw data in Amazon S3.
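A minimal sketch of the compress-before-transfer idea, using Python's standard gzip module on a synthetic, repetitive payload (real savings depend entirely on how compressible your data is):

```python
import gzip

# Synthetic log-like payload standing in for data headed over the wire.
payload = b"timestamp=2010-01-01T00:00:00Z status=200 path=/index.html\n" * 1000

compressed = gzip.compress(payload)

# Transfer and storage costs scale roughly with bytes moved, so the
# fraction saved is about (1 - ratio).
ratio = len(compressed) / len(payload)
```

Highly repetitive text like logs often compresses by an order of magnitude; binary media that is already compressed (JPEG, MP4) will see little or no benefit.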

 

Advanced Monitoring and Telemetry

Implement telemetry in your cloud applications so that you have the visibility you need into your mission-critical applications and services. It is important to understand that the end-user response time of your applications depends on many factors beyond the cloud infrastructure: ISP connectivity, third-party services, browsers, and network hops, to name a few. Measuring and monitoring the performance of your cloud applications lets you proactively identify performance issues and diagnose their root causes so you can take appropriate action. For example, if end users accessing the nearest node of your globally hosted application are experiencing slow response times, you can try launching more web servers in that region. You can send yourself notifications using Amazon Simple Notification Service (HTTP/Email/SQS) when a metric of a given AWS resource or application approaches an undesired threshold.
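The threshold-and-notify logic described above can be sketched as a small pure function. The publish step is stubbed out here; with an AWS SDK such as boto3 you would call SNS's publish operation against your topic instead (the metric names and thresholds below are made-up examples).

```python
def check_metric(name: str, value: float, threshold: float, notify) -> bool:
    """Call notify(message) and return True when the metric breaches its threshold."""
    if value >= threshold:
        notify(f"ALARM: {name} = {value} (threshold {threshold})")
        return True
    return False

# Collect alerts in a list for this sketch; in production, `notify` would
# publish to an SNS topic (HTTP/Email/SQS subscribers) instead.
alerts = []
check_metric("p95_response_ms", 850.0, 500.0, alerts.append)   # breaches -> alert
check_metric("cpu_percent", 12.0, 90.0, alerts.append)          # healthy -> no alert
```

In practice CloudWatch alarms can do this evaluation for you server-side; a hand-rolled check like this is mainly useful for application-level metrics CloudWatch does not see.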

 

Track your AWS Usage and Logs

Periodically monitor your AWS usage bill, service API usage reports, and your Amazon S3 or Amazon CloudFront access logs.
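A simple first pass over a usage report is to roll up spend by service, as sketched below. The records are made-up sample data standing in for a parsed billing report; the service names mirror those AWS uses in billing, but treat the structure as an assumption.

```python
from collections import defaultdict

# (service, cost) pairs as they might come out of a parsed usage report.
records = [
    ("AmazonEC2", 120.50),
    ("AmazonS3", 14.20),
    ("AmazonEC2", 80.00),
    ("AmazonCloudFront", 9.75),
]

totals = defaultdict(float)
for service, cost in records:
    totals[service] += cost

# The service with the largest spend is usually where optimization pays off first.
top = max(totals, key=totals.get)
```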

 

Maintain Security of Your Applications

Ensure that application software is consistent and always up to date, and that you are patching your operating systems and applications with the latest vendor security updates. Patch an AMI, not a running instance, and redeploy often; ensure that the latest AMI is deployed across all your instances.

 

Re-engineer your application

To build a highly scalable application, some components may need to be re-engineered to run optimally in a cloud environment, and some existing enterprise applications might require refactoring before they can run in an elastic fashion. Some ideas include:

 

Decompose your Relational database

Most traditional enterprise applications use a relational database system. Database administrators often start with a database schema based on instructions from a developer, and enterprise developers then build the application against that schema, assuming unlimited scalability on a fixed infrastructure. When developers and database architects fail to communicate about what type of data is being served, it becomes extremely difficult to scale that relational database.

As a result, much time may be wasted migrating data to a “bigger box” with more storage capacity, or scaling up to get more computing horsepower. Moving to the cloud is an opportunity to analyze the current relational database management system and make it more scalable as part of the migration. Some techniques that can help take the load off your RDBMS:

  1. Move large blob objects and media files to Amazon S3 and store a pointer (the S3 key) in your existing database
  2. Move associated metadata or catalogs to Amazon SimpleDB or a NoSQL key-value store
  3. Keep only the data that is absolutely needed (for example, data involved in joins) in the relational database
  4. Move all relational data into Amazon RDS so you have the flexibility to scale your database compute and storage resources with an API call, only when you need it
  5. Offload the read load to multiple read replicas (slaves)
  6. Shard (or partition) the data based on item IDs or names
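Item 6 above can be sketched as a stable hash of the item ID, as shown below. The shard names are hypothetical; `zlib.crc32` is used rather than Python's built-in `hash()` because the latter is randomized per process, and shard routing must be stable across runs.

```python
import zlib

# Hypothetical shard names; each could map to a separate database instance.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(item_id: str) -> str:
    """Route an item to a shard via a stable hash of its ID."""
    return SHARDS[zlib.crc32(item_id.encode()) % len(SHARDS)]
```

Note that modulo-based routing makes resharding painful when the shard count changes; schemes such as consistent hashing mitigate that, at the cost of extra complexity.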

 

Implement Best Practices

Implement the best practices highlighted in the Architecting for the Cloud whitepaper. These practices will help you create not only a highly scalable application well suited to the cloud, but also a more secure and elastic one.

 

Conclusion

The AWS cloud brings scalability, elasticity, agility, and reliability to the enterprise. To take advantage of these benefits, enterprises should adopt a phase-driven migration strategy and begin using the cloud as early as possible. Whether it is a typical 3-tier web application, a nightly batch process, or a complex backend processing workflow, most applications can be moved to the cloud. The blueprint in this paper offers a proven, step-by-step approach to cloud migration.

When customers follow this blueprint and focus on creating a proof of concept, they immediately see value in their proof-of-concept projects and recognize the tremendous potential of the AWS cloud. After you move your first application to the cloud, you will get new ideas and see the value of moving more applications there as well.