The critical factor when migrating data and applications is not to disrupt the business. There are two main migration strategies for moving to AWS, each with its pros and cons: the Forklift Migration Strategy and the Hybrid Migration Strategy.
The Forklift strategy is basically taking the application as it exists today and moving the code, the binaries and the dependencies into the cloud as is. Candidates are typically older, customized and oftentimes poorly documented systems. They are largely stateless applications, tightly coupled, and often self-contained, meaning there are no inter-app or inter-system dependencies. Components of a three-tier web application that require extremely low-latency connectivity between them to function, and cannot afford internet latency, may also be best suited to this approach, provided the entire application, including the web, app and database servers, is moved to the cloud all at once.
In this approach, we move the existing application into the cloud with few code changes. Most of the work involves copying the application binaries, creating and configuring Amazon Machine Images (AMIs), setting up security groups, Elastic IP addresses and DNS, and switching to Amazon RDS databases. AWS's raw infrastructure services, such as IAM, security groups, NAT, EC2, EBS, S3 (in place of file servers), RDS and VPC, are typically employed. In essence, all we are doing is swapping physical infrastructure for virtual infrastructure: a physical-to-virtual (P2V) migration.
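Much of the forklift work is this kind of declarative plumbing. As a minimal sketch (ports, addresses and tier names are illustrative assumptions), the helper below builds a security-group ingress rule in the same `IpPermissions` shape that EC2's APIs accept, for example boto3's `authorize_security_group_ingress`; it only constructs the data and makes no AWS call:

```python
def ingress_rule(port, cidr, protocol="tcp"):
    """Build one ingress rule in the IpPermissions shape used by
    EC2 security-group APIs (e.g. boto3 authorize_security_group_ingress).
    Constructs data only; no AWS call is made."""
    return {
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr}],
    }

# A forklifted three-tier app might open HTTPS to the world and the
# database port only to the app tier's subnet (addresses illustrative).
web_rules = [ingress_rule(443, "0.0.0.0/0")]
db_rules = [ingress_rule(3306, "10.0.1.0/24")]
```

Keeping rules as data like this also makes the P2V cutover reviewable: the whole network posture of the migrated stack can be inspected before anything is applied.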
Pros: In this migration scenario we can 'cloudify' an application quickly, provide remote access, and remove current on-premises or co-location costs and maintenance. It is fairly cheap, if not cheerful.
Cons: Because the application is not cloud-native and is quite likely strongly coupled, the cloud characteristics we are seeking will not be available, including scalability, elasticity and the ability to easily change or amend the code base.
As with any migration, we need to run this transition in the background, do proper testing, and declare a backup and rollback strategy in case of failure.
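The backup-and-rollback idea can be sketched independently of any AWS API: run the cutover steps in order and, if one fails, undo the completed steps in reverse. A minimal sketch, with hypothetical step names and toy actions:

```python
def cutover(steps, undo):
    """Run (name, action) steps in order. If any step raises,
    run the undo action for each completed step in reverse order
    and report failure, leaving the system as it was."""
    done = []
    try:
        for name, action in steps:
            action()
            done.append(name)
        return True
    except Exception:
        for name in reversed(done):
            undo[name]()
        return False

def fail():
    raise RuntimeError("cutover failed")

# Toy example: the DNS switch fails, so the completed snapshot
# step is rolled back (actions here just record to a log).
log = []
steps = [
    ("snapshot", lambda: log.append("snapshot taken")),
    ("dns", fail),
]
undo = {
    "snapshot": lambda: log.append("snapshot restored"),
    "dns": lambda: log.append("dns reverted"),
}
ok = cutover(steps, undo)
```

In a real migration each action would be a concrete operation (taking an EBS snapshot, flipping a DNS record), but the discipline is the same: every forward step declared alongside its rollback before the transition begins.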
The Hybrid migration scenario consists of taking a part, or parts, of an application and moving them to a receiving and ready cloud platform. Some parts of the application remain on premises, or in a co-location center, and are integrated with the cloud platform.
An example: you have a heavily trafficked e-commerce site and several batch-processing components (such as indexing and search) that power the website. The batch-processing system can be migrated to the cloud first, while the website continues to run in the traditional data center. In this case, the data-ingestion layer can be made "cloud-aware" so that the data is fed directly to an Amazon EC2 instance of the batch-processing system before every job run.
After proper testing of the batch-processing system, you can decide to move the website application as well.
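A "cloud-aware" ingestion layer largely amounts to routing each workload to wherever it currently lives. A minimal sketch, with hypothetical hostnames, of a routing table that sends the migrated batch components to their EC2 endpoint while web traffic stays on premises:

```python
# Hypothetical endpoints; in a hybrid migration this table is
# updated as each component moves to the cloud.
ON_PREM = "https://www.onprem.example.com"
ROUTES = {
    "web": ON_PREM,
    "indexing": "https://batch.ec2.example.com/jobs",
    "search": "https://batch.ec2.example.com/jobs",
}

def endpoint_for(component):
    """Return the current endpoint for a component, falling back
    to the on-premises default for anything not yet migrated."""
    return ROUTES.get(component, ON_PREM)
```

The useful property is that callers never hard-code a location: when the website itself is finally migrated, only the table changes, not the code that feeds the job runs.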
Pros: This strategy can be viewed as a lower-risk approach to migration than an 'everything-at-once' approach. By migrating parts of the application, we decrease the risk of unexpected issues with the cloud platform, or from co-dependencies within the application and system. You can also refactor the application in the background (as with a forklift) and make it cloud-native.
Another pro is the ability to 'wrap' your legacy systems, preserving that legacy investment while still leveraging aspects of the cloud platform. Imagine a mainframe application that requires specialized hardware to function. We can write 'cloud-aware' web-service wrappers around the legacy application and expose them as SOAP web services. The cloud application makes a direct call to these web services, which in turn interact with the mainframe application. You can also set up a VPN tunnel between the legacy applications that reside on premises and the cloud applications. Your mainframe is now cloudified.
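The wrapper pattern fits in a few lines: a stub stands in for the real mainframe interaction, and the wrapper returns its result inside a SOAP 1.1 envelope that the cloud application can consume. The function names and balance value below are illustrative:

```python
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope namespace

def legacy_get_balance(account_id):
    # Stand-in for the real mainframe interaction (e.g. an MQ or
    # terminal-emulation call); the returned value is illustrative.
    return "1042.50"

def soap_get_balance(account_id):
    """'Cloud-aware' wrapper: expose the legacy call as a SOAP
    response that a cloud application can fetch over HTTP/VPN."""
    balance = legacy_get_balance(account_id)
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}"><soap:Body>'
        f"<GetBalanceResponse><Balance>{balance}</Balance></GetBalanceResponse>"
        "</soap:Body></soap:Envelope>"
    )
```

Served behind a small HTTP front end, this lets the cloud side treat the mainframe as just another web service, with the VPN tunnel carrying the calls back on premises.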
Cons: The same cons as the Forklift approach can apply. It will depend on the type of application and how loosely coupled and cloud-friendly it is.
With both strategies, we will need to create Amazon Machine Images for the EC2 server resources. In many cases it is best to begin with AMIs provided by AWS or by a trusted solution provider as the basis of the AMIs you intend to use going forward. Depending on your specific requirements, you may also need to leverage AMIs provided by other ISVs. In any case, the process of configuring and creating your AMIs is the same. It is recommended that you create an AMI for each component designed to run in a separate Amazon EC2 instance.
It is also recommended to create an automated or semi-automated deployment process to reduce the time and effort of re-bundling AMIs when new code is released. These can be the 'golden images' for a particular environment and deployment model, and can be reused. This is also a good time to begin thinking about a configuration-management process that covers your servers running in the cloud. The figure below gives an example of the AMI stack (OS, foundation, dependencies, app runtime binaries, application).
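One small piece of that re-bundling automation is simply naming golden images consistently per component and release, so rebuilt AMIs can be found and reused. A minimal sketch (the naming scheme is an assumption, not an AWS convention); in practice the resulting name would be passed to EC2's create-image call:

```python
from datetime import date

def golden_ami_name(component, version, day=None):
    """Build a predictable name for a golden image, keyed by
    component, release version and build date. The scheme itself
    is illustrative, not an AWS convention."""
    day = day or date.today()
    return f"{component}-{version}-golden-{day:%Y%m%d}"

# e.g. the app-tier image for release 2.3.1, built 1 May 2024
name = golden_ami_name("app-tier", "2.3.1", date(2024, 5, 1))
```

A deterministic name per component and version makes it straightforward for the deployment pipeline to check whether a golden image already exists before re-bundling.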
Figure: AMI stack example, with 'Je' meaning 'Just enough' (image from AWS).