Use Case for Database Migration with AWS Babelfish

Written by Francisco Gimeno


Cloudwiry’s Modernization Hub provides a set of tools to help modernize applications. Conceptually, it splits an application into layers and analyzes each layer individually.

The output of this process is an assessment of how easy it is to migrate a legacy application to a modern framework. For the Compute layer we may suggest containers, Lambda, or simple rightsizing; in the Database layer, other interesting opportunities appear. We may, for example, find misused BLOB fields that would be a better fit for S3 storage, keeping only the object URL in the SQL table. While we can optimize and find better places for parts of the data to live, a lot of it will still remain in structured relational databases.

If the source database is SQL Server, using AWS Babelfish can be a game-changer from a cost-saving perspective. It also eases the move to more open platforms, since new components can be developed to talk directly to the Aurora PostgreSQL engine.

How Does It Work?

Given a set of applications, with all their assets identified (for instance through tagging), we analyze each kind of resource with our optimizer modules and produce a set of recommendations. Each recommendation contains an action (migrate, rightsize, move, replace, etc.) and the cost differential, along the lines of the sketch below.
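A minimal sketch of what such a recommendation record could look like, assuming a hypothetical Python data model (the Recommendation class, its field names, and the example costs are illustrative, not Cloudwiry’s actual schema):

```python
# Minimal sketch of a recommendation record; class and field names are
# illustrative assumptions, not Cloudwiry's actual data model.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    MIGRATE = "migrate"
    RIGHTSIZE = "rightsize"
    MOVE = "move"
    REPLACE = "replace"


@dataclass
class Recommendation:
    resource_id: str               # e.g. an ARN or tag-derived identifier
    layer: str                     # "compute", "database", ...
    action: Action
    target: str                    # e.g. "AWS Lambda", "Aurora Postgres + Babelfish"
    current_monthly_cost: float    # USD
    projected_monthly_cost: float  # USD

    @property
    def monthly_savings(self) -> float:
        return self.current_monthly_cost - self.projected_monthly_cost


# Example values derived from the pricing table later in this post (~730 h/month).
rec = Recommendation(
    resource_id="db-prod-01",
    layer="database",
    action=Action.MIGRATE,
    target="Aurora Postgres + Babelfish",
    current_monthly_cost=66.792 * 730,
    projected_monthly_cost=19.20 * 730,
)
print(f"{rec.action.value}: save ${rec.monthly_savings:,.2f}/month")
```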

Each optimizer module can produce a different set of options. For Compute, for instance, it is possible to calculate the options for moving to AWS Lambda, Fargate, EKS, or Spot. The best solution depends on the original workload: EKS could be a good option for some workloads, while Lambda is the right one for others.

The options can be combined, and the recommendations need not be followed to the letter (there may be other external constraints).

The Case for AWS Babelfish

In legacy applications, it is common to find database technologies being misused. In most cases, all the data is stored in the same database engine for the sake of simplicity (it can be hard to centrally manage different database technologies from the same application).

So for databases, the solution is more likely to be a combination of different technologies. As noted above, within a source database we might find good candidates for migrating BLOB fields into S3 objects, moving log tables to S3 + Athena, or adopting Amazon Timestream for time-series data. A sketch of the BLOB-to-S3 pattern follows.
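A minimal sketch of moving BLOBs out of SQL Server into S3, assuming a hypothetical documents table with id, payload, and a newly added payload_url column; the bucket name, connection DSN, and column names are placeholders, and the boto3/pyodbc combination is just one possible tooling choice:

```python
# Minimal sketch of moving BLOBs out of SQL Server into S3 and keeping only
# the object URL in the table. Table, column, bucket, and DSN names are
# placeholders; a payload_url column is assumed to have been added beforehand.
import boto3
import pyodbc

s3 = boto3.client("s3")
conn = pyodbc.connect("DSN=legacy-sqlserver")  # placeholder connection
BUCKET = "example-app-blobs"

cur = conn.cursor()
rows = cur.execute("SELECT id, payload FROM documents").fetchall()
for row_id, payload in rows:
    key = f"documents/{row_id}"
    # Move the binary payload into S3...
    s3.put_object(Bucket=BUCKET, Key=key, Body=bytes(payload))
    # ...and keep only a reference in the relational table.
    cur.execute(
        "UPDATE documents SET payload = NULL, payload_url = ? WHERE id = ?",
        f"s3://{BUCKET}/{key}",
        row_id,
    )
conn.commit()
```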

The rest of the data, which cannot fit a specialized database, has to stay in SQL. After moving some of the data out of the database, the CPU requirement decreases, so rightsizing will be required.

If the source database engine is SQL Server, there is no better moment to leverage the power of AWS Babelfish. However, AWS has announced that Babelfish could require some changes to the existing source code. For that reason, we have included a component that analyzes how feasible a migration to Babelfish is and how likely changes to the source code are to be required.

Once the target has been chosen, the execution plan passes through the Change Management Process, with a description of how the database migration will be executed and how the connection string or DNS entry will be changed. The sketch below illustrates the connection-string side of that change.
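A minimal sketch of that connection-string change, assuming the application uses pyodbc with the Microsoft ODBC driver; the host names, database name, and credentials are placeholders. Because Babelfish speaks the SQL Server wire protocol (TDS, port 1433 by default), the client driver itself does not need to change:

```python
# Minimal sketch of the connection-string change. Host names, database name,
# and credentials are placeholders; Babelfish listens on the SQL Server wire
# protocol (TDS, port 1433 by default), so the same ODBC driver keeps working.
import pyodbc

# Before: the legacy SQL Server instance.
old_conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=legacy-sqlserver.internal,1433;"
    "DATABASE=appdb;UID=app_user;PWD=example-password"
)

# After: the Aurora PostgreSQL cluster endpoint with Babelfish enabled.
new_conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=aurora-babelfish.cluster-example.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=appdb;UID=app_user;PWD=example-password"
)

conn = pyodbc.connect(new_conn_str)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```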

As a particular case, if we are able to offload 33% of the workload to specialized databases, it becomes possible to move from db.r5.24xlarge to db.r5.16xlarge.

 

Instance Type                              Platform                      Hourly Price   Savings
db.r5.24xlarge (original)                  SQL Server Enterprise         $66.792        0%
db.r5.16xlarge (rightsized)                SQL Server Enterprise         $44.528        33.3%
db.r5.16xlarge (rightsized and migrated)   Aurora Postgres + Babelfish   $19.20         71.25%
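As a quick sanity check, the savings percentages follow directly from the listed hourly prices; the snippet below is just that arithmetic, using the on-demand prices from the table:

```python
# Sanity check of the savings percentages from the hourly prices in the table.
original   = 66.792   # db.r5.24xlarge, SQL Server Enterprise ($/hour)
rightsized = 44.528   # db.r5.16xlarge, SQL Server Enterprise ($/hour)
migrated   = 19.20    # db.r5.16xlarge, Aurora Postgres + Babelfish ($/hour)

for label, price in [("rightsized", rightsized), ("rightsized + migrated", migrated)]:
    print(f"{label}: {(original - price) / original:.2%}")
# rightsized: 33.33%
# rightsized + migrated: 71.25%
```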

For more strategies on how to optimize your cloud, schedule a call with our cloud experts or drop us an email at hello@cloudwiry.com.
