As a platform engineer, your team launched a new application, and demand is growing rapidly. Thousands of users are signing up, so you must create multiple Amazon RDS instances with varying instance types: one for testing, another for staging, and another for production.
However, setting up these databases takes too much time. You must go through the console or use IaC, apply the correct configurations and security policies, and verify that everything is correct. This process requires significant time and effort at scale, especially when factoring in regulatory and compliance requirements.
Is there a better way? Yes, there is: Kratix. Kratix is a Kubernetes-native framework that enables co-creation of capabilities by providing a clear contract between development and platform teams through the definition and creation of “Promises”.
Developers can use its AWS DB Promise, which provides RDS-as-a-Service, to request an RDS instance, and Kratix, already pre-configured by the platform team, sets it up with the proper credentials and access controls.
This article will discuss Kratix, its key concepts, setting up an RDS instance, and Day 2 operations with it. By the end, you will understand why it is essential for improving your team's productivity.
What is Kratix?
Kratix is an open-source Kubernetes-native platform framework created by Syntasso that helps organisations automate and manage the provisioning and deployment of infrastructure and resources.
Instead of writing complex configuration code or going through the console every time, Kratix allows you to create reusable templates called Promises, with requirements that help you structure how your infrastructure is set up.

Kratix leverages Kubernetes to manage platform services. This means those services are kept aligned with requirements over time: Kubernetes reconciliation loops generate the configuration, and GitOps ensures components are deployed consistently.
Kratix stores configurations either in Git or in an object storage bucket, and these configurations can be applied to any environment, including testing, staging, and production.
Developers get the needed resources without delays, reducing errors and helping the team move faster without operational problems.
The Kratix framework focuses more on the platform engineer experience, unlike pre-configured platforms, which focus more on the developer experience. With Kratix, platform engineers can build platforms that meet application needs at scale and on demand.
Kratix concepts
Let's break down some of the key concepts of Kratix and how they help automate Amazon RDS deployments:
Promises
Promises are declarative templates with predefined specifications that enable you to define how a resource should be created and managed. Instead of manually creating a database, platform engineers can create a Promise for RDS, specifying key settings like backups, security rules, and the database instance type.
The following sections define the main components of a Promise:
API: The API, defined as a Kubernetes Custom Resource Definition (CRD), is the component developers use to request a resource from a Kratix Promise. It describes all the options developers can set when making a request against a Promise, such as creating the RDS database in our case. The API ensures that all resource requests align with the standards set by the platform team.
Dependencies: Dependencies are the prerequisites needed to fulfil a Promise request. They are the requirements that come pre-installed on the destination. For the RDS Promise, these dependencies could be AWS IAM roles, networking configurations, a state store for tracking requests, or a backup and monitoring configuration like CloudWatch.
Infrastructure as Code (IaC) tools such as Terraform modules or Crossplane compositions can be used to define these dependencies.
Kratix ensures these dependencies are installed, while platform engineers focus on defining them.
Workflows: A workflow is the automation process that defines the lifecycle actions of a resource. Platform engineers use workflows to define how a Promise behaves within the Kubernetes environment. Each workflow is made up of pipelines that perform specific tasks. The two main workflows are:
Configure workflow: The configure workflow triggers when a resource is created or updated, and also re-runs on a regular cadence. It includes multiple pipelines executed in sequence, each performing the tasks needed to provision the RDS instance. When a developer requests a new RDS instance, the workflow converts the developer's input into the format needed by the installed tooling (e.g., Helm), processes it to ensure compliance with the platform's configuration, and provisions or updates the database.
Deletion workflow: The deletion workflow is triggered when the resource is deleted. It cleans up the provisioned resources since they are no longer needed.
Destination selectors: A Destination is the system where Kratix sends data or instructions for execution. The destinationSelectors field in a Promise sets which Destination(s) a request should be scheduled to; the selected Destination then processes all configuration or resource requests within the infrastructure.
For example, when a developer requests a new Amazon RDS instance, Kratix sends the request to a Destination, which can be staging, production, etc. Then, the Kratix operator picks up the request and executes the steps necessary to create the database.
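As an illustration, this scheduling rule is expressed as a small block in the Promise spec. The label key and value below are assumptions; they must match the labels you assign to your registered Destinations:

```yaml
# Excerpt from a Promise spec: only Destinations labelled
# environment=production (an assumed label) receive this work.
destinationSelectors:
  - matchLabels:
      environment: production
```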
State stores
A state store is where Kratix saves all the workload definitions it manages. It is a storage system that helps track configuration changes, status, and requests, ensuring that states are correctly synchronised.
Kratix supports two types of state store:
Git-based state store: A Git-based state store is the better approach for a production environment. It adheres to GitOps principles and ensures that the state is properly versioned (see the example manifest after this list).
Bucket-based state store: A bucket-based state store is a simple setup used to store definitions in an object store, such as an Amazon S3 bucket.
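For reference, registering a Git-based state store looks roughly like the manifest below. The repository URL, branch, and credentials secret are placeholders, and the exact field names should be checked against the Kratix documentation:

```yaml
apiVersion: platform.kratix.io/v1alpha1
kind: GitStateStore
metadata:
  name: default
spec:
  # Repository where Kratix writes workload definitions (placeholder URL).
  url: https://github.com/<your-org>/kratix-state.git
  branch: main
  secretRef:
    # Secret holding the Git credentials (assumed to already exist).
    name: git-credentials
    namespace: default
```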
Step-by-step RDS deployment with Kratix
In this section, you will set up everything needed to deploy an RDS instance with Kratix.
Prerequisites
To follow along with this demo, ensure you set up the following prerequisites:
AWS credentials: These are required to authenticate with AWS services. If you plan to deploy an RDS instance from your Kubernetes cluster, you must configure AWS credentials with the right IAM permissions to create, update, and delete RDS instances.
Docker: Required for deploying containerised workloads.
KinD (Kubernetes in Docker): This is a lightweight solution for running Kubernetes clusters locally. You can also use other tools, such as Minikube or K3s.
Kubectl: The Kubernetes command-line interface used to interact with your cluster. Kubectl is used to manage workloads and deploy resources.
NOTE: If you prefer a cloud-managed Kubernetes cluster like EKS, AKS or GKE, please refer to the Kratix installation guide for more instructions.
Step 1: Install Kratix
You can install Kratix in a single-cluster setup by following the official documentation, which outlines clear steps, including prerequisites, installation instructions, and best practices.
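As a rough sketch, a local single-cluster setup generally comes down to creating a KinD cluster and applying the cert-manager and Kratix manifests. The release URLs below follow the patterns used in the official quick start and may change, so treat them as assumptions and prefer the documentation; a state store and Destination also need to be registered, which the quick start covers:

```bash
# Create a local cluster to act as both platform and worker (KinD assumed).
kind create cluster --name platform

# Install cert-manager, a Kratix prerequisite (release URL is illustrative).
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

# Install Kratix itself (release URL is illustrative).
kubectl apply -f https://github.com/syntasso/kratix/releases/latest/download/kratix.yaml
```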
Step 2: Set up Kubernetes secrets
Before creating a Kratix Promise for AWS RDS, you need an AWS access key and a secret access key. The IAM user for these keys must have permission to create, update and delete RDS instances in AWS.
Create a Kubernetes secret to store your credentials securely. Ideally, in a production environment, consider using tools like Vault for better key management.
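For example, you might create the secret with kubectl as shown below. The secret name aws-rds matches what the Promise expects later in this guide, while the key names and namespace are assumptions, so align them with the Promise's documentation:

```bash
# Store AWS credentials in a Kubernetes secret named aws-rds
# (key names, namespace, and placeholder values are illustrative).
kubectl create secret generic aws-rds \
  --namespace default \
  --from-literal=AWS_ACCESS_KEY_ID=<your-access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your-secret-access-key> \
  --from-literal=AWS_DEFAULT_REGION=<your-region>
```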
To verify that the secret was created, run the following command:
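```bash
# List secrets in the namespace the secret was created in (default assumed).
kubectl get secrets --namespace default
```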
You should see the following output showing that you have deployed the secrets to the cluster:
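```
NAME      TYPE     DATA   AGE
aws-rds   Opaque   3      12s
```

(The DATA count and AGE will vary depending on your setup.)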
Note: By default, Secret objects are stored unencrypted in the Kubernetes etcd database. You should configure encryption at rest for your Secret data in etcd or store credentials externally, as mentioned above.
Step 3: Set up the RDS Promise
With Kratix installed and the Kubernetes secrets created, let's create the Promise to deploy an RDS instance.
To create the RDS Promise, we will write a YAML file that includes details such as the database type, size, and storage. Additionally, we will establish workflows to manage creating, updating, and deleting the database. Once integrated into Kratix, developers can request RDS instances whenever needed.
This Promise will provide RDS-as-a-Service with three main API resource parameters, shown in the sketch after the list below: spec.dbName, spec.engine and spec.size.
spec.dbName: Sets the name of the database.
spec.engine: Defines the database engine, with options for MySQL, PostgreSQL, and MariaDB.
spec.size: Determines the instance size, supporting micro, small, medium, and large.
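The YAML below is a condensed sketch of what such a Promise can look like, not the full Marketplace definition: the API group, CRD name, and pipeline images are placeholders, and the complete, maintained Promise is available in Syntasso's Kratix Marketplace.

```yaml
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: aws-rds
spec:
  api:
    # The CRD developers use to request an RDS instance
    # (group and name below are placeholders).
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: rds.example.kratix.io
    spec:
      group: example.kratix.io
      names:
        kind: rds
        plural: rds
        singular: rds
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    dbName:
                      type: string
                    engine:
                      type: string
                      enum: ["mysql", "postgresql", "mariadb"]
                    size:
                      type: string
                      enum: ["micro", "small", "medium", "large"]
  workflows:
    resource:
      configure:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: instance-configure
          spec:
            containers:
              # Pre-built pipeline image from the Kratix Marketplace
              # (placeholder reference); it reads AWS credentials from
              # the aws-rds secret created earlier.
              - name: create-rds
                image: <rds-configure-pipeline-image>
      delete:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: instance-delete
          spec:
            containers:
              - name: delete-rds
                image: <rds-delete-pipeline-image>
```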
As you can see, the above RDS Promise has API and workflow components. This Promise handles creating, monitoring, and deleting RDS instances. It uses a pre-built pipeline from Syntasso’s Kratix Marketplace and needs AWS credentials stored in a Kubernetes secret, like the aws-rds secret you created earlier.
To deploy the Promise, save the YAML file as aws-rds-promise.yaml and apply it with the following command:
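```bash
kubectl apply -f aws-rds-promise.yaml
```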
Alternatively, you can install directly from Syntasso's Kratix Marketplace by running the following:
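```bash
# Apply the Promise definition straight from the Marketplace repository
# (the exact path below is an assumption; confirm it on the Marketplace page).
kubectl apply -f https://raw.githubusercontent.com/syntasso/kratix-marketplace/main/aws-rds/promise.yaml
```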
To verify the output of your installation, run the following command:
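```bash
kubectl get promises
```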
You will see the following output:
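```
NAME      STATUS      AGE
aws-rds   Available   1m
```

(Columns and status values may differ slightly depending on your Kratix version.)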
Step 4: Request a resource
Once you create the Promise, developers can request an RDS instance resource by applying a manifest like the following:
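This request is a sketch: the apiVersion and kind must match the API defined in your installed Promise (the placeholder group from the earlier sketch is reused here), and the resource name example-rds is illustrative.

```yaml
apiVersion: example.kratix.io/v1alpha1   # must match the Promise's API group (placeholder)
kind: rds
metadata:
  name: example-rds                      # illustrative resource name
  namespace: default
spec:
  dbName: superdb
  engine: mysql
  size: micro
```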
This Kubernetes resource creates an RDS instance using the Kratix Promise. It provisions a MySQL database named superdb with a micro-sized instance in the default namespace.
To deploy this, save the file as resource-request.yaml and deploy it on the Kubernetes cluster using the following command:
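```bash
kubectl apply -f resource-request.yaml
```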
To verify the resource creation, run the following command:
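```bash
kubectl get rds --namespace default
```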
Here, rds is the resource type as defined in the Promise and specified in the resource request manifest.
The output will look like this:
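```
NAME          AGE
example-rds   2m
```

(The columns shown depend on the printer columns defined in the Promise's CRD; status information may also appear.)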
Verify database creation on AWS console
To verify the creation of your database on AWS, log in to the AWS Management Console, go to RDS, and you will see the database you created in the specified region.

Database Day 2 operations with Kratix
Once Kratix deploys your RDS database, the real work begins: keeping it secure and well optimised. This maintenance is referred to as Day 2 operations, which include backups, health checks, monitoring, and applying security updates. Kratix automates these tasks to ensure that your databases operate smoothly and securely.
How Kratix supports RDS Day 2 Operations
Automated health checks: Kratix continuously monitors the RDS instance to ensure stability and performance. Platform engineers can identify issues early, trigger alerts, and initiate automated recovery actions to minimise downtime.
Automated snapshots & backups: You can set backup policies in RDS Promises to automatically create regular database backups. This way, platform engineers don't need to take snapshots manually, as the policies ensure backups are taken securely and on schedule.
Security & compliance management: Kratix enforces continuous compliance on the RDS database, including encryption, IAM roles, and access controls. It also keeps audit logs and stores activity tracking in the state store to ensure your RDS server meets organisational security standards with minimal manual effort.
Patching & version updates: Kratix manages database patching by scheduling and applying updates when needed. It only applies approved updates, minimizing compatibility issues and downtime. With Kratix, platform engineers always have up-to-date database versions, as everything is updated automatically.
Disaster recovery & high availability: Kratix triggers failover to a standby database instance if the RDS server fails. You can also integrate disaster recovery plans into the Kratix workflow to ensure minimal downtime. Platform engineers can provision multi-AZ deployments, read replicas, and set up high-availability systems.
Why use Kratix for RDS Day 2 operations?
Less manual work: With Kratix, engineers eliminate the need for manual and repetitive database management. It automates scaling, backups, and failover processes, reducing operational overhead and allowing engineers to focus more on innovation than maintenance.
Faster incident response times: Kratix automates disaster recovery and high availability for your database, ensuring quick failover in case of failures. It integrates with monitoring tools to detect issues instantly and trigger actions, minimising database downtime.
Enhanced consistency and compliance: When platform engineers define policies and configurations within the Kratix Promise, they can consistently provision all RDS databases, ensuring compliance with organisational standards and requirements.
Kratix: Simplifying Infrastructure Management with Promises
In this article, we introduced Kratix, an open-source framework that helps platform engineers simplify complex infrastructure provisioning and management and ensure consistency across environments. We also explained how Kratix works using predefined YAML templates called Promises.
Kratix integrates seamlessly with Backstage, Crossplane, and Terraform, enabling you to supercharge your existing tooling.
If your team handles complex database configurations, repetitive setup tasks, and manual workflows, adopting Kratix can significantly reduce that burden. Ready to try Kratix? Visit the Kratix Marketplace to explore the RDS Promise and other Promises maintained by the Kratix community.