Below we'll explore the ins and outs of designing and deploying event-driven ETL data flows, and briefly discuss the pros and cons of several solutions available.
Organizations have a wealth of data resources available to them, and the depth and breadth of the data points in those resources grow moment by moment. Often the value in this mountain of data is not easily extracted because it lives in so many places and in so many formats. Extract, Transform, and Load (ETL) tools have been around for decades, but they often fall short when data is generated on the fly and in real time. Event-driven ETL is a solution for wrangling the asynchronous nature of data generation and using it to drive deep business value.
Event-driven ETL is the process of transforming and loading data items to one or more target destinations as the data is generated. In contrast to traditional ETL, in which this synchronization occurs at fixed intervals, event-driven ETL gives an organization the quickest possible response to data changes, enabling it to make better decisions, faster.
When an organization implements an event-driven ETL system, there are several key considerations. To maximize the value derived from such a system, it should be:

- Flexible: additional data sources and targets can be added with ease.
- Low-maintenance: ongoing upkeep is minimal and limited to the endpoints.
- Resilient: downtime on either end of the flow is handled gracefully.
- Understandable: the transformation and decision logic is easy to read, adapt, and maintain.
- Reliable: incoming data events are never missed.
- Observable: when data flows inevitably fail, the system provides the introspection needed to find the cause.
One key consideration when implementing an event-driven ETL architecture is how much flexibility the organization retains over the configuration of the data flow once the system is deployed. Adding new data sources and destinations must be straightforward, and the system should scale to support these additions. The system should integrate with a wide variety of data resources and be able to speak each source's native format.
Once data flows are established, another key consideration is the resources required to keep those flows in working order. A well-designed system should require little to no maintenance of the event delivery mechanism itself; the maintenance that remains should be limited to the integrations with the endpoints. Managing event queues should not be part of the strategy for maximizing the value of an organization's data.
Downtime is inevitable. A robust event-driven ETL architecture should be resilient to unexpected downtime at the destination of the event flow. Ideally it should also support suspending delivery while a destination is in a maintenance window, so that events are not missed. To further protect the flow, the architecture should support throttling, so that it does not itself cause downtime by flooding a destination with more traffic than it can handle.
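The throttling and resilience behavior described above can be sketched in a few lines. The following is a minimal illustration, not any particular product's implementation: retries back off exponentially (with a little jitter) so a struggling destination is given room to recover rather than being hammered.

```python
import random
import time

def deliver_with_backoff(send, event, max_attempts=5, base_delay=0.5):
    """Retry delivery with exponential backoff plus jitter so a struggling
    destination is not flooded with immediate retries."""
    for attempt in range(max_attempts):
        try:
            return send(event)
        except ConnectionError:
            # Wait base_delay * 2^attempt (0.5s, 1s, 2s, ...) plus jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    raise RuntimeError("destination still unavailable; park the event for redelivery")
```

A real platform would layer delivery suspension and rate limits on top of this, but the core idea is the same: slow down instead of piling on.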
The key element of the ETL workflow is the *transformation* step. Any event-driven ETL platform must make it easy to transform, and ideally also to filter, data events. The logic that accomplishes this transformation and filtering should be straightforward to implement, flexible enough to integrate with most if not all systems, and have a low barrier to entry.
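To make the transform-and-filter pattern concrete, here is a small sketch using a hypothetical order record; the field names and the "completed orders only" rule are illustrative assumptions, not any vendor's schema.

```python
def transform(event):
    """Map a hypothetical source record onto the destination's schema."""
    return {
        "customer_id": event["cust_no"],
        "total_cents": round(event["amount"] * 100),
    }

def should_deliver(event):
    """Filter: only completed orders are worth forwarding (illustrative rule)."""
    return event.get("status") == "complete"

def process(events):
    """Apply the filter, then the transform, to a batch of events."""
    return [transform(e) for e in events if should_deliver(e)]
```

However the platform expresses it, the shape is the same: a predicate that decides whether an event flows onward, and a mapping that reshapes it for the destination.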
In addition to being resilient to destination downtime, the receiving side of the event-driven ETL flow should be highly available and highly reliable. Even when data events cannot be delivered right away, the system should not drop them; it should log and retain each event until delivery becomes possible.
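This "accept now, deliver when possible" behavior is the classic store-and-forward pattern. The sketch below keeps the backlog in memory for clarity; a real receiver would persist it to durable storage before acknowledging the event.

```python
from collections import deque

class StoreAndForward:
    """Retain undeliverable events in order and drain the backlog once the
    destination is reachable again, so no accepted event is ever dropped."""

    def __init__(self, send):
        self.send = send        # delivery callable; raises ConnectionError when down
        self.backlog = deque()  # stand-in for a durable log

    def submit(self, event):
        self.backlog.append(event)  # record the event before attempting delivery
        self.flush()

    def flush(self):
        while self.backlog:
            try:
                self.send(self.backlog[0])
            except ConnectionError:
                return          # destination down; keep events for later redelivery
            self.backlog.popleft()
```

Note that an event leaves the backlog only after `send` succeeds, which preserves both ordering and at-least-once delivery.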
An event-driven ETL system should also be observable and debuggable. When things inevitably go wrong, finding and tracing the root cause should be a simple operation, and the system's logs should contain enough detail to support that investigation.
The final consideration for event-driven ETL is the total cost of ownership: the ongoing infrastructure cost, the one-time implementation cost, and the required ongoing maintenance.
The solution architecture
Amazon's EventBridge platform is a serverless event bus service built for routing events through the AWS ecosystem. It accepts events and routes them to any of its supported targets and integration partners. To build an ETL system using EventBridge, three primary parts must be integrated: an event receiver, an event processor, and an event deliverer.
AWS EventBridge only accepts events from other AWS services, such as Lambda, or from a limited set of integration partners, and it can likewise only deliver events to other AWS services. First, an event receiver must be set up using something like API Gateway. This receiver must then invoke an event-producing component, such as a Lambda function, that can publish events onto the EventBridge bus.
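As a rough sketch of that glue, a Lambda handler behind API Gateway might shape each incoming request into a PutEvents entry as below. The bus name and source string are placeholder assumptions, and the actual `boto3` publish call is shown only as a comment so the sketch stays self-contained.

```python
import json

def build_entry(record, bus_name="etl-bus", source="custom.ingest"):
    """Shape one ingested record into an EventBridge PutEvents entry.
    The bus name and source here are placeholders, not real resources."""
    return {
        "EventBusName": bus_name,
        "Source": source,
        "DetailType": record.get("type", "generic"),
        "Detail": json.dumps(record),
    }

def lambda_handler(api_event, context=None):
    """API Gateway proxy handler: parse the request body and shape it for
    the bus. A deployed function would then publish the entry with
    boto3.client("events").put_events(Entries=[entry])."""
    entry = build_entry(json.loads(api_event["body"]))
    return {"statusCode": 202, "body": json.dumps({"accepted": entry["DetailType"]})}
```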
The transformation stage could be performed by the same Lambda function as part of event ingestion; however, it is often more useful to trigger a second set of Lambda functions so that transformation and filter logic is decoupled from ingestion logic.
Finally, another AWS Lambda function must bridge outgoing events to a custom endpoint so that EventBridge events can be consumed by services outside AWS.
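A minimal sketch of that outbound bridge might look like the following. The endpoint URL is a placeholder, and the HTTP POST itself is shown only as a comment so the sketch stays self-contained.

```python
import json

OUTBOUND_URL = "https://example.com/webhook"  # placeholder destination endpoint

def to_outbound(eb_event):
    """Unwrap the EventBridge envelope into the HTTP request the external
    endpoint expects; `detail` holds the original event payload."""
    return {
        "url": OUTBOUND_URL,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(eb_event["detail"]),
    }

def lambda_handler(eb_event, context=None):
    req = to_outbound(eb_event)
    # A deployed function would POST here, for example:
    #   urllib.request.urlopen(urllib.request.Request(
    #       req["url"], data=req["body"].encode(), headers=req["headers"]))
    return req
```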
MuleSoft is a well-known Enterprise Service Bus (ESB) provider whose tooling includes ETL capabilities. Building an event-driven ETL system on MuleSoft technologies requires integrating two MuleSoft products: AnyPoint and DataWeave.
AnyPoint is an API integration tool that supports the creation, testing, and deployment of custom API interfaces. It can be used to build both the ingestion engine and the delivery engine of an ETL system.
DataWeave is a domain-specific language tightly coupled to the Mule architecture. It is used to transform data as it travels through a Mule application.
Atlas Connex is a turnkey tool built to satisfy all the key considerations of an event-driven ETL system. The platform provides a highly reliable ingestion engine that can interface with and accept event data from the native source interface without adding any code. It also features Python as the transformation and filter language. This ubiquitous language makes building, testing, and using filters simple, and provides a large, well-trained development community on which to draw. Built-in, highly configurable managed event queues allow the flow of events to be completely parameterized by the user, including alerting, backoff strategies, throttling, on-demand pausing, and more.
Event-driven ETL data flows can be built in a variety of ways depending on the needs of the organization. The trade-offs revolve around how much time and how many resources can be dedicated to the initial rollout and ongoing upkeep of the system. If an organization would rather focus on using its data than on building and maintaining this integration, Atlas Connex is the best solution. For organizations requiring complete control over the infrastructure, Atlas Connex on-premise is a viable option as well. If the organization already has deep institutional knowledge of another event-driven capability, or is well-versed in another technology domain, e.g., AWS or DataWeave, those solutions may serve the teams' best interests.
Start your 30-day trial, no credit card required. Try it free and get started in 5 minutes.