
· 3 min read
Tom Klimovski

When defining event-driven architectures, it's always good to keep up with how the landscape is changing. How do you connect microservices in your architecture? Is Pub/Sub the end game for all events? To dive a bit deeper, let's talk through the benefits of having a single orchestrator, and whether a choreographer might be better.

Orchestration versus choreography refresher

My colleague @jeffreyaven did a recent post explaining this concept in simple terms, which is worth reviewing, see:

Microservices Concepts: Orchestration versus Choreography

Should there really be a central orchestrator controlling all interactions between services... or should each service work independently and only interact through events?

  • Orchestration is usually viewed as a domain-wide central service that defines the flow and control of communication between services. In this paradigm, it becomes easier to change and ultimately monitor policies across your org.
  • Choreography has each service registering and emitting events as it needs to. It doesn't direct or define the flow of communication; instead, a central broker typically passes messages around, allowing services to be truly independent.

Enter Workflows, which is suited to centrally orchestrated services: not only Google Cloud services such as Cloud Functions and Cloud Run, but also external services.

How about choreography? Pub/Sub and Eventarc are both suited for this. We all know and love Pub/Sub, but how do I use Eventarc?

What is Eventarc?

Announced in October 2020, Eventarc was introduced as eventing functionality that enables you, the developer, to send events to Cloud Run from more than 60 Google Cloud sources.

But how does it work?

Eventing is done by reading those sweet, sweet Audit Logs from various sources and sending them to Cloud Run services as events in the CloudEvents format. Quick primer on CloudEvents: it's a specification for describing event data in a common way. The specification is now under the Cloud Native Computing Foundation. Hooray! Eventarc can also read events from Pub/Sub topics for custom applications. Here's a diagram I graciously ripped from the Google Cloud Blog:

Eventarc

Why do I need Eventarc? I have the Pub/Sub

Good question. Eventarc provides an easier path to receive events not only from Pub/Sub topics but from a number of Google Cloud sources, with its Audit Log and Pub/Sub integration. Actually, any service that has Audit Log integration can be an event source for Eventarc. Beyond easy integration, it provides consistency and structure to how events are generated, routed and consumed. Things like:

Triggers

They specify routing rules from event sources to event sinks. Listen for new object creation in GCS and route that event to a service in Cloud Run by creating an Audit Log trigger. Create triggers that also listen to Pub/Sub. Then list all triggers in one central place in Eventarc:

gcloud beta eventarc triggers list
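
As a sketch, a trigger that routes GCS object-creation Audit Log events to a Cloud Run service might look like this (the trigger, service, region, project and service account names are placeholders, and the flags reflect the current GA gcloud eventarc command rather than the original beta one):

# Hypothetical example: route Cloud Storage object-creation Audit Log events
# to a Cloud Run service named "event-handler"
gcloud eventarc triggers create storage-object-created-trigger \
  --location=us-central1 \
  --destination-run-service=event-handler \
  --destination-run-region=us-central1 \
  --event-filters="type=google.cloud.audit.log.v1.written" \
  --event-filters="serviceName=storage.googleapis.com" \
  --event-filters="methodName=storage.objects.create" \
  --service-account=eventarc-trigger@my-project.iam.gserviceaccount.com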

Consistency with eventing format and libraries

Using the CloudEvents-compliant specification allows event data to be described in a common way, moving towards the goal of consistency, accessibility and portability. It makes it easier for different languages to read events and for the Google Events libraries to parse fields.

This feeds the long-term vision of Eventarc becoming the hub of events, enabling a unified eventing story for Google Cloud and beyond.

Eventarc producers and consumers

In the future, you can expect Eventarc to forego Audit Logs, reading these events directly and sending them out to even more sinks within GCP and to any HTTP target.


This article was written with inspiration from https://cloud.google.com/blog/topics/developers-practitioners/eventarc-unified-eventing-experience-google-cloud. Thanks Mete Atamel!

· 2 min read
Jeffrey Aven

Orchestration versus Choreography

One of the foundational concepts in microservices architecture and design patterns is the concept of Orchestration versus Choreography. Before we look at a reference implementation of each of these patterns, it is worthwhile starting with an analogy.

This is often likened to a Jazz band versus a Symphony Orchestra.

A modern symphony orchestra is normally comprised of string, brass, woodwind and percussion sections. The sections are orchestrated by a conductor, usually placed at a central point with respect to each of the sections. The conductor instructs each section to perform their components of the overall symphony.

By contrast, a Jazz band does not have a conductor and also features improvisation, with different musicians improvising based upon each other. Choreography, although more aligned to dance, can involve improvisation. In both cases there is still an intended output and a general framework as to how the composition will be executed, however unlike a symphony orchestra there is a degree of spontaneity.

Now back to technology and microservices…

In the Orchestration model, there is a central orchestration service which controls the interactions between other services, in other words the flow and control of communication and/or message passing between services is controlled by an orchestrator (much like the conductor in a symphony orchestra). On the plus side, this model enables easier monitoring and policy enforcement across the system. A generalisation of the Orchestration model is shown below:

Orchestration model

By contrast, in the Choreography model, each service works independently and interacts with other services through events. In this model each service registers and emits events as they need to. The flow (of communication between services) is not predefined, much like a Jazz band. This model often includes a central broker for message passing between services, but the services operate independently of each other and are not controlled by a central service (an orchestrator). A generalisation of the Choreography model is shown below:

Choreography model

We will post subsequent articles with implementations of these patterns, but it is worthwhile getting a foundational understanding first.

if you have enjoyed this post, please consider buying me a coffee ☕ to help me keep writing!

· 2 min read
Jeffrey Aven

API Management Function App

Function Apps, Logic Apps and App Services can be used to expose APIs within Azure API Management, which is an easy way to deploy serverless microservices. You can see this capability in the Azure portal below within API Management:

Add a new API using a Function App as a back end

Like most readers, I like to script everything, so I was initially frustrated when I couldn’t replicate this operation in the Azure cli, REST, PowerShell, or any of the other SDKs or IaC tools. Others shared my frustration as seen here.

I was nearly resigned to using click ops in the portal (arrrgh) before I worked out this workaround.

The Solution

There is a bit more prep work required to automate this process, but it is well worth it.

  1. Create an OpenAPI (or Swagger or WADL) specification document, as seen below (use the absolute URL for your Function App in the url parameter):
  2. Use the az apim api import function (not the az apim api create function), as shown in the sketch below:
  3. Associate the API with a product (which is how you can rate limit APIs)
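
A rough sketch of steps 2 and 3 (the resource group, APIM instance, API id, path and product id are placeholders):

# import the OpenAPI specification as a new API in API Management
az apim api import \
  --resource-group rg-apim \
  --service-name my-apim-instance \
  --api-id my-function-api \
  --path myfunction \
  --specification-format OpenApi \
  --specification-path ./openapi.json

# associate the imported API with a product (this is what enables rate limiting)
az apim product api add \
  --resource-group rg-apim \
  --service-name my-apim-instance \
  --product-id unlimited \
  --api-id my-function-api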

That’s it! You can now access your function via the API gateway using the gateway url or via the developer portal as seen below:

Function App API in API Management in the Azure Portal

Function App API in the Dev Portal

if you have enjoyed this post, please consider buying me a coffee ☕ to help me keep writing!

· 7 min read
Tom Klimovski

Great Expectations

This article provides an introduction to the Great Expectations Python library for data quality management (https://github.com/great-expectations/great_expectations).

So what are expectations when it comes to data (and data quality)...

An expectation is a falsifiable, verifiable statement about data. Expectations provide a language to talk about data characteristics and data quality - humans to humans, humans to machines and machines to machines.

The Great Expectations project includes predefined, codified expectations such as:

expect_column_to_exist 
expect_table_row_count_to_be_between
expect_column_values_to_be_unique
expect_column_values_to_not_be_null
expect_column_values_to_be_between
expect_column_values_to_match_regex
expect_column_mean_to_be_between
expect_column_kl_divergence_to_be_less_than

… and many more

Expectations are both data tests and docs! Expectations can be presented in machine-friendly JSON, for example:
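
(The column name and bounds below are made up for illustration.)

{
  "expectation_type": "expect_column_values_to_be_between",
  "kwargs": {
    "column": "order_total",
    "min_value": 0,
    "max_value": 10000
  }
}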

Great Expectations provides validation results of defined expectations, which can dramatically shorten your development cycle.

validation results in great expectations

Nearly 50 built-in expectations allow you to express how you understand your data, and you can add custom expectations if you need a new one. A machine can test if a dataset conforms to the expectation.

OK, enough talk, let's go!

pyenv virtualenv 3.8.2 ge38
pip install great-expectations

I tried with Python 3.7.2, but had issues with the lgzm library on my local machine

once installed, run the following in the Python REPL:
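
A minimal sketch (column names and value ranges are arbitrary):

import numpy as np
import pandas as pd

# build a small dataframe of random integers to experiment with
df = pd.DataFrame(np.random.randint(0, 100, size=(10, 3)), columns=["a", "b", "c"])
print(df)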

showing the data in the dataframe should give you the following:

as can be seen, a collection of random integers in each column for our initial testing. Let's pipe this data into Great Expectations...
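
A sketch of that, continuing from the dataframe above:

import great_expectations as ge

# wrap the pandas dataframe so expectation methods become available on it
df_ge = ge.from_pandas(df)

# a passing expectation: every generated value falls between 0 and 100
print(df_ge.expect_column_values_to_be_between("a", min_value=0, max_value=100))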

yields the following output...

this shows that there are 0 unexpected items in the data we are testing. Great!

Now let's have a look at a negative test. Since we've picked the values at random, there are bound to be duplicates. Let's test that:
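
A sketch of that uniqueness check on the assumed column "a":

# fails whenever column "a" contains a repeated value
print(df_ge.expect_column_values_to_be_unique("a"))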

yields...

The JSON output has metadata about the result; of note is the result section, which is specific to our query and shows the percentage of values that failed the expectation.

Let's progress to something more real-world, namely creating expectations that are run against databases. Armed with our basic understanding of Great Expectations, let's...

  • set up a Postgres database
  • initialize a new Data Context within Great Expectations
  • write test cases for the data
  • group those test cases, and
  • run them

Setting up a Database

if you don't have it installed, you can run Postgres with Docker:
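
For example (the container name and password are placeholders):

docker run --name ge-postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres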

wait 15 minutes to download the internet, verify Postgres is running with docker ps, then connect with:
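
For example, using psql inside the placeholder container from above:

docker exec -it ge-postgres psql -U postgres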

Create some data
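
A minimal stand-in consistent with the rest of the walkthrough (an app.order table with customer_order_id and customer_id columns, both names assumed):

-- "order" is a reserved word in Postgres, hence the quoting
CREATE SCHEMA app;
CREATE TABLE app."order" (
    customer_order_id INTEGER,
    customer_id       VARCHAR(20)
);
INSERT INTO app."order" (customer_order_id, customer_id) VALUES
    (1, 'customer_1'),
    (2, 'customer_2');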

Take data for a spin

should yield

Now time for great-expectations

Great Expectations relies on the sqlalchemy and psycopg2 libraries to connect to your data.
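
If they aren't already in your environment (psycopg2-binary is the pip-friendly variant):

pip install sqlalchemy psycopg2-binary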

once done, let's set up great-expectations
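
In the 0.12.x CLI, that's done with an init, which prompts you to configure a datasource (our Postgres instance) and can optionally profile a table to generate a first suite:

great_expectations init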

should look like below:

let's set up a few other goodies while we're here

Congratulations! Great Expectations is now set up

You should see a file structure as follows:

great expectations tree structure

If you didn't generate a suite during the setup based on app.order, you can do so now with

great_expectations suite new

when created, looking at great_expectations/expectations/app/order/warning.json should yield the following:

as noted in the content section, this expectation config is created by the tool by looking at 1000 rows of the data. We also have access to the data docs site, which we can open in the browser at great_expectations/uncommitted/data_docs/local_site/index.html

great expectations index page

Clicking on app.order.warning, you'll see the sample expectation shown in the UI

great expectations app order screen

Now, let's create our own expectation file and take it for a spin. We'll call this one error.

great expectations new suite

This should also start a jupyter notebook. If for some reason you need to start it back up again, you can do so with
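
Assuming the suite is named app.order.error as above:

great_expectations suite edit app.order.error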

Go ahead and hit run on your first cell.

Editing a suite with Jupyter

Let's keep it simple and test the customer_order_id column is in a set with the values below:

using the following expectation function in your Table Expectation(s). You may need to click the + sign in the toolbar to insert a new cell, as below:

Adding table expectation
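
In the generated notebook, the call looks something like this (batch is the variable exposed by the notebook scaffolding; the two order IDs are placeholder values):

# batch is the sample of data loaded by the suite-edit notebook
batch.expect_column_values_to_be_in_set(
    "customer_order_id",
    value_set=[1, 2]
)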

As we can see, we get appropriate JSON output that describes the result of our expectation. Go ahead and run the final cell, which will save our work and open a newly minted data documentation UI page, where you'll see the expectations you defined in human-readable form.

Saved suite

Running the test cases

In Great Expectations, running a set of expectations (test cases) is called a checkpoint. Let's create a new checkpoint called first_checkpoint for our app.order.error expectation as shown below:
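
With the 0.12.x CLI, that's along the lines of:

great_expectations checkpoint new first_checkpoint app.order.error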

Let's take a look at our checkpoint definition.

Above you can see the validation_operator_name, which points to a definition in great_expectations.yml, and the batches section, where we defined the data source and which expectations to run against.

Let's have a look at great_expectations.yml. We can see the action_list_operator defined and all the actions it contains:

List operators

Let's run our checkpoint using
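
great_expectations checkpoint run first_checkpoint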

Validate checkpoint

Okay cool, we've set up an expectation, a checkpoint and shown a successful status! But what does a failure look like? We can introduce a failure by logging in to Postgres and inserting a customer_11 that we know will fail, as we've specified in our expectation that customer_id should only have two values.

Here are the commands to make that happen, as well as the command to re-run our checkpoint:
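
A sketch of those commands, reusing the placeholder container, table and column values from earlier:

# insert a row whose values fall outside the expectations defined above
docker exec -it ge-postgres psql -U postgres \
  -c "INSERT INTO app.\"order\" (customer_order_id, customer_id) VALUES (3, 'customer_11');"

# re-run the checkpoint; it should now report a failed expectation
great_expectations checkpoint run first_checkpoint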

Run the checkpoint again; this time it should fail:

Failed checkpoint

As expected, it failed.

Supported Databases

In its current implementation (version 0.12.9), the supported databases out of the box are:

It's great to see BigQuery supported out of the box, but what about Google Spanner and Google Bigtable? Short answer: currently not supported. See https://github.com/googleapis/google-cloud-python/issues/3022.

With respect to Bigtable, it may not be possible, as SQLAlchemy can only manage SQL-based RDBMS-type systems, while Bigtable (and HBase) are NoSQL, non-relational systems.

Scheduling

Now that we have seen how to run tests on our data, we can run our checkpoints from bash or from a Python script (generated using great_expectations checkpoint script first_checkpoint). This lends itself to easy integration with scheduling tools like Airflow, cron, Prefect, etc.
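
For example, a nightly cron entry could run the checkpoint or the generated script (paths and schedule are illustrative):

# generate a runnable Python script for the checkpoint
great_expectations checkpoint script first_checkpoint

# example crontab entry: run the checkpoint every night at 2am
0 2 * * * cd /path/to/project && great_expectations checkpoint run first_checkpoint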

Production deployment

When deploying to production, you can store any sensitive information (credentials, validation results, etc.) that lives in the uncommitted folder in cloud storage systems, databases or data stores, depending on your infrastructure setup. Great Expectations has a lot of options here.

When not to use a data quality framework

This tool is great and provides a lot of advanced data quality validation functions, but it adds another layer of complexity to your infrastructure that you will have to maintain and troubleshoot in case of errors. It would be wise to use it only when needed.

In general

Do not use a data quality framework if simple SQL-based tests at post-load time work for your use case. Do not use a data quality framework if you only have a few (usually < 5) simple data pipelines.

Do use it when you have data that needs to be tested in an automated and a repeatable fashion. As shown in this article, Great Expectations has a number of options that can be toggled to suit your particular use-case.

Conclusion

Great Expectations shows a lot of promise, and it's an active project, so expect to see features roll out frequently. It's been quite easy to use, but I'd like to see all its features work in a locked-down enterprise environment.

Tom Klimovski
Principal Consultant, Gamma Data
tom.klimovski@gammadata.io

· One min read
Jeffrey Aven

Multicloud Diagramming

Following on from the recent post GCP Templates for C4 Diagrams using PlantUML, cloud architects are often challenged with producing diagrams for architectures spanning multiple cloud providers, particularly as you elevate to enterprise level diagrams.

In this post, with the magic of !includeurl we have brought PlantUML template libraries together for AWS, Azure and GCP icon sets, allowing us to produce multi-cloud C4 diagrams using PlantUML like this one:

Multi Cloud Architecture Diagram using PlantUML

Creating a multi-cloud diagram is simple: start by adding the following include statements after the @startuml label in a new PlantUML C4 diagram:

Then add references to the required services from different providers…

Then include the predefined resources from your different cloud providers in your diagram as shown here (describing a client server application over a cloud to cloud VPN between Azure and GCP)...

Happy multi-cloud diagramming!

Full source code is available at:

https://github.com/gamma-data/plantuml-multi-cloud-diagrams

if you have enjoyed this post, please consider buying me a coffee ☕ to help me keep writing!