How can Webhooks be easier, and searching event data (AKA Message Events) maybe even better? We’ll try to answer both in this post and open source some code along the way.

Shouting “Show me the data!” will earn you funny looks from most people, but not from us here at SparkPost. We are all about the data, both internally as we decide what to build, and externally when we’re delivering event data to you via Webhooks or Message Events.

Tom Cruise may actually want to see the money, but for our customers, data is king. Many of them make heavy use of our Webhooks (push model) to receive batches of event data via HTTP POST. Others prefer to use our Message Events endpoint, which is a pull model – you’re querying the same events, although data retention is limited to 10 days, as of this writing.

Now I don’t know about you, but whenever I hear that something is limited, the first thing I want to do is find a way around that limitation. The second thing is to show other people how I did it. In this post, I’m going to show you how to bypass our Message Events data retention limit by rolling your own low-cost queryable event database.

Building Blocks of a Service

The vision here is to ingest batches of event data, delivered by SparkPost’s Webhooks, and then be able to query that data, ideally for free. At least for cheap. Luckily, there are published best practices for doing the first part. One way to keep costs down (at least initially) is to use the AWS free tier, which is the way we’ll go in this post.

First, I’ll walk through the services I ended up using, and then briefly discuss what else I tried along the way, and why that didn’t make the cut. Almost everything in this system is defined and deployed using CloudFormation, along with pieces from the AWS Serverless Application Model (SAM). Under the hood, this uses API Gateway as an HTTP listener, and Node.js Lambda functions to “do stuff” when requests are received or in response to other interesting events. More on that later.

According to the best practices linked above, we need to return 200 OK ASAP, before doing any processing of the request body, where the event data is. So we’ll run a Lambda to extract the event data and batch id from the HTTP request and save it to S3. At this point, we’re capturing the data but can’t do a whole lot with it just yet.
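Here’s a minimal sketch of what that listener Lambda might look like, assuming an API Gateway proxy integration and a bucket name passed in via an environment variable. The bucket, key prefix, and environment variable names are illustrative, not the repo’s exact code:

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Bucket name supplied by the CloudFormation/SAM template (illustrative name)
const BUCKET = process.env.EVENT_BUCKET;

exports.handler = async (event) => {
  // SparkPost sends a batch id header with each webhook POST; header casing can
  // vary depending on how API Gateway passes it through
  const headers = event.headers || {};
  const batchId = headers['X-MessageSystems-Batch-ID'] ||
                  headers['x-messagesystems-batch-id'] ||
                  `batch-${Date.now()}`;

  // Store the raw JSON batch untouched; the S3 trigger picks it up asynchronously
  await s3.putObject({
    Bucket: BUCKET,
    Key: `incoming/${batchId}.json`,
    Body: event.body,
    ContentType: 'application/json'
  }).promise();

  // Acknowledge quickly so SparkPost doesn't retry the batch
  return { statusCode: 200, body: JSON.stringify({ stored: batchId }) };
};
```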

Databases and Event Data

There are all sorts of options out there when it comes to databases. I chose RDS PostgreSQL since it’s a (somewhat) managed service that’s eligible for the AWS free tier. Also, I’m already familiar with it, and had some automatic partitioning code lying around that would be better as open source.

Now seems like a good time to talk about what didn’t make the cut, especially since there were so many interesting options to choose from. The first database-y thing I considered was Athena, which would let us query directly against S3. Right out of the gate, unfortunately, there’s a snag: Athena isn’t eligible for the free tier; it’s priced based on the amount of data scanned by each query. We get a raw JSON feed from the Webhook, so optimizing how that data is stored so that queries stay cost-effective would be a project in its own right.

Another database I didn’t use is Dynamo, which would have been super convenient since AWS SAM bakes in support for it. Our event data, combined with the queries the system needed to support, isn’t a great fit for Dynamo, though: it doesn’t allow enough secondary indexes to efficiently cover the wide range of queries that Message Events provides. Dynamo would definitely have been the low-stress option. Using RDS meant I had to poke around a bit more in AWS networking land than I had planned to.

Connecting the Data Dots

Our event data is stored in S3, and we’ve chosen a database. Triggers aren’t just for databases, thankfully, and S3 lets you configure Lambda functions to run for various types of events. We’ll fire our next Lambda when a file is created in the bucket that our Webhook listener writes to. It’ll read the batch of event data, and load it into our database, which closes the loop. We’re now asynchronously loading event data sent via Webhook into our database.
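A rough sketch of that loader Lambda is below. The `events` table schema (a timestamp plus a JSONB payload column) and the handful of `msys` keys checked are assumptions for illustration; the real project’s schema and parsing are more complete:

```javascript
const AWS = require('aws-sdk');
const { Client } = require('pg');

const s3 = new AWS.S3();

exports.handler = async (event) => {
  // An S3 "ObjectCreated" notification can contain more than one record
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Fetch the raw batch that the webhook listener stored
    const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    const batch = JSON.parse(obj.Body.toString('utf8'));

    const client = new Client();   // connection details come from the PG* env vars
    await client.connect();
    try {
      for (const wrapper of batch) {
        // Each item in a SparkPost webhook batch wraps the event under a `msys` key
        const msys = wrapper.msys || {};
        const evt = msys.message_event || msys.track_event || msys.gen_event ||
                    msys.unsubscribe_event || msys.relay_event;
        if (!evt) continue;

        // Event timestamps arrive as epoch seconds
        await client.query(
          'INSERT INTO events (event_time, payload) VALUES (to_timestamp($1), $2)',
          [Number(evt.timestamp), JSON.stringify(evt)]
        );
      }
    } finally {
      await client.end();
    }
  }
};
```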

The only missing piece now is a way to search for specific types of events. We can implement this using AWS SAM as well, which gives us some nice shortcuts. This last Lambda is essentially a translator between query parameters and SQL. There are quite a few options for query builders in Node, and I picked Squel.js, which was a good balance between simplicity, dependencies, and features.
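Here’s a simplified sketch of that translation, assuming the same hypothetical `events` table as above and a couple of Message Events-style parameters (`events`, `from`, `to`, `per_page`); the real endpoint supports far more:

```javascript
const squel = require('squel').useFlavour('postgres');
const { Client } = require('pg');

exports.handler = async (event) => {
  const params = event.queryStringParameters || {};

  // Build the SELECT from the query-string parameters
  let query = squel.select()
    .field('payload')
    .from('events')
    .order('event_time', false)                  // newest first
    .limit(Number(params.per_page) || 1000);

  // e.g. ?events=open,click (simplified here to the first type only)
  if (params.events) {
    query = query.where("payload->>'type' = ?", params.events.split(',')[0]);
  }
  // e.g. ?from=2017-06-01T00:00:00Z&to=2017-06-02T00:00:00Z
  if (params.from) query = query.where('event_time >= ?', params.from);
  if (params.to) query = query.where('event_time <= ?', params.to);

  const { text, values } = query.toParam();      // numbered $1, $2 placeholders

  const client = new Client();                   // PG* environment variables again
  await client.connect();
  try {
    const result = await client.query(text, values);
    return { statusCode: 200, body: JSON.stringify(result.rows.map(r => r.payload)) };
  } finally {
    await client.end();
  }
};
```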

This system now achieves what it set out to – we’re storing event data provided via Webhook, following best practices, and can query the data using a familiar interface. And if you need to, it’s straightforward to customize: update the query_events Lambda to add new ways to pull out the data you need, and add indexes to the database to make those custom queries faster.

Why Tho, and What Next?

SparkPost sends a lot of data along with our events. For example, transmission metadata lets our customers include things like their own internal user id with each email. Event data such as opens and clicks will now include that user id, making it easier to tie things together.
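For instance, a transmission sent with the official Node.js client could carry a hypothetical user id like this (the metadata key, template id, and addresses are made up for illustration); open and click events for those messages would then include that id in their payloads:

```javascript
const SparkPost = require('sparkpost');
const client = new SparkPost(process.env.SPARKPOST_API_KEY);

client.transmissions.send({
  content: { template_id: 'welcome-email' },   // hypothetical stored template
  metadata: { user_id: '12345' },               // your own internal id, echoed back in events
  recipients: [{ address: { email: 'someone@example.com' } }]
})
  .then(() => console.log('transmission queued'))
  .catch(err => console.error(err));
```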

Because every customer uses features like metadata differently, it’s nigh impossible for us to give everyone exactly the type of search options they’d like. Running your own event database means you’re free to implement custom search parameters. Many of our larger customers already have systems like this, whether it’s a third party tool or something they built themselves. This project aims to lower the barriers to entry, so anyone with a moderate level of familiarity with AWS and the command line can operate their own event database more easily.

There are a few things I’d like to do next, for example setting up authentication on the various endpoints, since as things stand they’re open to the public. I discuss a solution to this in the repo, since exposing your customers’ email addresses to the public is a no-no.

I’d also like to perform some volume testing on this system. The free tier RDS database in this setup has 20GB of storage; I’m curious to see how quickly that would fill up. It would also be nice to complete the CloudFormation conversion. Currently, the database is managed separately from the CF stack, and creating the required tables and stored procedures requires punching a hole through the firewall, er, security group. It would be nice to standardize and automate that step as well, instead of requiring mouse clicks in the AWS console.

Thanks for reading! Give us a shout on Twitter, and star, fork, or submit a PR on GitHub if you enjoyed the post. We’d love to hear about what you build!

– Dave Gray, Principal Software Engineer

 


AWS re:Invent 2016

AWS re:Invent is an annual conference organised by Amazon Web Services, Amazon’s cloud computing division. This year it was in Las Vegas, Nevada, USA, with 32,000 people attending from all over the world.

I attended, along with a number of my colleagues. The SparkPost service is hosted in the AWS cloud. We use a number of AWS services to build and operate the service. We are a couple of years into our cloud journey, and we have plenty left to learn.

What I Wanted to Get Out of the Conference

I wanted to know more of the nitty-gritty of the services we use – gotchas, recommendations, anti-patterns to avoid, etc. – by attending some of the 400+ breakout sessions. I also wanted to share problems and solutions with other folks.

Finally, I wanted to spend some quality time catching up with my colleagues. We have quite a geographically distributed team, spread across eight hours of time zones. So this was actually my first opportunity to meet some of them face-to-face!

 

Serverless Computing

I attended a number of sessions around serverless computing, where you don’t have dedicated infrastructure running all the time. On the surface, it seems like this could be a cheaper way to provide some services, by automatically scaling the infrastructure up and down to match the load.

AWS Lambda is serverless — it runs a single function in response to an input. Behind the scenes, the Lambda runs in a container, which can be reused across invocations. Resources created in the global scope of your Lambda code are also reused across invocations, for example a DynamoDB connection, which reduces latency.
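As a quick illustration of that reuse pattern (the table and key names below are placeholders):

```javascript
const AWS = require('aws-sdk');

// Global scope: created once per container and reused across warm invocations
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  // Handler scope: runs on every invocation, reusing the client created above
  const result = await docClient.get({
    TableName: 'example-table',
    Key: { id: event.id }
  }).promise();

  return result.Item;
};
```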

AWS is increasingly using Lambda as an extensibility mechanism — like the cloud equivalent of “hooks” or “callbacks” in software, or webhooks. In the DynamoDB Deep Dive session, Lambdas were described as stored procedures for DynamoDB, which is an interesting way to view them. Lambdas can be triggered when you make changes to a DynamoDB table, via Streams. This could be used to insert an alternative representation of the data into another DynamoDB table, or to push data into a Redis cache.
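A minimal sketch of that idea, assuming a Lambda subscribed to a table’s stream that mirrors new items into a second, differently keyed table (both table names are placeholders):

```javascript
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB();

exports.handler = async (event) => {
  for (const record of event.Records) {
    // Only react to newly inserted items
    if (record.eventName !== 'INSERT') continue;

    // NewImage is already in DynamoDB attribute-value format, so it can be
    // written to the alternate table as-is
    await dynamo.putItem({
      TableName: 'example-table-by-other-key',
      Item: record.dynamodb.NewImage
    }).promise();
  }
};
```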

Amazon’s API Gateway allows you to build serverless REST APIs, with the gateway handling the HTTP traffic and a Lambda doing the work. API Gateway can also call AWS services like DynamoDB directly.

Databases and Analytics

Amazon Athena is a new service that allows you to query and analyze data in S3. You point it at your data in S3, define a schema, and then query using SQL. This is like bringing the database to the data, rather than the data to the database. We are considering using Athena as a replacement for the Message Events API backend.

Tracing Distributed Applications

AWS announced X-Ray, a service that provides insight into where your distributed system spends its time by tracing API requests. It can give you various visualizations of traffic, from a top-level map of your services down to a timeline view showing how much time an API call had to wait on databases and other services. This seems invaluable for gaining insight into applications.

From Monoliths to Microservices

A number of speakers had moved from monolithic architectures to microservices, or to architectures with smaller, more focussed components, in order to increase scalability and/or agility and stay competitive. Cost reduction, higher developer velocity, and improved reliability seemed to follow naturally.

Closing Thoughts

The conference was very well organised, and everything went smoothly. Despite the number of people, it did not feel crowded. It was difficult to get into some of the sessions, but I was fortunate to get into nearly all of my chosen ones. There were repeats of the more popular sessions, and it was possible to get into full sessions by queueing on the day. The app (new this year) was also very helpful for navigating between sessions.

Las Vegas was a fun setting with its numerous restaurants, bars and the impressive architecture of the Venetian Hotel.

It was great to spend time with my colleagues away from our “day jobs”, sharing what we’d learnt and thinking how we could use it to improve our service for our customers.

We were also fortunate to have some face-to-face meetings with product and technical managers for some of the key AWS services we use, where we discussed some of our experiences and challenges. They were open and keen to hear our feedback, and to share some of the future direction of their services with us.

Lastly, I have barely scratched the surface of the conference. There were significant releases in Artificial Intelligence, Compute, Containers, Developer Tools, Hybrid Cloud and others. Please also see:

AWS re:invent 2016 | New Products & Services
Amazon Web Services YouTube channel