Azure Functions Webhook Interface

In November, I gave a talk at Live! 360 on how to create a basic webhook consumer using Azure Functions. This blog post recaps that talk and distills things so that you can understand the basics of Azure Functions and extend the framework solution found on GitHub.

What are Webhooks?

Webhooks are great little things provided by many popular services including SparkPost, Slack, Visual Studio Team Services, Office 365, Facebook, PayPal, and Salesforce. Webhooks post data based on an event to an endpoint you define.

Why serverless functions?

Serverless functions are a great innovation that helps teams deploy solutions rapidly while reducing organizational overhead. Having no hardware to maintain is a major benefit, and serverless functions can absorb unpredictable traffic flows. They are easy to deploy and update, so you can get up and running quickly.


Combining webhooks and serverless functions makes it very easy to create rich ecosystems for automation or user interaction. Being able to drive off the events and data generated by all of these disparate systems removes the need for complicated middleware while making it very easy to incorporate custom code and events.

Azure Functions Basics

Azure Functions can be created through the Azure Console or Visual Studio. I recommend that you give both a try so you are familiar with each experience. One of the nice things about creating a function in the Azure Console is that you can download the result as a Visual Studio solution. Visual Studio, meanwhile, gives you the same familiar, powerful IDE experience that you know and love.

There are advantages to both methods. The Azure Function console gives you direct control over the parameters of the function’s operation, from available resources to monthly usage limits for cost control. All of these options can also be set and manipulated from Visual Studio through the host.json file and environment variables.

Creating a Basic Webhook Consumer

Step 1

Create a New Visual Studio solution, and add a New Project to that solution.

Step 2

Right-click the project node, and add a New Item. Choose Azure Function.

Step 3

Now you have a very familiar Visual Studio project. You should have a template for a basic C# Azure Function. Time to build some code to consume your webhook.

Step 4

Debug locally. That’s right, you can debug this thing locally with all the familiar Visual Studio debugging and introspection tools. Pay attention to the debugging console, as it contains a lot of valuable information about every call that you make to your function during local testing.

Step 5

Publish the Azure Function. Right-click the project node and select Publish. Choose Azure Function and Create New. Note that you could update an existing function as well. The function will now appear in your Azure console.

Potential Pitfalls

The most common pitfall when working with webhook consumption and serverless functions is a function that runs too long. This either makes the function very costly or causes it to fail entirely because the webhook POST times out. There are a few things you can do to mitigate these issues.

Webhook consumers should run asynchronously. The data should be ingested as quickly as possible and then processed. The common design mistake is trying to process the data in real time as it comes in. That works as long as the payloads stay consistently small, but if the data size can grow or vary, it is best to make sure the data is received and the HTTP request responded to before any heavy processing, so that timeouts do not occur.
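The ingest-then-process pattern can be sketched in a few lines of Python. This is a minimal illustration, not SparkPost or Azure code: the in-memory queue and the `handle_webhook`, `worker`, and `process` names are hypothetical stand-ins for whatever durable queue and framework you actually use.

```python
import json
import queue

# Hypothetical in-memory work queue; a real deployment would use a durable
# queue (e.g. Azure Queue Storage), not process memory.
work_queue = queue.Queue()

def handle_webhook(request_body):
    """Accept the POST, enqueue the payload, and respond immediately.

    Parsing is kept minimal so the HTTP response is fast; the heavy
    processing happens later, off the request path.
    """
    events = json.loads(request_body)
    work_queue.put(events)          # hand off for background processing
    return {"status": 200, "body": "accepted"}

def worker():
    # Background consumer (run on a separate thread or process):
    # drains the queue and does the slow work.
    while True:
        events = work_queue.get()
        if events is None:          # sentinel to stop the worker
            break
        process(events)             # the long-running part lives here
        work_queue.task_done()

def process(events):
    pass  # placeholder for real processing
```

The key property is that `handle_webhook` returns before `process` ever runs, so the webhook sender sees a fast 200 regardless of payload size.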

Another thing that can help mitigate long-running processes is to store the posted data and use the serverless function to start a containerized process using something like Azure Container Services (AKS) to handle the long-running parts. Using this design, the serverless function should fire and forget the container, letting the container post its results either to a log or some other notification service of your choice. This keeps the serverless function as brief as possible while still allowing complicated processing to occur.

Let’s Light This Candle

There you have it. Now you can go forth and create your own rich ecosystem using serverless functions and webhooks. Below is a list of other resources to help you dive deeper into Azure Functions.

-Nick Zimmerman

PS – below are some additional resources on Azure Functions that you might find interesting – enjoy!

Tracking Recipient Preferences With The User Agent Header in Elixir

Note: this user agent header post illustrates itself using code written in Elixir. If you prefer, you can read the PHP version.

Much has been made of the relative commercial value of particular groups of people. From super consumers to influencers, iPhone users to desktop holdouts, learning about your recipients’ preferences is clearly important. In these days of deep personalization, it’s also just nice to know a little more about your customer base. Luckily this is a pretty easy job with SparkPost message events.

In this post, I’ll review the content of the User-Agent header, then walk through the process of receiving tracking events from SparkPost’s webhooks facility, parsing your recipients’ User Agent header and using the results to build a simple but extensible report for tracking Operating System preferences. I’ll be using Elixir for the example code in this article but most of the concepts are transferrable to other languages.

SparkPost Webhook Engagement Events

SparkPost webhooks offer a low-latency way for your apps to receive detailed tracking events for your email traffic. We’ve written previously about how to use them and how they’re built so you can read some background material if you need to.

We’ll be focusing on just the click event here. Each time a recipient clicks on a tracked link in your email, SparkPost generates a click event that you can receive by webhook. You can grab a sample click event directly from the SparkPost API here. The most interesting field for our purposes is naturally msys.track_event.user_agent, which contains the full User-Agent header sent by your recipient’s email client when they clicked the link.
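To make that nesting concrete, here is a Python sketch of the path described above. The event shown is heavily trimmed and purely illustrative (a real click event carries many more fields), and `user_agent_of` is a hypothetical helper:

```python
# Illustrative shape of a SparkPost click event, heavily trimmed.
# The nesting below matches the msys.track_event.user_agent path.
click_event = {
    "msys": {
        "track_event": {
            "type": "click",
            "user_agent": "Mozilla/5.0 (Linux; Android 8.0; Nexus 6P) ...",
        }
    }
}

def user_agent_of(event):
    # Walk the nested keys; return None if any part of the path is absent.
    return event.get("msys", {}).get("track_event", {}).get("user_agent")
```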

Grokking The User Agent

Ok so we can almost pick out the important details from that little blob of text. For the dedicated, there’s a specification but it’s a tough read. Broadly speaking, we can extract details about the user’s browser, OS and “device” from their user agent string.

For example, from my own user agent:

…you can tell I’m an Android user with a Huawei Nexus 6P device (and that it’s bang up-to-date ;).

Caveat: user agent spoofing

Some of you might be concerned about the information our user agent shares with the services we use. As is your right, you can use a browser plugin (Chrome, Firefox) or built-in browser dev tools to change your user agent string to something less revealing. Some services on the web will alter your experience based on your user agent though so it’s important to know the impact these tools might have on you.

Harvesting User Agents From SparkPost Tracking Events

Alright, enough theory. Let’s build out a little webhook service to receive, process and stash user agent details for each click tracked through our SparkPost account.

Elixir And The Web: Phoenix

The de facto standard way to build web services in Elixir is the Phoenix Framework. If you’re interested in a Phoenix getting started guide, the docs are excellent and the Up and Running guide in particular is a great place to start.

We’ll assume you already have a basic Phoenix application and focus on adding an HTTP endpoint to accept SparkPost webhook event batches.

Plug: Composable Modules For The Web

Elixir comes with a specification called ‘Plug’ (defined here) which makes it easy to build up layers of so-called middleware on an HTTP service. The simplest form of plug is a function that accepts a connection and a set of options. We’ll use this form to build up our webhook consumer.

Handling SparkPost Webhooks Requests

Our first task is to create a “pipeline”, which is a sequence of transformations that a connection goes through. A pipeline in Phoenix is just a convenient way to compose a sequence of plugs and apply them to some group of incoming requests.

We’ll first create a “webhook” pipeline and then add plugs to it to handle the various tasks in our service. All this happens in our application’s Router module:

You can read more about Phoenix routing and plug pipelines in the routing section of the Phoenix docs. For now, it’s important to realize that each Phoenix application includes an endpoint module which is responsible for setting up basic request processing. This includes automatic JSON parsing, which we’ll rely on here.

Unpacking SparkPost Events

Our event structure contains a certain amount of nesting which we can now strip out in preparation for consuming the tasty details inside. This is a job for our very first plug:

There is a little magic going on here. Our endpoint applies the JSON parser plug to all requests before our pipeline starts, so our unpack_events plug can rely upon the _json param left on the connection by the JSON parser.

The rest of unpack_events just extracts the contents of the msys key on each event, then the contents of the first key in that object. Finally, unpack_events stores the unpacked events on a connection param for later plugs to pick up.
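The plug itself is written in Elixir, but the unpacking logic it performs can be sketched in Python for readers following along in another language (`unpack_events` here is a stand-in for the plug, not its actual code):

```python
def unpack_events(batch):
    """Strip the outer nesting from a SparkPost event batch.

    Each raw event looks like {"msys": {"<event class>": {...}}};
    we keep only the innermost object, as the unpack_events plug does.
    """
    unpacked = []
    for raw in batch:
        inner = raw.get("msys", {})
        if not inner:
            continue
        # Take the contents of the first (and only) key inside "msys".
        first_key = next(iter(inner))
        unpacked.append(inner[first_key])
    return unpacked
```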

Filtering Events

Now let’s retain just the click events (when we register our webhook with SparkPost later, we can also ask it to send only click events):

This plug leaves our filtered events on the :events connection param. filter_event_types accepts a list of types we care about.

There’s a lot of detail in a single event. It might be a good idea to pare things down to just the fields we care about:
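Sketched in Python rather than Elixir, the two filtering steps might look like this (both function names are hypothetical stand-ins for the plugs described above):

```python
def filter_event_types(events, types):
    # Keep only events whose "type" is in the allow-list.
    return [e for e in events if e.get("type") in types]

def filter_fields(events, fields):
    # Pare each event down to just the fields we care about.
    return [{k: e[k] for k in fields if k in e} for e in events]
```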

After The Plug Pipeline: The Controller

To finish up our webhook request handling, we need a controller which runs after the plug pipeline to process the request and produce a response for the client. Here’s a skeleton Controller:

Then we can wire ApiController.webhook/2 to our router:

When we register our web service with SparkPost as a webhook consumer, it’ll make HTTP requests to it containing a JSON payload of events. Now our service has a /webhook endpoint that accepts JSON, cuts our event batch down to size and responds with a happy little “ok!”.

Testing Our Progress

We can test our service by sending a test batch to it. Luckily, the SparkPost API will generate a test batch for you on request.

  1. Grab a sample webhooks event batch from the SparkPost API: Note: this step uses cURL and jq. You can skip the jq part and remove the results key from the JSON file yourself:
    curl | jq .results > batch.json
  2. Start our Phoenix service:
    mix phx.server
  3. Send our test batch to the service:
    curl -XPOST -H "Content-type: application/json" -d @batch.json http://localhost:4000/webhook

Parsing User-Agent

Now we’re ready to enrich our events with new information. We’ll parse the user agent string and extract the OS using the ua_inspector module. We can easily add this step to the API plug pipeline in our router:

Note: If you’re following along, remember to add ua_inspector as a dependency in mix.exs and configure it.

Note: not all user agent strings will contain the detail we want (or even make sense at all) so we label all odd-shaped clicks with “OS: unknown”.

Alright, now we have an array of events containing only interesting fields and with an extra “os” field to boot.

Generating Report-Ready Summary Data

At this point, we could just list each event and call our report done. However, we’ve come to expect some summarisation in our reports, to simplify the task of understanding. We’re interested in OS trends in our email recipients, which suggests that we should aggregate our results: collect summaries indexed by OS. Maybe we’d even use a Google Charts pie chart.

We could stop there citing “exercise for the reader” but I always find that frustrating so instead, here’s a batteries-included implementation which stores click events summaries in PostgreSQL and renders a simple report using Google Charts.

An Exercise For The Reader

I know, I said I wouldn’t do this. Bear with me: if you were paying attention to the implementation steps above, you might have noticed several re-usable elements. Specifically, I drew a few filtering and reporting parameters out for re-use:

  • event type filters
  • event field filters
  • event “enrichment” functionality

With minimal effort, you could add, filter on and group the campaign_id field to see OS preference broken down by email campaign. You could also use it as a basis for updating your own user database from bounce events with type=bounce, fields=rcpt_to,bounce_class and so on.

I hope this short walkthrough gave some practical insight on using SparkPost webhooks. With a little experimentation, the project could be made to fit into plenty of use cases and I’d be more than happy to accept contributions on that theme. If you’d like to talk more about the user agent header, your own event processing needs, SparkPost webhooks, Elixir or anything else, come find us on Slack!


How can Webhooks be easier, and searching event data (AKA Message Events) maybe even greater? We’ll try to answer in this post and open source some code along the way.

Shouting “Show me the data!” will earn you funny looks from most people, but not from us here at SparkPost. We are all about the data, both internally as we decide what to build, and externally when we’re delivering event data to you via Webhooks or Message Events.

Tom Cruise may actually want to see the money, but for our customers, data is king. Many of them make heavy use of our Webhooks (push model) to receive batches of event data via HTTP POST. Others prefer to use our Message Events endpoint, which is a pull model – you’re querying the same events, although data retention is limited to 10 days, as of this writing.

Now I don’t know about you, but whenever I hear that something is limited, the first thing I want to do is find a way around that limitation. The second thing is to show other people how I did it. In this post, I’m going to show you how to bypass our Message Events data retention limit by rolling your own low-cost queryable event database.

Building Blocks of a Service

The vision here is to ingest batches of event data, delivered by SparkPost’s Webhooks, and then be able to query that data, ideally for free. At least for cheap. Luckily, there are published best practices for doing the first part. One way to keep costs down (at least initially) is to use the AWS free tier, which is the way we’ll go in this post.

First, I’ll walk through the services I ended up using, and then briefly discuss what else I tried along the way, and why that didn’t make the cut. Almost everything in this system is defined and deployed using CloudFormation, along with pieces from the AWS Serverless Application Model (SAM). Under the hood, this uses API Gateway as an HTTP listener, and Node.js Lambda functions to “do stuff” when requests are received or in response to other interesting events. More on that later.

According to the best practices linked above, we need to return 200 OK ASAP, before doing any processing of the request body, where the event data is. So we’ll run a Lambda to extract the event data and batch id from the HTTP request and save it to S3. At this point, we’re capturing the data but can’t do a whole lot with it just yet.
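As a rough Python sketch of that ingest Lambda: the bucket name, key scheme, and injected `put_object` callable are assumptions made to keep the example self-contained; a real handler would call boto3’s `s3.put_object` directly. SparkPost webhook batches carry an `X-MessageSystems-Batch-ID` header, which makes a convenient S3 key.

```python
import json

def lambda_handler(event, context, put_object=None):
    """Sketch of the ingest Lambda behind API Gateway.

    Returns 200 as fast as possible; the request body is stashed in S3,
    keyed by batch id, so a later function can process it. put_object
    stands in for an S3 client call (injected to keep the sketch
    self-contained and testable).
    """
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    batch_id = headers.get("x-messagesystems-batch-id", "unknown-batch")
    key = f"incoming/{batch_id}.json"
    if put_object is not None:
        # In a real Lambda: boto3.client("s3").put_object(...)
        put_object(Bucket="event-batches", Key=key, Body=event.get("body", ""))
    return {"statusCode": 200, "body": json.dumps({"stored": key})}
```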

Databases and Event Data

There are all sorts of options out there when it comes to databases. I chose RDS PostgreSQL since it’s a (somewhat) managed service that’s eligible for the AWS free tier. Also, I’m already familiar with it, and had some automatic partitioning code lying around that would be better as open source.

Now seems like a good time to talk about what didn’t make the cut, especially since there were so many interesting options to choose from. The first database-y thing I considered was Athena, which would let us query directly against S3. Right out of the gate, unfortunately, there’s a snag: Athena isn’t eligible for the free tier; it’s priced based on the amount of data scanned by each query. We get a raw JSON feed from the webhook, so optimizing the storage of that data to be cost-effective to query would be its own project.

Another database I didn’t use is Dynamo, which would have been super convenient since AWS SAM bakes in support for it. Event data in combination with the types of queries the system needed to support isn’t a great fit for Dynamo though since it doesn’t allow the number of secondary indexes we’d need in order to efficiently support the wide range of queries that Message Events provides. Dynamo would definitely have been the low-stress option. Using RDS meant I had to poke around a bit more in AWS networking land than I had planned to.

Connecting the Data Dots

Our event data is stored in S3, and we’ve chosen a database. Triggers aren’t just for databases, thankfully, and S3 lets you configure Lambda functions to run for various types of events. We’ll fire our next Lambda when a file is created in the bucket that our Webhook listener writes to. It’ll read the batch of event data, and load it into our database, which closes the loop. We’re now asynchronously loading event data sent via Webhook into our database.
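A hedged sketch of the loader’s transformation step, with the database call itself omitted and the column choices (type, timestamp, raw payload) purely illustrative:

```python
import json

def rows_for_insert(batch_json):
    """Flatten a raw webhook batch into (type, timestamp, payload) tuples
    ready for a parameterized INSERT (the actual DB call is omitted)."""
    rows = []
    for raw in json.loads(batch_json):
        inner = raw.get("msys", {})
        for event in inner.values():
            rows.append((
                event.get("type"),
                event.get("timestamp"),
                json.dumps(event),   # keep the full payload for later queries
            ))
    return rows
```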

The only missing piece now is a way to search for specific types of events. We can implement this using AWS SAM as well, which gives us some nice shortcuts. This last Lambda is essentially a translator between query parameters and SQL. There are quite a few options for query builders in Node, and I picked Squel.js, which was a good balance between simplicity, dependencies, and features.
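The actual Lambda uses Squel.js in Node; the translation idea, sketched here in Python with hypothetical column names, is to whitelist parameters and bind values rather than interpolate them into the SQL text:

```python
def build_query(params, allowed=frozenset({"type", "campaign_id", "recipient"})):
    """Translate query-string parameters into a parameterized SELECT.

    Only whitelisted columns are accepted, and values are passed as
    bind parameters, so user input never reaches the SQL string itself.
    """
    clauses, binds = [], []
    for name, value in sorted(params.items()):
        if name not in allowed:
            continue            # silently ignore unsupported parameters
        clauses.append(f"{name} = %s")
        binds.append(value)
    sql = "SELECT payload FROM events"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, binds
```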

This system now achieves what it set out to – we’re storing event data provided via Webhook, following best practices, and can query the data using a familiar interface. And if you need to, it’s straightforward to customize by updating the query_events Lambda to add new ways to pull out the data you need, and indexes can be added to the database to make those custom queries faster.

Why Tho, and What Next?

SparkPost sends a lot of data along with our events. For example, transmission metadata lets our customers include things like their own internal user id with each email. Event data such as opens and clicks will now include that user id, making it easier to tie things together.

Because every customer uses features like metadata differently, it’s nigh impossible for us to give everyone exactly the type of search options they’d like. Running your own event database means you’re free to implement custom search parameters. Many of our larger customers already have systems like this, whether it’s a third party tool or something they built themselves. This project aims to lower the barriers to entry, so anyone with a moderate level of familiarity with AWS and the command line can operate their own event database more easily.

There are a few things I’d like to do next, for example, setting up authentication on the various endpoints, since as things are now, they’re open to the public. I discuss a solution to this in the repo, since exposing your customers’ email addresses to the public is a no-no.

I’d also like to perform some volume testing on this system. The free tier RDS database in this setup has 20GB of storage, I’m curious to see how quickly that would fill up. It would also be nice to complete the CloudFormation conversion. Currently, the database is managed separately from the CF stack, and creating the required tables and stored procedures requires punching a hole through the firewall, er, security group. It would be nice to standardize and automate that step as well, instead of requiring mouse clicks in the AWS console.

Thanks for reading! Give us a shout on Twitter, and star, fork or submit a PR on Github if you enjoyed the post. We’d love to hear about what you build!

– Dave Gray, Principal Software Engineer


Our Latest Feature

Ever since we introduced subaccounts back in 2016 we’ve worked hard to enhance the capabilities that functionality provides. This latest enhancement — on the heels of webhooks by subaccount — allows users to limit which subaccounts have access to which templates. This has been one of the most requested enhancements by both our enterprise customers and extended developer community.

With the enhancement, when creating a stored template, a master account can create the template exclusively for its own use. This makes it unavailable to any subaccounts. Alternatively, a master account can create and share the stored template with all subaccounts. This allows subaccounts — using their subaccount API keys — to call that shared template in their messaging but not edit it. This is great for companies who want to create stored templates centrally but allow individual customers or brands — managed as subaccounts — the use of those templates. This aspect of the functionality is similar to how sending domains work and is great for service providers managing their customers or partners as subaccounts. It’s also great for companies where templates are created centrally for brand consistency but where they want to allow different divisions or different message streams to use those templates.

Additionally, the master account can copy the stored templates to one or more specific subaccounts. This makes a copy that those subaccounts can now edit for themselves. This is great for individual business units within a SparkPost account that want to create templates centrally to maintain brand integrity, legal footers, and other aspects of the template. However, they still want to allow their subaccounts the ability to edit if they choose.

And lastly, individual subaccounts can create and edit their own stored templates that are then only available to that subaccount. For companies where individual divisions and brands operate independently as subaccounts, this provides flexibility to create templates without fear that a different division may accidentally call the wrong templates or make unauthorized edits.

Key Takeaway

We’ve built flexibility into how you create and manage your stored templates with subaccounts. This allows you to integrate in a way that makes sense for your business.

We expect to start shipping this enhancement this week, with rollout to all customers by the end of the month. You can expect the API docs and related support articles on the website this week.

—Irina Doliov
Senior Lead Product Manager


We love it when developers use SparkPost webhooks to build awesome responsive services. Webhooks are great when you need real-time feedback on what your customers are doing with their messages. They work on a “push” model – you create a microservice to handle the event stream.

Did you know that SparkPost also supports a “pull” model Message Events API that enables you to download your event data for up to ten days afterwards? This can be particularly useful in situations such as:

  • You’re finding it difficult to create and maintain a production-ready microservice. For example, your corporate IT policy might make it difficult for you to have open ports permanently listening;
  • You’re familiar with batch type operations and running periodic workloads, so you don’t need real-time message events;
  • You’re a convinced webhooks fan, but you’re investigating issues with your almost-working webhooks receiver microservice, and want a reference copy of those events to compare.

If this sounds like your situation, you’re in the right place! Now let’s walk through setting up a really simple tool to get those events.

Design goals

Let’s start by setting out the requirements for this project, then translate them into design goals for the tool:

  • You want it easy to customize without programming.
  • SparkPost events are a rich source of data, but some event-types and event properties might not be relevant to you. Being selective gives smaller output file sizes, which is a good thing, right?
  • Speaking of output files, you want event data in the commonly-used csv file format. While programmers love JSON, CSV is easier for non-technical users (and results in smaller files).
  • You want to set up your SparkPost account credentials and other basic information once and once only, without having to redo them each time it’s used. Having to remember that stuff is boring.
  • You need flexibility on the event date/time ranges of interest.
  • You want to set up your local time-zone once, and then work in that zone, not converting values manually to UTC time. Of course, if you really want to work in UTC, because your other server logs are all UTC, then “make it so.”
  • Provide some meaningful comfort reporting on your screen. Extracting millions of events could take some time to run. I want to know it’s working.

Events, dear programmer, events …

Firstly, you’ll need Python 3 and git installed and working on your system. For Linux, a simple procedure can be found in our previous blog post. It’s really this easy:

For other platforms, this is a good starting point to get the latest Python download; there are many good tutorials out there on how to install.

Then get the sparkyEvents code from Github using:

We’re the knights who say “.ini”

Set up a sparkpost.ini  file as per the example in the Github README file here.

Replace <YOUR API KEY> with a shrubbery... er, with your specific, private API key.

Host is only needed for SparkPost Enterprise service usage; you can omit it otherwise.

Events is a list, as per SparkPost Event Types; omit the line, or assign it blank, to select all event types.

Properties can be any of the SparkPost Event Properties. Definitions can split over lines using indentation, as per Python .ini file structure, which is handy as there are nearly sixty different properties. You can select just those properties you want, rather than everything; this keeps the output file to just the information you want.

Timezone can be configured to suit your locale. It’s used by SparkPost to interpret the event time range from_time and to_time that you give in command-line parameters. If you leave this blank, SparkPost will default to using UTC.
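Pulling those settings together, an illustrative sparkpost.ini might look like the following. Treat this as a sketch rather than a verbatim copy; the authoritative section and key names are in the project README:

```ini
# Illustrative sparkpost.ini (section/key names per the project README)
[SparkPost]
Authorization = <YOUR API KEY>
# Host: only needed for SparkPost Enterprise; omit otherwise
Events = click,open,bounce
# Properties can span lines, indented deeper than the first line
Properties = type
    timestamp
    rcpt_to
    user_agent
Timezone = America/New_York
```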

If you run the tool without any command-line parameters, it prints usage:

from_time and to_time are inclusive, so for example if you want a full day of events, use time T00:00 to T23:59.

Here’s a typical run of the tool, extracting just over 18 million events. This run took a little over two hours to complete.

That’s it! You’re ready to use the tool now. Want to take a peek inside the code? Keep reading!

Inside the code

Getting events via the SparkPost API

The SparkPost Python library doesn’t yet have built-in support for the message-events endpoint. In practice the Python requests library is all we need. It provides inbuilt abstractions for handling JSON data, response status codes, and so on, and is generally a thing of beauty.

One thing we need to take care of here is that the message-events endpoint is rate-limited. If we make too many requests, SparkPost replies with a 429 response code. We play nicely using the following function, which sleeps for a set time, then retries:
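The function itself was omitted above, so here is a minimal sketch of the same pattern. `api_request_with_retry` and its parameters are illustrative names, not the tool’s exact code; `make_request` is any callable returning an object with a `status_code` attribute (for example, a `functools.partial` around `requests.get`):

```python
import time

def api_request_with_retry(make_request, snooze_seconds=10, max_tries=5):
    """Call make_request() until it stops returning HTTP 429.

    On a 429 (rate-limited) response we sleep for a set time and try
    again; any other status is returned to the caller immediately.
    """
    for _ in range(max_tries):
        response = make_request()
        if response.status_code != 429:
            return response
        time.sleep(snooze_seconds)
    return response  # give up after max_tries; caller checks the status
```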

Practically, when using event batches of 10000 I didn’t experience any rate-limiting responses even on a fast client. I had to deliberately set smaller batch sizes during testing, so you may not see rate-limiting occur for you in practice.

Selecting the Event Properties

SparkPost’s events have nearly sixty possible properties. Users may not want all of them, so let’s select those via the sparkpost.ini file. As with other Python projects, the excellent ConfigParser library does most of the work here. It supports a nice multi-line feature:

“Values can also span multiple lines, as long as they are indented deeper than the first line of the value.”

We can read the properties (applying a sensible default if it’s absent), remove any newline or carriage-return characters, and convert to a Python list in just three lines:
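Roughly, those three lines look like this (the section and option names below are illustrative, and the ini text is inlined so the sketch is self-contained):

```python
import configparser

ini_text = """
[SparkPost]
Properties = type
    timestamp
    user_agent
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# Read with a sensible default, strip the line breaks, and
# convert the multi-line value into a Python list.
props = config["SparkPost"].get("Properties", "type")
props = props.replace("\r", "").replace("\n", ",")
fList = [p for p in props.split(",") if p]
```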

Writing to file

The Python csv library enables us to create the output file, complete with the required header row field names, based on the fList we’ve just read:

Using the DictWriter class, data is automatically matched to the field names in the output file and written in the expected order on each line. restval="" ensures we emit blanks for absent data, since not all events have every property. extrasaction="ignore" ensures that we skip extra data we don’t want.
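A self-contained illustration of those options, using io.StringIO in place of a real output file (the field names and events are made up):

```python
import csv
import io

fList = ["type", "timestamp", "user_agent"]
events = [
    {"type": "click", "user_agent": "Mozilla/5.0", "extra": "dropped"},
    {"type": "open", "timestamp": "1970-01-01T00:00"},
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=fList,
                        restval="",             # blanks for absent fields
                        extrasaction="ignore")  # skip fields not in fList
writer.writeheader()      # header row taken from fList
writer.writerows(events)  # each row matched to fList order
```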

That’s pretty much everything of note. The tool is less than 150 lines of actual code.

You’re the Master of Events!

So that’s it! You can now download squillions of events from SparkPost, and can customize the output files you’re getting. You’re now the master of events!

—Steve Tuck, Senior Messaging Engineer

ps: If you’re looking for more resources on APIs, check out the SparkPost Academy.


Data, Data, Data!

Imagine this: you’re an email service provider; or built an app that sends on behalf of other businesses; or a group within a larger company managing email on behalf of several divisions or brands. You connect to SparkPost, you set up subaccounts for each of your customers/divisions/brands, you send your email, and then . . . you confront the firehose of data that comes with webhook events for all those constituents. It’s a lot of data to consume. And a lot of data to separate for the relevant audiences.

Never fear, we’ve heard your cry. We actually have 2 enhancements that help those of you sending on behalf of others:

The first thing is that a single SparkPost account can now have multiple custom bounce domains (sometimes known as return-path domains). This enhancement went in a couple of weeks ago. Previously, a single SparkPost account could only have one custom bounce domain. This knowledge base article describes how to create them and why doing so improves your deliverability. For senders with multiple customers, you can set a bounce domain for each of your customers or brands, create a default for the account, and specify which one you want to use in the transmission API call.

  • Helpful hint: DO NOT use the UI to create multiple bounce domains. The UI has not yet been updated for the new functionality. That’s in the works. As we are an API-first company, we pushed the API update first, while we work to update the UI.

The second big enhancement is that you can now create separate webhook endpoints for each subaccount. This way, rather than getting ALL your account delivery and engagement data at one endpoint and having to filter out different subaccounts, you can create separate endpoints for each subaccount and pipe the relevant data to the right place. Here’s the article on subaccounts – updated for the new webhooks functionality.

Some helpful hints:

  • If you want to receive data for multiple (but not all) subaccounts at a single endpoint, you can give the same endpoint to multiple subaccounts.
  • If you want to receive data for just the master account (for example, if you only use your subaccount for testing and want to filter the test data out), enter “master” into the UI where you create your webhook. If you don’t enter anything into the subaccount field, you will get all data for the master and all subaccounts — current functionality.

Try It Out

Multiple bounce domains for a single account and webhooks by subaccount were two of the most requested features among our entire customer base — big and small. We listened and added these enhancements. Try them out and let us know what you think.

Amie, Nichelle, Irina

-SparkPost Product Team


One of my favorite things about my job is getting to take existing APIs and figure out ways to mix and match them with SparkPost to create cool and interesting apps. Thanks to our friend Todd Motto, there’s a Github repo full of public APIs for me to choose from. Using APIs from this repo, I’ve created apps like Giphy Responder and many others that didn’t quite make it to Github.

SparkPost recently sponsored ManhattanJS, which happened to be hosted at Spotify Headquarters in New York. This inspired me to take a look at the Spotify Web API and come up with something I could demo at the meetup. Their web API allows you to do many things, such as search for a song, get information about artists and albums, manage playlists, and so much more. Given that set of functionality, how could I combine it with sending or receiving email to create an engaging demo?

I love music. I was a Punk/Ska DJ in college. When I owned a car, I would sing in it (I still do when I rent one!). I've also been a Spotify Premium member since 2011. Now that I live in NYC and travel mostly underground, I rely heavily on my offline playlists. But here's the problem: I'm not hip or cool, and since I no longer listen to the radio, I don't know a lot of new music. This usually results in me sitting in a subway car listening to early-2000s emo bands or sobbing silently to myself while listening to the cast recording of A New Brain.

So yeah… I need suggestions. Spotify has a great social experience but sadly, not everyone has Spotify. But wouldn’t it be cool if you could email songs to a collaborative playlist? I’m pretty sure everyone has access to email. This would also be a great way to create a playlist for an event. So I set out to create JukePost.

The idea was simple. First I'd create an inbound domain that would allow me to send an email to {playlist} with a list of songs. (Note: {playlist} has to be an already-existing, collaborative playlist.) Then I'd create a Node.js app using Express.js to process the relay webhook, search for each song and add it to the specified playlist, and reply with a confirmation that included the songs added and a link to the playlist.

Webhooks, You Gotta Catch 'Em All!

For this application, I decided to use Firebase, a real-time NoSQL database. I like to use it for a lot of my demo apps because it makes receiving webhooks extremely easy. It will even receive them when your app isn't running. You just need to set the target of your webhook to your Firebase URL + store name + .json.
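
That target URL is simple enough to build by hand. A quick sketch (the database URL here is a made-up example):

```javascript
// The relay webhook target is just the Firebase database URL, a store
// name, and ".json" (Firebase's REST endpoint convention).
function firebaseTarget(baseUrl, storeName) {
  // Trim any trailing slash so we don't end up with "//" in the path.
  const trimmed = baseUrl.replace(/\/+$/, '');
  return trimmed + '/' + storeName + '.json';
}

console.log(firebaseTarget('https://jukepost-demo.firebaseio.com', 'raw-inbound'));
// → https://jukepost-demo.firebaseio.com/raw-inbound.json
```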

So, let’s set up an inbound domain and create a relay webhook to point to a Firebase database. I’m going to use the SparkPost Node CLI, but you’re welcome to use your favorite way to access the SparkPost API.

  1. Set up the MX records for your inbound domain. I'll be using
  2. Create your inbound domain:
    sparkpost inbound-domain create
  3. Create a relay webhook for your inbound domain targeting your Firebase URL. I’ll be using

At this point, you should be able to send an email to {anything} and see an entry under raw-inbound.
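
Under the hood, those CLI commands map onto two REST calls. Here's a sketch of the request payloads, with paths and field names following the SparkPost API's inbound-domains and relay-webhooks endpoints as I know them; the domain and target values are placeholders.

```javascript
// Request body for registering an inbound domain.
function inboundDomainRequest(domain) {
  return { method: 'POST', path: '/api/v1/inbound-domains', body: { domain: domain } };
}

// Request body for a relay webhook: match SMTP traffic for the inbound
// domain and forward each message to the target (e.g. your Firebase store).
function relayWebhookRequest(name, target, inboundDomain) {
  return {
    method: 'POST',
    path: '/api/v1/relay-webhooks',
    body: {
      name: name,
      target: target,
      match: { protocol: 'SMTP', domain: inboundDomain }
    }
  };
}

console.log(relayWebhookRequest(
  'JukePost', 'https://jukepost-demo.firebaseio.com/raw-inbound.json',
  'mail.example.com').body.match);
```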

Playing with the Playlist

Now that we’re catching the incoming emails, we need a broker app to parse out the data, handle the interactions with Spotify, and trigger a response email.

First, we need to handle authenticating to Spotify using OAuth 2.0. This was my first time doing that, and luckily I found the spotify-web-api-node npm package and a great blog post that assisted me in creating the login, callback, and refresh_token routes needed to get everything going. Once the application is authenticated, we can pull the user's public playlists, filter out the collaborative ones, and save them for later.

Now we can use the firebase npm package to listen for new inbound messages and process them accordingly. Because Firebase notifies us of new messages in real time, we can set up a listener.
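
The listener might look something like this. It's a sketch using the firebase npm package's child_added event, with the network wiring commented out so the message-handling shape stands on its own; the payload shape assumes SparkPost's relay_message format, and the URL is hypothetical.

```javascript
// Extract what we care about from one stored relay-webhook entry.
// Each item nests the message under msys.relay_message.
function processMessage(key, payload) {
  const relay = payload && payload.msys && payload.msys.relay_message;
  if (!relay) return null; // not a relay message; skip it
  return {
    id: key,
    from: relay.friendly_from,                 // who sent the email
    to: relay.rcpt_to,                         // {playlist}@inbound-domain
    text: relay.content && relay.content.text  // body with the song list
  };
}

// Wiring (commented so this sketch runs standalone):
// const Firebase = require('firebase');
// const ref = new Firebase('https://jukepost-demo.firebaseio.com/raw-inbound');
// ref.on('child_added', function (snapshot) {
//   const msg = processMessage(snapshot.key(), snapshot.val());
//   if (msg) { /* search Spotify, add tracks, send confirmation */ }
// });
```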

You can take a look at relayParser.js to see how I grab the relevant data from the relay message. Based on the information we parsed from the message body text, we now know who sent the message, which playlist to add songs to, and what songs to search for. We now have everything we need to find the songs and add them to the playlist. Be sure to add the song information to a substitution data object, as we’ll use that for the confirmation email.
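
The parsing step, reduced to its core idea, looks roughly like this. This is not the repo's relayParser.js, just a minimal stand-in for pulling "{title} by {artist}" lines out of the body text.

```javascript
// Turn a message body into a list of { title, artist } objects,
// skipping blank lines and lines that don't match the expected format.
function parseSongs(bodyText) {
  return bodyText
    .split(/\r?\n/)
    .map(function (line) { return line.trim(); })
    .filter(Boolean)
    .map(function (line) {
      const match = line.match(/^(.+)\s+by\s+(.+)$/i);
      return match ? { title: match[1].trim(), artist: match[2].trim() } : null;
    })
    .filter(Boolean);
}

console.log(parseSongs('Karma Police by Radiohead\nMr. Brightside by The Killers'));
```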


I chose to get a little fancy with my confirmation email. I decided to use a stored template that would return a link to the playlist and the songs that were added, along with artists, cover art, and a 30-second sample. I put my template HTML and a sample JSON object in the resources folder of the GitHub repo for your convenience.

This was a fun little project and the demo went over quite well. Want to try it for yourself? Send an email to [email protected] with the subject line Add and then add your favorite songs to the body in the format {title} by {artist}. Let’s build an awesome playlist together.

This is just the tip of the iceberg for what this app can do. Have a cool idea for an addition? Feel free to create an issue or submit a pull request.

– Aydrian Howard

P.S. Want to do more with inbound email processing? Check out my post about inbound email and other cool things you can do with it.

Top 10 Blogs: Our Year in Review

We're finishing out the year with a roundup of our top 10 blogs from 2016. The Mandrill announcement in April impacted our community, and as a result our blog, in a big way. We're recapping that along with other top posts on deliverability tips and email marketing best practices down below. As always, our ears are open, so if there's a certain topic you'd like to see on the blog, leave us a comment, tweet us, or ping us in Slack.

Without further ado, we give you the top 10 blogs of 2016:

#1 Mandrill Alternatives

It's no surprise that our Mandrill alternative blogs dominated our top 10 list (5 out of our top 10). We responded in real time to the Mandrill crisis, and our CEO even weighed in and made you a promise he intends to stick by for the long haul. The Mandrill incident also inspired us to create SendGrid and Mailgun migration guides; check them out when you have a chance.


#2 PHP

But beyond Mandrill, we also had some other top posts. Coming in second was using SparkPost in PHP. Believe it or not, many of you use PHP through our WordPress plugin.


#3 Advanced Email Templates

For developers who want to get the most out of SparkPost's templating capabilities, this post was meant for you! In this straightforward post, Chris Wilson makes sending email easy and gives you some pro tips along the way.


#4 What Recruiters Look for in a Dev Candidate

Everyone wants to know how to interview well. In this post, we shared what four tech recruiters look for when hiring developer and engineering candidates.


#5 Webhooks!

One of the most useful elements of SparkPost is our webhooks, and in this post, Ewan Dennis walks you through the basics and beyond. Knowing what to expect functionally, beyond the raw API spec, is half the battle when consuming new data sources like SparkPost webhooks.


#6 Outlook and Hotmail Email Deliverability

The Outlook inbox is one of the major destinations for most email senders, especially those with large numbers of consumer subscribers. It also has a reputation for being somewhat tricky to get into. In this post, one of our deliverability experts, Tonya Gordon, shares what senders need to know in order to get the best Hotmail/Outlook deliverability and ensure their messages reach the inbox.

#7 Announcing Subaccounts!

Thanks to your feedback during the Mandrill event, we expedited our release of subaccounts ahead of schedule. Our VP of Product told you how we process your feedback and what's available with subaccounts.


#8 Are You an Email Rookie?

Sometimes you need to go beyond a top 10 list, and in this case we did: 17 tips on how not to be labeled an email rookie. In this post we put together a list of common email marketing mistakes to avoid, delivered with a heavy dose of snark.


#9 Retail Marketing Stats You Need to Know

Do you know what the lowest e-commerce order generators are? In this post, we give you five tips and stats for mastering retail marketing, from social media to mobile and beacon-triggered emails.


#10 Setting Up SparkPost as your SMTP Relay

You know you need to send email, but you don't want to spend a lot of time or effort on it; you just want something that works out of the box. It's not too much to ask! Many frameworks, languages, and tools come with SMTP support, but the last step is the most important: an SMTP server. In this post, we walk you through how to set up SparkPost as your SMTP relay.

And that rounds out our Top 10 Blogs for 2016! Any industry trends or topics you think were under-represented? Leave us a comment below, or tweet us!



There are many ways to obtain metadata about your transmissions sent via SparkPost. We built a robust reporting system with over 40 different metrics to help you optimize your email deliverability. At first, we attempted to send metadata to our customers via carrier pigeons to meet customer demand for a push-based event system. We soon discovered that the JSON the birds delivered was not as clean as customers wanted. That’s when we decided to build a scalable Webhooks infrastructure using more modern technologies.

Event Hose

Like our reporting, the webhook infrastructure at SparkPost begins with what we call our Event Hose. This piece of the Momentum platform generates the raw JSON data that will eventually reach your webhook endpoint. As Bob detailed in his reporting blog post, after every message generation, bounce event, delivery, and so on, Momentum logs a robust JSON object describing every quantifiable detail (we found unquantifiable details didn't fit into the JSON format very well) of the event that occurred.

Each of these JSON event payloads is loaded into an AMQP-based RabbitMQ exchange. This exchange fans the messages out to the desired queues, including the queue that will hold your webhooks traffic. We currently use RabbitMQ as a key part of our application's infrastructure stack to queue and reliably deliver messages. We use a persistent queue to ensure that RabbitMQ holds each message until it's delivered to your consumer. In addition, the system we've built is ready to handle failures, downtime, and retries.
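
A toy picture of that fanout step, with JavaScript arrays standing in for queues (in production this is a durable RabbitMQ fanout exchange with persistent queues, not in-memory arrays):

```javascript
// Minimal in-memory fanout: every queue bound to the exchange receives
// a copy of each published event.
function makeExchange() {
  const queues = {};
  return {
    bind: function (name) { queues[name] = []; },
    publish: function (event) {
      Object.keys(queues).forEach(function (q) { queues[q].push(event); });
    },
    queue: function (name) { return queues[name]; }
  };
}

const exchange = makeExchange();
exchange.bind('webhooks');   // feeds the webhooks ETL
exchange.bind('reporting');  // feeds the reporting pipeline
exchange.publish({ type: 'delivery', rcpt: 'user@example.com' });
// both queues now hold their own copy of the event
```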

Webhooks ETL

Between RabbitMQ and your consumer, we have an ETL process that will create batches of these JSON events for each webhook you have created. We believe in the “eat your own dogfood” philosophy for our infrastructure. So our webhooks ETL process will call out to our public webhooks API to find out where to send your batches. Additional headers or authentication data may be added to the POST request. Then the batch is on its way to your consumer.

If your webhooks consumer endpoint responds to the POST request in a timely manner with an HTTP 200 response, the ETL process will acknowledge and remove the batch of messages from RabbitMQ. If the batch fails to POST to your consumer for any reason (timeout, 500 server error, etc.), it will be added to a RabbitMQ delayed queue. This queue holds the batch for a certain amount of time (we retry batches using an increasing backoff strategy based on how many times delivery has been attempted). After the holding time has elapsed, the ETL process will receive the already-processed batch and send it to your endpoint again. This retry process repeats until either your consumer has accepted the batch with a 200 response or the maximum number of retries has been reached.
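
An increasing backoff schedule of this kind can be sketched as follows; the base delay, multiplier, and cap here are illustrative numbers, not SparkPost's actual configuration.

```javascript
// Exponential backoff with a cap: the wait doubles on each failed
// attempt until it hits the ceiling.
function retryDelaySeconds(attempt, base, cap) {
  base = base || 60;    // assumed: first retry after 60 seconds
  cap = cap || 3600;    // assumed: never wait more than an hour
  const delay = base * Math.pow(2, attempt - 1);
  return Math.min(delay, cap);
}

// attempts 1..5 → 60, 120, 240, 480, 960 seconds
console.log([1, 2, 3, 4, 5].map(function (n) { return retryDelaySeconds(n); }));
```

The design point is that transient consumer failures cost little (a quick retry), while a consumer that stays down doesn't get hammered: the batches wait longer and longer in the delayed queue until the retry budget runs out.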

As each batch is attempted, the ETL also sends updates to the webhook API with status data about each batch. We keep track of the consumer's failure code, the number of retries, and the batch ID. If your webhook is having problems accepting batches, you can access this status data via the webhook API. You can also access it through the UI by clicking “View Details” in your webhook's batch status row.


Webhooks are an extremely useful part of the SparkPost infrastructure stack. They allow customers to receive event-level metadata on all of their transmissions in a push model. While we’re operating on RabbitMQ today, we’re always looking at more modern cloud-based message queueing technologies, such as SQS, to see what can best help us meet our customers’ needs.

If you’d like to see webhooks in action, try creating a webhook for your SparkPost account. As always, if you have any questions or would simply like to chat, swing by the SparkPost community Slack.

–Jason Sorensen, Lead Data Scientist