We love it when developers use SparkPost webhooks to build awesome responsive services. Webhooks are great when you need real-time feedback on what your customers are doing with their messages. They work on a “push” model – you create a microservice to handle the event stream.

Did you know that SparkPost also supports a “pull” model Message Events API that enables you to download your event data for up to ten days afterwards? This can be particularly useful in situations such as:

  • You’re finding it difficult to create and maintain a production-ready microservice. For example, your corporate IT policy might make it difficult for you to have open ports permanently listening;
  • You’re familiar with batch type operations and running periodic workloads, so you don’t need real-time message events;
  • You’re a convinced webhooks fan, but you’re investigating issues with your almost-working webhooks receiver microservice, and want a reference copy of those events to compare.

If this sounds like your situation, you’re in the right place! Now let’s walk through setting up a really simple tool to get those events.

Design goals

Let’s start by setting out the requirements for this project, then translate them into design goals for the tool:

  • You want it to be easy to customize without programming.
  • SparkPost events are a rich source of data, but some event-types and event properties might not be relevant to you. Being selective gives smaller output file sizes, which is a good thing, right?
  • Speaking of output files, you want event data in the commonly used CSV file format. While programmers love JSON, CSV is easier for non-technical users (and results in smaller files).
  • You want to set up your SparkPost account credentials and other basic information once and once only, without having to redo them each time it’s used. Having to remember that stuff is boring.
  • You need flexibility on the event date/time ranges of interest.
  • You want to set up your local time-zone once, and then work in that zone, not converting values manually to UTC time. Of course, if you really want to work in UTC, because your other server logs are all UTC, then “make it so.”
  • You want meaningful progress reporting on your screen. Extracting millions of events can take some time to run, and you’ll want to know it’s working.

Events, dear programmer, events …

First, you’ll need Python 3 and git installed and working on your system. For Linux, a simple procedure can be found in our previous blog post.

For other platforms, this is a good starting point to get the latest Python download; there are many good tutorials out there on how to install.

Then clone the sparkyEvents code from GitHub.

We’re the knights who say “.ini”

Set up a sparkpost.ini file as per the example in the GitHub README file here.
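A minimal sparkpost.ini might look like this (the key names follow the README example; the values shown are illustrative assumptions, not defaults):

```ini
[SparkPost]
Authorization = <YOUR API KEY>
Host = api.sparkpost.com
Events = delivery,bounce,open,click
Properties = timestamp,type,event_id,
    rcpt_to,subject,msg_size
Timezone = America/New_York
```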

Replace <YOUR API KEY> with your specific, private API key.

Host is only needed for SparkPost Enterprise service usage; you can omit it for sparkpost.com.

Events is a list, as per SparkPost Event Types; omit the line, or assign it blank, to select all event types.

Properties can be any of the SparkPost Event Properties. Definitions can split over lines using indentation, as per Python .ini file structure, which is handy as there are nearly sixty different properties. You can select just those properties you want, rather than everything; this keeps the output file to just the information you want.

Timezone can be configured to suit your locale. It’s used by SparkPost to interpret the event time range from_time and to_time that you give in command-line parameters. If you leave this blank, SparkPost will default to using UTC.

If you run the tool without any command-line parameters, it prints usage information.

from_time and to_time are inclusive, so for example if you want a full day of events, use time T00:00 to T23:59.

Here’s a typical run of the tool, extracting just over 18 million events. This run took a little over two hours to complete.

That’s it! You’re ready to use the tool now. Want to take a peek inside the code? Keep reading!

Inside the code

Getting events via the SparkPost API

The SparkPost Python library doesn’t yet have built-in support for the message-events endpoint. In practice, the Python requests library is all we need. It provides built-in abstractions for handling JSON data, response status codes, and so on, and is generally a thing of beauty.

One thing we need to take care of here is that the message-events endpoint is rate-limited. If we make too many requests, SparkPost replies with a 429 response code. We play nicely using the following function, which sleeps for a set time, then retries:
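A sketch of such a retry wrapper (the function name, parameters, and retry policy here are illustrative assumptions, not the actual sparkyEvents code):

```python
import time

import requests


def get_events_page(url, params, api_key, max_retries=5, snooze=10):
    """Fetch one page of message events, backing off politely on 429s."""
    headers = {'Authorization': api_key, 'Accept': 'application/json'}
    for _ in range(max_retries):
        res = requests.get(url, params=params, headers=headers, timeout=60)
        if res.status_code == 429:
            # Rate-limited: sleep for a set time, then retry the same request
            time.sleep(snooze)
            continue
        res.raise_for_status()      # fail loudly on any other error
        return res.json()
    raise RuntimeError('giving up after repeated rate-limiting')
```

The caller just sees a page of JSON results; the rate-limit handling is invisible.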

In practice, when using event batches of 10,000, I didn’t experience any rate-limiting responses even on a fast client. I had to deliberately set smaller batch sizes during testing, so you may well never see rate-limiting occur.

Selecting the Event Properties

SparkPost’s events have nearly sixty possible properties. Users may not want all of them, so let’s select those via the sparkpost.ini file. As with other Python projects, the excellent ConfigParser library does most of the work here. It supports a nice multi-line feature:

“Values can also span multiple lines, as long as they are indented deeper than the first line of the value.”

We can read the properties (applying a sensible default if it’s absent), remove any newline or carriage-return characters, and convert to a Python list in just three lines:
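Something along these lines (the section name, key name, and default value are assumptions based on the example .ini file):

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[SparkPost]
Properties = timestamp,type,
    rcpt_to,subject
""")

# The three lines: read with a sensible default, strip line breaks, split
properties = config['SparkPost'].get('Properties', 'timestamp,type')
properties = properties.replace('\r', '').replace('\n', '')
fList = properties.split(',')
```

ConfigParser joins the indented continuation lines with newlines, which is why the strip step matters.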

Writing to file

The Python csv library enables us to create the output file, complete with the required header row field names, based on the fList we’ve just read:

Using the DictWriter class, data is automatically matched to the field names in the output file and written in the expected order on each line. restval="" ensures we emit blanks for absent data, since not all events have every property. extrasaction="ignore" ensures that we skip extra data we don’t want.
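Putting that together, a minimal sketch (the field names and file name are illustrative):

```python
import csv

fList = ['timestamp', 'type', 'rcpt_to']  # field names read from the .ini

with open('events.csv', 'w', newline='') as outfile:
    # restval='' writes blanks for properties an event lacks;
    # extrasaction='ignore' drops properties we didn't ask for.
    fh = csv.DictWriter(outfile, fieldnames=fList,
                        restval='', extrasaction='ignore')
    fh.writeheader()
    fh.writerow({'timestamp': '2016-12-01T00:00:00+00:00',
                 'type': 'delivery',
                 'unwanted_property': 'silently dropped'})
```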

That’s pretty much everything of note. The tool is less than 150 lines of actual code.

You’re the Master of Events!

So that’s it! You can now download squillions of events from SparkPost, and can customize the output files you’re getting. You’re now the master of events!

—Steve Tuck, Senior Messaging Engineer


Data, Data, Data!

Imagine this: you’re an email service provider; or you’ve built an app that sends on behalf of other businesses; or you’re a group within a larger company managing email on behalf of several divisions or brands. You connect to SparkPost, you set up subaccounts for each of your customers/divisions/brands, you send your email, and then . . . you confront the firehose of data that comes with webhook events for all those constituents. It’s a lot of data to consume. And a lot of data to separate for the relevant audiences.

Never fear, we’ve heard your cry. We have two enhancements that help those of you sending on behalf of others:

The first is that a single SparkPost account can now have multiple custom bounce domains (sometimes known as return-path domains). This enhancement went in a couple of weeks ago. Previously, a single SparkPost account could have only one custom bounce domain. This knowledge base article describes how to create them and why doing so improves your deliverability. For senders with multiple customers, you can set a bounce domain for each of your customers or brands, create a default for the account, and specify which one you want to use in the transmission API call.

  • Helpful hint: DO NOT use the UI to create multiple bounce domains. The UI has not yet been updated for the new functionality; that’s in the works. As we are an API-first company, we pushed the API update first while we work to update the UI.
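To illustrate, a transmission request body can carry the chosen bounce domain via the top-level return_path field (the field values here are illustrative; check the transmissions API documentation for the exact schema):

```json
{
  "return_path": "bounces@customer-one.example.com",
  "content": { "template_id": "welcome-email" },
  "recipients": [ { "address": "recipient@example.com" } ]
}
```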

The second big enhancement is that you can now create separate webhook endpoints for each subaccount. This way, rather than getting ALL your account delivery and engagement data at one endpoint and having to filter out different subaccounts, you can create separate endpoints for each subaccount and pipe the relevant data to the right place.  Here’s the article on subaccounts – updated for the new webhooks functionality.

Some helpful hints:

  • If you want to receive data for multiple (but not all) subaccounts at a single endpoint, you can give the same endpoint to multiple subaccounts.
  • If you want to receive data for just the master account (for example, if you only use your subaccount for testing and want to filter the test data out), enter “master” into the UI where you create your webhook. If you don’t enter anything into the subaccount field, you will get all data for the master and all subaccounts — current functionality.

Try It Out

Multiple bounce domains for a single account and webhooks by subaccount were two of the most requested features among our entire customer base — big and small. We listened and added these enhancements. Try them out and let us know what you think.

Amie, Nichelle, Irina

-SparkPost Product Team


One of my favorite things about my job is getting to take existing APIs and figure out ways to mix and match them with SparkPost to create cool and interesting apps. Thanks to our friend Todd Motto, there’s a GitHub repo full of public APIs for me to choose from. Using APIs from this repo, I’ve created apps like Giphy Responder and many others that didn’t quite make it to GitHub.

SparkPost recently sponsored ManhattanJS, which happened to be hosted at Spotify Headquarters in New York. This inspired me to take a look at the Spotify Web API and come up with something I could demo at the meetup. Their web API allows you do many things, such as search for a song, get information about artists and albums, manage playlists, and so much more. Given that set of functionality, how could I combine it with sending or receiving email to create an engaging demo?

I love music. I was a Punk/Ska DJ in college. When I owned a car, I would sing in it (I still do when I rent one!). I’ve also been a Spotify Premium member since 2011. Now that I live in NYC and travel mostly underground, I rely heavily on my offline playlists. But here’s the problem: I’m not hip or cool, and since I no longer listen to the radio, I don’t know a lot of new music. This usually results in me sitting in a subway car listening to early 2000’s emo bands or sobbing silently to myself while listening to the cast recording of A New Brain.

So yeah… I need suggestions. Spotify has a great social experience but sadly, not everyone has Spotify. But wouldn’t it be cool if you could email songs to a collaborative playlist? I’m pretty sure everyone has access to email. This would also be a great way to create a playlist for an event. So I set out to create JukePost.

The idea was simple. First I’d create an inbound domain (listen.aydrian.me) that would allow me to send an email to {playlist}@listen.aydrian.me with a list of songs. (Note: {playlist} has to be an already-existing, collaborative playlist.) Then I’d create a Node.js app using Express.js to process the relay webhook, search for the song and add it to the specified playlist, and reply with a confirmation that included the songs added and a link to the playlist.

Webhooks, You Gotta Catch’em All!

For this application, I decided to use Firebase, a real-time NoSQL database. I like to use it for a lot of my demo apps because it makes receiving webhooks extremely easy. It will even receive them when your app isn’t running. You just need to set the target of your webhook to your Firebase URL + store name + .json.

So, let’s set up an inbound domain and create a relay webhook to point to a Firebase database. I’m going to use the SparkPost Node CLI, but you’re welcome to use your favorite way to access the SparkPost API.

  1. Set up the MX records for your inbound domain. I’ll be using listen.aydrian.me
  2. Create your inbound domain:
    sparkpost inbound-domain create listen.aydrian.me
  3. Create a relay webhook for your inbound domain targeting your Firebase URL. I’ll be using
    https://jukepost.firebaseio.com/raw-inbound.json

At this point, you should be able to send an email to {anything}@listen.aydrian.me and see an entry under raw-inbound.

Playing with the Playlist

Now that we’re catching the incoming emails, we need a broker app to parse out the data, handle the interactions with Spotify, and trigger a response email.

First, we need to handle authenticating to Spotify using OAuth 2.0. This was my first time doing that, and luckily I found the spotify-web-api-node npm package and a great blog post that assisted me in creating the login, callback, and refresh_token routes needed to get everything going. Once the application is authenticated, we can pull the user’s public playlists, filter out the collaborative ones, and save them for later.

Now we can use the firebase npm package to listen for new inbound messages and process them accordingly. Because Firebase notifies us of new messages in real time, we can set up a listener.

You can take a look at relayParser.js to see how I grab the relevant data from the relay message. Based on the information we parsed from the message body text, we now know who sent the message, which playlist to add songs to, and what songs to search for. We now have everything we need to find the songs and add them to the playlist. Be sure to add the song information to a substitution data object, as we’ll use that for the confirmation email.

 

I chose to get a little fancy with my confirmation email. I decided to use a stored template that would return a link to the playlist and the songs that were added, along with artists, cover art, and a 30-second sample. I put my template HTML and a sample JSON object in the resources folder of the GitHub repo for your convenience.

This was a fun little project and the demo went over quite well. Want to try it for yourself? Send an email to sparkpost@listen.aydrian.me with the subject line Add and then add your favorite songs to the body in the format {title} by {artist}. Let’s build an awesome playlist together.

This is just the tip of the iceberg for what this app can do. Have a cool idea for an addition? Feel free to create an issue or submit a pull request.

– Aydrian Howard

P.S. Want to do more with inbound email processing? Check out my post about inbound email and other cool things you can do with it.

Top 10 Blogs: Our Year in Review

We’re finishing out the year with a roundup of our top 10 blogs from 2016. The Mandrill announcement in April impacted our community, and as a result our blog, in a big way. We’re recapping that along with other top posts on deliverability tips and email marketing best practices down below. As always, our ears are open, so if there’s a certain topic you’d like to see on the blog, leave us a comment, tweet us, or ping us in Slack.

Without further ado, we give you the top 10 blogs of 2016:

#1 Mandrill Alternatives

It’s no surprise that our Mandrill alternative blogs dominated our top 10 list (five of our top 10). We responded in real time to the Mandrill crisis and told you why we could offer 100K emails/month for free. Our CEO even weighed in and made you a promise he intends to stick by for the long haul. The Mandrill incident also inspired us to create SendGrid and Mailgun migration guides; check them out when you have a chance.


#2 PHP

But beyond Mandrill, we also had some other top posts. Coming in second was using SparkPost in PHP. Believe it or not, many of you use PHP through our WordPress plugin.


#3 Advanced Email Templates

For developers who want to get the most out of SparkPost templating capabilities, this post was meant for you! In this straightforward post, Chris Wilson makes sending email easy and gives you some pro tips along the way.

 


#4 What Recruiters Look for in a Dev Candidate

Everyone wants to know how to interview well. In this post, we told you what four tech recruiters look for when hiring developer and engineering candidates.


#5 Webhooks!

One of the most useful elements of SparkPost is our webhooks, and in this post, Ewan Dennis walks you through the basics and beyond. Knowing what to expect functionally beyond the raw API spec is half the battle when consuming new data sources like SparkPost webhooks.


#6 Outlook and Hotmail Email Deliverability

The Outlook inbox is one of the major destinations for most email senders, especially those with large numbers of consumer subscribers. It also has a reputation for being somewhat tricky to get into. In this post, one of our deliverability experts, Tonya Gordon, shares what senders need to know in order to get the best Hotmail/Outlook deliverability and ensure their messages reach the inbox.

#7 Announcing Subaccounts!

Thanks to your feedback, the Mandrill event helped us expedite our release of subaccounts ahead of schedule. Our VP of Product told you about how we process your feedback and what’s available with subaccounts.


#8 Are You an Email Rookie?

Sometimes you need to go beyond a top 10 list and in this case we did — 17 tips on how not to be labeled an email rookie. In this post we put together a list of common mistakes, with a heavy dose of snark, on how to avoid being labeled an email marketing rookie.


#9 Retail Marketing Stats You Need to Know

Do you know what the lowest e-commerce order generators are? In this post, we give you five tips and stats for mastering retail marketing, from social media to mobile and beacon-triggered emails.


#10 Setting Up SparkPost as your SMTP Relay

You know you need to send email, but you don’t want to spend a lot of time or effort on it — you just want something that works out of the box. It’s not too much to ask! Many frameworks, languages, and tools come with SMTP support, but the last step is the most important – an SMTP server. In this post, we walk you through how to set up SparkPost as your SMTP Relay.

And that rounds out our Top 10 Blogs for 2016! Any industry trends or topics you think were under-represented? Leave us a comment below, or tweet us!

-Tracy


There are many ways to obtain metadata about your transmissions sent via SparkPost. We built a robust reporting system with over 40 different metrics to help you optimize your email deliverability. At first, we attempted to send metadata to our customers via carrier pigeons to meet customer demand for a push-based event system. We soon discovered that the JSON the birds delivered was not as clean as customers wanted. That’s when we decided to build a scalable Webhooks infrastructure using more modern technologies.

Event Hose

Like our reporting, the webhook infrastructure at SparkPost begins with what we call our Event Hose. This piece of the Momentum platform generates the raw JSON data that will eventually reach your webhook endpoint. As Bob detailed in his Reporting blog post, after every message generation, bounce event, delivery, etc., Momentum logs a robust JSON object describing every quantifiable detail (we found unquantifiable details didn’t fit into the JSON format very well) of the event that occurred.

Each of these JSON event payloads is loaded into an AMQP-based RabbitMQ exchange. This exchange fans the messages out to the desired queues, including the queue that holds your webhooks traffic. We currently use RabbitMQ as a key part of our application’s infrastructure stack to queue and reliably deliver messages. We use a persistent queue to ensure that RabbitMQ holds each message until it’s delivered to your consumer. In addition, the system we’ve built is ready to handle failures, downtime, and retries.

Webhooks ETL

Between RabbitMQ and your consumer, we have an ETL process that will create batches of these JSON events for each webhook you have created. We believe in the “eat your own dogfood” philosophy for our infrastructure. So our webhooks ETL process will call out to our public webhooks API to find out where to send your batches. Additional headers or authentication data may be added to the POST request. Then the batch is on its way to your consumer.

If your webhooks consumer endpoint responds to the POST request in a timely manner with an HTTP 200 response, the ETL process will acknowledge and remove the batch of messages from RabbitMQ. If the batch fails to POST to your consumer for any reason (timeout, 500 server error, etc.), it will be added to a RabbitMQ delayed queue. This queue holds the batch for a certain amount of time (we retry batches using an increasing backoff strategy based on how many times delivery has been attempted). After the holding time has elapsed, the ETL process will receive the already-processed batch and send it to your endpoint again. This retry process is repeated until either your consumer has accepted the batch with a 200 response or the maximum number of retries has been reached.

As each batch is attempted, the ETL also sends updates to the webhook API with status data about each batch. We keep track of the consumer’s failure code, number of retries and batch ID. If your webhook is having problems accepting batches, you can access this status data via the webhook API. You can also access it through the UI by clicking “View Details” in your webhook’s batch status row.

Conclusion

Webhooks are an extremely useful part of the SparkPost infrastructure stack. They allow customers to receive event-level metadata on all of their transmissions in a push model. While we’re operating on RabbitMQ today, we’re always looking at more modern cloud-based message queueing technologies, such as SQS, to see what can best help us meet our customers’ needs.

If you’d like to see webhooks in action, try creating a webhook for your SparkPost account. As always, if you have any questions or would simply like to chat, swing by the SparkPost community Slack.

–Jason Sorensen, Lead Data Scientist


Inbound Email Processing: Examples and Use Cases

When people think of email, images of inboxes flooded with unread messages instantly spring to mind. That’s because the focus is always on receiving email—but there’s a lot of power in sending an email too. At SparkPost, we give you the ability to not only send email, whether it be a transactional 1-to-1 message or a message sent to a list of recipients, but also the ability to receive messages you can programmatically take action on. By utilizing inbound email processing, specifically inbound relay webhooks, you can create some pretty cool interactive features that will push the envelope of what one may expect from email.

Opening this new channel of engagement provides us with new opportunities to interact with recipients. Let’s explore some popular inbound use cases that can be used in conjunction with regular outbound messaging. I’ll finish up with a fun example using both Inbound and Outbound transactional messages.

Cool, but what do I do with it?

When you create an Inbound Domain, you’re giving SparkPost permission to receive email on your behalf. SparkPost will then parse that email into JSON and post it to a target endpoint specified in a Relay Webhook. Once you have the data, what do you do with it? The simplest use case would be to log it to a database, but that’s not any fun. If someone sends you a message, they are most likely going to expect a response. So let’s go over some common inbound use cases.

Auto-Replies
As I said before, when someone sends a message, they are usually expecting some kind of response. In the simplest form, you can now reply with a canned response. If you want to get a little fancy with your reply, you can use information from the email whether it be the subject line, header information, or something parsed from the body, and pass it into an API to create a more custom response using templates. Whatever the content, you have the power to respond using a transactional message.

Raffle
Ditch the paper and create a raffle application where all a user needs to do is send an email to enter. Add all of the incoming messages to a database and use your favorite random number generator to pull out a winner. You can even set up triggered transactional messages for confirmation and winning emails. The Developer Relations team has been using a similar solution; check out the project on GitHub.

Voting System
Similar to the raffle use case, use the inbound messages to tally and track votes by allowing participants to cast votes via email. Again, it’s good to fire back a transactional message confirming that their vote was counted. You can even create a dashboard to show results in real time.

Proxying
Let’s take the message and do something meaningful with it. We could simply forward it to a mailbox; see our Deployable Forwarding Service. You could go a step further and analyze the content and route it to the right person or department. Or you could push the information into a third-party system. A good example is a help desk solution, where senders can email problems to help@company.com and create tickets for customer service.

Double Blind Messaging
You’ve probably encountered this use case on real estate sites, online marketplaces, and dating services. Using a combination of inbound relay and transactional email, you can create anonymous messaging between two parties. Anonymous emails are created on the inbound domain and mapped to real addresses. When someone sends an email to an anonymous address, you can intercept it, forward it to the real email address, and set the reply-to header to the sender’s anonymous address. This case proves useful when you need to preserve the privacy of the senders. See how DwellWell, the winners of the Developer Week 2016 Hackathon SparkPost Challenge, used Double Blind Messaging in their Affordable Housing App.


A little less talk, a lot more action.gif

Now that we’ve discussed some of the major use cases for inbound email processing, let’s have a little fun with one of them. Animated GIFs are hot right now, and Giphy.com has provided a great API. I’ve created an auto-responder that will do a search based on the subject of an email and respond with an email containing five animated GIFs from the search results. I like to call it Giphy-Responder, and you can try it out right now. Just send an email to gifme@sup.aydrian.me with some search keywords as the subject. If you’d like to see how it works and possibly set up your own, check it out on GitHub. It will walk you through all the necessary steps to set up your SparkPost account. I encourage you to fork it and have some fun with it.

Now you know a little more about how Inbound Email Processing can help you better engage with your senders. Hopefully you’re inspired to build something awesome and take your application to the next level.

Let us know what you build using inbound email on Twitter, in our Community Slack channel, or in the comments below!

-Aydrian Howard

Create production-ready webhook consumers quickly and avoid pitfalls along the way.

webhooks: beyond the basics

One of the most useful elements of SparkPost is our webhooks. Webhooks are the tool of choice when your apps need real-time info on who received your email and who didn’t, who opened messages and clicked on your links, who unsubscribed, and a host of other useful “what happened” type data.

In order to deliver that live stream of events, webhooks work in the opposite direction to most APIs. SparkPost’s webhook feature is a push service: it makes HTTP requests into your apps. That means webhooks are a bit like a backwards API call, and so they require some thought to use to greatest effect.

In this article, we explore a few important considerations for consuming SparkPost webhook events. To be clear, this is not a foundation-level walkthrough of SparkPost’s webhook mechanism. You can get that from this excellent introductory blog post, the webhooks API endpoint docs, and the event structure reference. Our goal here is to help you create production-ready webhook consumers quickly and to avoid some pitfalls along the way.

Event Batches: Just Lists Of Events

Let’s assume you already have a SparkPost account, you can send email, and you can register your own HTTP services with SparkPost as webhook endpoints. In short, we’re ready to start receiving and processing those tasty tracking events. What should your shiny new endpoint expect?

At the most basic level, when you use SparkPost to send and track email, it will emit events so your apps can track the progress of and recipient responses to your mail.

To achieve this, SparkPost periodically sends your webhook endpoint POST requests containing JSON-formatted arrays of events. A batch of events looks like this (full documentation here):
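A sketch of the shape (the field values here are illustrative, not real event data):

```json
[
  {
    "msys": {
      "message_event": {
        "type": "delivery",
        "timestamp": "1460989507",
        "rcpt_to": "recipient@example.com"
      }
    }
  },
  {
    "msys": {
      "track_event": {
        "type": "click",
        "timestamp": "1460989511",
        "rcpt_to": "recipient@example.com"
      }
    }
  }
]
```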

where:

  • event-class describes the event class this event belongs to (e.g. message_event, track_event, …)
  • event-type describes an exact event type within a class (e.g. delivery, click, link_unsubscribe, …)

Interpreting all the rich detail in there is the meat of your task as webhook consumer. Your ultimate use of these events will vary heavily by use case but there are some important commonalities we should each be aware of. Let’s move on from the basics to cover a few expectations and best practices.

Webhook Pings: Be Prepared

The moment you register your endpoint, SparkPost will send it a little HTTP request to verify reachability. This is our first interesting point: these little webhook ‘pings’ are not quite the same as the real event batches you’ll receive later. Instead, they look like this:
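A ping is essentially an empty batch, something like:

```json
[
  {
    "msys": {}
  }
]
```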

This ‘null batch’ structure actually makes sense: imagine SparkPost instead sent a few fake events to your production webhook endpoint. That might trigger untold knock-on effects as your endpoint attempts to interpret unexpected and faked-up event data. Safer then to send a minimal payload.

Still, it’s important to be aware of this, since your endpoint might choke on this degenerate payload if it’s only expecting fully-formed events. Of course, that’s not a problem for you, because you properly validate input before consuming it, right? 😉

Note: If you like to test against live APIs, you can test this case by programmatically registering a webhook endpoint using the webhook API endpoint to trigger the ‘ping’. You can always delete it afterwards.

Receiving Real Batches: Retries And Retention

So much for pings. Let’s get on to the real stuff. When SparkPost sends a batch of events, it expects a 200 HTTP response from your endpoint as acknowledgement of receipt. Any other response is interpreted as failure, causing SparkPost to try again later. SparkPost will attempt re-delivery of a failed batch for 8 hours before discarding it. That gives you a useful design parameter when building your endpoint. It also offers a hint about best practice for when we run into a problem consuming a batch: webhook endpoints should return a non-200 HTTP response if and only if they run into trouble taking ownership of a given batch. It would also be prudent to trigger an alert so you can investigate the issue before it becomes serious.

Transactional Safety: Acknowledging Receipt == Taking Ownership

This next point seems obvious but it’s worth making explicit: once you tell SparkPost “200 OK” on receipt of a batch, you own that batch. You’re solely responsible for its care and feeding from that point on.

Out of this comes another design requirement: stash each batch in durable storage before you acknowledge receipt. SparkPost will wait on the line for 10 seconds during batch delivery to allow you to consume a batch: that should be ample time to store it.

You might also be tempted to interpret each event in a given batch while SparkPost waits, or worse, to acknowledge first and then consume your events.

Remember though that once we ack, we can’t go back. There’s no getting that batch back if you choke on it, if a downstream service fails, or if lightning strikes. The failure modes that result in data loss here are numerous, to say the least: it’s risk central. Clearly, we should store, ack, and consume, in that order.
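The store-ack-consume order can be sketched in Python as a handler that writes the raw batch to durable storage first and only acknowledges once the write has succeeded. The spool-directory scheme and function names here are hypothetical, not part of any SparkPost SDK:

```python
import json
import os
import tempfile
import uuid

# Sketch of store-before-ack: write the raw batch to durable storage first,
# and only return 200 once the write has succeeded. Any failure returns a
# non-200, so SparkPost will retry the batch later instead of losing it.
def handle_batch(raw_body, spool_dir):
    """Return the HTTP status our endpoint should send for this batch."""
    try:
        json.loads(raw_body)                      # reject malformed JSON early
        path = os.path.join(spool_dir, uuid.uuid4().hex + ".json")
        with open(path, "w") as f:
            f.write(raw_body)
            f.flush()
            os.fsync(f.fileno())                  # make sure it hit the disk
        return 200                                # ack: we now own the batch
    except Exception:
        return 500                                # SparkPost will retry

spool = tempfile.mkdtemp()
assert handle_batch('[{"msys": {}}]', spool) == 200
assert handle_batch("not json", spool) == 500
```

Consumption then happens later, reading from the spool directory, safely after the ack.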

There is an important scaling consideration here too, as your email (and therefore event) volumes grow. Attempting to both receive and process incoming event batches in a single synchronous step will hurt the responsiveness of your endpoint as more, larger, and occasionally parallel batches are delivered to it. Here then is our next design requirement:

Design your SparkPost webhook endpoint to receive and store batches, then process them asynchronously to stay responsive as you scale.
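One minimal way to sketch that decoupling in Python is a queue between the endpoint handler and a worker thread: the handler only parses and enqueues (fast), while the worker consumes at its own pace. This is an illustrative pattern, not a production design:

```python
import json
import queue
import threading

# Sketch of receive/process decoupling: the endpoint side enqueues each
# batch and acks immediately, while a worker thread drains the queue
# asynchronously so the endpoint stays responsive under load.
batches = queue.Queue()
processed = []

def receive(raw_body):
    """Endpoint side: enqueue the parsed batch, then ack."""
    batches.put(json.loads(raw_body))
    return 200

def worker():
    """Consumer side: process batches at its own pace."""
    while True:
        batch = batches.get()
        if batch is None:                 # sentinel: shut down cleanly
            break
        processed.extend(batch)

t = threading.Thread(target=worker)
t.start()
assert receive('[{"msys": {"message_event": {"type": "delivery"}}}]') == 200
batches.put(None)                         # tell the worker to stop
t.join()
assert len(processed) == 1
```

In a real deployment the in-process queue would more likely be a durable store or message broker, so batches survive a crash between ack and processing.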

Event Consumption: Make It Easy On Yourself

So we can handle pings, and receive, store, and acknowledge event batches. Can we start consuming these things yet? Indeed yes, and once you start doing that, you’ll find another interesting commonality. Recall the event structure:

All the fun stuff is wrapped inside msys.whatever_event. So as you begin writing code to filter, extract, manipulate and consume events, you might find yourself typing a whole lot of references to msys.message_event.field_name this and msys.track_event.field_name that.

Here’s an observation: the type field is included in all events and contains all the information required to discriminate between events. That outer ‘event class’ wrapper is therefore useful but not essential.

Might your fingers (and your colleagues’ eyeballs) tire more slowly if you strip out the first two layers of each event first, then just rely on the type field? It certainly seems that way, but there is a trade-off (isn’t there always?) against the system resource cost of that de-nesting step in your chosen software environment.
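Here's a Python sketch of that de-nesting step: strip the outer event-class wrapper ("message_event", "track_event", and so on) and rely on the inner type field to discriminate. The sample field names below are illustrative:

```python
# Sketch of the de-nesting idea: discard the "msys.whatever_event" wrapper
# so downstream code reads event["type"] instead of
# msys.message_event.field_name this and msys.track_event.field_name that.
def flatten(batch):
    """Strip the outer two layers, keeping just the inner event objects."""
    flat = []
    for item in batch:
        for event in item.get("msys", {}).values():  # usually one per item
            flat.append(event)
    return flat

batch = [
    {"msys": {"message_event": {"type": "delivery", "rcpt_to": "a@b.com"}}},
    {"msys": {"track_event":   {"type": "open",     "rcpt_to": "a@b.com"}}},
]
assert [e["type"] for e in flatten(batch)] == ["delivery", "open"]
```

The trade-off mentioned above is visible here: flatten() copies references cheaply, but on very large batches even this pass consumes time and memory you may prefer to spend elsewhere.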

Cooking Up Test Batches

Another common task in webhook event consumption is harvesting sample batches to test against. The ‘live API’ option is to use SparkPost’s webhook API samples endpoint directly and forward the samples to your endpoint, possibly even using the webhook validate endpoint to do the forwarding. For reference, this is how the ‘test webhook’ feature in the SparkPost UI works.

This sample-and-forward plan works well if you don’t care much about the content of the events themselves since you are consuming pre-generated samples. For specific messaging scenarios, a better strategy might be to send some test transmissions and capture the resulting real events for later testing.

A hybrid approach could also be helpful once you have a feel for events generated by your use case. You can use sample events to produce an event of each type you care about, then edit and replicate them to fake up a particular scenario. This approach can also work well for volume and throughput testing.
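The hybrid approach can be sketched in a few lines of Python: take one captured sample event per type you care about, then edit and replicate it to fake up a scenario. The sample event shape and recipient addresses below are hypothetical:

```python
import copy

# Sketch of the hybrid test-batch approach: start from one captured sample
# event, then edit and replicate it to fake up a scenario, here a batch of
# 'open' events for a run of hypothetical recipients. Useful for volume
# and throughput testing too: just crank up the count.
sample_open = {"msys": {"track_event": {"type": "open", "rcpt_to": ""}}}

def fake_batch(sample, count):
    """Replicate a sample event, varying the recipient each time."""
    batch = []
    for i in range(count):
        event = copy.deepcopy(sample)
        event["msys"]["track_event"]["rcpt_to"] = f"user{i}@example.com"
        batch.append(event)
    return batch

batch = fake_batch(sample_open, 100)
assert len(batch) == 100
assert batch[42]["msys"]["track_event"]["rcpt_to"] == "user42@example.com"
```

Note the deepcopy: without it, every entry in the batch would share (and overwrite) the same inner event object.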

Summary

Knowing what to expect functionally beyond the raw API spec is half the battle when consuming new data sources like SparkPost webhooks. We hope this small set of starting observations helps you on your way to productivity and we look forward to seeing all the unexpected, unique, innovative and colorful things you end up building with them.

OAuth2 and Webhooks

This month, we’ve introduced yet another new security feature to SparkPost: the ability to use OAuth2 in setting up webhooks. Specifically, in order to increase the security of our webhooks events data, we have added support for OAuth2 authentication in addition to Basic Authentication. These are optional security measures used to ensure that webhook data delivered via an HTTP request actually originates from SparkPost.

What is Basic Auth? Basic Auth is a relatively simple mechanism that allows a user to provide a username and password that is passed in with the webhooks data in the HTTP request. This is something anyone can, and should, do. SparkPost has supported this mechanism for several months.

What is OAuth2? OAuth2 is an open standard for authorization. OAuth2 provides client applications a ‘secure delegated access’ to server resources on behalf of a resource owner by use of a temporary token. This Digital Ocean overview provides a relatively short and readable overview of how this works. For those who prefer to get into the weeds, here is the actual specification.

Why OAuth2? In a word, security. SparkPost, and our parent Message Systems, take the security of our systems very seriously, and we continue to add functionality to enhance the security of the data entrusted to us. This includes using API keys, whitelisting the IPs of those API keys, 2-factor authentication to access SparkPost accounts, and other behind-the-scenes enhancements. Needless to say, more security enhancements are coming.

~ Irina Doliov, Cloud Queen

We typically think of segmentation as a function of marketing. Marketers are the ones worried that one email’s content is relevant to a 25-year-old woman in San Francisco while another is more appropriate for a 55-year-old man in New York. While making content relevant to your audience is absolutely critical to long term marketing success, that’s not the type of segmentation I’m talking about here.

Email Segmentation

Deliverability experts say that if an individual hasn’t opened or clicked on your email in a certain amount of time, it is time to adjust the segment they are categorized into. You might want to send them a different type of message, not the same message you send to people who express interest by actively engaging with your emails. In fact, it might be time to take recipients who haven’t engaged in the past 6 months off your active list entirely.

Using SparkPost’s webhooks to get subscriber-level engagement data, you can update your database of record – or customer relationship management (CRM) system if you have one – as recipients open and click. As the time since last engagement grows, you can modify your content to those recipients with additional incentives to engage. Once a recipient reaches 6 months from the last open or click, you can send a “win-back” message. We’ve all gotten these: “Dear Customer, we want you back. But if you don’t want to hear from us again, please let us know.” Honoring these opt-out requests is important: it reduces the number of messages you send to recipients who may not want them. And removing those that don’t respond one way or the other from your active mailing list helps ensure you don’t hit spam traps and hurt your deliverability with ISPs.

To start, a simple engagement-based segmentation strategy might look like this:

  • New recipients who signed up within the past 2 weeks: welcome messages, information to get started with your product or service, and then ongoing, regular contact. The type of content to send is of course specific to your product or service, and ideally you know enough about your customers to know what they would like. For example, existing paid customers might want more in-depth information on HOW to use your product, while prospects would like to understand WHY they should.
  • Recipients who’ve engaged in the past 6 months: consistent, ongoing communication is key. This is where a content strategy, and understanding your audience, is really important. Your goal is likely to convert recipients into paying customers (if they’re not already), and to provide relevant, timely content to existing customers to keep them coming back.
  • Recipients who haven’t engaged in 6 months or more: the goal is to win them back, or give them the chance to opt-out.
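The three-way split above can be sketched as a small Python function. The field names and thresholds here mirror the strategy described, but in practice the sign-up and last-engagement timestamps would come from your webhook-fed database of record:

```python
from datetime import datetime, timedelta

# Sketch of the engagement-based segmentation strategy above: "new" for
# recent sign-ups, "engaged" for recipients with an open/click in the last
# 6 months (approximated as 182 days), "win-back" for everyone else.
def segment(signed_up, last_engaged, now):
    """Return the segment name for one recipient. Arguments are datetimes;
    last_engaged may be None if the recipient has never opened or clicked."""
    if now - signed_up <= timedelta(weeks=2):
        return "new"
    if last_engaged and now - last_engaged <= timedelta(days=182):
        return "engaged"
    return "win-back"

now = datetime(2016, 6, 1)
assert segment(datetime(2016, 5, 25), None, now) == "new"
assert segment(datetime(2015, 1, 1), datetime(2016, 3, 1), now) == "engaged"
assert segment(datetime(2015, 1, 1), datetime(2015, 6, 1), now) == "win-back"
```

Run nightly over your CRM records, a function like this keeps each recipient's segment current as new opens and clicks arrive via webhooks.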

Another behavior-based strategy is segmenting based on time of day. If you know your customers engage with your email at a certain time of day (typical spikes are first thing in the morning in their time zone, lunchtime, and evenings), then striving to be at the top of their inbox at those times is likely to boost engagement.

The SparkPost user interface enables you to look at your data in 15-minute increments, in contrast to some other providers that show a day as the smallest level of granularity. This allows you to see any obvious spikes in engagement and then use your webhooks to segment recipients into transmissions that send at those times, based on when those recipients engage. For example, in the data below, we see a slight uptick in opens and clicks around noon. Using webhooks to understand which recipients are opening and clicking at that time, putting them into their own segment, and sending to them around noon will likely boost engagement of that group. Experimenting with different sending times for other recipients might yield another timing-based segment.
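To find those spikes yourself from webhook data, you can bucket open and click timestamps into the same 15-minute slots the UI uses. This Python sketch assumes you've already extracted each engagement's local time of day as an "HH:MM" string:

```python
from collections import Counter

# Sketch of time-of-day bucketing: count engagements per 15-minute slot
# (matching the UI's granularity) to spot spikes worth segmenting on.
# Input is a list of "HH:MM" strings, assumed already in local time.
def bucket(timestamps):
    """Count engagements per 15-minute slot, keyed by slot start time."""
    counts = Counter()
    for hhmm in timestamps:
        h, m = map(int, hhmm.split(":"))
        slot = f"{h:02d}:{(m // 15) * 15:02d}"
        counts[slot] += 1
    return counts

opens = ["12:01", "12:07", "12:14", "12:16", "08:03"]
counts = bucket(opens)
assert counts["12:00"] == 3    # the noon uptick shows up in its slot
assert counts["12:15"] == 1
assert counts["08:00"] == 1
```

The slot with the highest count is a candidate send time for the recipients who engaged in it.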

To learn more: