How To Conduct End-To-End Testing With MailSlurp

If you send email from an application, you probably have tests that ensure your code is attempting to send that email. But do you have a reliable way of testing whether that email actually got delivered? MailSlurp is an excellent free tool that helps you do exactly that: it lets you easily generate random inboxes to send email to and then confirm delivery through its API.

In this post, we will use the MailSlurp JavaScript SDK to write an end-to-end test for a Node.js function that sends an email. We’ll be using SparkPost to send the email. If you aren’t already sending email, here’s a good place to start (for free). For our test, we’ll use Jest, but any framework should work just fine.

Our code uses a few ES2015 features, so be sure you’re running a version of Node that supports them (version 6.4.0 or above should be fine). We’ll be installing dependencies with npm version 5.6.0.

All the code for this demo can be found here.

Getting Set Up

Alright, let’s get started! The first step is to get set up with a MailSlurp account. It’s quick and free: Sign up for MailSlurp. Next, log in to your dashboard and grab your API key.

Once you have your API key, we can get started with the test setup. Make sure you have these dependencies in your package.json – here’s what ours looks like, using the SparkPost client library:
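Something along these lines (the package names are as published on npm; we’ve used “latest” as a stand-in, so pin real versions in your own project):

```json
{
  "dependencies": {
    "jest": "latest",
    "mailslurp-client": "latest",
    "sparkpost": "latest"
  }
}
```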

Then run npm install. Here is our full package.json. Normally, packages like the MailSlurp client and Jest should go under “devDependencies”, but we kept it simple for this demo.

Now, at the top of our test file, we require mailslurp-client and our sendEmail code:
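A minimal sketch of the top of the test file (the sendEmail path is an assumption – point it at your own module):

```javascript
// sendEmail.test.js (sketch)
const MailSlurpClient = require('mailslurp-client');
const sendEmail = require('./sendEmail'); // path is an assumption – use your own send function
```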

You would import whatever function you use to send email.

Right after, we initialize the MailSlurp client. This is also a good spot to store your API key in a variable.
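Something like this, assuming the generated SDK exposes a controller class you instantiate (check your mailslurp-client version’s docs for the exact name):

```javascript
// Initialize the client and keep the API key handy – it's the first
// argument to every MailSlurp method we call below.
const mailslurp = new MailSlurpClient.InboxcontrollerApi(); // assumption: constructor name varies by SDK version
const slurpKey = process.env.MAILSLURP_API_KEY;             // assumption: we keep the key in an env var
```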

We used environment variables to access ours; if you store credentials a different way, that’s ok too. Our slurpKey variable will be the first parameter we pass to every MailSlurp client method.

The next step is to create an inbox we can send to, using the MailSlurp client’s createRandomInboxUsingPOST method:
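A sketch of that setup, assuming the SDK wraps the created inbox in a payload object with id and address fields (inspect what your client version actually returns):

```javascript
let inboxId;
let address;

beforeAll(async () => {
  // Create a throwaway inbox before any test runs and remember where it lives.
  const inbox = await mailslurp.createRandomInboxUsingPOST(slurpKey);
  inboxId = inbox.payload.id;      // assumption: response wraps the inbox in `payload`
  address = inbox.payload.address; // assumption: field names may differ in your version
});
```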

The important things here are:

  • Execute this code before you run your tests
  • Store both address & id from the response as variables, so you can send to and access your inbox

Now for the actual test: we want to send an email (the sendEmail() function in our case) and then use the MailSlurp client’s getEmailsForInboxUsingGET method to get the email from the inbox:
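Here’s roughly what that test looks like in Jest. The response shape from getEmailsForInboxUsingGET and the fixed wait are assumptions; adjust both to what your client version returns and to how long delivery usually takes:

```javascript
const wait = ms => new Promise(resolve => setTimeout(resolve, ms));

test('delivers the email to the MailSlurp inbox', async () => {
  // Send to the random address MailSlurp generated for us.
  await sendEmail(address);

  // Give the message time to arrive before asking MailSlurp for it.
  await wait(10000);

  const response = await mailslurp.getEmailsForInboxUsingGET(slurpKey, inboxId);
  const emails = response.payload; // assumption: emails come back under `payload`

  expect(emails.length).toBe(1);
  expect(emails[0].subject).toBe('MailSlurp Test Email');
}, 30000); // raise Jest's per-test timeout so the wait doesn't fail the test
```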

Email delivery can take more than a few seconds, so make sure your test waits long enough for the email to arrive. In the code snippet above, we raise the timeout threshold to handle that. Note that we are passing address to our sendEmail function, and then passing inboxId to getEmailsForInboxUsingGET.

For this test we first asserted that MailSlurp returned an array of emails with a length of 1 (we only sent one email). Then, to make sure it was the email we sent, we asserted that its subject was ‘MailSlurp Test Email’, as defined in sendEmail.js.

That’s it! Run the test however you usually do. Ours is set up with an npm script in package.json, so it runs with npm test.

Next Steps

There is plenty of room for more assertions in our example test. When using an email delivery service like SparkPost in combination with complex HTML templates, an email’s body can get pretty complicated. To make assertions about an email’s actual content, we suggest using an advanced email body parser. Once you are able to extract the HTML/text content from the body, you could easily use snapshot tests to ensure you are sending the content you want.
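As a rough sketch (extractHtmlBody is a hypothetical stand-in for whatever parser you choose):

```javascript
// Sketch: given an email you've already fetched from MailSlurp as in the test above...
const email = emails[0];
const htmlBody = extractHtmlBody(email); // extractHtmlBody is hypothetical – plug in your parser here
expect(htmlBody).toMatchSnapshot();      // fails when the rendered content drifts from the stored snapshot
```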

If this was interesting to you, you should check out the rest of MailSlurp’s features. You can reach us on Twitter with any questions. And you can also send 15,000 emails a month for free on SparkPost! Special thanks to Jack Mahoney for building MailSlurp and answering my questions for this post.



Email Testing Tools: 3 Handy Tricks

Signing up for email newsletters is fun. I usually spell my email address correctly. Sometimes, when I don’t, someone else (whose address happens to match my typo) has no idea why they’re getting the messages I signed up for. With SparkPost’s handy testing tools (and double opt-in), you can protect yourself from Mayhem like me.

We protect our users by detecting negative feedback from email receivers and automatically suppressing future messages from your account to those addresses. It can be difficult to test your integration, since SparkPost only emits the events in question under specific, hard-to-reproduce circumstances. Let’s look at a couple of simple tools that recreate those conditions for you, so you can test that your integration applies those suppressions to your own list and avoid future Mayhem.

Negative Feedback

Two types of negative feedback (AKA Mayhem) that can be tricky to test are 1) out-of-band (OOB) bounces, where a receiver initially accepts the message then later sends a bounce notification, and 2) feedback loop (FBL) reports, which typically mean a recipient clicked a “this is spam” button.

Clicking “this is spam” on your own messages is dangerous because it can damage your sending reputation, and composing an OOB bounce by hand is tricky to get right. But don’t worry, we’ve got you covered with some, you guessed it, handy tools!

Simulating Negative Feedback

The first step in testing both types of Mayhem is to send yourself a message through SparkPost. We’ve got a tool for that too! Once you’ve got the message in your inbox, save it, including all of the headers, and delete any blank lines at the very top. You’ll be using this file shortly.

Both of these tests will add the receiving address to your suppression list! Afterwards, you’ll need to manually remove the address to continue to receive emails from SparkPost. I usually use our Postman collection for this. Here are some instructions showing how to get that up and running.

Now, unless you’re connecting from somewhere that blocks outbound connections on port 25 (behind a corporate firewall or on a residential ISP, for example), you’re all ready to test! The install instructions for both tools suggest some ways to get around blocked ports.

Fake an FBL

The fblgen tool lets you generate and send an FBL report. Here’s a dry run:
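The invocation looks roughly like this; the flag names are assumptions, so confirm them with fblgen --help and point the file flag at the message you saved earlier:

```sh
# Dry run: without --send, fblgen only reports what it would do with the
# saved message (flags shown are assumptions – check `fblgen --help`).
./fblgen -file ./saved-message.eml -verbose
```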

Running the same command and adding the --send flag will attempt to deliver the message to the detected MX. This will trigger an FBL event of type spam_complaint, which will flow through to message events and any configured webhooks. This tool will also cause an increase (of 1) in the count_spam_complaint metric and the “spam complaints” value shown in the SparkPost summary report.

Instead of waiting around for an FBL or risking damage to your sending reputation by triggering one yourself, this tool gives you a way to predictably trigger an FBL event from your own systems.

Bogus OOB Bounces

The oobgen tool lets you generate and send an out-of-band bounce message. Here’s a dry run:
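Again a sketch, with the same caveat that the flags are assumptions (check oobgen --help):

```sh
# Same idea for the OOB bounce – still a dry run until you add --send.
./oobgen -file ./saved-message.eml -verbose
```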

Again, running the same command with the --send flag will attempt to deliver the message to the detected MX. This will trigger an OOB bounce event of type out_of_band, which will flow through to message events and webhooks. This tool will also cause an increase (of 1) in the count_outofband_bounce metric and the “out-of-band” bounce value shown in the SparkPost bounces report.

Bonus Tool!

Once you’ve gotten a collection of events in JSON format, perhaps by using Postman and our message events endpoint, what’s the easiest way to do some analysis on that data? If you’ve made it this far, then you’re probably comfortable on the command line, and the answer is to use jq!

This handy tool can extract the parts you care about from a big blob of JSON – for example, counting the number of events of each type that were received in the last two hours:
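For instance, if events.json holds the response from the message events endpoint (already filtered to the last two hours via the endpoint’s time parameters), something like this groups and counts the events by type:

```sh
# Count events by type. Assumes the events live under a top-level "results"
# array, each with a "type" field, as returned by the message events API.
jq '.results | group_by(.type) | map({type: .[0].type, count: length})' events.json
```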


With these handy email testing tools, testing those uncommon, hard-to-trigger events is now easy. Making sure these types of events are handled correctly in your integration with SparkPost will help you prevent future Mayhem from striking, by keeping your list clean.

Did you get a chance to try out any of the tools mentioned in this post? Let us know if and how they fit into your testing processes, and feel free to create an issue or submit a PR on GitHub if there are any other features you’d like to see.