What Does It Take to Send Billions of Emails?
When it comes to transactional and marketing email, there are email senders, there are high-volume email senders, and then there are really high-volume email senders. Most of us can grok the amount of email that a small or medium sender generates in a typical month, but the volumes at the higher end of the spectrum can be a little mind-boggling. Especially to someone who isn’t steeped in the business of email deliverability.
Setting aside the bad guys like spammers, there’s nothing inherently untoward about sending very high monthly volumes. Just think for a minute about large e-commerce retailers that generate several email notifications (order confirmation, shopping cart abandonment, shipping notification, etc.) for each transaction. Or a social network that sends emails in response to activity and engagement (friend requests, likes, retweets, etc.) on their platform.
It’s really a question of basic math. When you have a large number of users (5–20 million) who receive multiple messages per day, or a really large number of users (40–50 million) receiving one message per day, you’re in high-volume email territory. And that’s just to get past the velvet rope; the highest volume senders like Facebook or Twitter send 1–10 billion emails per day. (I tend to hear Carl Sagan’s voice in my head: “billions and billions of emails…”) The numbers add up really, really quickly.
To be sure, it’s an exclusive stratosphere of senders who reach the very highest monthly volumes, but the tools and practices they’ve developed to do it are applicable to all senders who rely upon email to drive their business. It’s a multi-dimensional challenge: complexity and scalability of message generation, throughput of sending, and deliverability on the receiving end. And it’s a challenge my colleagues and I spend a lot of time helping senders solve. Here are some of the best practices we’ve learned that could help you, too.
First, Pick the Right Platform
As I suggested above, a company’s particular characteristics and business needs have a big impact on what its email looks like. But, in my experience, senders tend to fall into four basic groups:
- Low volume/complexity who want a cloud solution
- High volume/complexity who want a cloud solution
- Low volume/complexity who want an on-premises solution
- High volume/complexity who want an on-premises solution
For purposes of this framework, “low volume” means sending less than 10 million messages per month, while “low complexity” means that business logic like filtering, routing, or content manipulation isn’t performed in the messaging layer. “High volume” and “high complexity” suggest the opposite. (Clever nomenclature, right?)
Here, I’m going to step back and make a small product plug: One of the real advantages of our SparkPost cloud offerings is that our cloud is highly elastic and will scale to meet nearly any load. The efficiency of that model for most senders is really hard to overstate. Just as importantly, SparkPost’s operations team—not your own staff—deals with the details of managing messaging performance, scalability, and deliverability, letting you focus on business differentiation and strategic value.
Now, having said that, we know that not every business is ready to use the cloud today. Some may elect for a hybrid cloud/on-premises architecture. Awesome. And others will want to keep email infrastructure completely in house. We get it. Different businesses have different needs. That’s why SparkPost (and our parent company, Message Systems) offers a solution for each of these four categories of sender.
If you use PowerMTA, be sure to check the wealth of PowerMTA resources at Port25 to optimize your installation. And, of course, SparkPost customers get the benefit of these best practices and many more courtesy of our cloud infrastructure and crackerjack ops and deliverability teams. But for the rest of today’s post, I’m going to give some love to folks that are using Momentum.
Making the Most of Momentum
I’m just going to say it: Momentum users really get what high-volume email is all about. (And, by the way, the Momentum platform is a core underpinning in the SparkPost cloud.) Over the years, our services team and our expert customers collectively have developed a lot of expertise about what it takes to optimize this ultra-high performing email platform. Here’s what we’ve learned works in the real world.
- Parallelize Processes
- Remove Bottlenecks
- Optimize Queues
- Be Scientific
I’ll touch on each of these areas below.
The Momentum platform was designed as a parallel solution, and there are several areas that benefit from being parallelized when working with Momentum:
- Inject messages using multiple parallel processes. Momentum’s scheduler-based architecture performs best when handling incoming traffic from multiple sources, so injecting across multiple connections yields the highest throughput.
- Send across multiple IP addresses. Not only do many ISPs limit how much traffic they will accept from a given IP address, but separating traffic streams into separate IP pools can also help with deliverability.
- Scale horizontally. Our recommended installations start at three nodes per role to ensure redundancy and availability, and each role can be scaled independently as needs increase from either a sending or reporting perspective.
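To make the first point concrete, here’s a minimal Python sketch of fan-out injection. Everything here is illustrative: `inject_batch` is a stand-in for code that would open its own SMTP connection to Momentum and submit its batch of messages.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(recipients, n_workers):
    """Split a recipient list into roughly equal batches, one per injector."""
    return [recipients[i::n_workers] for i in range(n_workers)]

def inject_batch(batch):
    # Placeholder: a real injector would open its own SMTP connection
    # to Momentum and submit each message in the batch over it.
    return len(batch)

def parallel_inject(recipients, n_workers=8):
    """Inject across n_workers parallel connections; returns messages sent."""
    batches = chunk(recipients, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(inject_batch, batches))
```

The key idea is simply that each worker owns one persistent connection, rather than all messages funneling through a single injection stream.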
With the right platform, and sufficiently parallel injectors and sending IPs, the next key is to remove the common bottlenecks that can limit platform performance. The most common areas are hardware and network bottlenecks, which I’ll cover in the next sections.
With the 3.6 release of Momentum, a new performance benchmark was achieved through the introduction of the SuperCharger architecture. With SuperCharger, the scheduler-based architecture that enables Momentum’s performance was parallelized to allow multiple schedulers to operate in a single Momentum instance. The result is significantly improved vertical scalability: a properly provisioned SuperCharger-enabled Momentum instance can send several times the volume of a non-SuperCharger instance.
With the introduction of SuperCharger, Momentum instances can leverage multi-core server architectures, moving the hardware bottleneck to disk IO. Physical disks in a RAID-10 configuration can provide performance in the range of 2-4 million messages per hour, while SSDs can double that, and PCIe-based SSD systems such as FusionIO can help reach performance in excess of 10 million messages per hour.
In addition, Momentum’s caching system improves performance and can leverage a large amount of RAM; typically we recommend 4GB of RAM per core. (Here’s a full list of hardware minimum specifications.)
Performance can be increased through additional cores (with accompanying memory) and higher performance IO systems. Investments in larger systems can be balanced against scaling horizontally with more servers, leveraging the clustered capabilities of Momentum.
With the increased performance available through the SuperCharger architecture, it becomes increasingly important to ensure that the supporting network is capable of handling the bandwidth generated by a Momentum instance, let alone a cluster of nodes.
Here are my recommendations for avoiding network bottlenecks:
- Isolate network connections for injection, delivery, and administrative traffic. By separating inbound and outbound traffic you effectively double the available bandwidth to the server.
- Move to 10 Gigabit Ethernet (10GigE). A fully provisioned server with PCIe SSD technology can push enough traffic to saturate Gigabit Ethernet.
- Use bonded NICs to increase availability and to increase available bandwidth. For many senders bonded Gigabit NICs with separated injection and delivery pathways can provide sufficient bandwidth without a move to 10GigE.
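As a sketch of that last point, this is one common way to define a bonded interface on a Linux server managed by systemd-networkd (the file path, interface names, and mode are assumptions for illustration; NIC bonding is also frequently configured via ifcfg scripts or NetworkManager):

```ini
# /etc/systemd/network/25-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

[Bond]
# 802.3ad (LACP) requires matching configuration on the switch
Mode=802.3ad
TransmitHashPolicy=layer3+4
```

Each physical NIC then joins the bond via a `.network` file containing `Bond=bond0`, and the bond itself gets the server’s addressing.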
As mentioned earlier, spreading traffic across multiple IP addresses has multiple benefits:
- Each IP address will be assigned to its own Binding, meaning that messages will be isolated to their own queues, helping to prevent queue collisions.
- Multiple IPs allow you to send sufficient traffic to ISPs that have restrictions on incoming traffic on a per-IP basis.
- Separating message streams to their own IPs and bindings enables Adaptive Delivery to be more effective at automated traffic shaping by giving it more granularity.
While the number of IPs you will need varies based on sending reputation, at a minimum make sure that you separate out traffic into bulk and transactional. A general guideline is to use one IP address per 100,000 messages per hour you will be sending.
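The guideline above is simple arithmetic; a quick Python helper makes it explicit (the traffic volumes in the example are hypothetical):

```python
import math

def ips_needed(msgs_per_hour, per_ip_rate=100_000):
    """Rough IP count from the 1-IP-per-100,000-messages-per-hour guideline."""
    return max(1, math.ceil(msgs_per_hour / per_ip_rate))

# Bulk and transactional streams get separate pools, sized independently:
bulk_ips = ips_needed(1_500_000)  # 15 IPs
txn_ips = ips_needed(250_000)     # 3 IPs
```

Remember this is a starting point, not a rule: sending reputation, warm-up status, and per-ISP limits all move the real number.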
When configuring multiple IP addresses for the same mail stream, take advantage of the Binding Group capability of Momentum to allow for common configuration and easy round-robin IP assignment by assigning to the group.
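In configuration terms, that looks roughly like the fragment below. Treat it as illustrative only: the binding names and addresses are made up, and the exact option names and nesting vary by Momentum version, so check the Momentum configuration reference for your release.

```
binding_group "marketing" {
  # Options set here are inherited by every binding in the group.
}

binding "mkt-1" {
  bind_address = "192.0.2.10"
  binding_group = "marketing"
}

binding "mkt-2" {
  bind_address = "192.0.2.11"
  binding_group = "marketing"
}
```

Assigning a message to the group rather than to an individual binding lets Momentum round-robin across the member IPs.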
As you work with the recommendations in this article, focus on making one adjustment at a time and measuring the results before making further changes. For example, when adjusting the number of injectors, try adding five connections at a time and measuring throughput in order to identify the ideal number.
Similarly, with some of the tunables, start by reviewing data to identify current throughput, then calculate an appropriate setting before testing.
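The one-change-at-a-time loop can itself be sketched in a few lines of Python. Here `measure_throughput` is a stand-in for running a timed injection test at a given connection count and returning messages per second; the step size and plateau threshold are illustrative defaults.

```python
def tune_injectors(measure_throughput, start=5, step=5, min_gain=0.02):
    """Add injector connections stepwise; stop when throughput plateaus.

    measure_throughput(n) should run a timed injection test with n
    parallel connections and return the observed messages/second.
    """
    best_n, best_rate = start, measure_throughput(start)
    while True:
        n = best_n + step
        rate = measure_throughput(n)
        if rate < best_rate * (1 + min_gain):  # gain under 2%: plateau reached
            return best_n, best_rate
        best_n, best_rate = n, rate
```

The same discipline applies to any tunable: change one variable, measure, keep the change only if the measurement justifies it.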
Common Performance Pitfalls
In addition to the overall best practices I described above, I’d like to make special note of a few issues for senders looking to fine-tune their infrastructure for maximum performance.
There are three key memory settings that are often overlooked and left at their defaults:
- Max_Resident_Active_Queue: Controls how many messages are cached in memory on a per-queue basis. Set to either -1 or a larger number like 10,000 if you have sufficient memory.
- Max_Resident_Messages: Controls how many messages are cached in memory on a server-wide basis. Set to 90–95% of RAM divided by Growbuf_Size (default 16 KB). For example, 95% of 96 GB / 16 KB ≈ 6,000,000.
- Growbuf_Size: Configures the size of the memory chunks used to cache messages. Ideally, the average message fits in a single chunk, so set it larger than your average message size (but not as large as your maximum message size).
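The Max_Resident_Messages arithmetic is easy to get wrong with mixed units, so here it is as a small Python helper (the 96 GB figure is just the example from the text):

```python
def max_resident_messages(ram_bytes, growbuf_bytes=16 * 1024, fraction=0.95):
    """Suggested Max_Resident_Messages: ~90-95% of RAM / Growbuf_Size."""
    return int(ram_bytes * fraction // growbuf_bytes)

# The example from the text: 96 GB of RAM, default 16 KB Growbuf_Size
setting = max_resident_messages(96 * 1024**3)  # ~6,000,000
```

Note that if you raise Growbuf_Size to fit larger average messages, this setting should shrink proportionally to stay within RAM.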
One key advantage of using Momentum is the ability to implement policy scripts to achieve complex message and server manipulations using automation. In older versions of Momentum, policy was implemented using a scripting language called Sieve++, an extension of the Sieve filter language used in several messaging tools.
With Momentum 3, we introduced a new option for policy scripting in the Lua scripting language. Lua provides a more robust and extensible scripting language that is better optimized, and which supports the multithreaded SuperCharger architecture. All users looking to leverage SuperCharger and generally increase performance should migrate their policy scripts to Lua.
In recent releases of Momentum we have introduced support for OpenDKIM as a module. The advantage of OpenDKIM signing is that it has multi-threaded support, enabling higher performance when used with SuperCharger. Moving to OpenDKIM is quite straightforward and requires minimal configuration changes.
Headers in Custom Logs
One advantage of Momentum is the ability to use the custom logger module to create log files that contain only the data you need, in the format you prefer. One logging macro available to senders is %h, which will capture a named header and place its content into the log line.
The %h macro comes at a cost: It parses the whole message on each event to find the header and record its contents. A better-performing alternative is to use a Lua script to read the header and place its value into a context variable, then use the %vctx_mess macro to load the context variable into the log.
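Roughly, that configuration looks like the fragment below. The field layout and the campaign_id variable name are illustrative assumptions; verify the exact macro set and option names against the custom_logger module documentation for your Momentum version.

```
custom_logger "custom_logger1" {
  delivery_logfile = "/var/log/ecelerity/delivery.log"
  # campaign_id was stored in a message context variable by a Lua policy
  # hook at injection time; %vctx_mess reads it from the context rather
  # than re-parsing the message the way %h{...} does on every event.
  delivery_format = "%t@%r@%vctx_mess{campaign_id}"
}
```

The win is that the header is parsed once, at injection, instead of on every logged event.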
I can’t tell you how cool it is to see how Momentum and SparkPost are being used out in the real world by high-volume/complexity email senders. I’m thrilled to be able to share these recommendations to help optimize a Momentum installation for maximum performance.
By the way, want to learn more about getting the most from your email infrastructure? The SparkPost Support Center and Momentum customer support site have a wealth of operational advice. And, if your interest is particularly focused on making sure email gets to the inbox, check out two great ebooks that explain some of the nuts and bolts of email deliverability: Email Best Practices 101 and How to Send Zillions of Emails a Day.