We use Slack a lot at SparkPost. A lot. We use Slack so much we just made another public Slack team so we never have to be far from that beautiful knock-brush sound.
We love Slack because we are distributed (I work on a team that spans eight time zones) and being in touch so much helps us stay productive (usually). We also love Slack because it’s extensible. I’ve built a few bots here at SparkPost using the botkit library to help us run standups and to help out our developer advocacy team. I thought it would be fun to share our approach to building, testing, and deploying our Slack bots.
At SparkPost our mantra is “API first”, which is really another way of saying “humans first.” We work very hard to build clean, well-designed APIs that are consistent over time. And we do that because we know as developers that’s what makes our lives easier. So even for our internal tools, our guiding design principle is “what will make this the best tool for this person to use?”
For bots I am usually asking: What makes this command easy to remember? Is the phrasing/syntax similar to other commands or tools the user is used to? Will the users like something straightforward (@bot award a point to @user), or will they prefer something a little quirky (@bot pointify @user)? Maybe both!? Then I sketch out what all the interactions will be. Only after I know how the bot will behave and how people will interact with it will I start writing code.
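To make that concrete, here's a minimal sketch of how both phrasings could map to one command. The points bot, its pattern, and the parsing helper are all hypothetical; botkit's `controller.hears()` accepts regex patterns like this one.

```javascript
// Hypothetical pattern for a points bot: accept both the
// straightforward phrasing and the quirky alias in one regex.
const AWARD_PATTERN = /^(?:award a point to|pointify)\s+<@(\w+)>/i;

// Returns the ID of the user being awarded a point, or null if no match.
function parseAwardCommand(text) {
  const match = AWARD_PATTERN.exec(text);
  return match ? match[1] : null;
}

// Both phrasings resolve to the same action:
console.log(parseAwardCommand('award a point to <@U123>')); // 'U123'
console.log(parseAwardCommand('pointify <@U123>'));         // 'U123'
console.log(parseAwardCommand('demote <@U123>'));           // null
```

In botkit you'd hand that same pattern to `controller.hears()` and call the parsing helper inside the handler, so adding a new alias later is a one-line change.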
Humans are great, the best really. If you’re a human, you’re A+ in my book. But robots are also awesome, and they are especially good at the boring repetitive tasks we humans don’t like and tend to mess up. That’s why we automate things like testing and linting to make sure our code quality stays high and unbroken.
For our bots, I set up a TravisCI continuous integration server that runs linting checks and tests on every pull request and every code push. I also add automated checks to ensure code coverage doesn’t drop (otherwise it’s just too tempting to not write your tests 😉 ). Pull requests aren’t merged unless they pass CI. And once a PR is merged to master, we run it through CI again to make sure the merge didn’t introduce any regressions. If CI passes, the code is automatically deployed to a staging environment.
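As a rough sketch, a `.travis.yml` along these lines wires up linting, tests, and a coverage gate, then deploys to staging when master goes green. The npm script names and the app name here are hypothetical; your `package.json` would define them.

```yaml
language: node_js
node_js:
  - "node"
script:
  - npm run lint
  - npm test
  # hypothetical script that fails the build if coverage drops
  - npm run check-coverage
deploy:
  provider: heroku
  app: my-bot-staging        # hypothetical staging app name
  api_key: $HEROKU_API_KEY   # set privately in the Travis repo settings
  on:
    branch: master
```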
By automating so many rote tasks, we can have a lot of confidence that our code works and that new features aren’t going to cause regressions in our existing ones. But if I learned anything from Wall-E, it’s that robots can’t do everything by themselves.
Then Humans Again
Automated tests and linting are great, but they won’t notice things like inefficient code paths (I’m looking at you, nested for loops). And 100% test coverage is an awesome goal, but it’s only one measure of test quality. Tests can cover 100% of your code without meaningfully testing anything, or they can miss important edge cases (like error handling at the end of a promise chain). That’s why every line of code (including tests) is reviewed by another person. Our reviews are conversations, not just throwing code over a wall. Having another perspective can be invaluable. A lot of the time, as a programmer I’m focused on the microscopic world of individual lines of code. A reviewer has the chance to step back and look more holistically, catching things I might miss. Once code review is done, we start user acceptance testing to get more feedback from a non-technical point of view.
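Here’s that promise-chain edge case in miniature. The `fetchGif` function and its stubs are hypothetical, but the shape of the trap is real: a happy-path test can execute the whole chain while the error fallback never actually runs.

```javascript
// A sketch of the kind of edge case coverage numbers alone won't catch.
// fetchGif and its injected httpGet parameter are hypothetical.
function fetchGif(url, httpGet) {
  return httpGet(url)
    .then(res => res.gifUrl)
    .catch(() => 'https://example.com/fallback.gif'); // fallback on error
}

// Happy-path tests can touch every statement in this chain without ever
// running the fallback arrow; only a failing stub proves it works.
const failingGet = () => Promise.reject(new Error('network down'));
fetchGif('https://example.com/guac', failingGet)
  .then(gif => console.log(gif)); // logs the fallback URL
```

Injecting the HTTP client as a parameter is what makes the failure path testable at all: the test can hand in a stub that always rejects.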
It’s important to get feedback as early as possible; it prevents churn down the road. For our bots, I ship one command at a time, complete with tests and docs, and review the changes with a few key users. I want to make sure everything works for them. At that point it’s easy to iterate quickly because the feedback is focused.
We have private channels dedicated to user testing of staging bots, but you could easily set up a separate Slack team for testing if you don’t want someone to stumble on your not-ready-for-primetime bot.
Back to Bots: Automate Shipping
We host our bots on Heroku. We create two apps: one for staging and one for prod. Once a bot passes user acceptance testing in staging, we promote the app to prod using Heroku Pipelines. One of the nice features of pipelines is that the same slug we tested in staging is the one that gets deployed to production; there’s no recompilation or reinstallation of dependencies. Shipping the same artifact we tested keeps our confidence in our bot high, which is good because people are not happy when Slack doesn’t work. If you want to anger someone on the internet, just take away their ability to post cat GIFs.
The Long Way is the Short Way
It’s funny: you would think all this process outside of coding (sketching the user interface, writing tests, reviewing code, building automation workflows, talking with people) would slow us down. But we’ve found the opposite to be true. Including people early on and taking the time to build automated tools helps us ship more features faster. One of my mentors used to tell me “the long way is the short way.” By investing time and effort up front, we deliver a high-quality product that is exactly what our users want: a bot that will show them GIFs of guacamole whenever they want.